Releases: cdcai/agents
Version 0.4.1
What's Changed
- Fixing it in post by @beansrowning in #25
- Fixes to OpenAI Batch API asyncio deadlocking by @beansrowning in #26
- Batch API updates by @beansrowning in #27
Full Changelog: v0.4...v0.4.1
Version 0.4
What's Changed
Observability
- Added token and round-trip observability at the provider and agent level
- Added unit tests
Usage
Observable agents and providers should mirror the standard API (and might be merged eventually). If you're already using the standard API, you can just swap the import statement:
import agents.observability as agents
Adding OpenAI Batch API support
I added a new AzureOpenAIBatchProvider provider class, which works in conjunction with a new _BatchAPIHelper class to gather all OpenAI web requests into a single batch request, process that request, and return the results to the requesting agents.
This involves some breaking changes to the API, notably:
- Providers now call endpoint_fn, which is monkey-patched depending on whether the chat or batch endpoint is required
- Providers now have async with entry/exit points (i.e., they are async context managers) for async task cleanup in the batch case (superfluous in the chat endpoint case)
- Substantial re-writing of the Processor class (see below)
Example Usage
import asyncio
import agents

# KnockKnockAgent is assumed to be a user-defined Agent subclass (not shown)

async def main():
    async with agents.AzureOpenAIBatchProvider(
        "gpt-4o-batch", batch_size=5, n_workers=2
    ) as provider:
        # Kind of a hacky way to use this, but just for demonstration purposes
        proc = agents.BatchProcessorIterable(
            [i for i in range(10)], KnockKnockAgent, batch_size=1, provider=provider
        )
        jokes = await proc.process()

asyncio.run(main())
Processor Changes
Overhauled the standard Processor logic to handle batch API usage.
Rationale
The previous method fired only as many agents as there were requested workers at any given time. This is inefficient for the Batch API case, where we'd like to fire all the agents at once and let the batcher handle processing them all.
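To make the difference concrete, here's a minimal asyncio sketch (illustrative only, not the library's actual code) contrasting the two dispatch strategies:

import asyncio

async def run_agent(i: int) -> str:
    await asyncio.sleep(0.1)  # stand-in for an LLM round trip
    return f"result {i}"

async def worker_pool(items: list, n_workers: int) -> list:
    # Old approach: at most n_workers agents in flight at any time
    sem = asyncio.Semaphore(n_workers)

    async def limited(i):
        async with sem:
            return await run_agent(i)

    return await asyncio.gather(*(limited(i) for i in items))

async def fire_all(items: list) -> list:
    # Batch approach: start every agent at once and let the batcher
    # coalesce their requests into a single Batch API submission
    return await asyncio.gather(*(run_agent(i) for i in items))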
API Changes
- ProcessorIterable/ProcessorDF are the chat endpoint variants
- BatchProcessorIterable/BatchProcessorDF are the batch endpoint variants
- Provider is now a required argument at init, to handle the batch case
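For the chat endpoint case, usage is presumably symmetric. A hypothetical sketch (this assumes ProcessorIterable shares BatchProcessorIterable's signature and that a non-batch provider class exists; both names below are guesses, not confirmed API):

# Hypothetical chat-endpoint mirror of the batch example above
provider = agents.AzureOpenAIProvider("gpt-4o")  # provider name assumed
proc = agents.ProcessorIterable(
    [i for i in range(10)], KnockKnockAgent, batch_size=1, provider=provider
)
jokes = await proc.process()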
Misc
- Renamed tests/ -> test/
- Asyncio version of tqdm used instead of the standard variant (probably not needed)
- Fixed CI/CD for unit tests by adding test deps requirement file
Full Changelog: v0.3.3...v0.4
Version 0.3.3
What's Changed
- GHA workflow by @beansrowning in #19
- ToolCall class by @beansrowning in #20
- Better Tool Call Handling + Fixing CI/CD by @beansrowning in #21
Full Changelog: v0.3.2...v0.3.3
Version 0.3.2
New Features
OpenAI Provider
I have no use for this myself, but you can now use standard OpenAI as a provider if you have API access. It works functionally the same, and just extends the AzureOpenAI class to skip Azure Entra ID auth.
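For context, this mirrors the same distinction in the underlying openai-python SDK (this is SDK-level code, not this library's):

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI, OpenAI

# Azure path: auth via an Entra ID token provider
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
azure_client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",
    api_version="2024-06-01",
    azure_ad_token_provider=token_provider,
)

# Standard OpenAI path: just an API key
openai_client = OpenAI(api_key="sk-...")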
@agent_callable() and @async_agent_callable
- Closes #17
Despite OpenAI "encouraging you to define your schemas directly", this is painful and duplicative when most of the metadata needed is already present if you document your code well. Humans manually producing JSON entries for their functions sounds like an anti-pattern to me.
You can, of course, always use Pydantic, but then you're still creating more duplication, and you must then pass the Pydantic object through as kwargs to the callable before evaluating. Safer? I guess. Easier? No, not really.
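To illustrate that duplication (a standalone toy, not this library's code):

from typing import Optional
from pydantic import BaseModel

def hello_world(x: Optional[str] = None) -> str:
    return f"hello, {x or 'world'}"

# The model restates the signature...
class HelloWorldArgs(BaseModel):
    x: Optional[str] = None

# ...and must be unpacked back into kwargs before the call
raw_args = {"x": "there"}
result = hello_world(**HelloWorldArgs(**raw_args).model_dump())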
So scratch all of that. This is some syntactic sugar which auto-generates a nice JSON payload for your tool, provided you've added type-hints to your arguments. You'll only need to bring two additional arguments:
- description: What the LLM will see as the description of your tool; what it does and how to use it
- variable_description: A dict with keys for each arg, and values giving a short description of that arg
It's pretty smart about figuring out how to code up lists, dicts, literals, unions, etc. Your mileage may vary, and when in doubt, you can always do it the old-fashioned way.
Example
import agents
from typing import Optional

class MyAgent(agents.Agent):
    @agents.async_agent_callable(
        "A function that returns 'hello, x' when called.",
        {"x": "The second word in the statement to return"},
    )
    async def hello_world(self, x: Optional[str] = None):
        if x is None:
            x = "world"
        return f"hello, {x}"
Changes
- Fixes an issue where StructuredReturnAgent warned at init if a different stopping condition is used (now it will only warn once per session)
- Allows kwargs passed to BatchProcessor to pass through to Agent, allowing for static formatting strings across batches
- Fixes #14
- Fixes #15
- Added a single unit test
Full Changelog: v0.3.1...v0.3.2
Version 0.3.1
What's Changed
- Substantial re-write by @beansrowning in #3
- Adding LLM Providers by @beansrowning in #6
- v0.3.1 by @beansrowning in #7
Full Changelog: v0.3...v0.3.1
Version 0.3
What's Changed
Bug fixes
- Fixed an issue where the tool argument to the agent was modified by reference, leading to growing lists in batch calls (see the sketch after this list)
- Better handling for agent calling a tool that isn't defined at init time
- v0.2 by @beansrowning in #2
- Version 0.3 by @beansrowning in #4
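One classic way a by-reference bug like this manifests in Python is a shared mutable default; a minimal reproduction (illustrative only, not the library's code):

def add_tool(tools=[]):  # the default list is created once and shared
    tools.append("search")
    return tools

add_tool()  # ["search"]
add_tool()  # ["search", "search"]  <- grows across batch calls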
Full Changelog: v0.3-alpha...v0.3
v0.3 Alpha
Changes
Docs
- Added examples
- Updated README
User-facing
- Removed a lot of the OG classes and framework that I started with, as they weren't really serving their purpose anymore
- Added StoppingConditions as a standard class to trigger when an Agent has finished, rather than handling this internally (see the sketches after this list)
  - This is called at the end of every step to determine whether the Agent should terminate or not, and handles answer extraction
- Added Callbacks as an option to handle triggering additional functions at the end of the run with the answer and scratchpad of the calling agent
- A lot of work making abstract classes and getting typing all correct
- Added response_model_handler decorator to handle Pydantic BaseModel validation (sketched below)
  - Either returns the validated BaseModel, or a string to pass back to the Agent indicating the error
- New StructuredOutputAgent, which basically just provides what PredictionAgent does, but assumes you can construct the response object before runtime
  - Might end up getting rid of the prediction bits as a result
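Hypothetical sketches of the stopping condition, callback, and response_model_handler behavior described above (all names, signatures, and attributes here are assumptions, not the library's confirmed API):

from pydantic import BaseModel, ValidationError

class MaxStepsCondition:
    # Stopping condition: called at the end of every step;
    # True means the Agent should terminate
    def __init__(self, max_steps: int):
        self.max_steps = max_steps

    def __call__(self, agent) -> bool:
        return len(agent.scratchpad) >= self.max_steps  # scratchpad attribute assumed

def log_run(answer, scratchpad):
    # Callback: fired at the end of the run with the answer and scratchpad
    print(f"answer={answer!r} after {len(scratchpad)} steps")

def response_model_handler(model: type):
    # Validate raw JSON output against a Pydantic model; on failure,
    # return the error text so it can be passed back to the Agent
    def decorator(fn):
        def wrapper(*args, **kwargs):
            raw = fn(*args, **kwargs)
            try:
                return model.model_validate_json(raw)
            except ValidationError as e:
                return str(e)
        return wrapper
    return decorator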
BatchProcessor Changes
- BatchProcessor now handles the batch object as a kwarg to be inserted into the prompt format string rather than passing it as the first arg to Agent
- You should now include "{batch}" in the BASE_PROMPT attribute where these data should be inserted
- Added an additional _batch_format method, which supplies the logic to convert the batch into a string (useful in the DataFrame case)
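A hypothetical sketch of the new convention (the agent and formatting below are illustrative, not from the library):

import agents

class ClassifyAgent(agents.Agent):
    # "{batch}" marks where the formatted batch text is substituted
    BASE_PROMPT = "Classify each of the following records:\n{batch}"

class LineBatchProcessor(agents.BatchProcessor):
    def _batch_format(self, batch) -> str:
        # One record per line; a DataFrame variant might use to_string()
        return "\n".join(str(item) for item in batch)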
Full Changelog: v0.2...v0.3-alpha
Version 0.2
Changes
- Now allows for passing of GPT args at runtime
- Some additional docs
- prompt_agent now handles auth errors through a backoff decorator instead of a catch block (see the sketch after this list)
- New structured prediction agents
  - PredictionAgent provides text classification on a dataframe from a list of possible labels
  - Uses Pydantic to ensure correct output
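Presumably in the spirit of the backoff package (illustrative only; the actual exception type and settings may differ):

import backoff
import openai

@backoff.on_exception(backoff.expo, openai.AuthenticationError, max_tries=3)
def prompt_agent(prompt: str) -> str:
    ...  # call the model; transient auth failures are retried with backoff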
Version 0.1
Merge pull request #1 from cdcai/dev v0.1