Usage Payload
When usage tracking is enabled, responses include a usage object with token counts and cost.
- request_count: Number of model requests
- cached_tokens: Tokens served from cache (if available)
- input_tokens: Tokens sent to the model
- output_tokens: Tokens generated by the model
- total_tokens: Total tokens for the request
- total_cost: Estimated cost (USD)
- reasoning_tokens (optional): Reasoning tokens (if available)
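For orientation, a usage object with these fields might look roughly like the following; the values are purely illustrative.

```python
# Illustrative usage payload; field names follow the list above, values are made up.
usage = {
    "request_count": 2,
    "cached_tokens": 0,
    "input_tokens": 1250,
    "output_tokens": 430,
    "total_tokens": 1680,
    "total_cost": 0.0042,     # estimated cost in USD
    "reasoning_tokens": 0,    # present only when the model reports reasoning tokens
}
```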
- FastAPI Integration: POST /get_response returns JSON with usage, and the final event: messages in POST /get_response_stream includes usage. A rough client sketch follows this list.
- Running an Agency: use /cost in the terminal demo to see session usage and cost.
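As a rough sketch of reading usage from the FastAPI integration, assuming a locally served agency; the request body shape ({"message": ...}) is hypothetical and may not match your endpoint's schema.

```python
import requests

# Hypothetical local deployment of the FastAPI integration; the request
# body ({"message": ...}) is illustrative and may differ from your schema.
resp = requests.post(
    "http://localhost:8000/get_response",
    json={"message": "Summarize today's support tickets"},
    timeout=120,
)
resp.raise_for_status()

# The response JSON carries the usage object described above.
usage = resp.json()["usage"]
print(f"tokens: {usage['total_tokens']}, cost: ${usage['total_cost']}")
```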
Supported Observability Platforms
Agency Swarm supports three main observability approaches:
- OpenAI Tracing: Built-in tracing using OpenAI's native tools
- Langfuse: Advanced tracing and debugging platform
- AgentOps: Specialized agent monitoring and analytics
Getting Started
Let's walk through setting up each tracing solution. You can use them individually or combine them for monitoring. The steps below cover OpenAI Tracing.
1. Basic Setup
OpenAI tracing is built into Agency Swarm and requires no additional packages.
2. Implementation
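A minimal sketch, assuming a single-agent agency and the trace context manager from the OpenAI Agents SDK that Agency Swarm builds on; the agent names and prompt are illustrative.

```python
import asyncio
from agency_swarm import Agency, Agent
from agents import trace  # tracing context manager from the OpenAI Agents SDK

# Illustrative single-agent agency; your own agents and instructions will differ.
ceo = Agent(name="CEO", instructions="You coordinate the work and report results.")
agency = Agency(ceo)

async def main():
    # Wrapping the call in trace() groups every model request it makes
    # into a single workflow on the OpenAI traces dashboard.
    with trace("Agency workflow"):
        response = await agency.get_response("Give me a one-line status update.")
        # final_output holds the final text (assuming a RunResult-style return).
        print(response.final_output)

asyncio.run(main())
```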
3. View Traces
After running your code, view your traces at platform.openai.com/traces.
Implementation Example
For a complete working example that demonstrates all three tracing methods with a multi-agent agency, see observability.py in the examples directory. The example shows:
- How to set up a basic agency with CEO, Developer, and Data Analyst roles
- Implementation of all three tracing methods (OpenAI, Langfuse, AgentOps)
- A sample tool for data analysis
- Error handling and proper tracing setup
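As a hedged illustration of the kind of data-analysis tool the example includes (the real tool's name, signature, and logic in observability.py may differ), a function tool defined with the Agents SDK decorator could look like this:

```python
from agents import function_tool

@function_tool
def analyze_numbers(values: list[float]) -> str:
    """Return basic descriptive statistics for a list of numbers."""
    # Illustrative stand-in for the data-analysis tool in observability.py.
    if not values:
        return "No data provided."
    mean = sum(values) / len(values)
    return (
        f"count={len(values)}, min={min(values)}, "
        f"max={max(values)}, mean={mean:.2f}"
    )
```

An agent would pick this up through its tools list, e.g. tools=[analyze_numbers] when constructing the Data Analyst agent.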