Architecture¶
Core¶
- `Collector` manages sessions, events, and flushing JSON.
- `aiobs.models` provides Pydantic v2 schemas: `Session`, `Event`, `FunctionEvent`, `ObservedEvent`, and `ObservabilityExport`.
Providers¶
Base provider interface: `aiobs.providers.base.BaseProvider`
OpenAI Provider (N-layered):
- `providers/openai/provider.py`: orchestrates API modules.
- `providers/openai/apis/base_api.py`: base for API modules.
- `providers/openai/apis/chat_completions.py`: instruments `chat.completions.create`.
- `providers/openai/apis/models/*`: Pydantic models for per-API request/response.
Gemini Provider (N-layered):
- `providers/gemini/provider.py`: orchestrates API modules.
- `providers/gemini/apis/base_api.py`: base for API modules.
- `providers/gemini/apis/generate_content.py`: instruments `models.generate_content`.
- `providers/gemini/apis/generate_videos.py`: instruments `models.generate_videos` (Veo video generation).
- `providers/gemini/apis/models/*`: Pydantic models for per-API request/response.
Flow¶
1. Call `observer.observe()` to start a session and install providers.
2. Make LLM API calls (e.g., OpenAI Chat Completions, Gemini Generate Content, Gemini Generate Videos).
3. Providers build typed request/response models and record an `Event` with timing and callsite.
4. `observer.flush()` serializes an `ObservabilityExport` JSON file.
Trace Tree¶
Events are linked via `span_id` and `parent_span_id` fields:

- Each decorated function (`@observe`) generates a unique `span_id`.
- Nested calls inherit the parent's `span_id` as their `parent_span_id`.
- The `trace_tree` field in the export provides a nested view of the execution.
Example trace tree structure:
```json
{
  "trace_tree": [
    {
      "name": "research",
      "span_id": "abc-123",
      "children": [
        {
          "provider": "openai",
          "api": "chat.completions",
          "parent_span_id": "abc-123"
        }
      ]
    }
  ]
}
```
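The flat event list can be folded into this nested view by following the linking rule above. The helper below is a hypothetical sketch of that rule for illustration, not aiobs's actual implementation:

```python
# Hypothetical sketch (not aiobs's implementation): rebuild a nested
# trace tree from flat event dicts linked by span_id / parent_span_id.
def build_trace_tree(events):
    # Function events carry a span_id and can own children.
    nodes = {e["span_id"]: {**e, "children": []} for e in events if "span_id" in e}
    roots = []
    for e in events:
        node = nodes[e["span_id"]] if "span_id" in e else {**e, "children": []}
        parent = e.get("parent_span_id")
        if parent in nodes:
            # Provider events attach under the span that was active at call time.
            nodes[parent]["children"].append(node)
        else:
            roots.append(node)
    return roots
```

Run against the two events from the example above, this yields the same one-root tree with the `chat.completions` event nested under `research`.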
Enhanced Prompt Traces¶
Functions decorated with `@observe(enh_prompt=True)` are tracked separately for prompt analysis:

- Each call generates a unique `enh_prompt_id` (UUID).
- The `auto_enhance_after` parameter specifies how many traces to collect before auto-enhancement runs.
- The `enh_prompt_traces` field in the export contains a list of all `enh_prompt_id` values.
Example with enhanced prompt tracing:
```python
@observe(enh_prompt=True, auto_enhance_after=10)
def summarize(text: str) -> str:
    response = client.chat.completions.create(...)
    return response.choices[0].message.content
```
The JSON export will include:
```json
{
  "function_events": [
    {
      "name": "summarize",
      "enh_prompt": true,
      "enh_prompt_id": "bd089fd9-7d25-46df-8a6f-028cf06410f7",
      "auto_enhance_after": 10,
      ...
    }
  ],
  "trace_tree": [
    {
      "name": "summarize",
      "enh_prompt_id": "bd089fd9-7d25-46df-8a6f-028cf06410f7",
      "children": [...]
    }
  ],
  "enh_prompt_traces": [
    "bd089fd9-7d25-46df-8a6f-028cf06410f7"
  ]
}
```
This structure allows collecting and analyzing enhanced prompt traces across multiple JSON files.
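As a sketch of that cross-file analysis, the snippet below gathers `enh_prompt_id` values from every export matching a glob pattern. It assumes each file follows the schema above; the helper name and file pattern are made up for illustration:

```python
import glob
import json

# Hypothetical helper (not part of aiobs): gather enh_prompt_id values
# from every export JSON file matching a glob pattern.
def collect_enh_prompt_traces(pattern):
    trace_ids = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            export = json.load(f)
        # Missing or empty enh_prompt_traces fields are simply skipped.
        trace_ids.extend(export.get("enh_prompt_traces", []))
    return trace_ids
```

The collected IDs can then be joined back to the matching `function_events` entries for per-prompt analysis.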
Extending aiobs¶
Create custom providers by implementing `BaseProvider`:
```python
from aiobs import BaseProvider

class MyProvider(BaseProvider):
    name = "my-provider"

    @classmethod
    def is_available(cls) -> bool:
        try:
            import my_sdk  # noqa: F401
            return True
        except Exception:
            return False

    def install(self, collector):
        # Monkeypatch or add hooks into your SDK, then
        # call collector._record_event({ ... normalized payload ... })
        def unpatch():
            pass
        return unpatch

# Register before observe()
from aiobs import observer

observer.register_provider(MyProvider())
observer.observe()
```
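To make the install/unpatch contract concrete, here is a self-contained toy. `FakeSDK`, `FakeCollector`, and `DemoProvider` are stand-ins invented for illustration; a real provider would patch an actual SDK and receive aiobs's collector:

```python
# Toy stand-ins (not part of aiobs) showing the install/unpatch contract.
class FakeSDK:
    def call(self):
        return "response"

class FakeCollector:
    def __init__(self):
        self.events = []

    def _record_event(self, payload):
        self.events.append(payload)

class DemoProvider:
    name = "demo"

    def install(self, collector):
        sdk = FakeSDK()
        original = sdk.call

        def patched():
            # Forward the call, then record a normalized event.
            result = original()
            collector._record_event({"provider": self.name, "response": result})
            return result

        sdk.call = patched  # monkeypatch the SDK method
        self.sdk = sdk

        def unpatch():
            sdk.call = original  # restore the original method

        return unpatch

collector = FakeCollector()
provider = DemoProvider()
unpatch = provider.install(collector)
provider.sdk.call()  # recorded by the collector
unpatch()            # subsequent calls are no longer recorded
```

The same shape scales up: `install` patches as many SDK entry points as needed and returns one function that undoes all of them.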