Observare SDK Documentation
Comprehensive telemetry for LangChain agents. View all your agent activity in the Observare dashboard.
Before You Begin
What you will do: Add comprehensive telemetry to your LangChain agents with just 3 lines of code.
Time required: Less than 5 minutes
What you will get: Real-time visibility into agent execution, tool usage, LLM calls, and performance metrics.
Quick Start
Prerequisites
You'll need an Observare API key to use this SDK. Sign in to your Observare dashboard to obtain your API key.
Supported frameworks: LangChain with OpenAI as the LLM provider.
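The complete example later in this guide reads the key from the OBSERVARE_API_KEY environment variable. A minimal fail-fast loader can be sketched as follows (the helper name load_observare_key is illustrative, not part of the SDK):

```python
import os

def load_observare_key(env_var: str = "OBSERVARE_API_KEY") -> str:
    """Return the Observare API key from the environment, failing fast if unset."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before initializing AutoTelemetryHandler")
    return key
```

Passing the key explicitly, as in the Quick Start below, works just as well; an environment variable simply keeps it out of source control.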
Installation
pip install observare-sdk
Integration
Follow these steps to integrate Observare into your LangChain application:
Step 1: Initialize the Telemetry Handler
Import the SDK and initialize it with your API key:
from observare_sdk import AutoTelemetryHandler
handler = AutoTelemetryHandler(api_key="your_observare_api_key")
Step 2: Add Handler to Your LLM
Create your LLM instance and add the telemetry handler to the callbacks parameter. Note that streaming must be enabled:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    streaming=True,
    callbacks=[handler]
)
Step 3: Add Handler to Your Agent
Import the agent modules and create your agent and executor, adding the same handler to the executor callbacks:
from langchain.agents import AgentExecutor, create_react_agent

agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[handler],
    verbose=True
)
Step 4: Use Your Agent Normally
Your agent will now automatically send telemetry data to Observare when invoked:
result = executor.invoke({"input": "What is the weather in San Francisco?"})
print(result["output"])
Complete Integration Example
Below is a complete example showing how to integrate Observare telemetry with a LangChain agent that has access to search and Wikipedia tools:
This example demonstrates:
- Initializing the telemetry handler with an API key from environment variables
- Creating an LLM and agent executor with telemetry callbacks
- Setting up tools for web search and Wikipedia access
- Using a ReAct prompt template for structured agent reasoning
- Invoking the agent and automatically sending telemetry to Observare
import os
from observare_sdk import AutoTelemetryHandler
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent, Tool
from langchain.prompts import PromptTemplate
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.utilities import WikipediaAPIWrapper
handler = AutoTelemetryHandler(
    api_key=os.getenv("OBSERVARE_API_KEY")
)
llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0,
    streaming=True,
    callbacks=[handler]
)
search = DuckDuckGoSearchRun()
wikipedia = WikipediaAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Search the web for current information"
    ),
    Tool(
        name="Wikipedia",
        func=wikipedia.run,
        description="Get information from Wikipedia"
    )
]
prompt = PromptTemplate.from_template("""
You are a helpful AI assistant with access to search and Wikipedia.
You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Question: {input}
{agent_scratchpad}
""")
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[handler],
    verbose=True,
    max_iterations=5
)
if __name__ == "__main__":
    result = executor.invoke({
        "input": "What are the latest developments in quantum computing?"
    })
    
    print("\nFinal Answer:")
print(result["output"])
💡 Pro Tip: After running this code, visit your Observare dashboard to see real-time traces of your agent execution, including all tool calls, LLM interactions, and performance metrics.
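To run the complete example as a script, the required packages and API keys need to be in place first. A sketch, assuming the script is saved as agent_example.py (an illustrative filename) and noting that DuckDuckGoSearchRun and WikipediaAPIWrapper depend on the duckduckgo-search and wikipedia packages:

```shell
# Install the SDK plus the LangChain packages used in the example
pip install observare-sdk langchain langchain-openai langchain-community duckduckgo-search wikipedia

# Both keys must be set: Observare for telemetry, OpenAI for the LLM
export OBSERVARE_API_KEY="your_observare_api_key"
export OPENAI_API_KEY="your_openai_api_key"

python agent_example.py
```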
What Gets Captured
All telemetry data automatically appears in your Observare dashboard for analysis and monitoring.
- Agent Operations: starts, completions, and errors, with full timing
- Tool Executions: which tools are called, their inputs and outputs, and performance
- LLM Calls: model usage, token consumption, costs, and response times
- Performance Metrics: success rates, average response times, and error rates
Ready to Get Started?
Install the SDK and add observability to your AI agents in minutes.