LangChain Integration

Automatic tracking for LangChain agents with zero configuration.

One-Line Integration

Add our callback handler to your AgentExecutor and get instant visibility into every step, thought, and action.

Quick Start

1. Install

pip install "sentrial[langchain]"

2. Initialize

from sentrial import SentrialClient, SentrialCallbackHandler

# Initialize Sentrial
client = SentrialClient(
    api_key="your-api-key",
    project_id="your-project-id"
)

# Create a session
session_id = client.create_session(name="My LangChain Agent")

# Create callback handler
handler = SentrialCallbackHandler(client, session_id)

3. Add to Your Agent

from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI

# Your existing agent setup
llm = ChatOpenAI(model="gpt-4")
agent = create_react_agent(llm, tools, prompt)

# Add Sentrial tracking - that's it!
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[handler],  # 👈 Just add this!
    verbose=True
)

# Run as normal
result = agent_executor.invoke({
    "input": "Help user reset their password"
})

What Gets Tracked Automatically

🧠 Chain of Thought

Every reasoning step and decision your agent makes.

Agent thought: "I need to search the knowledge base for password reset articles"

🔧 Tool Calls

All tool invocations with inputs and outputs.

Tool: search_kb(query="password reset") → [KB-001, KB-002]

🤖 LLM Calls

Prompts, responses, and token usage for every LLM call.

Model: gpt-4 | Tokens: 234 | Duration: 1.2s
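
For reference, LangChain reports token usage in the `llm_output` payload passed to `on_llm_end`, so a record like the one above can be derived from it. The sketch below shows that extraction; `extract_usage` is an illustrative helper, not part of the Sentrial SDK.

```python
def extract_usage(llm_output: dict) -> dict:
    """Pull model name and token counts out of an OpenAI-style llm_output
    payload, defaulting to zero when a field is absent."""
    usage = llm_output.get("token_usage", {})
    return {
        "model": llm_output.get("model_name", "unknown"),
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }

# Shape matches what langchain_openai reports after a chat completion
payload = {
    "model_name": "gpt-4",
    "token_usage": {"prompt_tokens": 180, "completion_tokens": 54, "total_tokens": 234},
}
print(extract_usage(payload)["total_tokens"])  # 234
```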

⚠️ Errors

Tool failures and exceptions with full context.

Error: API timeout at search_kb after 30s
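
To give a feel for what "full context" means here, the sketch below assembles the kind of record such an error event could carry. `build_error_event` is a hypothetical helper for illustration, not Sentrial API.

```python
import time

def build_error_event(tool_name: str, exc: Exception, started_at: float) -> dict:
    """Assemble an error record: tool name, exception type and message,
    and elapsed time since the tool started."""
    return {
        "type": "tool_error",
        "tool": tool_name,
        "error": f"{type(exc).__name__}: {exc}",
        "duration_s": round(time.monotonic() - started_at, 2),
    }

started = time.monotonic()
try:
    raise TimeoutError("search_kb timed out after 30s")  # simulated tool failure
except TimeoutError as exc:
    event = build_error_event("search_kb", exc, started)

print(event["error"])  # TimeoutError: search_kb timed out after 30s
```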

Configuration Options

handler = SentrialCallbackHandler(
    client=client,
    session_id=session_id,
    
    # Optional settings
    verbose=True,                   # Print tracking events to console
    track_chain_of_thought=True,    # Track agent reasoning
    track_tool_calls=True,          # Track tool invocations
    track_llm_calls=True,           # Track LLM API calls
    track_errors=True,              # Track tool failures
    ignore_tools=["internal_tool"], # Don't track specific tools
    track_token_usage=True,         # Track token counts
    track_costs=True,               # Estimate costs (requires model pricing)
)
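
As an illustration of the per-token arithmetic that `track_costs=True` implies, here is a minimal sketch. The price figures are placeholders for this example, not Sentrial's actual pricing table.

```python
# USD per 1,000 tokens: (prompt rate, completion rate) — placeholder values
PRICING_PER_1K = {
    "gpt-4": (0.03, 0.06),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one LLM call from its token counts."""
    prompt_rate, completion_rate = PRICING_PER_1K[model]
    return (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate

print(round(estimate_cost("gpt-4", 180, 54), 4))  # 0.0086
```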

Complete Example

from sentrial import SentrialClient, SentrialCallbackHandler
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.tools import Tool

# Define your tools
def search_kb(query: str) -> str:
    """Search the knowledge base."""
    # Your implementation
    return "Found 3 articles about password reset"

def send_email(request: str) -> str:
    """Send an email. ReAct tools receive a single string argument,
    so this expects "recipient: message" and splits it."""
    to, _, content = request.partition(":")
    # Your implementation
    return f"Email sent to {to.strip()}"

tools = [
    Tool(
        name="search_knowledge_base",
        func=search_kb,
        description="Search the knowledge base for articles"
    ),
    Tool(
        name="send_email",
        func=send_email,
        description="Send an email to a user"
    )
]

# Create a ReAct prompt (the format instructions are required so the
# agent's output can be parsed into actions)
prompt = PromptTemplate.from_template("""
You are a helpful customer support agent. Answer the question as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original question

Begin!

Question: {input}
Thought: {agent_scratchpad}
""")

# Initialize Sentrial
client = SentrialClient(
    api_key="your-api-key",
    project_id="your-project-id"
)

session_id = client.create_session(
    name="Support Agent - User 123",
    metadata={"user_id": "user_123"}
)

handler = SentrialCallbackHandler(client, session_id, verbose=True)

# Create and run agent
llm = ChatOpenAI(model="gpt-4", temperature=0)
agent = create_react_agent(llm, tools, prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    callbacks=[handler],  # 👈 Sentrial tracking
    verbose=True,
    max_iterations=10
)

# Execute
result = agent_executor.invoke({
    "input": "User user_123 forgot their password. Help them reset it."
})

print(result["output"])

# Close session
client.close_session(session_id, status="success")
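In production, wrap the run so the session is closed even when the agent raises. Here is a minimal sketch of that pattern, shown with a stand-in client so it runs anywhere; the real `SentrialClient` would be called the same way, assuming `close_session` accepts the `status` values shown above.

```python
def run_and_close(client, session_id, run):
    """Invoke the agent and always close the session, recording failures."""
    status = "success"
    try:
        return run()
    except Exception:
        status = "error"
        raise
    finally:
        client.close_session(session_id, status=status)

# Stand-in for SentrialClient, just to demonstrate the pattern
class FakeClient:
    def close_session(self, session_id, status):
        self.last = (session_id, status)

fake = FakeClient()
try:
    run_and_close(fake, "sess-1", lambda: 1 / 0)  # simulated agent failure
except ZeroDivisionError:
    pass
print(fake.last)  # ('sess-1', 'error')
```

With the real client, `run` would be `lambda: agent_executor.invoke({...})`.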

Framework Versions

Supported LangChain Versions

  • LangChain 0.1.x ✅
  • LangChain 0.2.x ✅
  • LangChain 0.3.x ✅ (latest)

Compatible LLM Providers

  • OpenAI (GPT-3.5, GPT-4, GPT-4 Turbo)
  • Anthropic (Claude 2, Claude 3)
  • Google (Gemini, PaLM)
  • Azure OpenAI
  • Local models (Ollama, etc.)

Advanced: Custom Callbacks

For fine-grained control, extend the base callback handler:

from sentrial.langchain import SentrialCallbackHandler

class CustomSentrialHandler(SentrialCallbackHandler):
    def on_tool_end(self, output, **kwargs):
        # Custom logic before tracking
        if self.should_track_tool(kwargs.get("name")):
            super().on_tool_end(output, **kwargs)
    
    def should_track_tool(self, tool_name):
        # Your custom filtering logic
        return tool_name not in ["cache_lookup", "internal_tool"]

# Use custom handler
handler = CustomSentrialHandler(client, session_id)

Performance Impact

The callback handler adds minimal overhead (<5ms per event). Network calls to Sentrial are asynchronous and don't block your agent execution.
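
The non-blocking behavior described above follows the standard fire-and-forget pattern: events go onto an in-memory queue, and a background worker ships them over the network while the agent thread continues. The sketch below is a generic reconstruction of that pattern, not Sentrial's actual internals.

```python
import queue
import threading

events = queue.Queue()
shipped = []

def worker():
    """Drain the queue in the background; a None sentinel shuts it down."""
    while True:
        event = events.get()
        if event is None:
            break
        shipped.append(event)  # real code would POST to the tracking API here
        events.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Enqueueing returns immediately — the agent thread never waits on the network
events.put({"type": "tool_call", "tool": "search_kb"})
events.put(None)  # shut down the worker
t.join()
print(len(shipped))  # 1
```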