3 lines of setup. Then use generateText and streamText exactly like you normally would.
```typescript
import { configureVercel, wrapAISDK } from '@sentrial/sdk';
import * as ai from 'ai';
import { openai } from '@ai-sdk/openai';

// 1. Configure Sentrial
configureVercel({
  apiKey: process.env.SENTRIAL_API_KEY,
  defaultAgent: 'my-ai-agent', // groups sessions by agent name
  userId: 'user_123', // optional — string or () => string
});

// 2. Wrap the AI SDK
const { generateText, streamText, generateObject, streamObject } = wrapAISDK(ai);

// 3. Use normally — everything is tracked automatically
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the capital of France?',
});
// Session created with: input, output, tokens, cost, latency
```
```typescript
configureVercel({
  apiKey: process.env.SENTRIAL_API_KEY, // required
  apiUrl: 'https://api.sentrial.com', // optional — defaults to production
  defaultAgent: 'my-agent', // optional — agent name for grouping
  userId: 'user_123', // optional — string or function
  convoId: 'convo_abc', // optional — string or function
  failSilently: true, // optional — true by default
});
```
Fail-Safe by Default — With failSilently: true (the default), any Sentrial API errors are logged but never crash your app. Your AI calls always go through. Set to false during development to see full errors.
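During local development, flipping the flag makes misconfiguration visible immediately:

```typescript
import { configureVercel } from '@sentrial/sdk';

// Development config: surface Sentrial API errors instead of swallowing them.
// (Gating on NODE_ENV is an illustrative pattern, not required by the SDK.)
configureVercel({
  apiKey: process.env.SENTRIAL_API_KEY,
  failSilently: process.env.NODE_ENV === 'production', // false in dev → errors throw
});
```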
userId and convoId accept functions for per-request resolution. This is essential for apps using Clerk, NextAuth, or any auth system where the user changes per request.
```typescript
import { auth } from '@clerk/nextjs/server';

configureVercel({
  apiKey: process.env.SENTRIAL_API_KEY,
  defaultAgent: 'my-chatbot',
  userId: () => auth().userId ?? 'anonymous', // resolved per AI call
  convoId: () => getConversationId(), // resolved per AI call
});
```
Functions are called at session creation time (every generateText/streamText call), so they always get the current request’s values.

You can also set them per-instance via wrapAISDK.
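A per-instance override might look like the following. This is a sketch: the per-instance convoId option appears later in this guide, while passing userId the same way (and the two helper functions) are assumptions.

```typescript
import { wrapAISDK } from '@sentrial/sdk';
import * as ai from 'ai';

// Assumption: wrapAISDK accepts the same userId/convoId options as
// configureVercel. getCurrentUserId/getConversationId are hypothetical helpers.
const { generateText, streamText } = wrapAISDK(ai, {
  userId: () => getCurrentUserId(),   // resolved on every call from this wrapper
  convoId: () => getConversationId(), // resolved on every call from this wrapper
});
```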
When you pass tools to generateText or streamText, every tool’s execute function is automatically wrapped. Each execution is recorded with input args, output, duration, and any errors. Zero changes to your tool code.
```typescript
import { z } from 'zod';

const { generateText } = wrapAISDK(ai);

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: "What's the weather in San Francisco?",
  tools: {
    getWeather: {
      description: 'Get weather for a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => {
        // Automatically traced: input args, return value, duration
        const res = await fetch(`https://api.weather.com/${location}`);
        return res.json();
      },
    },
    searchWeb: {
      description: 'Search the web',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => {
        // Errors are caught, recorded, then re-thrown
        return await searchAPI(query);
      },
    },
  },
});
```
In your Sentrial dashboard, each tool call appears as a child event under the session — with the tool name, input, output, and execution time.
Use convoId to link multiple AI SDK calls into a single conversation thread. Set it globally in configureVercel() or per-instance in wrapAISDK().
```typescript
// Per-instance: link all calls from this wrapper to the same conversation
const { generateText } = wrapAISDK(ai, { convoId: `user-${userId}-${Date.now()}` });

// Turn 1
await generateText({ model: openai('gpt-4o'), prompt: 'My name is Alice.' });

// Turn 2 — automatically linked to the same conversation in the dashboard
await generateText({ model: openai('gpt-4o'), prompt: 'What was my name?' });
```
You can also pass a custom SentrialClient instance if you need full control:
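The exact shape of this API is not shown in this guide; as a rough sketch, assuming `@sentrial/sdk` exports a `SentrialClient` class and `wrapAISDK` accepts it as a `client` option:

```typescript
import { SentrialClient, wrapAISDK } from '@sentrial/sdk';
import * as ai from 'ai';

// Assumption: construct a client directly instead of calling configureVercel
const client = new SentrialClient({
  apiKey: process.env.SENTRIAL_API_KEY,
  apiUrl: 'https://api.sentrial.com',
});

// Assumption: wrapAISDK accepts the client as an option
const { generateText, streamText } = wrapAISDK(ai, { client });
```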
When using maxSteps for agentic loops, each step is automatically tracked as a separate event with its own token usage, tool calls, and finish reason. The session aggregates totals across all steps.
```typescript
const { generateText } = wrapAISDK(ai);

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Find the weather in NYC and summarize it.',
  tools: { getWeather: weatherTool },
  maxSteps: 5,
});
// Session events: step 1 (tool_call) → step 2 (tool_result) → step 3 (final answer)
// Each step tracked with individual token counts; session has aggregated totals
```
Streaming works transparently. The wrapper intercepts the text stream, accumulates the full response, and records the session when the stream completes.
```typescript
const { streamText } = wrapAISDK(ai);

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a haiku about debugging',
});

// Option 1: Consume the text stream directly
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
// Session recorded when stream finishes: full text, tokens, cost, latency

// Option 2: Use with Next.js streaming response
// return result.toDataStreamResponse();
```
A complete Next.js API route with streaming, tool calls, dynamic user tracking, and Sentrial:
```typescript
// app/api/chat/route.ts
import { configureVercel, wrapAISDK } from '@sentrial/sdk';
import { auth } from '@clerk/nextjs/server';
import * as ai from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Configure once at module level — userId resolved per request
configureVercel({
  apiKey: process.env.SENTRIAL_API_KEY,
  defaultAgent: 'nextjs-chat',
  userId: () => auth().userId ?? 'anonymous',
});

export async function POST(request: Request) {
  const { messages, conversationId } = await request.json();

  // Per-instance convoId for this conversation thread
  const { streamText } = wrapAISDK(ai, {
    convoId: conversationId,
  });

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful assistant.',
    messages,
    tools: {
      searchKnowledgeBase: {
        description: 'Search internal docs',
        parameters: z.object({ query: z.string() }),
        execute: async ({ query }) => {
          return { results: ['doc1', 'doc2'] };
        },
      },
    },
  });

  // Stream response to client — session auto-completes when done
  return result.toDataStreamResponse();
}
```
No Clerk? Replace auth().userId with whatever your auth system provides — session.user.id, req.headers['x-user-id'], getServerSession().user.id, etc. Any function that returns a string works.
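For instance, a header-based resolver might look like this (the `x-user-id` header name is illustrative; substitute whatever your auth layer provides):

```typescript
// Illustrative: derive the userId from a request header rather than Clerk.
// Falls back to 'anonymous' when the header is absent, mirroring the
// auth().userId ?? 'anonymous' pattern used elsewhere in this guide.
function userIdFromHeaders(headers: Record<string, string | undefined>): string {
  return headers['x-user-id'] ?? 'anonymous';
}

console.log(userIdFromHeaders({ 'x-user-id': 'user_42' })); // "user_42"
console.log(userIdFromHeaders({})); // "anonymous"
```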
If an AI call or tool execution throws, the error is recorded and the session is marked as failed. The original error is always re-thrown so your app’s error handling works normally.
```typescript
try {
  const { text } = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Hello!',
  });
} catch (error) {
  // Error is recorded in Sentrial as a failed session
  // with error type and message, then re-thrown here
  console.error(error);
}
```