Mastra is a TypeScript framework. This integration is for TypeScript/Node.js only. For Python agent frameworks, see LangChain or CrewAI.
## What Gets Tracked Automatically

- **Sessions:** One session per `generate()` or `stream()` call — input, output, status, duration.
- **LLM Steps:** Every LLM call within an agent step — model, tokens, finish reason.
- **Tool Calls:** Each tool execution with input args, output result, and tool call ID.
- **Tokens & Cost:** Prompt and completion tokens, with cost auto-calculated per provider.
- **Multi-Turn:** `memory.thread` maps to `convoId`, `memory.resource` maps to `userId` — zero config.
- **PII Redaction:** Auto-redact emails, phone numbers, SSNs, and more from tracked inputs.
## Installation

```bash
npm install @sentrial/mastra @sentrial/sdk @mastra/core
```
## Quick Start

One function call, then use your agent exactly like before.

```ts
import { Agent } from '@mastra/core/agent';
import { instrumentAgent } from '@sentrial/mastra';

// Your existing Mastra agent
const agent = new Agent({
  name: 'My Agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-4o',
  tools: { /* your tools */ },
});

// Wrap with Sentrial — that's it
const trackedAgent = instrumentAgent(agent, {
  apiKey: process.env.SENTRIAL_API_KEY,
  agentName: 'my-agent',
});

// Use normally — everything is tracked automatically
const result = await trackedAgent.generate('What is the weather today?');
// Session created with: input, output, tokens, cost, tool calls, latency
```
## Configuration Options

```ts
const trackedAgent = instrumentAgent(agent, {
  apiKey: process.env.SENTRIAL_API_KEY, // required
  apiUrl: 'https://api.sentrial.com',   // optional — defaults to production
  agentName: 'my-agent',                // optional — defaults to agent.name or agent.id
  failSilently: true,                   // optional — true by default
  pii: true,                            // optional — auto-fetch PII redaction config
});
```
**Fail-Safe by Default.** With `failSilently: true` (the default), any Sentrial API errors are logged but never crash your app — your agent calls always go through. Set it to `false` during development to see full errors.
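One common pattern (an illustrative sketch, not a prescribed setup) is to drive `failSilently` from the environment, so tracking errors surface locally but stay silent in production:

```typescript
// Sketch: loud tracking errors in dev, silent fail-safe in production.
// Assumes the `agent` and SENTRIAL_API_KEY from the examples above.
const tracked = instrumentAgent(agent, {
  apiKey: process.env.SENTRIAL_API_KEY,
  agentName: 'my-agent',
  failSilently: process.env.NODE_ENV === 'production',
});
```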
## Multi-Turn Conversations

When you use Mastra's memory system, conversations are automatically linked. No extra config needed.

```ts
const THREAD_ID = 'convo-abc-123';
const USER_ID = 'user-42';

// Turn 1
await trackedAgent.generate('Hi, my name is Alice.', {
  memory: { resource: USER_ID, thread: THREAD_ID },
});

// Turn 2 — automatically linked to the same conversation
await trackedAgent.generate('What was my name again?', {
  memory: { resource: USER_ID, thread: THREAD_ID },
});
```
In your Sentrial dashboard, both sessions appear linked under the same conversation ID. The mapping is:

| Mastra | Sentrial |
|---|---|
| `memory.resource` | `userId` |
| `memory.thread` | `convoId` |
## Tool Tracking

When your agent calls tools, each execution is automatically recorded as an event — with the tool name, input args, and output result.

```ts
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

const lookupUser = createTool({
  id: 'lookup-user',
  description: 'Look up a customer by ID',
  inputSchema: z.object({ userId: z.string() }),
  execute: async ({ userId }) => {
    // Automatically traced: input args, return value
    return await db.users.find(userId);
  },
});

const agent = new Agent({
  name: 'Support Agent',
  model: 'openai/gpt-4o-mini',
  tools: { lookupUser },
  instructions: 'You help customers with account lookups.',
});

const tracked = instrumentAgent(agent, {
  apiKey: process.env.SENTRIAL_API_KEY,
  agentName: 'customer-support',
});

const result = await tracked.generate('Look up user USR-123', {
  maxSteps: 5,
});
// Session events: llm:openai:gpt-4o-mini → lookup-user → llm:openai:gpt-4o-mini
```
## Streaming

Streaming works transparently. The wrapper intercepts the `onFinish` callback, records the full response, and completes the session when the stream ends.

```ts
const stream = await trackedAgent.stream(
  'Give me a summary of my recent orders.',
  {
    memory: { resource: 'user-42', thread: 'convo-abc' },
    maxSteps: 5,
    onStepFinish: (step) => {
      // Your callback still works — Sentrial injects alongside it
      console.log('Step done:', step.toolCalls?.length, 'tool calls');
    },
  }
);

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
// Session recorded when stream finishes: full text, tokens, cost, latency
```
## PII Redaction

Enable PII redaction to automatically scrub sensitive data from tracked inputs before they reach Sentrial's servers.

```ts
const tracked = instrumentAgent(agent, {
  apiKey: process.env.SENTRIAL_API_KEY,
  agentName: 'support-agent',
  pii: true, // Auto-fetch redaction config from your Sentrial org
});

// User sends: "My email is john@example.com and SSN is 123-45-6789"
// Sentrial stores: "My email is [EMAIL] and SSN is [SSN]"
```
PII redaction happens client-side before data leaves your server. Configure which PII types to redact in your Sentrial dashboard under Settings > PII Redaction.
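To make the client-side behavior concrete, here is a simplified sketch of what a redaction pass does. The real patterns come from your org's config and the SDK's internals may differ — the two regexes below are illustrative examples only:

```typescript
// Illustrative only — actual patterns are fetched from your Sentrial org config.
const REDACTIONS: Array<[RegExp, string]> = [
  // Simplified email pattern
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, '[EMAIL]'],
  // US SSN in NNN-NN-NNNN form
  [/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]'],
];

function redactPii(text: string): string {
  // Apply each pattern in order, replacing matches with the label
  return REDACTIONS.reduce((t, [pattern, label]) => t.replace(pattern, label), text);
}

console.log(redactPii('My email is john@example.com and SSN is 123-45-6789'));
// "My email is [EMAIL] and SSN is [SSN]"
```

Because this runs before the HTTP request is made, the raw values never leave your server.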
## Per-Call Overrides

Override `userId`, `convoId`, or attach extra metadata on a per-call basis:

```ts
const result = await trackedAgent.generate('How can I help?', {
  sentrial: {
    userId: 'override-user-id',
    convoId: 'override-convo-id',
    metadata: {
      satisfaction: 5,
      source: 'web-chat',
      priority: 'high',
    },
  },
});
```
## Error Handling

If an agent call throws, the error is recorded and the session is marked as failed. The original error is always re-thrown, so your error handling works normally.

```ts
try {
  await trackedAgent.generate('Do something risky');
} catch (error) {
  // Error is recorded in Sentrial as a failed session
  // with error type and message, then re-thrown here
  console.error(error);
}
```
## Full Production Example

A complete customer support agent with multi-turn memory, tools, and PII redaction:

```ts
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { instrumentAgent } from '@sentrial/mastra';
import { z } from 'zod';

// Define tools
const lookupUser = createTool({
  id: 'lookup-user',
  description: 'Look up a customer by name or email',
  inputSchema: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    return await db.users.search(query);
  },
});

const checkOrder = createTool({
  id: 'check-order',
  description: 'Check order status by order ID',
  inputSchema: z.object({ orderId: z.string() }),
  execute: async ({ orderId }) => {
    return await db.orders.find(orderId);
  },
});

// Create and instrument the agent
const agent = new Agent({
  name: 'Customer Support',
  model: 'openai/gpt-4o-mini',
  tools: { lookupUser, checkOrder },
  instructions: `You are a friendly customer support agent.
Help customers with account lookups and order status checks.`,
});

const support = instrumentAgent(agent, {
  apiKey: process.env.SENTRIAL_API_KEY,
  agentName: 'customer-support',
  pii: true,
});

// Handle a multi-turn support conversation
async function handleConversation(userId: string, threadId: string) {
  // Turn 1: Account lookup
  const turn1 = await support.generate(
    'Hi, I need help with my account. My email is alice@example.com',
    {
      maxSteps: 5,
      memory: { resource: userId, thread: threadId },
    }
  );
  console.log(turn1.text);

  // Turn 2: Order status (same conversation)
  const turn2 = await support.generate(
    "What's the status of order ORD-12345?",
    {
      maxSteps: 5,
      memory: { resource: userId, thread: threadId },
    }
  );
  console.log(turn2.text);
}

handleConversation('user-42', `thread-${Date.now()}`);
```
## Supported Models

Cost is auto-calculated per provider. The provider is detected from the Mastra model string (e.g., `openai/gpt-4o`).

| Provider | Model String | Example Models |
|---|---|---|
| OpenAI | `openai/<model>` | gpt-4o, gpt-4.1, o3, o4-mini |
| Anthropic | `anthropic/<model>` | claude-sonnet-4, claude-haiku-3.5 |
| Google | `google/<model>` | gemini-2.5-pro, gemini-2.5-flash |
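Conceptually, detection is just splitting the model string on its first `/`. This sketch shows the idea — the SDK's actual parsing is internal, and `parseModelString` is a hypothetical helper, not part of any package:

```typescript
// Illustrative sketch of provider detection from a Mastra model string.
// A model string has the form "<provider>/<model>".
function parseModelString(modelString: string): { provider: string; model: string } {
  const slash = modelString.indexOf('/');
  if (slash === -1) {
    // No provider prefix — treat the whole string as the model name
    return { provider: 'unknown', model: modelString };
  }
  return {
    provider: modelString.slice(0, slash),
    model: modelString.slice(slash + 1),
  };
}

console.log(parseModelString('openai/gpt-4o'));
// { provider: 'openai', model: 'gpt-4o' }
```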
## What You See in the Dashboard

Each agent call creates a session in Sentrial with:

- **Session Overview:** Agent name, user ID, status, duration, cost, conversation linkage.
- **Input / Output:** User prompt (PII-redacted if enabled) and full agent response.
- **Events Timeline:** LLM steps and tool calls as events with input, output, tokens, and cost.
- **Conversation View:** Multi-turn sessions linked by `convoId`, displayed as a threaded conversation.
## Next Steps