AI prompts now managed from dashboard with version control

Developers can now define AI prompts in code with `prompts.define()`, then manage versions and overrides from the dashboard without redeploying. Rich AI span inspectors show prompt context and metrics in run traces.
AI prompts are now managed from a dashboard instead of requiring code redeploys. Developers define prompts with `prompts.define()` in their tasks, then manage versions, create overrides, and monitor usage from the Trigger.dev dashboard. The prompt detail page shows each version with metrics (total generations, average tokens, cost, and latency) broken down by version. Overrides take priority over deployed code, allowing teams to test prompt changes without touching source control.
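The override-takes-priority rule can be sketched as a simple fallback chain. This is an illustrative sketch only; the actual resolution happens server-side, and the field names here (`override`, `current`, `deployed`) are assumptions, not the real API shape:

```typescript
// Illustrative precedence logic for which prompt version wins at resolve time.
// Names and shapes are assumptions for this sketch, not the actual data model.
interface PromptVersion {
  content: string;
  model?: string;
}

interface PromptState {
  override?: PromptVersion; // dashboard override, if one exists
  current?: PromptVersion;  // version promoted as current in the dashboard
  deployed: PromptVersion;  // version defined in deployed code
}

function pickVersion(state: PromptState): PromptVersion {
  // Overrides win; otherwise the promoted current version; otherwise code.
  return state.override ?? state.current ?? state.deployed;
}

const chosen = pickVersion({
  override: { content: "Be concise.", model: "gpt-4o-mini" },
  deployed: { content: "Be verbose." },
});
// chosen.content === "Be concise."
```

Because the dashboard state is consulted first, deleting an override immediately falls back to whatever version the deployed code defines.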
Every AI SDK operation (`ai.generateText`, `ai.streamText`, `ai.generateObject`, `ai.embed`, and tool calls) now displays a rich inspector in the run trace. These inspectors show token usage, message threads, model details, and, when linked to a managed prompt, the prompt metadata, input variables, and template content. A new "AI" section appears in the sidebar with links to Prompts and AI Metrics pages.
The feature lives in the webapp dashboard, SDK, and API layers. It's part of a larger platform initiative to provide full-stack observability for AI operations — from prompt definition through execution to cost tracking.
Original GitHub description

- Full prompt management UI: list, detail, override, and version management for AI prompts defined with `prompts.define()`
- Rich AI span inspectors for all AI SDK operations with token usage, messages, and prompt context
- Real-time generation tracking with live polling and filtering
Prompt management
Define prompts in your code with `prompts.define()`, then manage versions and overrides from the dashboard without redeploying:
```ts
import { task, prompts } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const supportPrompt = prompts.define({
  id: "customer-support",
  model: "gpt-4o",
  variables: z.object({
    customerName: z.string(),
    plan: z.string(),
    issue: z.string(),
  }),
  content: `You are a support agent for Acme SaaS.
Customer: {{customerName}} ({{plan}} plan)
Issue: {{issue}}
Respond with empathy and precision.`,
});

export const supportTask = task({
  id: "handle-support",
  run: async (payload: { name: string; plan: string; issue: string }) => {
    const resolved = await supportPrompt.resolve({
      customerName: payload.name,
      plan: payload.plan,
      issue: payload.issue,
    });

    const result = await generateText({
      model: openai(resolved.model ?? "gpt-4o"),
      system: resolved.text,
      prompt: payload.issue,
      ...resolved.toAISDKTelemetry(),
    });

    return { response: result.text };
  },
});
```
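The `{{variable}}` placeholders in the template are substituted from the values passed to `resolve()`. The real substitution happens inside the SDK, but the mechanism can be sketched as a Mustache-style replacement (illustrative only, not the SDK's internal implementation):

```typescript
// Illustrative sketch of {{variable}} substitution — not the SDK internals.
// Placeholders with no matching variable are left intact.
function interpolate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

const rendered = interpolate("Customer: {{customerName}} ({{plan}} plan)", {
  customerName: "Ada",
  plan: "Pro",
});
// rendered === "Customer: Ada (Pro plan)"
```

The Zod `variables` schema in the example above would catch a missing or mistyped variable before substitution ever runs.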
The prompts list page shows each prompt with its current version, model, override status, and a usage sparkline over the last 24 hours.
From the prompt detail page you can:
- Create overrides to change the prompt template or model without redeploying. Overrides take priority over the deployed version when `prompt.resolve()` is called.
- Promote any code-deployed version to be the current version
- Browse generations across all versions with infinite scroll and live polling for new results
- Filter by version, model, operation type, and provider
- View metrics (total generations, avg tokens, avg cost, latency) broken down by version
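The per-version metrics are a plain aggregation over generation records. A minimal sketch, assuming a record shape with `tokens`, `costUsd`, and `latencyMs` fields (these names are illustrative, not the dashboard's actual data model):

```typescript
// Illustrative per-version metrics aggregation. Field names are assumptions.
interface Generation {
  version: string;
  tokens: number;
  costUsd: number;
  latencyMs: number;
}

interface VersionMetrics {
  total: number;
  avgTokens: number;
  avgCostUsd: number;
  avgLatencyMs: number;
}

function metricsByVersion(gens: Generation[]): Map<string, VersionMetrics> {
  // Bucket generations by version, then average each numeric field.
  const byVersion = new Map<string, Generation[]>();
  for (const g of gens) {
    const bucket = byVersion.get(g.version) ?? [];
    bucket.push(g);
    byVersion.set(g.version, bucket);
  }

  const out = new Map<string, VersionMetrics>();
  for (const [version, rows] of byVersion) {
    const n = rows.length;
    const sum = (f: (g: Generation) => number) =>
      rows.reduce((acc, g) => acc + f(g), 0);
    out.set(version, {
      total: n,
      avgTokens: sum((g) => g.tokens) / n,
      avgCostUsd: sum((g) => g.costUsd) / n,
      avgLatencyMs: sum((g) => g.latencyMs) / n,
    });
  }
  return out;
}
```

Grouping by version is what makes an A/B comparison between an override and the deployed version meaningful: each variant accumulates its own cost and latency averages.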
AI span inspectors
Every AI SDK operation now gets a custom inspector in the run trace view:
- `ai.generateText` / `ai.streamText`: shows model, token usage, cost, the full message thread (system prompt, user message, assistant response), and linked prompt details
- `ai.generateObject` / `ai.streamObject`: same as above plus the JSON schema and structured output
- `ai.toolCall`: shows tool name, call ID, and input arguments
- `ai.embed`: shows model and the text being embedded
For generation spans linked to a prompt, a "Prompt" tab shows the prompt metadata, the input variables passed to `resolve()`, and the template content from the prompt version.
All AI span inspectors include a compact timestamp and duration header.
Other improvements
- Resizable panel sizes now persist across page refreshes (patched `@window-splitter/state` to fix snapshot restoration)
- Run page panels also persist their sizes
- Fixed `<div>`-inside-`<p>` DOM nesting warnings in span titles and chat messages
- Added Operations and Providers filters to the AI metrics dashboard