LLM cost tracking and AI span inspector land in Trigger.dev

Developers can now automatically track LLM costs across 145+ models directly in their traces, with a new inspector showing tokens, pricing, messages, and tool calls right in the span details.
The platform now automatically enriches AI spans with cost data. When a span contains `gen_ai.response.model` and token usage information, the system calculates the cost from an in-memory pricing registry and dual-writes the data to both span attributes and a new ClickHouse table.
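The enrichment step described above can be sketched roughly as follows. This is an illustrative sketch only: the function name, attribute keys beyond `gen_ai.response.model`, registry shape, and prices are assumptions for the example, not Trigger.dev's actual internals.

```typescript
// Hypothetical per-million-token pricing entry (illustrative numbers).
type Pricing = { inputPerMTok: number; outputPerMTok: number };

const pricingRegistry = new Map<string, Pricing>([
  ["gpt-4o", { inputPerMTok: 2.5, outputPerMTok: 10 }],
]);

// Given a span's attributes, compute a cost if the model and token
// usage are present and the model exists in the registry.
function enrichSpanCost(attrs: Record<string, unknown>): number | undefined {
  const model = attrs["gen_ai.response.model"] as string | undefined;
  const inTok = attrs["gen_ai.usage.input_tokens"] as number | undefined;
  const outTok = attrs["gen_ai.usage.output_tokens"] as number | undefined;
  if (!model || inTok === undefined || outTok === undefined) return undefined;
  const pricing = pricingRegistry.get(model);
  if (!pricing) return undefined;
  // Cost = (tokens / 1e6) * price-per-million-tokens, per direction.
  return (inTok / 1e6) * pricing.inputPerMTok + (outTok / 1e6) * pricing.outputPerMTok;
}

// Logs the computed cost for a sample span (about $0.0075 with these prices).
console.log(
  enrichSpanCost({
    "gen_ai.response.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 1000,
    "gen_ai.usage.output_tokens": 500,
  })
);
```

A span missing any of the required attributes, or using an unpriced model, simply gets no cost attached, which is what makes the missing-models detection page (below) useful.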
A new AI span inspector sidebar displays model name, token counts (input, output, cached, reasoning), cost breakdown, and the full conversation history including tool calls. Developers can see exactly what their LLM calls cost without leaving the debugging flow.
For operators, an admin dashboard lets teams manage pricing for 145+ models, detect which models in production lack pricing data, and add new model prices with regex pattern matching. The missing models page generates Claude Code-ready prompts to help add pricing for unknown models.
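The "test pattern matching" feature amounts to checking which pricing entry a given model name resolves to. A minimal sketch, assuming a regex-per-entry scheme (the entry shape and patterns here are invented for illustration, not Trigger.dev's actual schema):

```typescript
// Hypothetical pricing entries keyed by regex pattern (illustrative only).
const priceEntries = [
  { pattern: /^example-model(-mini)?$/, inputPerMTok: 2.5, outputPerMTok: 10 },
  { pattern: /^example-family-/, inputPerMTok: 3, outputPerMTok: 15 },
];

// Resolve a model name to the first matching pricing entry, if any.
function matchEntry(model: string) {
  return priceEntries.find((e) => e.pattern.test(model));
}

console.log(matchEntry("example-family-large")?.inputPerMTok); // 3
console.log(matchEntry("totally-unknown-model")); // undefined
```

Regex entries let one pricing row cover a whole model family (dated snapshots, size variants) instead of requiring an exact-name row per release.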
The metrics dual-write to the `llm_metrics_v1` table enables analytics across the entire platform: cost by model, provider, user, task, or time period. A built-in AI Metrics dashboard provides immediate visibility into LLM spend and performance.
Gateway and OpenRouter models use prefix-stripping, so `mistral/mistral-large-3` automatically matches `mistral-large-3` pricing without manual configuration.
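Such prefix-stripping could look like the following sketch (the function name and registry shape are assumptions for illustration):

```typescript
// Try the model ID as-is first, then with the "provider/" prefix removed.
function resolvePricingKey(model: string, registry: Set<string>): string | undefined {
  if (registry.has(model)) return model;
  const slash = model.indexOf("/");
  if (slash !== -1) {
    const stripped = model.slice(slash + 1);
    if (registry.has(stripped)) return stripped;
  }
  return undefined;
}

const known = new Set(["mistral-large-3"]);
console.log(resolvePricingKey("mistral/mistral-large-3", known)); // "mistral-large-3"
```

Checking the exact ID before stripping matters: if a gateway-specific entry ever exists, it should win over the generic one.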
Original GitHub description:
- Automatic LLM cost enrichment for AI SDK spans (`streamText`, `generateText`, `generateObject`), or any other spans that use semantic `gen_ai` attributes, with support for 145+ models
- New AI span inspector sidebar showing model, tokens, cost, messages, tool calls, and response text
- LLM metrics dual-write to the ClickHouse `llm_metrics_v1` table for analytics
- Built-in LLM metrics dashboard (unlinked at the moment)
- Provider cost fallback: uses gateway/OpenRouter reported costs from `providerMetadata` when registry pricing is unavailable
- Prefix-stripping for gateway/OpenRouter model names (e.g. `mistral/mistral-large-3` matches `mistral-large-3` pricing)
- Admin dashboard for managing LLM model pricing (list, create, edit, delete, search, test pattern matching)
- Missing models detection page — queries ClickHouse for unpriced models with sample spans and Claude Code-ready prompts for adding pricing
- AI span seed script (`pnpm run db:seed:ai-spans`) with 51 spans across 12 provider systems for local dev testing
- UI fixes: `completionTokens`/`promptTokens` aliases, `ai.response.object` display for `generateObject`, cache read/write token breakdown
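The provider cost fallback listed above can be sketched like this. The function and field names are illustrative assumptions; the point is only the precedence: registry pricing first, provider-reported cost second.

```typescript
type Usage = { inputTokens: number; outputTokens: number };

// Prefer registry pricing; if the model is unpriced, fall back to the
// cost the gateway/OpenRouter reported in providerMetadata (if any).
function costFor(
  usage: Usage,
  registryCost: ((u: Usage) => number) | undefined,
  providerMetadata: { cost?: number } | undefined
): number | undefined {
  if (registryCost) return registryCost(usage);
  return providerMetadata?.cost;
}

// Registry miss: the provider-reported cost is used instead.
console.log(costFor({ inputTokens: 10, outputTokens: 5 }, undefined, { cost: 0.0003 }));
// Neither source available: no cost is attached.
console.log(costFor({ inputTokens: 10, outputTokens: 5 }, undefined, undefined));
```

This ordering keeps costs consistent with the registry when pricing exists, while still capturing spend for models the registry has not caught up with yet.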