Making AI Capabilities Observable
Organizations deploy hundreds of AI agent skills but have no visibility into which ones are actually used, how much they cost, or whether deprecated capabilities are still active. Skills Trace changes that.
The Problem
AI agents rely on structured skills — tool definitions, prompt templates, capability modules. These skills are well-defined during development: versioned, tested, and reviewed.
But once deployed, they are serialized into LLM request payloads and disappear into the prompt stream. Existing observability tools can track model latency, token usage, and prompt logs — but they cannot tell you which capabilities your agents are invoking.
Skills Trace makes those skills visible again — without modifying your prompts, without adding latency, and without logging prompt content.
The Skills Platform
Skills Trace is part of a broader platform for managing AI agent capabilities across the entire lifecycle.
skills-check
Development-time skill validation, linting, and auditing. Generate fingerprint registries, enforce quality standards, and analyze skill usage reports.
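To make the idea concrete, here is a minimal sketch of fingerprint-registry generation, assuming a fingerprint is a content hash over a canonicalized (key-sorted) skill definition. The `SkillDefinition` shape, the `fingerprint` helper, and the registry layout are illustrative assumptions, not the actual skills-check schema or format.

```typescript
import { createHash } from "node:crypto";

// Illustrative skill definition shape; the real skills-check schema may differ.
interface SkillDefinition {
  name: string;
  version: string;
  description: string;
}

// Fingerprint a skill by hashing a canonical (key-sorted) JSON form,
// so semantically identical definitions always hash the same.
function fingerprint(skill: SkillDefinition): string {
  const canonical = JSON.stringify(skill, Object.keys(skill).sort());
  return createHash("sha256").update(canonical).digest("hex").slice(0, 16);
}

// A registry maps fingerprints back to skill identity for runtime lookup.
function buildRegistry(skills: SkillDefinition[]): Map<string, string> {
  return new Map(
    skills.map((s): [string, string] => [fingerprint(s), `${s.name}@${s.version}`]),
  );
}
```

Canonicalizing before hashing keeps the fingerprint stable across cosmetic changes such as property order, so only a real content change produces a new fingerprint.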
Skills Trace
Runtime detection and telemetry. Middleware that sits inside your AI gateway, detects skills in LLM request payloads, and emits structured telemetry events.
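The detection idea can be sketched in a few lines, assuming an OpenAI-style request payload with a `tools` array. The type names, the `detectSkills` function, and the event shape below are hypothetical illustrations, not the Skills Trace API.

```typescript
// Hypothetical shapes: an OpenAI-style request carrying tool definitions,
// and a telemetry event emitted per detected skill.
interface ToolDef { function: { name: string; description?: string } }
interface ChatRequest { model: string; tools?: ToolDef[] }
interface SkillEvent { skill: string; model: string; detectedAt: number }

// Scan the outgoing payload against a known-skill registry and emit
// one structured event per match. Prompt content is never read or logged.
function detectSkills(
  req: ChatRequest,
  registry: Set<string>,
  emit: (e: SkillEvent) => void,
): number {
  let hits = 0;
  for (const tool of req.tools ?? []) {
    if (registry.has(tool.function.name)) {
      emit({ skill: tool.function.name, model: req.model, detectedAt: Date.now() });
      hits++;
    }
  }
  return hits;
}
```

Because the check only touches tool metadata already present in the payload, it can run inline at the gateway without inspecting prompts or adding a network hop.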
Together, they form a DevSecOps platform for AI capabilities.
Development
Author, lint, and validate skills with skills-check
Deployment
Generate fingerprint registries and sign them for production
Runtime
Detect skill usage at the gateway layer with Skills Trace
Analytics
Monitor usage, cost, drift, and risk across your fleet
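The deployment step above mentions signing registries for production. One way this could work is an HMAC over the serialized registry, verified by the gateway at load time; the scheme and key handling below are illustrative assumptions, not the platform's actual signing format.

```typescript
import { createHmac } from "node:crypto";

// Sign a serialized registry so the gateway can verify it was not
// tampered with between build time and runtime. Key management is out
// of scope here; a real deployment would keep the secret in a KMS.
function signRegistry(registryJson: string, secret: string): string {
  return createHmac("sha256", secret).update(registryJson).digest("hex");
}

function verifyRegistry(registryJson: string, signature: string, secret: string): boolean {
  return signRegistry(registryJson, secret) === signature;
}
```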
Open Source
Skills Trace core is open source. The detection engine, all gateway adapters (Express, Vercel, Cloudflare), and the Python LiteLLM SDK are freely available under a permissive license.
We believe runtime observability for AI capabilities should be accessible to every team building with AI agents — from startups to enterprises.