Vercel AI SDK telemetry that doesn't ship your prompts
Most observability stories for LLM agents end the same way. You wire up an SDK, and the dashboard fills with full prompts, full completions, tool arguments, and retrieved documents. Beautiful for debugging. A nightmare for any system where someone outside your team is supposed to trust the data, because every byte of user content is now sitting somewhere a security review has to argue about.

@agentlair/vercel-ai shipped to npm yesterday at v0.1.1. It plugs into the Vercel AI SDK's experimental_telemetry option, letting you track usage and performance without shipping raw prompts and completions to third-party dashboards.
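To ground that: the AI SDK's telemetry already exposes the relevant knobs. Here's a minimal sketch of the call-site configuration using the SDK's own `experimental_telemetry` options; the model, prompt, and `functionId` are placeholders, and it assumes an OpenTelemetry tracer (such as the one @agentlair/vercel-ai provides) is registered globally so spans have somewhere to go.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text, usage } = await generateText({
  // Placeholder model; swap in whatever provider you actually use.
  model: openai("gpt-4o-mini"),
  prompt: "Summarize the incident report.",
  experimental_telemetry: {
    isEnabled: true,
    // Keep raw prompts and completions out of the exported spans.
    // Token counts, latency, and model metadata are still recorded.
    recordInputs: false,
    recordOutputs: false,
    // Hypothetical label so spans are queryable by feature.
    functionId: "summarize-incident",
  },
});

// usage.totalTokens and friends are available locally regardless
// of what the telemetry pipeline records.
console.log(usage);
```

With `recordInputs` and `recordOutputs` off, the spans that reach your backend carry timing, token usage, and model identifiers, which is exactly the shape of data a security review can reason about without arguing over user content.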