
MCP Server Analytics

If you run an MCP server, you have no idea how AI clients actually use it. There are no access logs. No built-in dashboards. No APM tool that understands the Model Context Protocol. You ship tools, AI clients invoke them, and you hear nothing back. Not unless something breaks badly enough for a user to file an issue.

BetterMeter wraps your MCP tool handlers to track every invocation automatically. One function call gives you full visibility into which tools are called, which AI clients call them, how long they take, and whether they succeed or fail, all without modifying your tool logic.

One line to integrate

Install the @bettermeter/node SDK, initialize it with your site ID and API key, and call wrapMcpServer(). That's it. Every tool invocation is tracked from that point forward.

Auto-track all tools
import { BetterMeter } from "@bettermeter/node";

const bm = new BetterMeter({
  siteId: "my-mcp-server",
  apiKey: "bm_...",
});

// Wraps server.tool() and tracks every invocation
bm.wrapMcpServer(server);
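
Under the hood, wrapping a tool handler amounts to timing each call and recording an event, while inputs and outputs pass through untouched. A minimal sketch of the idea, not the SDK's actual internals (the `TrackedEvent` shape and `record` sink are illustrative names):

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

interface TrackedEvent {
  tool: string;
  durationMs: number;
  ok: boolean;
}

// Wrap a handler so every invocation is timed and recorded,
// while arguments and results pass through unmodified.
function wrapHandler(
  tool: string,
  handler: ToolHandler,
  record: (e: TrackedEvent) => void,
): ToolHandler {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      record({ tool, durationMs: Date.now() - start, ok: true });
      return result;
    } catch (err) {
      record({ tool, durationMs: Date.now() - start, ok: false });
      throw err; // failures still propagate to the AI client
    }
  };
}
```

Because the wrapper only observes the call, a tool's behavior is identical with or without it.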

Why MCP analytics matter

MCP servers are the new APIs. They expose tools that AI assistants call on behalf of users: reading files, querying databases, deploying code, managing infrastructure. But unlike REST APIs, which have decades of observability tooling, MCP servers ship with zero built-in visibility.

Traditional APM tools don't understand the MCP protocol. Datadog, New Relic, and Sentry can't distinguish between a tool invocation and a random function call. There are no access logs. No request tracing. No way to see which AI client made the call or whether the tool succeeded. You're flying blind.

BetterMeter is purpose-built for MCP. It understands the protocol, identifies the client on the other end, and gives you a dashboard that answers the questions REST API developers take for granted: what's being called, by whom, how often, and how reliably.

What you see

  • Top tools with invocation counts. See which tools are called most frequently, with total invocations and success rates for each.
  • Client breakdown. Understand the distribution across Claude Code, Cursor, Windsurf, and other MCP clients connecting to your server.
  • Latency per tool. Average, p50, and p95 execution duration for every tool, so you can identify slow handlers before users complain.
  • Error rates and types. Which tools fail most often, what error types occur, and how failure rates trend over time.
  • Daily trends and unique callers. Invocation volume and distinct caller count over time, with period-over-period comparison.
  • Optional token usage tracking. Track input and output token counts per tool call when your tools interact with language models.
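
The p50 and p95 figures are standard latency percentiles: the durations below which 50% and 95% of invocations complete. As a point of reference, here is how such a percentile can be computed from a list of recorded durations using the nearest-rank method (a generic sketch, not the dashboard's exact implementation):

```typescript
// Nearest-rank percentile over recorded durations in milliseconds.
function percentile(durations: number[], p: number): number {
  const sorted = [...durations].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

For durations of 100–500 ms in steps of 100, this yields a p50 of 300 ms and a p95 of 500 ms.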

How client detection works

When an AI client connects to your MCP server, BetterMeter identifies it through multiple signals. For stdio-based transports, the wrapper inspects the process environment and parent process metadata to determine which AI application spawned the connection. For SSE and HTTP transports, user-agent strings and connection headers provide client identification.

This means you see “Claude Code” or “Cursor” in your dashboard rather than anonymous invocation counts. You can compare adoption across clients, track which tools each client prefers, and understand how different AI assistants interact with your server.
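
For HTTP-style transports, the core of this boils down to matching known signatures in the user-agent header. A simplified sketch of that step, with an illustrative signature table (real detection combines more signals, as described above):

```typescript
// Known substrings mapped to display names. Illustrative only.
const CLIENT_SIGNATURES: Array<[string, string]> = [
  ["claude-code", "Claude Code"],
  ["cursor", "Cursor"],
  ["windsurf", "Windsurf"],
];

// Map a user-agent string to a known MCP client name, if any.
function detectClient(userAgent: string): string {
  const ua = userAgent.toLowerCase();
  for (const [needle, name] of CLIENT_SIGNATURES) {
    if (ua.includes(needle)) return name;
  }
  return "unknown";
}
```

Unrecognized clients still appear in the dashboard, just without a friendly name.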

Who this is for

  • MCP server authors publishing tools to the community. Understand which tools get traction, which clients drive adoption, and where to invest development effort.
  • AI tool builders tracking product adoption. Measure how developers use your tools through AI clients, identify the most valuable integrations, and track growth.
  • Enterprise teams monitoring internal MCP infrastructure. Audit which tools internal AI assistants invoke, enforce usage policies, and track reliability across your organization's tool ecosystem.

Privacy

BetterMeter tracks only tool names, client names, and execution metadata: duration, success/failure status, and optional token counts. Tool input parameters and output content are never sent to BetterMeter.

The analytics wrapper is transparent to your tool logic. Your handlers run exactly as before, with no modification to inputs or outputs. Events are batched and sent asynchronously, so tracking never blocks your tool's response. If the analytics endpoint is unreachable, events are silently dropped. Your MCP server keeps working normally.
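
The fire-and-forget behavior described above can be sketched as a small queue that flushes in the background and swallows delivery errors. This is a conceptual illustration, not the SDK's implementation; the `send` function stands in for the network call:

```typescript
class EventBatcher<T> {
  private queue: T[] = [];

  constructor(
    private send: (batch: T[]) => Promise<void>,
    private maxBatch = 20,
  ) {}

  // Enqueue an event; kick off a background flush once the batch fills.
  track(event: T): void {
    this.queue.push(event);
    if (this.queue.length >= this.maxBatch) void this.flush();
  }

  async flush(): Promise<void> {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0);
    try {
      await this.send(batch);
    } catch {
      // Endpoint unreachable: drop silently, never disturb the server.
    }
  }
}
```

The key property is that `track()` returns immediately and a failed `send` never surfaces to the tool handler.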

Frequently asked questions

What is the Model Context Protocol (MCP)?

MCP is an open protocol that lets AI assistants call external tools. Think of it as a standardized way for Claude, Cursor, or Windsurf to interact with your code: reading files, querying databases, calling APIs, or running any custom function you expose as a tool. Anthropic open-sourced the spec in late 2024, and it has quickly become the standard integration layer between AI clients and developer tooling.

Which AI clients does BetterMeter detect?

Claude Code, Claude Desktop, Cursor, Windsurf, VS Code with Copilot, and any MCP-compatible client that connects via stdio or SSE transport. BetterMeter identifies clients through user-agent strings, connection metadata, and transport-level signals. As new MCP clients emerge, detection is updated automatically.

Does wrapping my server affect performance?

No measurable impact. The wrapper adds async event tracking around your existing tool handlers. Events are batched and sent in the background after each invocation completes. Your tool logic runs exactly as before, with no blocking I/O added to the request path.

What data is tracked from MCP invocations?

Tool name, client name, execution duration, success/failure status, and optional token counts. Input parameters and output content are never sent to BetterMeter. The analytics layer captures structure and metadata only. Your actual tool inputs, outputs, and any sensitive data stay on your server.

Can I track custom metadata per invocation?

Yes. Pass additional metadata fields to the track call for custom dimensions, such as user tier, workspace ID, or model version. Custom metadata appears in your dashboard alongside built-in metrics and can be used for filtering and segmentation.
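
Conceptually, custom metadata is just extra key-value dimensions merged into the event payload next to the built-in fields. A sketch of that merge, with an illustrative event shape rather than the SDK's exact schema:

```typescript
interface InvocationEvent {
  tool: string;
  durationMs: number;
  ok: boolean;
  metadata?: Record<string, string>;
}

// Attach caller-supplied dimensions (e.g. user tier, workspace ID)
// to a base event without mutating it.
function withMetadata(
  event: InvocationEvent,
  metadata: Record<string, string>,
): InvocationEvent {
  return { ...event, metadata: { ...event.metadata, ...metadata } };
}
```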

Ready to see the full picture?

Set up in 60 seconds. No credit card required.

See pricing