Your Sentry Errors Are Garbage In for AI Agents

Antoni · 4 min read

Picture this: an AI agent receives a Sentry alert. TypeError: Cannot read properties of undefined (reading 'id') in checkout.ts:147. It gets a stack trace, a file name, and some browser metadata. That's the entire input.

Now think about what you'd do with that same alert. You'd open the file, read the surrounding code, check the last few deploys, maybe search Slack for "checkout bug." You'd fill in the gaps with context the alert never contained.

The AI agent doesn't do any of that. It works with what you give it. And what most Sentry errors give it is next to nothing.

The quality of your errors is now the bottleneck for AI-assisted bug fixing.

What AI Agents Actually See

A typical Sentry error provides an exception type, a message, a stack trace, and some device metadata. That's useful for locating the crash, but it tells the agent nothing about:

  • What the user was trying to do. Was this a checkout flow? An admin action? A background job?
  • What state led to the failure. Was the cart empty? Was the session expired? Was this a retry?
  • What changed recently. Did a deploy go out an hour ago? Was a feature flag toggled?

Human engineers fill these gaps instinctively with tribal knowledge. AI agents treat the error payload as the complete picture. If the context isn't in the error, it doesn't exist.

The Usual Suspects

Most codebases have the same error quality problems. Here are the ones that hurt AI agents the most.

Vague Messages

Before:

Something went wrong

After:

Payment processing failed: Stripe returned card_declined for user u_8f2k on plan pro_monthly

The first version tells the AI agent nothing actionable. The second gives it the service (Stripe), the failure reason (card declined), and the affected entities (user, plan). That's enough to start narrowing down the code path.
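One way to make messages like the second consistent across a codebase is a small formatting helper. This is a minimal sketch; the function name and field layout are illustrative, not a Sentry API.

```typescript
// Hypothetical helper: builds an actionable error message from structured
// parts (service, failure reason, affected entities) instead of free text.
function actionableMessage(
  service: string,
  reason: string,
  entities: Record<string, string>
): string {
  // "user u_8f2k on plan pro_monthly" — entity type followed by its ID
  const parts = Object.entries(entities)
    .map(([kind, id]) => `${kind} ${id}`)
    .join(" on ");
  return `${service} returned ${reason} for ${parts}`;
}

const msg = actionableMessage("Stripe", "card_declined", {
  user: "u_8f2k",
  plan: "pro_monthly",
});
// → "Stripe returned card_declined for user u_8f2k on plan pro_monthly"
```

The structured inputs also make it easy to attach the same fields as tags on the reported error later.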

Swallowed Context

Before:

try {
  await processOrder(order);
} catch (e) {
  throw new Error("failed");
}

After:

try {
  await processOrder(order);
} catch (e) {
  // In TypeScript the catch variable is `unknown`, so normalize it first.
  const cause = e instanceof Error ? e : new Error(String(e));
  throw new Error(`Order processing failed for order ${order.id}: ${cause.message}`, { cause });
}

The first version discards the original error entirely. The AI agent sees "failed" and has to guess what went wrong. The second preserves the chain — the original error, the order ID, and the context of what was being attempted.

Missing Breadcrumbs

Before: A Sentry error with no trail of what happened before the crash.

After: Breadcrumbs showing the user flow — loaded cart → applied coupon SAVE20 → hit checkout → Stripe API call failed → crash. The AI agent can now see that the coupon application might be relevant to the payment failure.
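The Sentry SDKs expose `Sentry.addBreadcrumb` for this; the sketch below is a self-contained illustration of the underlying pattern (a capped trail of recent events shipped with any error), not the SDK itself.

```typescript
// Minimal breadcrumb trail: a capped ring of recent events that gets
// serialized alongside any error report. Field names are illustrative.
type Breadcrumb = { category: string; message: string };

const MAX_BREADCRUMBS = 100;
const trail: Breadcrumb[] = [];

function addBreadcrumb(crumb: Breadcrumb): void {
  trail.push(crumb);
  if (trail.length > MAX_BREADCRUMBS) trail.shift(); // drop the oldest
}

addBreadcrumb({ category: "cart", message: "loaded cart" });
addBreadcrumb({ category: "cart", message: "applied coupon SAVE20" });
addBreadcrumb({ category: "checkout", message: "hit checkout" });
addBreadcrumb({ category: "http", message: "Stripe API call failed" });
// On crash, `trail` travels with the error payload — the flow above
// is exactly what the agent would see instead of a bare stack trace.
```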

No Business Context

Before: A raw NullReferenceException with no tags.

After: An error tagged with user_tier: enterprise, feature: bulk_export, request_id: req_abc123. Now the AI agent knows this is an enterprise feature, can search for related bulk export code, and can correlate with other errors from the same request.
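In the Sentry SDKs, these key/value pairs are attached with `Sentry.setTag` or per-event via `Sentry.withScope`. The sketch below mirrors that shape without importing the SDK, just to show the payload an agent ends up reading; the helper name is assumed.

```typescript
// Self-contained sketch: capture an error together with low-cardinality,
// searchable tags. Mirrors the shape of Sentry's scope/tag model.
function captureWithTags(
  err: Error,
  tags: Record<string, string>
): { message: string; tags: Record<string, string> } {
  // A real SDK would serialize and send this event; here we just build
  // the payload so the tags provably travel with the error.
  return { message: err.message, tags };
}

const event = captureWithTags(new Error("NullReferenceException in exporter"), {
  user_tier: "enterprise",
  feature: "bulk_export",
  request_id: "req_abc123",
});
// event.tags is now indexed alongside the error, so "all enterprise
// bulk_export failures" becomes a query rather than a guess.
```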

Why This Matters Now

AI coding agents are becoming part of real engineering workflows. Teams are feeding Sentry errors directly to Claude, Cursor, and Copilot. Tools like SpecSource can enrich errors with surrounding context automatically — pulling in relevant code, recent commits, and team discussions. But the foundation is still the error itself. No amount of downstream tooling fixes a catch (e) { log('error') }.

The Takeaway

Error quality used to matter for human debugging speed. Now it directly determines whether AI agents can help you at all. Better error messages, structured context, and Sentry breadcrumbs aren't just good hygiene — they're the highest-leverage investment for getting real value from AI-assisted bug fixing.
