Error Monitoring with Sentry
Why console.log Is Not a Monitoring Strategy
Your app is in production. A user in Germany on a 3G connection using Firefox encounters an error. What do you see? Nothing. The user sees a white screen, closes the tab, and never comes back. Your console.error fired in their browser and disappeared forever.
This is the reality without error monitoring: you only learn about the bugs users bother to report, which is a small fraction of the bugs they actually hit. The rest silently erode trust, increase bounce rates, and cost you users you never knew you had.
Sentry captures those silent failures automatically. Every unhandled exception, every rejected promise, every failed network request — captured, enriched with context (browser, OS, user actions leading up to the error), and delivered to your dashboard with a readable stack trace pointing to the exact line of your source code.
Think of Sentry like a black box recorder on an airplane. You hope you never need it, but when something goes wrong, it tells you exactly what happened in the seconds before the crash: what the user clicked, what network requests fired, what state the app was in. Without it, you are investigating a crash with nothing but "something broke for someone somewhere."
SDK Setup for a Next.js Project
Sentry v8 uses a functional API (no more class-based integrations). The setup for Next.js involves three configuration files: client, server, and edge.
npx @sentry/wizard@latest -i nextjs
The wizard scaffolds the configuration, but let us understand what it creates:
Client-Side Configuration
// sentry.client.config.ts
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.NEXT_PUBLIC_SENTRY_RELEASE,

  tracesSampleRate: 0.1,
  replaysSessionSampleRate: 0.01,
  replaysOnErrorSampleRate: 1.0,

  integrations: [
    Sentry.replayIntegration({
      maskAllText: true,
      blockAllMedia: true,
    }),
    Sentry.browserTracingIntegration(),
  ],

  beforeSend(event) {
    if (event.exception?.values?.[0]?.type === 'ChunkLoadError') {
      return null;
    }
    return event;
  },
});
Let us walk through the critical settings:
- `tracesSampleRate: 0.1` — Capture 10% of transactions for performance monitoring. At high traffic, 100% would be expensive and unnecessary.
- `replaysSessionSampleRate: 0.01` — Record 1% of all sessions as replays. This gives you a baseline sample.
- `replaysOnErrorSampleRate: 1.0` — Record 100% of sessions that encounter an error. When something breaks, you always get the replay.
- `beforeSend` — Filter out noise. `ChunkLoadError` happens when a user loads a stale page after a deployment and the old chunk URLs no longer exist. It is expected, not actionable, and would flood your error feed.
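In practice the noise filter tends to grow beyond one error type. Here is a standalone sketch of a reusable filter you could pass as `beforeSend`; the pattern list is illustrative, so tune it against your own error feed:

```typescript
// Illustrative patterns for errors that are browser artifacts,
// not application bugs. Extend this list to match your own feed.
const NOISE_PATTERNS = [
  /^ChunkLoadError/,        // stale chunks after a deployment
  /ResizeObserver loop/,    // benign browser warning surfaced as an error
];

// Minimal shape of the event fields we inspect.
interface ErrorEventLike {
  exception?: { values?: { type?: string; value?: string }[] };
}

// Returns null (drop the event) for known non-actionable errors,
// otherwise passes the event through unchanged.
function filterNoise<T extends ErrorEventLike>(event: T): T | null {
  const first = event.exception?.values?.[0];
  const text = `${first?.type ?? ''} ${first?.value ?? ''}`;
  return NOISE_PATTERNS.some((p) => p.test(text)) ? null : event;
}
```

Wire it up as `beforeSend: filterNoise` in `Sentry.init`, keeping the filtering logic testable on its own.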
Server-Side Configuration
// sentry.server.config.ts
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.SENTRY_RELEASE,
  tracesSampleRate: 0.1,
});
The server configuration is simpler — no replay (it is a server, there is no screen to record), no browser tracing. Just error capture and transaction sampling.
Source Maps: Making Stack Traces Readable
Without source maps, Sentry shows you stack traces like this:
TypeError: Cannot read properties of undefined (reading 'map')
at o.render (/_next/static/chunks/pages/courses-8f3a2b1c.js:1:28394)
That is useless. With source maps uploaded, you see:
TypeError: Cannot read properties of undefined (reading 'map')
at CourseList (src/components/CourseList.tsx:42:18)
Now you know exactly where to look.
Uploading Source Maps in CI
Sentry uses Debug IDs (injected at build time) to match minified code with source maps. The Sentry webpack/Vite plugin handles this automatically:
// next.config.ts
import { withSentryConfig } from '@sentry/nextjs';

const nextConfig = {
  // your config
};

export default withSentryConfig(nextConfig, {
  org: 'your-org',
  project: 'your-project',
  authToken: process.env.SENTRY_AUTH_TOKEN,
  silent: true,
  hideSourceMaps: true,
  widenClientFileUpload: true,
});
- `hideSourceMaps: true` — Prevents source maps from being served to the browser. Users cannot reverse-engineer your minified code, but Sentry can still read the maps because they were uploaded during the build.
- `widenClientFileUpload: true` — Uploads source maps for all client chunks, not just entry points. Without this, errors in dynamically imported modules might not have mapped stack traces.
If you deploy to multiple environments (staging, production) from the same commit, make sure each deployment uses a unique release identifier that includes the environment. Otherwise, Sentry cannot distinguish staging errors from production errors, and source map matching might break when the same release has different builds for different environments.
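One simple way to build such an identifier (the `<environment>-<short sha>` scheme here is just one convention; Sentry only requires that the string be unique per build):

```typescript
// Build a release name that is unique per environment + commit,
// e.g. "production-3f2a9c1". Any scheme works as long as the
// same build is never reused across environments under one name.
function makeRelease(environment: string, commitSha: string): string {
  return `${environment}-${commitSha.slice(0, 7)}`;
}
```

Use the same value for `release` in `Sentry.init` and for the release created during the source map upload, so the two always match.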
Breadcrumbs: The Timeline of Events
Breadcrumbs are the most underrated Sentry feature. They automatically record a timeline of user actions, network requests, and console messages leading up to an error. When you open an issue in Sentry, you see something like:
[00:00] Navigation → /courses
[00:03] UI Click → button.enroll-btn
[00:03] HTTP → POST /api/enroll (pending)
[00:04] Console → Warning: Each child in a list should have a unique "key" prop
[00:05] HTTP → POST /api/enroll (500)
[00:05] Error → TypeError: Cannot read properties of undefined (reading 'id')
Without breadcrumbs, you just see the error. With breadcrumbs, you see the story: the user navigated to courses, clicked enroll, the API returned 500, and then the code tried to read .id from the undefined response.
Custom Breadcrumbs
You can add your own breadcrumbs for application-specific context:
Sentry.addBreadcrumb({
  category: 'enrollment',
  message: `User started enrollment for course ${courseId}`,
  level: 'info',
  data: {
    courseId,
    pricingPlan: selectedPlan,
    paymentMethod: method,
  },
});
These custom breadcrumbs show up in the timeline alongside automatic ones, giving you application-level context that Sentry cannot infer on its own.
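The mechanism behind the trail is simple: the SDK keeps a capped buffer of recent breadcrumbs (the `maxBreadcrumbs` option, 100 by default) and attaches a snapshot to the next captured event. Here is a toy in-memory model of that behavior, a conceptual stand-in rather than Sentry's actual implementation:

```typescript
interface Breadcrumb {
  category: string;
  message: string;
  timestamp: number;
}

// Conceptual model: a capped ring buffer of recent breadcrumbs.
class BreadcrumbTrail {
  private crumbs: Breadcrumb[] = [];
  constructor(private max = 100) {}

  add(category: string, message: string): void {
    this.crumbs.push({ category, message, timestamp: Date.now() });
    if (this.crumbs.length > this.max) this.crumbs.shift(); // drop oldest
  }

  // What would be attached to an error event at capture time.
  snapshot(): Breadcrumb[] {
    return [...this.crumbs];
  }
}
```

The cap is why breadcrumbs show the last stretch of activity before an error rather than the whole session.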
Custom Contexts and Tags
Tags and contexts let you add structured metadata to errors for filtering and searching.
Sentry.setTag('theme', isDarkMode ? 'dark' : 'light');
Sentry.setTag('plan', user.subscriptionPlan);

Sentry.setContext('course', {
  courseId: course.id,
  moduleName: currentModule.name,
  progress: userProgress.percentage,
});

Sentry.setUser({
  id: user.id,
  email: user.email,
  subscription: user.plan,
});
Tags are indexed and searchable. You can filter your error feed by theme:dark to see if errors correlate with dark mode. Contexts are not indexed but provide rich detail when viewing a specific issue. User data lets you see how many unique users are affected by each issue.
1. Use tags for low-cardinality filtering: plan, theme, locale, browser, feature-flag variant
2. Use contexts for detailed debugging data: current page state, cart contents, form values
3. Always set the user context so Sentry can show unique user counts per issue
4. Never put PII (emails, names) in tags — they are indexed and harder to delete for compliance
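Rule 4 is easy to enforce with a small guard in front of your `Sentry.setTag` calls. A sketch, with an illustrative block list you would extend for your own domain:

```typescript
// Keys that commonly carry PII and should never become indexed tags.
// Illustrative list -- extend for your domain (usernames, phone, etc.).
const PII_KEYS = new Set(['email', 'name', 'phone', 'address']);

// Crude email-shaped-value check, as a second line of defense.
const EMAIL_RE = /[^\s@]+@[^\s@]+\.[^\s@]+/;

// Returns only the tags safe to send, dropping PII-looking entries.
function safeTags(tags: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(tags)) {
    if (PII_KEYS.has(key.toLowerCase())) continue; // blocked key
    if (EMAIL_RE.test(value)) continue;            // email-shaped value
    out[key] = value;
  }
  return out;
}
```

Then set tags in one place: iterate over `safeTags({...})` and call `Sentry.setTag` per entry, so no call site can bypass the filter.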
Release Tracking
Sentry releases let you track which version of your code introduced an error. When you deploy a new release, Sentry marks the boundary and can tell you "this error was first seen in release v1.4.2."
# In your CI workflow
- name: Create Sentry release
  run: |
    npx @sentry/cli releases new "${{ github.sha }}"
    npx @sentry/cli releases set-commits "${{ github.sha }}" --auto
    npx @sentry/cli releases finalize "${{ github.sha }}"

- name: Deploy
  run: pnpm deploy

- name: Notify Sentry of deployment
  run: |
    npx @sentry/cli deploys new -r "${{ github.sha }}" -e production
With release tracking, Sentry shows you a "Release Health" dashboard: crash-free sessions, error count delta from the previous release, and adoption rate (how many users have loaded the new version). If a new release causes a spike, you see it immediately.
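The headline "crash-free sessions" number is a simple ratio, which is worth internalizing when you pick release-health alert thresholds:

```typescript
// Crash-free rate = sessions that did not crash / total sessions.
// Sentry computes this per release; the formula itself is just:
function crashFreeRate(totalSessions: number, crashedSessions: number): number {
  if (totalSessions === 0) return 1; // no sessions, nothing crashed
  return (totalSessions - crashedSessions) / totalSessions;
}
```

For example, 10,000 sessions with 25 crashes is a 99.75% crash-free rate; whether that is acceptable depends entirely on your baseline, which is why comparing against the previous release matters more than the absolute number.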
Session Replay: Video of the Bug
Session Replay records a video-like reconstruction of the user's browser session. It is not a screen recording — it captures DOM mutations, mouse movements, clicks, and scroll positions, then reconstructs them as a playable timeline.
When a user hits an error, you can watch exactly what they did:
- Opened the page
- Scrolled to the pricing section
- Clicked "Upgrade Plan"
- Modal opened with a spinner
- Spinner disappeared, modal was empty (the bug)
- Error logged:
TypeError: plan.features is undefined
This turns "I clicked a button and it broke" into a reproducible scenario.
Privacy by Default
Session Replay masks all text and blocks all media by default:
Sentry.replayIntegration({
  maskAllText: true,
  blockAllMedia: true,
})
Users see placeholders instead of actual content in the replay. You can selectively unmask specific elements for debugging:
<h1 data-sentry-unmask>Course Title</h1>
Only elements with data-sentry-unmask show their actual text in replays. Everything else is replaced with **** characters.
Session Replay performance impact
Session Replay adds roughly 40-60KB to your bundle (gzipped) and uses the MutationObserver API to track DOM changes. The performance overhead is minimal for most applications — typically under 1ms per mutation batch. However, for pages with high DOM churn (live data tables, real-time feeds), the overhead can add up. The SDK includes built-in throttling to cap mutation processing, but monitor your INP metrics after enabling replay to catch any regression. You can also use replaysSessionSampleRate: 0 to only capture replays when errors occur, eliminating overhead for error-free sessions.
Alert Rules: Getting Notified
Sentry without alerts is just a log viewer. Configure alerts to notify you when something actually needs attention.
Smart Alert Rules
When: A new issue is first seen
Conditions: Event's environment is production
Action: Send notification to #eng-alerts Slack channel
When: Issue frequency exceeds 100 events in 5 minutes
Conditions: Event's environment is production AND issue is not marked as ignored
Action: Page the on-call via PagerDuty
When: A regression occurs (previously resolved issue reappears)
Conditions: Event's environment is production
Action: Reopen the issue and assign to the original resolver
The key is tiered alerting. Not every error deserves to wake someone up. A new, unknown error in production? Slack notification. A flood of the same error affecting hundreds of users? Page the on-call. A resolved issue coming back? Notify the person who originally fixed it.
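The tiers above can be expressed as a small routing function. This is purely illustrative: in practice these rules live in Sentry's alert-rule UI, not in your codebase, but writing them out makes the precedence explicit:

```typescript
type Channel = 'slack' | 'pagerduty' | 'reassign-to-resolver' | 'none';

interface IssueState {
  isNew: boolean;          // first time this issue has been seen
  isRegression: boolean;   // previously resolved, now back
  eventsLast5Min: number;  // current event frequency
  environment: string;
}

// Mirrors the tiered rules: floods page the on-call, regressions go
// back to the original resolver, brand-new issues go to Slack.
function route(issue: IssueState): Channel {
  if (issue.environment !== 'production') return 'none';
  if (issue.eventsLast5Min > 100) return 'pagerduty';
  if (issue.isRegression) return 'reassign-to-resolver';
  if (issue.isNew) return 'slack';
  return 'none';
}
```

Note the ordering: frequency wins over novelty, so a brand-new error that is also flooding pages the on-call rather than quietly posting to Slack.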
Issue Grouping
Sentry groups errors by stack trace fingerprint. Two errors with the same stack trace are one issue with many events. But sometimes Sentry's default grouping is wrong — it might split one logical error into many issues, or merge unrelated errors into one.
Custom Fingerprinting
Sentry.init({
  beforeSend(event) {
    if (event.exception?.values?.[0]?.type === 'NetworkError') {
      event.fingerprint = ['network-error', event.request?.url || 'unknown'];
    }
    return event;
  },
});
This groups all network errors by URL instead of stack trace. Without it, the same fetch('/api/courses') failure might create dozens of issues because the stack trace varies slightly depending on where the fetch was called from.
| What developers do | Why it hurts | What they should do |
|---|---|---|
| Setting `tracesSampleRate` to 1.0 in production | 100% sampling generates enormous data volumes and costs. Performance monitoring at 10% sampling is statistically representative for most traffic levels. | Use 0.05-0.2 for production, 1.0 only in development |
| Not filtering `ChunkLoadError` and ResizeObserver errors | These errors are browser artifacts, not application bugs. They flood your error feed and drown out real issues. | Use `beforeSend` to drop known non-actionable errors |
| Deploying without uploading source maps | Without source maps, every stack trace points to minified code. You cannot debug what you cannot read. | Include source map upload in your CI deployment pipeline |
| Using Sentry's default alert rules without customization | Default alerts either spam you with noise (alert on every error) or miss critical issues (no alerts configured). Tiered alerting matches severity to response. | Configure tiered alerts: Slack for new issues, PagerDuty for floods |
Integrating Sentry with Your CI/CD Pipeline
The full integration looks like this: CI builds the app, uploads source maps, creates a release, deploys, and marks the deployment in Sentry. Now every error in Sentry links back to the exact commit and deployment that introduced it.
- name: Build with Sentry
  run: pnpm build
  env:
    SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
    SENTRY_ORG: your-org
    SENTRY_PROJECT: your-project
    NEXT_PUBLIC_SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
    NEXT_PUBLIC_SENTRY_RELEASE: ${{ github.sha }}
With this pipeline, when an error appears in Sentry, you click "View Commit" and see the exact diff that introduced the bug. You click the release and see how many users are affected. You click the replay and watch the bug happen. The entire debugging workflow — from "something broke" to "here is the fix" — happens in minutes instead of hours.