Engineering · May 14, 2026 · 12 min read

Correlate Errors with Funnel Drop-Off: Find Conversion Bugs Fast

The Slack message came in at 11:47am on a Tuesday: "Payment conversions are down 12% since yesterday. Marketing didn't change anything. Did you deploy?"

We'd deployed, sure. But it was a CSS fix for mobile nav. Nothing anywhere near checkout.

I pulled up Sentry. No spike in error volume. I pulled up GA4. Funnel visualization showed the drop-off happening on the payment step — users were making it to "Enter Card Details," then vanishing. But GA4 couldn't tell me why. It just knew they left.

So I did what I always do when two tools don't talk to each other: I exported CSVs. Sentry events for the last 48 hours. GA4 funnel events for the same window. Opened them in Google Sheets. Started sorting by timestamp, trying to match session IDs. (GA4 doesn't track the same session ID as Sentry by default, so I was matching by IP + user agent + approximate time window. It's as fun as it sounds.)
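That spreadsheet exercise, expressed as code, is roughly the following. The field names are assumptions about what the two CSV exports contain, and the 30-second tolerance is the kind of guess you end up making:

```javascript
// Roughly the manual correlation: match each Sentry error to GA4 hits on
// IP + user agent + a timestamp tolerance window. Field names are assumed.
function matchEvents(sentryRows, ga4Rows, toleranceMs = 30_000) {
  return sentryRows.map((err) => ({
    error: err,
    candidates: ga4Rows.filter((hit) =>
      hit.ip === err.ip &&
      hit.userAgent === err.userAgent &&
      Math.abs(Date.parse(hit.timestamp) - Date.parse(err.timestamp)) <= toleranceMs
    ),
  }));
}
```

Shared IPs behind NAT and identical user agents mean `candidates` often has more than one entry, which is why this takes 45 minutes instead of 45 seconds.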

Forty-five minutes later, I found it. A TypeError: Cannot read property 'clientSecret' of undefined — Stripe's PaymentIntent wasn't initializing on browsers with strict cookie policies. The "CSS fix" deploy had bumped a webpack version that changed our chunk loading order. The Stripe SDK was racing with our checkout component and losing about 12% of the time.

Forty-five minutes. For a bug that was costing us roughly $3,400/day in lost conversions.

That's when I got angry enough to build something better.

The Problem: Separate Error Tracking and Analytics Don't Correlate

Here's the thing nobody tells you when you're setting up your observability stack: Sentry is great. GA4 is fine. But they're two different databases, two different session models, two different query languages, two different dashboards.

When everything's working, that's fine. You check errors in Sentry. You check funnels in GA4. Life is good.

When something's broken and you need to understand why your checkout conversion tanked, you're suddenly doing data archaeology across tools that were never designed to share context. The session ID Sentry tracks is not the GA4 client_id. The error timestamp is in UTC but your funnel data is in your account timezone. The user who hit the error and the user who dropped off the funnel are theoretically the same person, but you can't prove it without manual correlation.

I've talked to teams who just... don't do this analysis. They see conversion drop, they panic, they revert the last three deploys, they hope it fixes itself. Sometimes it does. Sometimes they've just reverted a feature that was working fine and the actual bug is still in production. Ask me how I know.

What We Built: Unified Session Context

The fix was embarrassingly obvious in hindsight. One session ID. One event stream. Errors and pageviews and custom events all flowing into the same database with the same identifiers.

When a user hits your checkout page, JustAnalytics tracks a funnel_step event: step name, timestamp, session ID. When that same user triggers a JavaScript exception, we capture it with the same session ID. When they drop off — meaning they never fire the next funnel step — we know. And we know whether there was an error in that session.
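A minimal sketch of that correlation logic, assuming a per-session event stream where every event carries the same session ID (the event shape and funnel order here are illustrative):

```javascript
// Given one session's events, find the last funnel step reached, whether the
// session dropped off before completing, and whether any error occurred.
const FUNNEL = ['product_view', 'cart_review', 'shipping_info', 'payment_step', 'purchase'];

function summarizeSession(events) {
  const steps = events.filter(e => e.type === 'funnel_step').map(e => e.step);
  const lastIndex = Math.max(-1, ...steps.map(s => FUNNEL.indexOf(s)));
  return {
    lastStep: lastIndex >= 0 ? FUNNEL[lastIndex] : null,
    droppedOff: lastIndex >= 0 && lastIndex < FUNNEL.length - 1,
    hadError: events.some(e => e.type === 'error'),
  };
}
```

Because errors and funnel steps share one identifier, this is a filter over a single stream rather than a join across two databases.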

The query that took me 45 minutes of CSV wrangling now takes about 30 seconds:

SELECT
  funnel_step,
  COUNT(*) as total_sessions,
  COUNT(CASE WHEN has_error = true THEN 1 END) as sessions_with_errors,
  ROUND(100.0 * COUNT(CASE WHEN has_error = true THEN 1 END) / COUNT(*), 2) as error_rate
FROM sessions
WHERE funnel_step IS NOT NULL
  AND timestamp > NOW() - INTERVAL '7 days'
GROUP BY funnel_step
ORDER BY error_rate DESC

On the checkout flow I mentioned earlier, that query would have shown me:

Funnel Step      Total Sessions   Sessions with Errors   Error Rate
payment_step     14,203           1,704                  12.0%
cart_review      18,891           42                     0.2%
shipping_info    21,034           18                     0.1%
product_view     89,442           203                    0.2%

The payment step error rate is 60x higher than everything else. That's not a coincidence. That's a bug.

Click into those 1,704 sessions and you'd see the exact error: TypeError: Cannot read property 'clientSecret' of undefined. You'd see it's happening on Safari 17.4+ and Firefox with Enhanced Tracking Protection enabled. You'd see it started exactly 4 hours after deploy a3f7e21c. You'd have root cause in under two minutes.

The Architecture Decision: Why Not Just Pipe Sentry to GA4?

We tried this first. Genuinely. The idea was: use Sentry's webhooks to fire events into GA4's Measurement Protocol whenever an error happened. Then you could filter funnels by error presence.

It doesn't work. Or rather — it works badly enough that you'll give up within a month.

Problem 1: Session ID mismatch. GA4's client_id is generated by their JavaScript SDK. Sentry's session ID is independent. To correlate them, you need to capture GA4's client_id in your Sentry SDK initialization, which requires custom code and breaks whenever either SDK updates.
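For the record, the glue we wrote looked roughly like this (a sketch; it assumes gtag.js and @sentry/browser are already loaded, and the helper name is ours):

```javascript
// Hypothetical glue: read GA4's client_id via the gtag 'get' command and
// attach it to every Sentry event as a tag, so the datasets share an identifier.
function attachClientIdToSentry(gtag, Sentry, measurementId) {
  gtag('get', measurementId, 'client_id', (clientId) => {
    Sentry.setTag('ga4_client_id', clientId);
  });
}

// attachClientIdToSentry(window.gtag, Sentry, 'G-XXXXXXX');
```

Even this breaks silently when the GA4 snippet is blocked by a privacy extension, which is exactly the population you most need error data for.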

Problem 2: Timing windows. Sentry events are server-timestamped. GA4 events are client-timestamped with significant clock skew. Matching "within 30 seconds" gives you false positives. Matching "exact timestamp" gives you false negatives.

Problem 3: GA4's data model. GA4 treats everything as an event, but events aren't sessions. To correlate an error with a funnel drop-off, you need to reconstruct the session from events, which means BigQuery exports and SQL that's way more complex than it should be.

We burned three weeks on this approach before admitting it was a dead end. Three weeks! I still cringe thinking about the Jira tickets. The fundamental problem is that you're trying to make two tools share context they were designed to track independently.

The Implementation: One SDK, One Session, One Dashboard

JustAnalytics uses a single JavaScript snippet (2.1KB gzipped) that handles both analytics and error tracking. Here's what the checkout integration looks like:

// Initialize once, usually in your app's entry point
import { useEffect } from 'react';
import { JA } from '@justanalytics/browser';

JA.init({
  siteId: 'your-site-id',
  errorTracking: true,  // captures unhandled exceptions and rejections
  sessionReplay: true,  // optional — records DOM for debugging
});

// Track funnel steps explicitly (React example; `cart` comes from your own state)
function CheckoutPage({ cart }) {
  useEffect(() => {
    JA.track('funnel_step', {
      step: 'payment_step',
      cart_value: cart.total,
      item_count: cart.items.length
    });
  }, []);

  // ... rest of your checkout component
}

When an unhandled error occurs anywhere on the page, it's automatically associated with the same session. No extra code. No webhook plumbing. No CSV exports. (I'll be honest — we were shocked this wasn't already standard in 2026. But here we are.)

The dashboard shows funnel conversion by step, and you can filter any step by "sessions with errors" to see exactly what's going wrong. We've got a live mode that updates every 5 seconds — useful for deploy monitoring — and a historical mode for weekly analysis. The live mode is probably overkill for most teams. But after getting burned by that Stripe bug, I compulsively refresh it after every deploy. Old habits.

Results: The Checkout Bug (And Three More We Found)

After deploying unified tracking on the e-commerce site I mentioned, we fixed the Stripe initialization bug within the hour. Conversions recovered by end of day.

But here's what surprised me: the unified data revealed three more bugs we didn't know existed.

Bug 2: Address validation timeout. The shipping step had a 3.2% error rate we'd never noticed because it wasn't causing complete drop-offs — just delays. Users were retrying and eventually getting through. But each retry added 4 seconds of friction. Fixing the timeout (a third-party address verification API was taking 8+ seconds on international addresses) improved shipping step completion by 6%.
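The fix itself was a hard client-side timeout with a graceful fallback. A sketch, with the endpoint name made up:

```javascript
// Cap a slow promise so a hung request fails fast instead of stalling the step.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (hypothetical endpoint): degrade to unvalidated input rather than block checkout.
async function verifyAddress(address) {
  try {
    return await withTimeout(fetch('/api/verify-address', { method: 'POST' }), 2000);
  } catch {
    return { verified: false, address };
  }
}
```

The design choice worth stating: address verification is a nice-to-have, so its failure mode should be "ship the order anyway," not "lose the order."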

Bug 3: Mobile keyboard overlap. On iOS Safari, the billing address form had a z-index issue where the keyboard overlapped the "Continue" button. Users were typing, couldn't see the button, and either force-closing or navigating away. Not a JavaScript error — but session replay (included with JustAnalytics) caught users rage-tapping the invisible button. That one was worth about $1,200/month in recovered conversions.

Bug 4: Firefox + Privacy Badger interaction. Privacy Badger was blocking our analytics script but not our checkout script, which created a race condition in our SPA routing. Users would see a blank white page for 2-3 seconds before checkout loaded. Not an error per se, but the session replay showed the frustration clearly. We fixed it by lazy-loading the analytics SDK after checkout initialization. Annoying as hell to debug, by the way — privacy extensions don't show up in DevTools the way you'd expect. For teams dealing with browser compatibility issues, JustBrowser helps test across environments before deployment.
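The lazy-load fix reduces to two rules: don't start the analytics SDK until the checkout bundle says it's ready, and never let an analytics failure break the page. A sketch (the function names are ours, not a shipped API):

```javascript
// Defer the analytics SDK until checkout has initialized, and swallow load
// errors so a blocked script can't stall SPA routing.
// `loadAnalytics` stands in for a dynamic import of the SDK.
function deferAnalytics(checkoutReady, loadAnalytics) {
  return checkoutReady
    .then(() => loadAnalytics())
    .catch(() => null); // analytics is optional; checkout never depends on it
}

// Usage in the app shell (hypothetical):
// deferAnalytics(checkoutReadyPromise, () => import('@justanalytics/browser'));
```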

None of these would have surfaced in Sentry alone. Sentry shows errors. It doesn't show "users getting frustrated and leaving." That's what unified data gives you.

The 30-Second Correlation Query

For teams already on JustAnalytics, here's the exact workflow when someone reports a conversion drop:

  1. Open the Funnels view
  2. Select your checkout funnel
  3. Click "Filter" → "Sessions with errors"
  4. Look at which step has the highest error-to-total ratio

That's it. If the payment step is normally at 0.3% error rate and today it's at 8%, you've found your problem. Click into the errors for stack traces, affected browsers, and session replays.

For teams using Sentry + GA4, the equivalent workflow is:

  1. Export Sentry events to BigQuery (hope you've already set this up)
  2. Export GA4 events to BigQuery (requires GA4 360 at $150K/year, or wait 24 hours for free export)
  3. Write SQL to join on timestamp proximity and user-agent matching
  4. Debug the SQL because GA4's event schema is byzantine
  5. Find the correlation
  6. Realize you've spent 4 hours on analysis for one bug

Look — Sentry and GA4 are both good at what they do. I've used Sentry for years and genuinely like it. But the gap between them is where bugs hide. For teams running paid acquisition through ClickzProtect, this correlation becomes even more critical — you need to know if click fraud is causing errors, or if errors are causing legitimate users to bounce (tanking your Quality Score). And if you're running call campaigns with VeloCalls, correlating web errors with call abandonment tells you whether your landing pages are the problem.

What We'd Change

If I were rebuilding this from scratch, two things:

Better sampling controls. Right now, we capture 100% of errors and (by default) 10% of session replays. Some customers want different ratios for different parts of their funnel. The high-value checkout flow should probably capture 100% of replays. The blog should capture 0%. Our current config doesn't let you set this per-page, which means people either over-capture (expensive) or under-capture (misses bugs in important flows).
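For what it's worth, the per-page control we want would look something like this. To be clear, this is a hypothetical config shape, not a shipped API:

```javascript
// Hypothetical per-page sampling config (sketch of the beta, not current API)
JA.init({
  siteId: 'your-site-id',
  errorTracking: true,
  sessionReplay: {
    defaultSampleRate: 0.1,
    rules: [
      { path: '/checkout/*', sampleRate: 1.0 },  // capture every checkout replay
      { path: '/blog/*', sampleRate: 0 },        // skip the blog entirely
    ],
  },
});
```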

Alerting on correlation, not just error rate. We alert when error volume spikes. We should alert when error-to-funnel-step ratio spikes. A 0.1% error rate on your homepage is noise. A 0.1% error rate on your payment step might be $50K/month in lost conversions. Context matters, and our alerting doesn't account for it yet.
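The check we want is a ratio against a per-step baseline rather than a raw count. A sketch, where the baseline rates are assumed to come from historical data:

```javascript
// Alert when a funnel step's error rate exceeds a multiple of its historical
// baseline, instead of alerting on raw error volume.
function shouldAlert({ total, withErrors }, baselineRate, multiplier = 3) {
  if (total === 0) return false;
  return withErrors / total > baselineRate * multiplier;
}

// A payment step at 8% against a 0.3% baseline fires; the same absolute
// error count spread across a high-traffic homepage does not.
```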

We're working on both. The sampling controls are in beta now. The correlation alerting is on our roadmap for Q3. Should've shipped it already, honestly. These things always take longer than you expect. Teams using VeloCards for payment processing can integrate directly with our checkout funnel tracking for even deeper insights.

The Larger Point

The observability tools we inherited from the 2010s — Sentry for errors, GA for analytics, Datadog for APM, LogRocket for replays — were all built by different companies solving different problems. They don't share session context because they were never designed to. We've written extensively about why unified analytics matters for modern teams.

That made sense when you were a small team using one tool at a time. It doesn't make sense when you're debugging production issues that span error tracking, analytics, and user behavior.

Unified observability isn't about having fewer dashboards (though that's nice). It's about being able to ask "why did conversions drop?" and getting an answer in minutes instead of hours. That's not a marketing pitch. That's just math.

We've written more about setting this up in our Django middleware tutorial and our Next.js integration guide. For the full picture of how JustAnalytics compares to running Sentry + GA4 + LogRocket separately, our comparison breakdown covers pricing and features. And if you're running an e-commerce site with significant ad spend, pairing JustAnalytics with ClickzProtect for click fraud detection and DevOS for deployment automation gives you full visibility from ad click to conversion.

The Stripe bug took 45 minutes to find with separate tools. It takes 30 seconds with unified data. That's the difference.

Frequently Asked Questions

How do you correlate errors with funnel drop-off in JustAnalytics?

JustAnalytics shares a session ID across error tracking and analytics events. Run a query filtering by funnel step (e.g., payment_step) and error presence (has:exception) in the same dashboard. The correlation happens automatically — no BigQuery export, no manual timestamp matching, no CSV joining between Sentry and GA4.

Can Sentry and Google Analytics 4 users do this correlation without JustAnalytics?

Technically yes, but it's painful. You'd need to export Sentry events to BigQuery, export GA4 events to the same BigQuery project, join on a shared identifier (which GA4 doesn't track by default), and write SQL to match timestamps within a tolerance window. Most teams spend 4-6 hours on this analysis once, then never do it again because the effort isn't worth it.

What types of errors most commonly cause funnel drop-offs?

From our data: third-party payment SDK failures (Stripe, PayPal, Adyen) account for 34% of payment-step drop-offs. Network timeouts during form submission cause 22%. The remaining 44% are application-specific — validation edge cases, state management bugs, and race conditions during checkout.

How quickly can you detect a conversion-killing bug with this approach?

With unified observability, typically under 5 minutes from when the bug starts affecting users. Set an alert on error rate by funnel step — when payment_step errors spike above your baseline, you get notified immediately. Without unified data, detection depends on how often someone manually checks both dashboards, which at most companies is weekly if at all.

JustAnalytics Platform Team, Contributor

Author at JustAnalytics.
