Continuous Profiling
Identify performance bottlenecks in production with low-overhead continuous CPU profiling.
What Is Continuous Profiling?#
Continuous profiling captures CPU and function-level performance data from your running application at regular intervals. Unlike one-off profiling sessions, continuous profiling runs in production 24/7, so you can see exactly where your application spends its time -- not just during a test, but under real user load.
This lets you answer questions like:
- Which functions consume the most CPU across my fleet?
- Did the latest deploy introduce a performance regression?
- Why is this endpoint slower than it was last week?
- Where should I focus optimization effort for the most impact?
How It Works#
JustAnalytics uses the V8 inspector protocol to collect CPU profiles from your Node.js application. The SDK periodically starts a profiling session, collects a sample, and uploads the profile data to JustAnalytics for analysis.
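The collection loop described above can be sketched with Node's built-in `node:inspector` module, which speaks the same V8 protocol. This is an illustrative sketch of one profiling session, not the SDK's actual implementation:

```typescript
// Illustrative sketch of one CPU profiling session using Node's built-in
// inspector module -- not the SDK's actual implementation.
import * as inspector from 'node:inspector';

// Promisify Session.post for convenience.
function post(session: inspector.Session, method: string, params?: object): Promise<any> {
  return new Promise((resolve, reject) =>
    session.post(method as any, params as any, (err: Error | null, result: any) =>
      err ? reject(err) : resolve(result)
    )
  );
}

async function captureProfile(durationMs: number, sampleIntervalUs: number): Promise<any> {
  const session = new inspector.Session();
  session.connect();
  await post(session, 'Profiler.enable');
  // V8 takes the sampling interval in microseconds (10ms = 10_000us).
  await post(session, 'Profiler.setSamplingInterval', { interval: sampleIntervalUs });
  await post(session, 'Profiler.start');

  // Burn some CPU so the profile contains real samples.
  const end = Date.now() + durationMs;
  let x = 0;
  while (Date.now() < end) x += Math.sqrt(Math.random());

  const { profile } = await post(session, 'Profiler.stop');
  session.disconnect();
  return profile; // { nodes, startTime, endTime, samples, timeDeltas }
}

captureProfile(200, 10_000).then((p) =>
  console.log(`captured ${p.samples.length} samples across ${p.nodes.length} call-tree nodes`)
);
```

The SDK layers scheduling, buffering, and uploading on top of this primitive.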
Architecture#
```
Your Application (Node.js)
│
├─ JA SDK Profiler
│  ├─ V8 Inspector Session
│  ├─ Sample every 10ms
│  └─ Profile captured every 10 seconds
│
└─ Upload to JA API
   └─ /api/ingest/profiles
      └─ Stored, aggregated, rendered as flame graphs
```
What Gets Captured#
Each CPU profile sample includes:
- Call stack -- the full stack trace at the time of sampling
- Function name -- including anonymous functions (resolved via source maps)
- File and line number -- exact source location
- Self time -- time spent in the function itself (not its children)
- Total time -- time spent in the function and all functions it called
- Timestamp -- when the sample was taken
- Service and environment -- metadata for filtering
Setup#
Enabling the Profiler#
Enable profiling when initializing the SDK:
```typescript
import JA from '@justanalyticsapp/node';

JA.init({
  siteId: 'YOUR_SITE_ID',
  apiKey: 'YOUR_API_KEY',
  serviceName: 'api-server',
  profiling: {
    enabled: true,
  },
});
```
That's it. With default settings, the profiler will start collecting CPU profiles immediately.
Configuration Options#
Fine-tune profiling behavior:
```typescript
JA.init({
  siteId: 'YOUR_SITE_ID',
  apiKey: 'YOUR_API_KEY',
  serviceName: 'api-server',
  profiling: {
    enabled: true,
    sampleIntervalMs: 10,     // How often V8 samples the stack (default: 10ms)
    profileDurationMs: 10000, // How long each profile session runs (default: 10s)
    uploadIntervalMs: 60000,  // How often profiles are uploaded (default: 60s)
    cpuThreshold: 0,          // Only upload if CPU usage > N% (default: 0, always upload)
  },
});
```
Environment Variables#
You can also configure profiling via environment variables:
```shell
JA_PROFILING_ENABLED=true
JA_PROFILING_SAMPLE_INTERVAL=10
JA_PROFILING_DURATION=10000
JA_PROFILING_UPLOAD_INTERVAL=60000
```
Environment variables are overridden by programmatic configuration.
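The precedence rule can be illustrated with a small merge function. The shape below is an assumption for illustration only, not the SDK's internals:

```typescript
// Hypothetical sketch of how env-var and programmatic profiling config
// could be merged, with programmatic values winning -- names are assumptions.
interface ProfilingConfig {
  enabled: boolean;
  sampleIntervalMs: number;
  profileDurationMs: number;
  uploadIntervalMs: number;
}

const DEFAULTS: ProfilingConfig = {
  enabled: false,
  sampleIntervalMs: 10,
  profileDurationMs: 10_000,
  uploadIntervalMs: 60_000,
};

function fromEnv(env: NodeJS.ProcessEnv): Partial<ProfilingConfig> {
  const cfg: Partial<ProfilingConfig> = {};
  if (env.JA_PROFILING_ENABLED !== undefined) cfg.enabled = env.JA_PROFILING_ENABLED === 'true';
  if (env.JA_PROFILING_SAMPLE_INTERVAL) cfg.sampleIntervalMs = Number(env.JA_PROFILING_SAMPLE_INTERVAL);
  if (env.JA_PROFILING_DURATION) cfg.profileDurationMs = Number(env.JA_PROFILING_DURATION);
  if (env.JA_PROFILING_UPLOAD_INTERVAL) cfg.uploadIntervalMs = Number(env.JA_PROFILING_UPLOAD_INTERVAL);
  return cfg;
}

// Later sources override earlier ones: defaults < environment < programmatic.
function resolveConfig(programmatic: Partial<ProfilingConfig>, env: NodeJS.ProcessEnv): ProfilingConfig {
  return { ...DEFAULTS, ...fromEnv(env), ...programmatic };
}
```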
Manual Profiling API#
For targeted profiling of specific operations, use the manual API:
JA.startProfiling()#
Start a named profiling session:
```typescript
import JA from '@justanalyticsapp/node';

// Start profiling a specific operation
const profileId = JA.startProfiling('order-processing');

// ... perform the operation you want to profile ...
await processOrder(orderId);

// Stop profiling and upload the result
const profile = JA.stopProfiling(profileId);
```
JA.stopProfiling()#
Stop a profiling session and get the results:
```typescript
const profile = JA.stopProfiling(profileId);

// profile contains:
// {
//   id: 'prof_abc123',
//   name: 'order-processing',
//   durationMs: 342,
//   samples: 34,
//   topFunctions: [
//     { name: 'processPayment', selfTimeMs: 89, file: 'src/payments.ts', line: 42 },
//     { name: 'validateInventory', selfTimeMs: 67, file: 'src/inventory.ts', line: 18 },
//   ],
// }
```
Profiling a Code Block#
A convenience wrapper for profiling a specific block:
```typescript
const result = await JA.withProfiling('checkout-flow', async () => {
  const cart = await getCart(userId);
  const payment = await chargePayment(cart);
  const order = await createOrder(cart, payment);
  return order;
});

// result is the return value of your function
// The profile is automatically uploaded
```
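A helper of this shape can be built on the manual start/stop API. Here is a sketch against a stand-in profiler interface (an assumption for illustration, not the SDK's source):

```typescript
// Sketch: wrapping an async operation with start/stop profiling calls.
// `Profiler` is a stand-in interface, not the real SDK module.
interface Profiler {
  startProfiling(name: string): string;
  stopProfiling(id: string): unknown;
}

async function withProfiling<T>(profiler: Profiler, name: string, fn: () => Promise<T>): Promise<T> {
  const id = profiler.startProfiling(name);
  try {
    // The profile covers exactly the awaited work.
    return await fn();
  } finally {
    // Stop (and upload) even if fn throws, so failed operations still produce profiles.
    profiler.stopProfiling(id);
  }
}
```

The `try`/`finally` is the important part: without it, a thrown error would leave a profiling session running indefinitely.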
Profiling Express Routes#
Profile specific routes to understand per-endpoint performance:
```typescript
import express from 'express';
import JA from '@justanalyticsapp/node';

const app = express();

app.post('/api/orders', async (req, res) => {
  const result = await JA.withProfiling('POST /api/orders', async () => {
    const order = await createOrder(req.body);
    return order;
  });
  res.json(result);
});
```
Automatic Sampling#
When continuous profiling is enabled, the SDK automatically captures profiles at regular intervals without any manual instrumentation.
Sampling Strategy#
The default sampling strategy:
- Every 10 seconds, start a V8 CPU profiling session
- Sample the stack every 10ms during the session
- After 10 seconds, stop the session and buffer the profile
- Every 60 seconds, batch upload buffered profiles to JustAnalytics
This means each minute produces approximately 6 profiles, each covering a 10-second window.
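The arithmetic behind those defaults can be checked with two small helpers (illustrative only, not part of the SDK):

```typescript
// Illustrative arithmetic for the default sampling schedule.
function samplesPerProfile(profileDurationMs: number, sampleIntervalMs: number): number {
  return Math.floor(profileDurationMs / sampleIntervalMs);
}

function profilesPerUpload(uploadIntervalMs: number, profileDurationMs: number): number {
  return Math.floor(uploadIntervalMs / profileDurationMs);
}

// Defaults: 10s profiles sampled every 10ms, uploaded every 60s.
console.log(samplesPerProfile(10_000, 10));     // 1000 stack samples per profile
console.log(profilesPerUpload(60_000, 10_000)); // 6 profiles per upload batch
```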
Adaptive Sampling#
For high-traffic applications, you can enable adaptive sampling to reduce overhead:
```typescript
JA.init({
  siteId: 'YOUR_SITE_ID',
  apiKey: 'YOUR_API_KEY',
  serviceName: 'api-server',
  profiling: {
    enabled: true,
    adaptive: true,    // Enable adaptive sampling
    maxCpuOverhead: 2, // Target max 2% CPU overhead
  },
});
```
With adaptive sampling, the profiler automatically reduces the sample rate or profile duration when CPU usage is high, and increases it when the system is idle. This keeps overhead below your configured ceiling.
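One plausible shape for such a controller is a simple feedback rule: widen the sample interval when measured overhead exceeds the ceiling, and tighten it again when overhead is well below it. This is a sketch of the idea, not the SDK's actual algorithm:

```typescript
// Hypothetical adaptive controller: adjusts the sampling interval based on
// the profiler's own measured CPU overhead. Bounds and factors are assumptions.
function nextSampleIntervalMs(
  currentMs: number,
  measuredOverheadPct: number,
  maxOverheadPct: number
): number {
  if (measuredOverheadPct > maxOverheadPct) {
    return Math.min(currentMs * 2, 100); // over budget: sample half as often
  }
  if (measuredOverheadPct < maxOverheadPct / 2) {
    return Math.max(Math.floor(currentMs / 2), 5); // well under budget: recover resolution
  }
  return currentMs; // within band: leave it alone
}
```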
Sampling During Idle#
By default, the profiler skips sampling when the event loop is idle (no active requests). This avoids capturing profiles of an idle application, which aren't useful.
```typescript
profiling: {
  enabled: true,
  skipIdle: true, // Default: true. Skip profiling when no active requests.
}
```
Viewing Profiles in the Dashboard#
Navigate to Dashboard > Monitoring > Profiling to view your profiles.
Profile List#
The profile list shows:
- Timestamp -- when the profile was captured
- Service -- which service generated the profile
- Duration -- how long the profile session ran
- Top function -- the function with the highest self time
- CPU usage -- average CPU usage during the profile
Filter by service, environment, time range, or function name.
Profile Detail#
Click any profile to see:
- Flame graph -- visual representation of the call stack (see Flame Graphs)
- Top functions table -- ranked by self time or total time
- Call tree -- hierarchical view of function calls
- Source view -- click any function to see the source code with line-level timing
Aggregated View#
The aggregated view merges profiles across a time range to show overall trends:
- Top functions over time -- which functions consume the most CPU this hour, day, or week
- Regression detection -- functions whose CPU time increased after a deploy
- Comparison -- compare profiles between two time ranges or two releases
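At its core, regression detection compares per-function self time between two aggregated windows. A simplified sketch of that comparison (the real pipeline runs server-side; these names are assumptions):

```typescript
// Hypothetical sketch of regression detection: flag functions whose self time
// grew by more than a threshold between two aggregated windows.
type FnSelfTimes = Record<string, number>; // function name -> aggregated self time (ms)

function regressions(before: FnSelfTimes, after: FnSelfTimes, thresholdPct = 20): string[] {
  return Object.keys(after).filter((fn) => {
    const prev = before[fn] ?? 0;
    if (prev === 0) return false; // new function: no baseline to compare against
    const growthPct = ((after[fn] - prev) / prev) * 100;
    return growthPct > thresholdPct;
  });
}
```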
Performance Overhead#
Continuous profiling is designed for production use with minimal overhead.
Expected Overhead#
| Configuration | CPU Overhead | Memory Overhead |
|---------------|--------------|-----------------|
| Default (10ms sample, 10s profile) | ~1-2% | ~5-10 MB |
| Conservative (20ms sample, 5s profile) | < 1% | ~3-5 MB |
| Aggressive (5ms sample, 30s profile) | ~3-5% | ~15-20 MB |
Factors That Affect Overhead#
- Sample interval -- lower intervals (more frequent sampling) mean higher overhead
- Profile duration -- longer sessions capture more data but use more memory
- Stack depth -- deeply nested call stacks take longer to capture
- Concurrency -- more concurrent requests mean more diverse stacks to sample
Minimizing Overhead#
If you're concerned about overhead:
```typescript
profiling: {
  enabled: true,
  sampleIntervalMs: 20,     // Sample less frequently
  profileDurationMs: 5000,  // Shorter profile sessions
  uploadIntervalMs: 120000, // Upload less frequently
  adaptive: true,           // Let the SDK manage overhead
  maxCpuOverhead: 1,        // Cap at 1% CPU overhead
}
```
Monitoring Profiler Overhead#
The SDK reports its own overhead as a metric:
- `justanalytics.profiler.overhead_percent` -- CPU overhead of the profiler itself
- `justanalytics.profiler.profiles_captured` -- Number of profiles captured
- `justanalytics.profiler.upload_errors` -- Number of failed uploads
These metrics appear in your Infrastructure Metrics dashboard.
When to Use Profiling#
Always-On Continuous Profiling#
Enable continuous profiling in production for:
- Services that handle latency-sensitive requests
- Services where you want to detect regressions automatically
- Any service where you want baseline performance data
Targeted Manual Profiling#
Use JA.startProfiling() / JA.stopProfiling() for:
- Investigating a specific slow endpoint
- Benchmarking before and after an optimization
- Profiling batch jobs or background workers
- Capturing profiles during a specific scenario
When NOT to Profile#
- Extremely CPU-constrained environments -- if your service is already at 95% CPU, adding profiling overhead (even 1-2%) may not be acceptable
- Short-lived processes -- Lambda functions or one-off scripts may not benefit from continuous profiling; use manual profiling instead
- Sensitive environments -- function names and file paths are included in profiles; ensure your compliance requirements allow this
Troubleshooting#
Profiles Not Appearing#
- Verify profiling is enabled: check that `profiling.enabled` is `true`
- Check the SDK logs: enable debug logging with `JA.init({ logLevel: 'debug' })`
- Verify API key permissions: the API key must have the `profiles:write` scope
- Check network connectivity: the SDK must be able to reach `api.justanalytics.app`
High Overhead#
- Increase `sampleIntervalMs` to 20ms or higher
- Decrease `profileDurationMs` to 5000ms
- Enable `adaptive: true` with a `maxCpuOverhead` ceiling
- Check whether another profiling tool is also running (e.g., Node.js `--inspect`)
Missing Function Names#
- Ensure source maps are uploaded (see Source Maps)
- V8 may inline small functions -- try running with `--no-turbo-inlining` (not recommended for production)
- Native C++ functions appear as `[native]` and cannot be resolved to JavaScript source locations