Programming

How to Set Up Continuous Profiling at Scale with Pyroscope 2.0

2026-05-02 22:30:33

Introduction

Continuous profiling is becoming a standard part of the observability stack for good reason. It tells you why your code is slow or expensive, not just that it is. Metrics show high CPU, logs show slow requests, traces pinpoint the service, but only a profile reveals which function and which line are burning cycles. As systems grow more complex, this level of visibility becomes essential. OpenTelemetry recently declared its Profiles signal alpha, making profiling a first-class observability signal. Now, Pyroscope 2.0—a ground-up rearchitecture of the open-source continuous profiling database—makes profiling more cost-effective at scale, with native support for OpenTelemetry Protocol (OTLP) profiling. This guide walks you through setting up continuous profiling with Pyroscope 2.0, from understanding the benefits to deploying and optimizing.


What You Need

  1. Docker installed locally, or a Kubernetes cluster with Helm for production (see Tips).
  2. A service you can restart with new flags or environment variables.
  3. Optional: an OpenTelemetry Collector, if you plan to ingest profiles via OTLP (Step 4).

Step-by-Step Guide

Step 1: Understand the Case for Always-On Profiling

Before diving into setup, recognize why continuous profiling matters. It cuts infrastructure costs by revealing exactly which functions consume CPU and memory, enabling targeted optimizations instead of overprovisioning. It accelerates root cause analysis—compare profiles from before and after a regression to pinpoint changed code paths in minutes, without reproducing in staging. Profiling also closes the observability gap: while distributed tracing shows wall-clock time, profiling shows where CPU spends that time. For tail latency, Pyroscope captures p99 spikes as they happen.

Step 2: Deploy Pyroscope 2.0 Server

Pyroscope 2.0 rearchitects the original Cortex-based database for scalability. Deploy using Docker:

  1. Pull the image: docker pull grafana/pyroscope:latest
  2. Run with default config: docker run -d --name pyroscope -p 4040:4040 grafana/pyroscope
  3. Open http://localhost:4040 to verify the UI loads.

For production, use Kubernetes via Helm charts (see Tips).
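The Docker steps above can also be captured in a Compose file for a repeatable local deployment. This is a minimal sketch; the volume name and data path are illustrative, so check the image documentation for the actual storage location:

```
# docker-compose.yaml -- minimal local Pyroscope 2.0 deployment (sketch)
services:
  pyroscope:
    image: grafana/pyroscope:latest
    ports:
      - "4040:4040"           # UI and ingest API
    volumes:
      - pyroscope-data:/data  # persist profiles across restarts (path assumed)
volumes:
  pyroscope-data:
```

Run docker compose up -d, then open http://localhost:4040 as in step 3.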

Step 3: Configure Profiling Agents

Install agents in your applications. For example, in a Java service using the Pyroscope Java agent:

  1. Add the JAR: -javaagent:/path/to/pyroscope.jar
  2. Set environment variables: PYROSCOPE_SERVER_ADDRESS=http://localhost:4040, PYROSCOPE_APPLICATION_NAME=my-service
  3. Restart the service. Profiles will begin flowing.

For languages without a native agent, use the OpenTelemetry SDK with the profiling signal enabled, sending to the OTLP endpoint.
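Putting the Java agent settings together, a launch script might look like the following. This is a sketch: the JAR path, service name, and label values are placeholders for your own, and the optional labels variable follows the Java agent's documented environment-variable convention:

```
#!/bin/sh
# Attach the Pyroscope Java agent and point it at the server (Step 2).
export PYROSCOPE_SERVER_ADDRESS=http://localhost:4040
export PYROSCOPE_APPLICATION_NAME=my-service
# Optional: tag profiles so they can be filtered in the UI (Step 5).
export PYROSCOPE_LABELS="env=production,region=us-east-1"

exec java -javaagent:/path/to/pyroscope.jar -jar my-service.jar
```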

Step 4: Ingest Profiles via OpenTelemetry Protocol (OTLP)

Pyroscope 2.0 natively supports OTLP profiling. This enables ingesting profiles using the emerging standard without a separate agent. To use:

  1. Deploy an OpenTelemetry Collector with the profiling receiver enabled.
  2. Configure the collector to export profiles to Pyroscope (an otlp exporter with endpoint "localhost:4317").
  3. Ensure your application is instrumented with OpenTelemetry SDKs that generate profile data (currently alpha).

This approach future-proofs your observability pipeline and aligns with OpenTelemetry’s roadmap.
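A minimal collector configuration for this pipeline might look like the following. Treat it as a sketch: the profiles signal is alpha, so receiver/pipeline names may change between collector releases, and the exporter endpoint (taken from step 2 above) must be adjusted to wherever your Pyroscope instance actually listens:

```
# otel-collector.yaml -- forward OTLP profiles to Pyroscope (sketch)
receivers:
  otlp:
    protocols:
      grpc:                       # applications send OTLP profiles here

exporters:
  otlp:
    endpoint: "localhost:4317"    # Pyroscope's OTLP address (adjust per deployment)
    tls:
      insecure: true              # local/dev only

service:
  pipelines:
    profiles:                     # alpha signal; name per current collector convention
      receivers: [otlp]
      exporters: [otlp]
```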

Step 5: Analyze Profiles in the UI

Navigate to the Pyroscope web interface at http://localhost:4040. You can:

Use the search bar to filter by application, profile type (cpu, memory, goroutines), or tags.

Inspect the flame graph to see which functions consume the most resources, and drill down frame by frame.

Use the comparison view to diff two time ranges side by side (see Step 6).
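Filtering in the query bar pairs a profile type ID with a PromQL-style label selector. A sketch of the syntax, with an illustrative service name and tag:

```
# profile type ID + label selector, as entered in Pyroscope's query UI
process_cpu:cpu:nanoseconds:cpu:nanoseconds{service_name="my-service", env="production"}
```

The five-part type ID (name, sample type, unit, period type, period unit) selects which kind of profile to render; the braces narrow it to matching tags.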

Step 6: Use Profiles for Root Cause Analysis

When an incident occurs, profiling helps you find the root cause fast:

  1. Identify the affected service from metrics/traces.
  2. Open Pyroscope and select the service.
  3. Choose a time range covering the incident (use the compare feature against a baseline).
  4. Look for new functions or increased CPU/memory in the diff.
  5. Drill down to the exact line of code causing the regression.

This eliminates the need for ad-hoc logging or guesswork.

Step 7: Optimize Infrastructure Costs

Continuous profiling provides data-driven input for cost reduction: once you know which functions dominate CPU and memory, you can optimize them directly instead of overprovisioning capacity around them.

Pyroscope 2.0's rearchitecture also reduces storage and query costs on the profiling side itself, making it feasible to profile all of your services continuously without prohibitive expense.

Tips for Success
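For the production Kubernetes deployment mentioned in Step 2, the Grafana Helm chart is the usual route. A sketch of the commands; the release name and namespace here are arbitrary choices, and you will typically supply a values file to size storage and replicas:

```
# Add the Grafana Helm repository and install the Pyroscope chart
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install pyroscope grafana/pyroscope \
  --namespace pyroscope --create-namespace
```

Start by profiling a handful of high-traffic services, confirm the overhead is acceptable, then roll the agents out more broadly.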

By following these steps, you’ll gain deep code-level visibility into your production systems, reduce infrastructure costs, and accelerate incident response—all with a cost-effective, open-source solution.
