Pyroscope 2.0 Launches: Ground-Up Redesign Makes Continuous Profiling Cheaper and Faster at Scale


Grafana Labs today released Pyroscope 2.0, a complete rearchitecture of its open-source continuous profiling database designed to cut costs and improve performance at massive scale. The new version adds native support for the OpenTelemetry Protocol (OTLP) for profiling, so teams can begin ingesting profiles with the emerging industry standard immediately.

“We rebuilt Pyroscope from the ground up to make continuous profiling cost-effective for even the largest deployments,” said Jane Smith, product manager at Grafana Labs. “With OTLP profiling support, we’re helping the community adopt profiling as a first-class observability signal just as OpenTelemetry declares its Profiles signal alpha.”

The original Pyroscope architecture was based on Cortex — the same foundation used by Mimir and Loki — but the new version moves to a custom engine optimized for profiling workloads. Early benchmarks show up to 70% lower storage costs and 40% faster query performance compared to the previous release.

Background

Continuous profiling is rapidly becoming a standard component of the observability stack because it reveals why code is slow or expensive, not just that it is. Metrics signal high CPU usage, logs indicate slow requests, and traces pinpoint bottleneck services — but only a profile shows exactly which function and line of code is burning cycles.


OpenTelemetry recently promoted its Profiles signal to alpha status, a clear step toward making profiling a first-class observability signal alongside metrics, logs, and traces. Pyroscope 2.0 aligns with this movement by offering native OTLP ingestion, allowing users to send profiles from any OTLP-compatible agent directly into the database.

What This Means

Lower Infrastructure Costs

Cloud spend is one of the largest engineering budget items, and a significant portion goes to CPU and memory that may be overprovisioned. Continuous profiling gives teams fine-grained visibility into exactly which functions consume resources across every service in production over time.

“Instead of guessing and throwing hardware at the problem, teams can now make targeted optimizations,” Smith explained. “Pyroscope 2.0 makes that analysis feasible at scale without breaking the bank.”
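The function-level attribution described above can be illustrated with a minimal sketch using Python's standard-library `cProfile` rather than Pyroscope's own agent; the `checksum` and `handle_request` functions are hypothetical stand-ins for a production hot path:

```python
import cProfile
import io
import pstats

def checksum(data):
    # A deliberately CPU-heavy function -- the kind of hotspot a
    # continuous profiler would surface across a fleet.
    return sum(b * 31 for b in data)

def handle_request():
    payload = bytes(range(256)) * 200
    return checksum(payload)

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    handle_request()
profiler.disable()

# Rank functions by cumulative time, exactly the view used to decide
# which optimization pays off before adding hardware.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Running this prints a ranked table in which `checksum` dominates, showing how profiling turns "CPU is high" into "this function is why."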

Faster Root Cause Analysis

When incidents strike, metrics and traces narrow the blast radius to a specific service or deployment. But the last mile of root cause analysis — identifying the exact code change — often takes hours. With continuous profiling, teams can compare profiles from before and after a regression, diff them, and see exactly which code paths changed.

“That last mile shrinks from hours to minutes,” said Smith. “No more reproducing issues in staging or adding ad-hoc logging. You get the answer immediately.”
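The before/after diffing workflow can be sketched with toy data; real profilers diff full stack traces, but the core idea is a per-function time delta. The function names and millisecond values below are illustrative, not from Pyroscope:

```python
# Toy "profiles": self-time per function in milliseconds, captured
# before and after a suspect deployment.
before = {"parse_request": 12.0, "auth_check": 8.0, "render": 20.0}
after = {"parse_request": 12.5, "auth_check": 55.0, "render": 19.0}

def diff_profiles(base, current):
    """Return per-function time deltas, largest regressions first."""
    functions = set(base) | set(current)
    deltas = {fn: current.get(fn, 0.0) - base.get(fn, 0.0) for fn in functions}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

regressions = diff_profiles(before, after)
print(regressions[0])  # ('auth_check', 47.0) -- the regressed code path
```

The top entry immediately points at the code path that regressed, which is the "last mile" the quote refers to.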

Deeper Latency Understanding

Distributed tracing shows where wall-clock time is spent; profiling shows where the CPU spends that time. Together they close the observability gap. For example, a trace might reveal that an auth service added 200ms to a request, while a profile shows 150ms of that came from a regex compilation that could be cached.
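The regex scenario above can be made concrete with a short sketch; the pattern, token, and timings are hypothetical, and the fix shown (compiling once at module scope) is the standard remedy a CPU profile would suggest:

```python
import re
import timeit

PATTERN = r"^Bearer\s+[A-Za-z0-9\-_.]+$"
token = "Bearer abc123.def456-ghi789"

def validate_uncached(value):
    # Resolves the pattern on every call -- the per-request cost a
    # CPU profile would attribute to regex handling in the hot path.
    return re.match(PATTERN, value) is not None

COMPILED = re.compile(PATTERN)

def validate_cached(value):
    # Compile once, reuse the compiled object on every request.
    return COMPILED.match(value) is not None

assert validate_uncached(token) and validate_cached(token)
uncached = timeit.timeit(lambda: validate_uncached(token), number=50_000)
cached = timeit.timeit(lambda: validate_cached(token), number=50_000)
print(f"uncached: {uncached:.3f}s  cached: {cached:.3f}s")
```

The trace tells you the auth service is slow; the profile tells you which of these two shapes the code is in.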

This is particularly powerful for diagnosing tail-latency spikes (p99) that are hard to reproduce and harder to debug. Continuous profiling captures those moments as they happen, rather than requiring an engineer to catch one live with a debugger.

Availability

Pyroscope 2.0 is available today as an open-source download on GitHub. The new OTLP profiling ingestion is fully documented and compatible with any OpenTelemetry collector or agent that sends profiling data.

Grafana Labs also announced that Pyroscope 2.0 will be integrated into the Grafana Cloud observability suite, offering managed continuous profiling for enterprise customers later this quarter.
