Continuous Profiling in Kubernetes Using Pyroscope

Developers often need to investigate performance bottlenecks in production applications to determine the root cause of a problem. To do this, you typically rely on information collected through logs and instrumentation. Unfortunately, this approach is usually time-consuming and does not provide enough detail about the underlying problem.

A modern and more advanced approach is to use profiling techniques and tools that highlight the slowest application code, that is, the code paths that consume the most resources.

In this blog post, we’ll talk about continuous profiling, and then profile some microservices running on Kubernetes using an open source tool called Pyroscope.

What is profiling?

Code must be analyzed, debugged, and revised to determine the most effective way to make it run faster. Using a profiling tool to inspect an application’s code helps us identify and fix performance bottlenecks. It can quickly reveal how an application is performing and let programmers drill into the key details behind poor performance. The result is a leaner application that consumes less CPU and memory and delivers an even better user experience!

Profiling is the analysis of a program that measures its memory usage and time complexity, or the frequency and duration of its function calls. Profiling information helps improve program performance. Profilers can trace every single line of code.
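To make this concrete, here is a minimal sketch of profiling in Python using the standard library’s cProfile module. The function names are made up for illustration; the point is that the profiler’s report shows exactly which functions the time went to.

```python
import cProfile
import io
import pstats

# Two hypothetical functions: one deliberately expensive, one cheap.
def slow_work():
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def fast_work():
    return 42

def handle_request():
    slow_work()
    fast_work()

# Run the handler under the profiler.
profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Sort by cumulative time so the biggest offenders appear first.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
print(report)
```

In the printed report, `slow_work` dominates the cumulative time column, which is exactly the kind of signal a profiler gives you that logs cannot.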

Continuous profiling

Continuous profiles are production code profiles that make troubleshooting faster and easier by letting you analyze code-level performance across your environment over time. Because profiles are collected continuously, they can quickly surface features (or lines of code) that become more resource-intensive after new code is introduced. Optimizing them can reduce end-user latency and your cloud provider bill.
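The core idea can be sketched in a few lines: compare profile snapshots taken before and after a deploy to spot functions whose cost jumped. The function names and sample counts below are made up for illustration.

```python
# Hypothetical profile snapshots: function name -> CPU sample count.
before = {"render": 120, "parse": 40, "db_query": 60}
after = {"render": 125, "parse": 210, "db_query": 58}

# Flag functions whose sample count grew by more than 50% after the deploy.
regressions = {
    fn: after[fn] - before.get(fn, 0)
    for fn in after
    if after[fn] > before.get(fn, 0) * 1.5
}
print(regressions)  # → {'parse': 170}
```

A continuous profiler automates exactly this comparison for you, across every function and every deploy.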

What continuous profilers are out there?

So here is a list of some tools you may have come across:


Pyroscope is an open source platform consisting of a server and agents. It allows the user to efficiently collect, store, and query profiling data with minimal CPU and disk overhead.


Parca collects, stores, and makes profiling data available for querying over time. It is open source and can be deployed in production environments. Parca focuses on two main types of profiles: tracing and sampling.


Datadog Continuous Profiler analyzes and compares code performance all the time and in any environment, including production. It identifies hard-to-replicate production problems caused by inefficient code and also provides insights from automated code profiling.

Google Cloud Profiler is a low-overhead statistical profiler that continuously collects CPU usage and memory allocation information from your production applications. It offers actionable application profiles, low-impact production profiling, and broad platform support.

Why use Pyroscope?

Before we start exploring Pyroscope, let’s see how it differs from a few other continuous profiling tools available in the market. Datadog and Google Cloud Profiler are widely used in the industry. As one Reddit user pointed out, here are a few reasons why Pyroscope stands out compared to the other two.

Comparison of Pyroscope vs. DataDog and Google Cloud Profiler


Pyroscope focuses on building a storage engine specifically designed for profiling data, so it can store and query that data as efficiently as possible. An agent/server model is used to send profiles from applications to the Pyroscope server:

Agent and server model


Pyroscope allows profiling agents for any language to send data to it, and its storage engine stores that data efficiently. For example, Pyroscope ships agents for Go, Python, Ruby, eBPF, Java, .NET, PHP, and Rust.

On the other hand, Parca takes a slightly different approach and relies on eBPF for compiled languages like C, C++, Go, etc. At the time of writing this article, support for other languages is in progress. Similar to Pyroscope, it can also ingest pprof-formatted profiles from HTTP endpoints.

In theory, since all of these languages are eventually compiled and run on the kernel, eBPF should work with any of them. In practice, however, if you run eBPF against interpreted languages like Python, the function names are often not human-readable, because these runtimes do not expose their symbols the way compiled binaries do.
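The flip side is easy to demonstrate: in an interpreted language the runtime itself knows every function’s name, which is why a language-level profiler can always emit readable frames. A tiny sketch (function names are made up):

```python
import inspect

# A language-level profiler sees frames like these, with readable names;
# an eBPF profiler sampling from the kernel would mostly see the opaque
# C frames of the interpreter loop instead.
def checkout():
    return current_stack_names()

def current_stack_names():
    # Walk the Python call stack and collect human-readable function names.
    return [frame.function for frame in inspect.stack()]

names = checkout()
print(names[:2])  # → ['current_stack_names', 'checkout']
```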

For this reason, Pyroscope supports both language-specific profiling and eBPF profiling. This comes at the cost of more integration work for the language-specific agents than for eBPF, which runs once at the kernel level. But it also comes with the advantage of more actionable, readable profiles.

How to install Pyroscope?

You can start the server followed by the agent no matter what you’re using, Docker, Linux, or if you’re looking for Ruby or Go docs, Pyroscope has you covered. Whether you aim at ten seconds or ten months of profiling data, the custom-designed storage engine makes queries fast.

– Pyroscope site

We will be using minikube to run the Kubernetes cluster. Create a cluster with minikube:

minikube start

Add Helm repo:


helm repo add pyroscope-io https://pyroscope-io.github.io/helm-chart

Helm chart installation:


helm install pyroscope pyroscope-io/pyroscope --set service.type=NodePort

To verify that the Pyroscope Helm chart has been installed successfully:

helm list

Check if Pyroscope is running:

kubectl get pods

Now that we have Pyroscope running in our Kubernetes cluster, we’ll continue with the steps to profile an application with it.

We will be using Google’s microservices demo for this demonstration.

Integrating the Google Microservices demo with Pyroscope

We will modify our container images to use the Pyroscope binary. This binary will start our app and inject itself for monitoring. You can refer to the Pyroscope documentation for details.
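Conceptually, here is a rough sketch in Python of what a wrapper like `pyroscope exec` does: start the target program as a child process, then periodically sample it while it runs. This is a toy model, not Pyroscope’s implementation; we only record timestamps, whereas a real profiler would capture stack traces (which is why the SYS_PTRACE capability is needed later).

```python
import subprocess
import sys
import time

def exec_and_sample(cmd, interval=0.05):
    """Launch cmd as a child process and 'sample' it until it exits."""
    samples = []
    child = subprocess.Popen(cmd)
    samples.append(time.monotonic())  # take one sample right away
    while child.poll() is None:       # keep sampling while the child runs
        time.sleep(interval)
        samples.append(time.monotonic())
    return child.returncode, samples

# Run a short-lived hypothetical workload under the sampler.
rc, samples = exec_and_sample([sys.executable, "-c", "sum(range(10**6))"])
print(rc, len(samples))
```

The wrapped process is completely unaware it is being profiled, which is what makes this approach attractive for existing container images.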

We will be working on the Python, Go, and .NET microservices from Google’s microservices demo. All modifications have been pushed to the fork of Google’s microservices demo on GitHub, so let’s take a look at these changes for each service.

To try out the Pyroscope demo with Google’s microservices, you don’t need to build the Docker images yourself. You can simply apply the Kubernetes manifests as described in the “Get profiling data from the microservices” section.


Python

We will be using an email service application written in Python. The following changes to the Dockerfile are required to use the Python application with Pyroscope.


COPY --from=pyroscope/pyroscope:latest /usr/bin/pyroscope /usr/bin/pyroscope
CMD [ "pyroscope", "exec", "python", "email_server.py" ]

After editing the Dockerfile, within the same folder, we proceeded to build and push the image.


docker build . -t beellzrocks/emailservice:latest

docker push beellzrocks/emailservice:latest


.NET

We will be using the Cart Service app for .NET. To use a .NET application with Pyroscope, the following changes are required in the Dockerfile.


COPY --from=pyroscope/pyroscope:latest /usr/bin/pyroscope /usr/bin/pyroscope
ENTRYPOINT ["pyroscope", "exec", "-spy-name", "dotnetspy", "/app/cartservice"]

After editing the Dockerfile, we proceeded to build and push the image.

Go

We’ll take the Product Catalog Service app written in Go. To use the Go app with Pyroscope, the following changes must be made in “server.go”.

import (
  "os"

  pyroscope "github.com/pyroscope-io/client/pyroscope"
)

func main() {
  pyroscope.Start(pyroscope.Config{
    ApplicationName: os.Getenv("APPLICATION_NAME"),
    ServerAddress:   os.Getenv("SERVER_ADDRESS"),
  })
  // code here
}

After editing server.go, we proceeded to build and push the image.

Get profiling data from the microservices

We modified the Kubernetes manifests to use our images with Pyroscope. The kubernetes-manifests.yaml file contains resources for all the applications. We edited it to use the images we built in the steps above for the email service, cart service, and product catalog service.

  - name: server
    image: beellzrocks/emailservice

When running Pyroscope in Kubernetes, we need to make the following changes:

  • Add the SYS_PTRACE capability.
  • Tell the agent the location of the Pyroscope server and the name of the application using environment variables.
  - name: server
    env:
    - name: PYROSCOPE_SERVER_ADDRESS # to change the Pyroscope server port, change this value
      value: "http://pyroscope:4040"
    - name: PYROSCOPE_APPLICATION_NAME # application name shown in the UI
      value: "email.service"
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE

Now, to deploy all the services, you can apply the Kubernetes manifest to your cluster.

kubectl apply -f kubernetes-manifests.yaml

Get the service URL for Pyroscope:


minikube service pyroscope

| NAMESPACE |   NAME    | TARGET PORT |            URL            |
| default   | pyroscope | http/4040   | |
  Opening service default/pyroscope in default browser

To access the Pyroscope user interface, go to the URL printed by minikube (your address will be different).

Pyroscope UI with Pyroscope Server CPU

As you can see in the screenshot above, Pyroscope itself has low CPU usage while storing data locally. It uses the Badger key-value database for local storage.

Pyroscope resource usage

Monitoring Kubernetes pods is also important in the context of resource utilization and cost control. Pyroscope itself uses few resources and adds little overhead.

Pyroscope CPU Usage

Observations with Pyroscope

Pyroscope profiles code using different agents depending on the programming language. Here are some example flame graphs for the applications profiled with Pyroscope.

Pyroscope with Go Product Catalog Service app

Pyroscope with the .NET Cart Service app

Pyroscope with Python Email app


Performance is a critical factor in meeting end-user expectations. And if performance issues do occur, you should be ready to diagnose them before they affect the end-user experience.

Hence, keep optimizing your apps and fixing issues right away to continue delivering ultra-fast app performance to users with tools like Pyroscope. Pyroscope offers a layer of insight that helps you understand how to improve the performance of your code in production and reduce cloud infrastructure costs.

That’s a wrap, folks! I hope the article was useful and that you enjoyed reading it. I’d love to hear your thoughts and experiences; let’s connect on LinkedIn or in the comments section below.
