Requirements for Running K8ssandra for Development

K8ssandra is a full stack to run Apache Cassandra® in production. As such, it comes with several components that can consume a lot of resources and make it difficult to run on a developer laptop. Let’s explore how we can configure K8ssandra for this environment and run some simple benchmarks to determine what performance we can expect.

Expectations Management

The K8ssandra Quickstart is an excellent guide for performing a full installation of K8ssandra on a developer laptop and trying out the different components of the K8ssandra stack. While this is a great way to get your first hands-on experience with K8ssandra, let's state the obvious: running K8ssandra locally on a developer laptop is not a performance-oriented setup. In this blog post, we'll start Apache Cassandra® locally and then explain how to run benchmarks that help assess the level of performance (especially throughput) you can expect from a developer laptop deployment.

Our goal was to achieve the following:

  • Run the entire stack, if possible, with at least three Cassandra nodes and at least one Stargate node
  • Achieve reasonable startup times
  • Select a developer setup that is stable enough to withstand moderate workloads (50 to 100 operations/sec)
  • Come up with some minimum requirements and recommended K8ssandra settings

Using the correct settings

Cassandra can operate with fairly limited resources as long as you don't put too much pressure on it. For example, for the Reaper project we run our integration tests with CCM (Cassandra Cluster Manager) configured with a 256MB heap size. On top of that, the JVM is allowed to allocate up to an additional 256MB of off-heap memory, letting Cassandra use up to 512MB of RAM.
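
This is not the exact Reaper test harness, but as a rough sketch, a similarly constrained local cluster can be started with CCM by exporting the heap environment variables that cassandra-env.sh honors (the cluster name, version, and new generation size below are just example values):

# Hedged sketch: a small local CCM cluster with a 256MB heap per node.
# cassandra-env.sh uses MAX_HEAP_SIZE/HEAP_NEWSIZE when both are set.
export MAX_HEAP_SIZE=256M
export HEAP_NEWSIZE=100M

ccm create dev_cluster -v 3.11.10 -n 3   # 3-node local cluster (example name/version)
ccm start                                # start all nodes with the constrained heap
ccm status                               # confirm all nodes are up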

If we want to run K8ssandra with limited resources, we will need to set the heap sizes appropriately in our Helm values files.

Adjust heap sizes in K8ssandra

The K8ssandra Helm charts allow us to set the heap sizes for the Cassandra and Stargate pods separately.

Cassandra

For Cassandra, heap and new generation sizes can be set at the cluster level, or at the datacenter level (K8ssandra will support multiple DC deployments in a future release):

cassandra:
  version: "3.11.10"
  ... 
  ...
  # Cluster level heap settings
  heap: {}
   #size:
   #newGenSize:

  datacenters:
  - name: dc1
    size: 3
    ... 
    ... 
    # Datacenter level heap settings
    heap: {}
      #size:
      #newGenSize:

By default, these values are not set, which allows Cassandra to perform its own calculations based on available RAM, with the following formula applied:

max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
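
For example, with 8GB of RAM visible to the container, this works out to max(min(4GB, 1024MB), min(2GB, 8GB)) = max(1024MB, 2GB) = 2GB of heap per node.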

The catch when running multiple Cassandra nodes on the same machine is that they will all see the same total available RAM, without being aware that other Cassandra nodes may be running alongside them. With 8GB of RAM allocated to Docker, each Cassandra node will compute a 2GB heap for itself. With a set of 3 nodes, that's already 6GB of RAM, not counting the additional off-heap memory each JVM is allowed to use. This doesn't leave much RAM for the other components that K8ssandra includes, such as Grafana, Prometheus, and Stargate.

The takeaway here: it is not a good idea to leave the heap settings blank in a development environment, particularly when several Cassandra instances will be packed onto the same host machine. (By default, K8ssandra does not allow multiple Cassandra nodes on the same Kubernetes worker node. For this post, we're using kind to run multiple worker nodes on the same OS instance – or virtual machine in the case of Docker Desktop.)

The heap size you choose will directly affect the throughput you can achieve (although it's not the only limiting factor). A smaller heap will trigger more frequent garbage collections, leading to more stop-the-world pauses that directly affect throughput and response times. It also increases the odds of running out of memory if the workload is too heavy, as objects cannot complete their lifecycle fast enough for the available heap space.

We will set the heap size to 500MB with a 200MB new generation globally for the cluster as follows:

cassandra:
  version: "3.11.10"
  ... 
  ...
  # Cluster level heap settings
  heap: 
   size: 500M
   newGenSize: 200M

  datacenters:
  - name: dc1
    size: 3
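
If you ever need different heap settings per datacenter, the same fields can be set at the datacenter level instead; a hedged sketch based on the structure shown above:

cassandra:
  version: "3.11.10"
  datacenters:
  - name: dc1
    size: 3
    # Datacenter level heap settings override the cluster level ones
    heap:
      size: 500M
      newGenSize: 200M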

Stargate

Since Stargate nodes are coordinator-only Cassandra nodes running in a JVM, it is also necessary to set their maximum heap size:

stargate:
  enabled: true
  version: "1.0.9"
  replicas: 1
  ...
  ...
  heapMB: 256

Stargate nodes follow the same rule when it comes to off-heap memory: the JVM is allowed to use as much RAM off-heap as the size of the configured heap.

Since Stargate acts as a coordinator, it will likely keep objects in the heap longer, waiting for all nodes to respond to queries before it can acknowledge them and eventually return result sets to clients. It needs enough heap to do this without excessive garbage collection. Unlike Cassandra, Stargate does not compute its heap size based on the available RAM, so the value must be set explicitly.

During our tests, we noticed that 256MB was a good starting value for getting a stable Stargate pod. In production, you may want to adjust this value for optimal performance.

Benchmarking environment

Our setup to run the benchmarks was as follows:

  • Apple MacBook Pro 2019 – i7 (6 cores) – 32GB RAM – 512GB SSD
  • Docker Desktop 3.1.0
  • kind 0.7.0
  • Kubernetes 1.17.11
  • kubectl v1.20.2

Note that we used a fairly robust environment as we ran our tests on a 2019 Apple MacBook Pro with a six-core i7 CPU and 32GB of RAM.

We used the kind deployment guide found in the K8ssandra documentation to spin up a k8s cluster with 3 worker nodes.
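
As a rough illustration (the configuration in the K8ssandra docs may differ and include extra settings such as port mappings), a minimal kind.config.yaml with three worker nodes looks something like this:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane   # one control-plane node
- role: worker          # three worker nodes so Cassandra pods can spread out
- role: worker
- role: worker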

Docker Desktop allows you to set the resources assigned to it by clicking on its icon in the status bar, then going to “Preferences…”:

Allocating resources in Docker Desktop

Then click Resources in the left menu, which will allow you to set the number of cores and how much RAM Docker can use overall:

Allocating resources in Docker Desktop

Running the benchmarks

We used NoSQLBench to run moderate-load benchmarks. It comes with a convenient Docker image that we can use right away to run stress jobs in our k8s cluster.

This is the Helm values file that we used as a base to spin up our cluster, which we'll call three_nodes_cluster_with_stargate.yaml:

cassandra:
  datacenters:
  - name: dc1
    size: 3
  ingress:
    enabled: false

stargate:
  enabled: true
  replicas: 1
  ingress:
    host: 
    enabled: true

    cassandra:
      enabled: true

medusa:
  multiTenant: true
  storage: s3

  storage_properties:
      region: us-east-1

  bucketName: k8ssandra-medusa
  storageSecret: medusa-bucket-key

We want Stargate to act as our Cassandra gateway, and enabling Medusa requires us to set up a secret (remember, we want to run the entire stack).

You will have to adjust the Medusa storage settings to match your setup (bucket and region), or turn Medusa off entirely if you don't have access to an S3 bucket:

medusa:
  enabled: false

In addition to AWS S3, future versions of Medusa will add support for S3-compatible backends such as MinIO, as well as local storage configurations.
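
If you keep Medusa enabled, you'll need an S3 bucket matching the bucketName value from the Helm values file. A hedged sketch using the AWS CLI, assuming your AWS credentials are already configured locally (the bucket name and region are the example values used above):

# Create the bucket referenced by bucketName in the values file
aws s3api create-bucket --bucket k8ssandra-medusa --region us-east-1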

We can create the secret for Medusa by applying the following YAML manifest:

apiVersion: v1
kind: Secret
metadata:
  name: medusa-bucket-key
type: Opaque
stringData:
  # Note that this currently has to be set to medusa_s3_credentials!
  medusa_s3_credentials: |-
    [default]
    aws_access_key_id = <your AWS access key>
    aws_secret_access_key = <your AWS secret key>

You will notice that this Helm values file lacks heap settings. We did this on purpose, so that we could set them when calling helm install, with different values for our different tests.

To fully set up our environment, we performed the following steps:

  1. Create the kind cluster: kind create cluster --config ./kind.config.yaml
  2. Configure and install Traefik (a sketch follows this list)
  3. Create a namespace: kubectl create namespace k8ssandra
  4. (If Medusa is enabled) Create the secret: kubectl apply -f medusa_secret.yaml -n k8ssandra
  5. Deploy K8ssandra with your desired heap settings: helm repo add k8ssandra https://helm.k8ssandra.io/stable && helm repo update && helm install k8ssandra k8ssandra/k8ssandra -n k8ssandra -f /path/to/three_nodes_cluster_with_stargate.yaml --set cassandra.heap.size=500M,cassandra.heap.newGenSize=250M,stargate.heapMB=300
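
Step 2 is covered in detail in the K8ssandra documentation; purely as a hedged sketch, a Helm-based Traefik installation looks something like this (the namespace and default chart values here are assumptions, not the exact configuration from the guide):

# Hedged sketch of step 2: install Traefik with Helm
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik -n traefik --create-namespace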

You will have to wait for the cassandradatacenter resource and then the Stargate pod to be ready before you can start interacting with Cassandra. This usually takes about 7 to 10 minutes.

You can wait for the cassandradatacenter to be ready with the following kubectl command:

kubectl wait --for=condition=Ready cassandradatacenter/dc1 --timeout=900s -n k8ssandra

Then wait for Stargate to be ready:

kubectl rollout status deployment k8ssandra-dc1-stargate -n k8ssandra

Once Stargate is ready, the above command should output something like this:

deployment "k8ssandra-dc1-stargate" successfully rolled out.

You can run the NoSQLBench stress workload by creating a Kubernetes Job. You will need the superuser credentials so that NoSQLBench can connect to the Cassandra cluster. You can retrieve these credentials with the following commands (requires jq to be installed):

SECRET=$(kubectl get secret "k8ssandra-superuser" -n k8ssandra -o=jsonpath="{.data}")
echo "Username: $(jq -r '.username' <<< "$SECRET" | base64 -d)"
echo "Password: $(jq -r '.password' <<< "$SECRET" | base64 -d)"

Then create the NoSQLBench job, which will start automatically:

kubectl create job --image=nosqlbench/nosqlbench nosqlbench -n k8ssandra \
    -- java -jar nb.jar cql-iot rampup-cycles=1k cyclerate=100 \
    username=<superuser username> password=<superuser pass> \
    main-cycles=10k write_ratio=7 read_ratio=3 async=100 \
    hosts=k8ssandra-dc1-stargate-service --progress console:1s -v

This will run a 10k cycle stress workload at 100 operations/sec, with 70% writes and 30% reads, allowing up to 100 async queries in flight. Note that we use the Stargate service as the contact host for NoSQLBench (the exact service name will vary depending on your Helm release name).

While the job is running, you can follow its logs with the following command:

kubectl logs job/nosqlbench -n k8ssandra --follow

Latency metrics can be found at the end of the run, and since we're running at a fixed rate, we're interested in the latency figures that take coordinated omission into account:

kubectl logs job/nosqlbench -n k8ssandra \
  | grep cqliot_default_main.cycles.responsetime

Which should produce something like this:

12:41:18.924 [cqliot_default_main:008] INFO  i.n.e.c.m.PolyglotMetricRegistryBindings - 
  timer added: cqliot_default_main.cycles.responsetime
12:42:58.788 [main] INFO  i.n.engine.core.ScenarioResult - type=TIMER, 
  name=cqliot_default_main.cycles.responsetime, count=10000, min=1560.064, max=424771.583, 
  mean=21894.6342016, stddev=45876.836258003656, median=5842.175, p75=17157.119, 
  p95=100499.455, p98=187908.095, p99=263397.375, p999=384827.391, mean_rate=100.03389528501059, 
  m1=101.58021531751795, m5=105.18698132587139, m15=106.3340149754869, rate_unit=events/second, 
  duration_unit=microseconds

As Cassandra operators, we usually focus on p99 latency: p99=263397.375. That's roughly 263 milliseconds at p99, which is decent considering our environment (a laptop) and our throughput requirements (very low).
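
If you only care about the p99 figure, you can extract it directly from that log line; a small sketch:

kubectl logs job/nosqlbench -n k8ssandra \
  | grep cqliot_default_main.cycles.responsetime \
  | grep -o 'p99=[0-9.]*'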

Benchmark results

We ran our benchmarks with the following settings matrix:

  • Cores: 4 and 8
  • RAM: 4 GB and 8 GB
  • Operation rate: 100, 500, 1000 and 1500 operations/sec
  • Cassandra heap: 500MB
  • Stargate heap: 300MB and 500MB
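
Each combination in the matrix simply corresponds to a deployment with different --set overrides (reinstalling the release between runs). As an illustration, the 500MB Cassandra heap / 500MB Stargate heap combination would look roughly like this:

# newGenSize shown here is an example value
helm install k8ssandra k8ssandra/k8ssandra -n k8ssandra \
  -f /path/to/three_nodes_cluster_with_stargate.yaml \
  --set cassandra.heap.size=500M,cassandra.heap.newGenSize=200M,stargate.heapMB=500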

Running the full stack with three Cassandra nodes, one Stargate node, and 4GB allocated to Docker failed every attempt to run the stress tests, even moderate ones. Scaling down to a single Cassandra node, however, allowed the stress tests to run with the whole stack deployed using 4GB of RAM.

p99 latency: response time chart

Response times are very reasonable across all settings when using a rate of 100 operations/sec. Achieving higher throughput requires at least 8 cores, which allowed us to reach 1,000 operations/sec with a p99 latency of 290 ms. None of our tests sustained a constant rate of 1,500 operations/sec, as indicated by response times exceeding 9 seconds per operation.

p99 latency: response time graph

Conclusion

Getting the full K8ssandra experience on a laptop will require at least 4 cores and 8GB of RAM available for Docker, along with appropriately sized heaps for Cassandra and Stargate. If you don't have those resources available for Docker on your development machine, you can skip deploying features such as monitoring, Reaper, and Medusa, and reduce the number of Cassandra nodes. Using heap sizes of 500MB for Cassandra and 300MB for Stargate proved sufficient to sustain workloads between 100 and 500 operations per second, which should be enough for development purposes.
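
As a hedged illustration of that slimmed-down approach, a reduced values file could look something like the sketch below; the exact key for disabling monitoring differs between chart versions, so treat these as assumptions and check the chart's default values:

cassandra:
  datacenters:
  - name: dc1
    size: 1            # a single Cassandra node
reaper:
  enabled: false       # skip repair automation for local dev
medusa:
  enabled: false       # skip backups for local dev
# Monitoring (Prometheus/Grafana) can be disabled too; the key name depends on
# the chart version, so check the chart's values before relying on it.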

Note that starting the whole stack takes about 7 to 10 minutes at the time of writing on a fairly recent, high-end MacBook Pro, so expect your mileage to vary depending on your hardware. Part of that time is spent pulling images from Docker Hub, which means your internet connection will play a big role in how long the first startup takes.
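
One small mitigation, if you rebuild your kind cluster often, is to pull the heaviest images once on the host and load them into the kind nodes so they don't have to be downloaded again; a hedged sketch (the image name below is just an example, not necessarily the one K8ssandra deploys):

# Pull the image once on the host, then copy it into the kind nodes
docker pull cassandra:3.11.10
kind load docker-image cassandra:3.11.10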

Now you know how to configure K8ssandra for your development machine, and you're ready to start building cloud-native applications! Visit the Tasks section of our documentation site for detailed instructions on working with your deployed stack.
