Scaling Kubernetes with KEDA and RabbitMQ


Tim Nichols

CEO/Founder

January 16, 2025

TL;DR - configuring KEDA to scale on RabbitMQ queue metrics extends horizontal autoscaling to message-based systems. Here’s a quick guide!

KEDA 101

Kubernetes Event-Driven Autoscaling (KEDA) extends the Horizontal Pod Autoscaler to act on custom events and metrics. Specifically, KEDA opens up:

  • Event-driven scaling based on reactive or deterministic triggers (e.g., batch jobs or push notifications)
  • Scaling on custom metrics (e.g., queue size) that are a more accurate and responsive measure of load
  • Scale-to-zero behavior: pods can scale down completely when no load is present

Installing KEDA can improve the accuracy and responsiveness of Kubernetes autoscaling, and it extends the benefits of horizontal autoscaling to more types of workloads.
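To make this concrete, here’s a minimal sketch of a ScaledObject using KEDA’s cron scaler to scale a worker on a deterministic schedule; the batch-worker Deployment name and the schedule itself are illustrative assumptions:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
spec:
  scaleTargetRef:
    name: batch-worker      # hypothetical Deployment
  triggers:
  - type: cron
    metadata:
      timezone: America/New_York
      start: 0 8 * * *      # scale up at 08:00
      end: 0 18 * * *       # scale back down at 18:00
      desiredReplicas: "5"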

Scaling with KEDA and RabbitMQ

RabbitMQ is a popular open-source tool for message queuing. Infrastructure teams love it as a tool for decoupling microservices and ensuring resilient communication between them.

KEDA’s RabbitMQ scaler allows scaling based on queue length or message rate, ensuring timely processing of messages. This is incredibly powerful because it …

  • Unlocks horizontal autoscaling for message-based systems
  • Allows these systems to scale on a ‘true’ measure of load: CPU and memory utilization are not a perfect representation of load for a queue consumer.

Diagram - RabbitMQ

For example: Imagine a microservice that processes user-uploaded images. RabbitMQ holds processing tasks; KEDA scales the consumer pods to handle sudden spikes (say, after a marketing campaign) and scales back down when tasks subside.

Pros:

  • Horizontal Scaling for Asynchronous Tasks: Perfect when you have a surge of tasks that come in at variable rates.
  • Cost Savings with Scale-to-Zero: If the queue is empty, no need to pay for idle Pods.
  • Mature Ecosystem: RabbitMQ has extensive tooling and a large community.

Cons:

  • Potential Over-Provisioning: If your threshold is too low or the messages spike, you could spin up more Pods than necessary.
  • Multi-Queue Complexity: Coordinating multiple RabbitMQ queues can complicate your scaling logic.
  • Cold Starts: After scaling to zero, the first set of messages might wait longer while new Pods spin up.

Step-by-Step Guide to Scaling on RabbitMQ with KEDA

Prerequisites:

  • RabbitMQ Deployment: Install RabbitMQ using a Helm chart or any preferred method.
  • KEDA Installation: Ensure KEDA is installed in your Kubernetes cluster.
  • RabbitMQ Queue: Create a RabbitMQ queue named your-queue (a sketch follows this list).
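If you need to create the queue by hand, one option is rabbitmqadmin from inside the broker pod. This sketch assumes the Bitnami chart’s default pod name rabbitmq-0, the credentials used later in this guide, and that rabbitmqadmin ships in the image:

kubectl exec rabbitmq-0 -- rabbitmqadmin -u guest -p guestpassword \
  declare queue name=your-queue durable=true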

Setup:

1. Deploy RabbitMQ

You can deploy RabbitMQ using Helm with the following command:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install rabbitmq bitnami/rabbitmq --set auth.username=guest --set auth.password=guestpassword
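Before moving on, it’s worth confirming the broker pods are running; the label selector below assumes the Bitnami chart’s default labels:

kubectl get pods -l app.kubernetes.io/name=rabbitmq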

2. Install KEDA

Deploy KEDA to your Kubernetes cluster using Helm:

helm repo add kedacore https://kedacore.github.io/charts
helm install keda kedacore/keda --namespace keda --create-namespace
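You can verify the installation by listing the pods in the keda namespace; the operator and the metrics API server should both reach a Running state:

kubectl get pods -n keda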

3. Configure RabbitMQ Trigger

KEDA requires a ScaledObject resource to define the scaling behavior. Here’s an example scaling policy:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: payments
  minReplicaCount: 2
  maxReplicaCount: 10
  pollingInterval: 15
  cooldownPeriod: 30
  triggers:
  - type: rabbitmq
    metadata:
      queueName: your-queue
      host: amqp://guest:guestpassword@rabbitmq.default.svc.cluster.local:5672/
      mode: QueueLength
      value: "10"

This configuration tells KEDA to:

  • Monitor the your-queue queue on RabbitMQ, checking every 15 seconds (pollingInterval: 15).
  • Scale the payments deployment between 2 and 10 replicas, targeting roughly 10 messages per replica (mode: QueueLength, value: "10").
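Note that scaleTargetRef assumes a Deployment named payments already exists in the default namespace. For reference, a minimal consumer Deployment might look like the sketch below; the image name and the AMQP_URL environment variable are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
      - name: worker
        image: your-registry/payments-worker:latest  # placeholder image
        env:
        - name: AMQP_URL   # hypothetical env var your consumer reads
          value: amqp://guest:guestpassword@rabbitmq.default.svc.cluster.local:5672/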

4. Apply the Configuration

Save the ScaledObject definition to a file, e.g., scaledobject.yaml, and apply it:

kubectl apply -f scaledobject.yaml

5. Test the Autoscaling

Send messages to the RabbitMQ queue and observe the scaling behavior:

kubectl get hpa -w

You should see the Horizontal Pod Autoscaler (HPA) dynamically adjust the number of pods based on the queue’s message count.
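If you don’t have a producer handy, one way to generate load is RabbitMQ’s PerfTest image; this sketch publishes 100 messages per second to your-queue with no consumers attached, using the credentials from the Helm install above:

kubectl run rabbitmq-load --rm -it --restart=Never \
  --image=pivotalrabbitmq/perf-test -- \
  --uri amqp://guest:guestpassword@rabbitmq.default.svc.cluster.local:5672 \
  --queue your-queue --producers 1 --consumers 0 --rate 100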

Scaling to Zero on RabbitMQ

RabbitMQ often serves as a message broker in distributed systems, and scaling workloads based on the queue length is a common use case. With KEDA, you can scale consumer pods down to zero when there are no messages in the queue. Here's an example configuration:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-scale-to-zero
  namespace: default
spec:
  scaleTargetRef:
    name: your-deployment-name
  minReplicaCount: 0  # explicit here, though 0 is KEDA's default
  triggers:
    - type: rabbitmq
      metadata:
        queueName: your-queue
        host: amqp://guest:guestpassword@rabbitmq.default.svc.cluster.local:5672/
        queueLength: "1"
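With minReplicaCount: 0, KEDA scales the consumer Deployment down to zero replicas once the queue stays empty for the cooldown period (300 seconds by default). You can watch the replica count drain and recover with:

kubectl get deployment your-deployment-name -w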

Best Practices & Edge Cases

  • Optimize Queue Length: Choose a threshold matching your system's performance needs (e.g., 10 tasks waiting might warrant a new Pod).
  • Continuously Check and Adjust Scaling Behavior: Use Prometheus + Grafana (or any observability tooling) to monitor queue lag and tune scaling behavior.
  • Handle Dead Letters: Configure dead-letter queues for unprocessable messages (see the sketch after this list).
  • Cold Start Delays: Understand that scaling from zero may introduce a short delay for initial tasks. Simulate and optimize accordingly.
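As a sketch of the dead-letter setup mentioned above: the dlx exchange and your-queue-dead names are illustrative, and the commands assume the Bitnami pod name and credentials from earlier. Note that RabbitMQ will not let you change arguments on an existing queue, so set the dead-letter arguments when the queue is first declared:

kubectl exec rabbitmq-0 -- rabbitmqadmin -u guest -p guestpassword \
  declare exchange name=dlx type=direct
kubectl exec rabbitmq-0 -- rabbitmqadmin -u guest -p guestpassword \
  declare queue name=your-queue-dead durable=true
kubectl exec rabbitmq-0 -- rabbitmqadmin -u guest -p guestpassword \
  declare binding source=dlx destination=your-queue-dead routing_key=your-queue-dead
kubectl exec rabbitmq-0 -- rabbitmqadmin -u guest -p guestpassword \
  declare queue name=your-queue durable=true \
  arguments='{"x-dead-letter-exchange":"dlx","x-dead-letter-routing-key":"your-queue-dead"}'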

RabbitMQ, KEDA and Flightcrew

Combining RabbitMQ with KEDA unlocks event-driven scaling for messaging systems. This means message-driven services can maintain responsiveness during traffic spikes and save costs by scaling down when idle.

Once you’ve set up KEDA to scale on RabbitMQ, you’ll need to tune your KEDA config so that it aligns with your pod resources and your underlying node lifecycle.

Flightcrew is an AI tool that can help with this, and other production engineering tasks. Let us know if we can help.


Tim Nichols

CEO/Founder

Tim was a Product Manager on Google Kubernetes Engine and led Machine Learning teams at Spotify before starting Flightcrew. He graduated from Stanford University and lives in New York City. Follow on Bluesky
