USMAN’S INSIGHTS
AI ARCHITECT

© 2026 Muhammad Usman Akbar. All rights reserved.

Monitoring and Debugging Kafka

It's 3 AM. Your pager goes off. The order processing system stopped sending confirmation emails two hours ago. You check the notification service logs—no errors. The service is running, polling Kafka, and processing messages. So where are the orders?

You discover the consumer group has 47,000 messages of lag on one partition. Those orders are sitting in Kafka, unprocessed. Your consumer has been processing, but slower than the incoming rate. For two hours, the gap widened silently until a customer complained.

This scenario illustrates why Kafka monitoring isn't optional—it's your early warning system. In this chapter, you'll learn to monitor consumer lag, inspect topics and consumer groups with CLI tools, diagnose common failures, and configure alerts that catch problems before customers do.

Consumer Lag: The Most Important Metric

Consumer lag is the difference between where producers are writing (the log-end offset) and where your consumer has processed (the current offset). It tells you whether your consumer is keeping up with the production rate.

Specification
Partition 0:
  Log-end offset (latest): 10,000
  Consumer offset:          8,500

  LAG = 10,000 - 8,500 = 1,500 messages behind

Why lag matters more than throughput:

| Metric | What It Tells You |
| --- | --- |
| Messages/second | How fast you're processing right now |
| Consumer lag | Whether you're processing faster than producers write |
| Lag trend | Whether you're falling behind, catching up, or stable |

A consumer processing 1,000 msg/sec sounds fast—until you realize producers are writing 1,200 msg/sec. Your lag grows by 200 messages every second. Within an hour, you're 720,000 messages behind.
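That arithmetic is worth making concrete. A minimal sketch in plain Python (no Kafka connection required; the helper names are ours, purely illustrative):

```python
def lag(log_end_offset: int, current_offset: int) -> int:
    """Lag = how far the consumer trails the newest message."""
    return log_end_offset - current_offset

def lag_growth(produce_rate: float, consume_rate: float, seconds: float) -> float:
    """Extra lag accumulated when producers outpace the consumer."""
    return (produce_rate - consume_rate) * seconds

# Partition 0 from the example above: 10,000 written, 8,500 processed.
print(lag(10_000, 8_500))               # → 1500

# Producing 1,200 msg/sec while consuming 1,000 msg/sec, for one hour:
print(lag_growth(1_200, 1_000, 3_600))  # → 720000.0
```

The second call reproduces the paragraph above: a 200 msg/sec deficit compounds to 720,000 messages in an hour.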

Monitoring Lag with kafka-consumer-groups.sh

The primary tool for checking consumer lag is kafka-consumer-groups.sh. On a Strimzi cluster, you can execute it inside a Kafka pod:

bash
# Check lag for a specific consumer group
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group notification-service

Output:

text
GROUP                 TOPIC         PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG   CONSUMER-ID        HOST          CLIENT-ID
notification-service  task-created  0          8500            10000           1500  consumer-1-abc123  /10.244.0.15  consumer-1
notification-service  task-created  1          9800            9800            0     consumer-1-abc123  /10.244.0.15  consumer-1
notification-service  task-created  2          7200            9500            2300  consumer-2-def456  /10.244.0.16  consumer-2

Reading this output:

| Column | Meaning |
| --- | --- |
| CURRENT-OFFSET | Last committed offset for this partition |
| LOG-END-OFFSET | Latest message offset in the partition |
| LAG | Messages waiting to be processed |
| CONSUMER-ID | Which consumer instance owns this partition |
| HOST | IP address of the consumer |

From this output, you can see:

  • Partition 1 is caught up (lag = 0)
  • Partition 0 has 1,500 messages of lag
  • Partition 2 has 2,300 messages of lag—the worst performer
  • consumer-2 on partition 2 might be slower or handling more complex messages
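This reading can also be automated, for example in a lag-reporting cron job. A hedged sketch that parses the `--describe` output (it assumes the 9-column layout shown above; other Kafka versions may print different columns, and the function name is ours):

```python
def parse_describe(output: str) -> list[dict]:
    """Parse the data rows of kafka-consumer-groups.sh --describe output.

    Assumes the 9-column layout shown above: GROUP TOPIC PARTITION
    CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID.
    """
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        cols = line.split()
        rows.append({
            "partition": int(cols[2]),
            "current": int(cols[3]),
            "log_end": int(cols[4]),
            "lag": int(cols[5]),
            "consumer": cols[6],
        })
    return rows

sample = """\
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
notification-service task-created 0 8500 10000 1500 consumer-1-abc123 /10.244.0.15 consumer-1
notification-service task-created 1 9800 9800 0 consumer-1-abc123 /10.244.0.15 consumer-1
notification-service task-created 2 7200 9500 2300 consumer-2-def456 /10.244.0.16 consumer-2"""

worst = max(parse_describe(sample), key=lambda r: r["lag"])
print(worst["partition"], worst["lag"])  # → 2 2300
```

Running it against the sample output above immediately surfaces partition 2 as the worst performer.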

Interpreting Lag Patterns

Different lag patterns indicate different problems:

| Pattern | What It Means | Likely Cause |
| --- | --- | --- |
| All partitions have similar, growing lag | Overall throughput issue | Consumer processing too slow; need to scale or optimize |
| One partition has much higher lag | Partition-specific issue | Hot partition (uneven key distribution), slow message type, stuck consumer |
| Lag spikes then recovers | Transient issue | Consumer restart, rebalance, temporary slow processing |
| Lag stays constant and low | Healthy state | Consumer keeping pace with production |
| Lag at 0 for all partitions | Caught up | Healthy, or no messages being produced |
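When you sample lag at regular intervals, these patterns can be classified mechanically. A rough sketch — the function name, thresholds, and labels are illustrative, not a standard:

```python
def classify_lag_trend(samples: list[int], stable_band: int = 100) -> str:
    """Crudely classify a chronological series of lag samples.

    stable_band is the delta (in messages) we still consider flat;
    tune it to your traffic volume.
    """
    if max(samples) == 0:
        return "caught up (or nothing being produced)"
    delta = samples[-1] - samples[0]
    if delta > stable_band:
        return "falling behind"
    if delta < -stable_band:
        return "catching up"
    return "stable"

print(classify_lag_trend([1500, 1480, 1510, 1495]))  # → stable
print(classify_lag_trend([500, 2000, 5000, 9000]))   # → falling behind
```

A real monitoring system would fit a rate over many samples rather than comparing endpoints, but the endpoint comparison is enough for a first triage.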

Lag Alert Thresholds

Setting appropriate thresholds depends on your tolerance for processing delay:

yaml
# Example alert thresholds for a notification service
alert_rules:
  # Warning: lag growing but not critical yet
  consumer_lag_warning:
    threshold: 1000
    duration: "5m"
    message: "Consumer lag above 1000 for 5 minutes"
  # Critical: significant delay, may miss SLAs
  consumer_lag_critical:
    threshold: 10000
    duration: "2m"
    message: "Consumer lag above 10000 - potential message loss risk"
  # Emergency: approaching retention limit
  consumer_lag_emergency:
    threshold: 100000
    duration: "1m"
    message: "Consumer lag near retention limit - data loss imminent"

Rule of thumb: Alert when lag exceeds what you can process in 1/3 of your retention period. If retention is 7 days (168 hours) and you process 10,000 msg/hour, 1/3 of the retention period is 56 hours, so alert around 560,000 lag.
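Taken literally, that rule of thumb is a one-line calculation. A sketch following the stated formula (the helper name is ours):

```python
def lag_alert_threshold(retention_hours: float,
                        msgs_per_hour: float,
                        fraction: float = 1 / 3) -> int:
    """Alert when lag exceeds what you can clear in `fraction`
    of the retention window, per the rule of thumb above."""
    return round(retention_hours * fraction * msgs_per_hour)

# 7-day retention, processing 10,000 msg/hour:
print(lag_alert_threshold(7 * 24, 10_000))  # → 560000
```

In practice you would clamp this against your SLA: a lag that is survivable from a retention standpoint can still blow a 5-minute processing deadline.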

Inspecting Topics with kafka-topics.sh

When troubleshooting, you often need to understand the topic structure—how many partitions, replication factor, and configuration:

bash
# List all topics
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --list

Output:

text
__consumer_offsets
strimzi-topic-operator-kstreams-topic-store-changelog
task-completed
task-created
task-updated
bash
# Describe a specific topic
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --topic task-created

Output:

text
Topic: task-created  TopicId: ABC123xyz  PartitionCount: 3  ReplicationFactor: 1  Configs: retention.ms=604800000,cleanup.policy=delete
    Topic: task-created  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: task-created  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
    Topic: task-created  Partition: 2  Leader: 0  Replicas: 0  Isr: 0

Key information:

  • PartitionCount: 3 partitions (can run up to 3 parallel consumers)
  • ReplicationFactor: 1 (dev setting—no fault tolerance)
  • Configs: 7-day retention, delete cleanup policy
  • Leader: Broker ID handling reads/writes for this partition
  • Isr: In-Sync Replicas—brokers that have the latest data

Checking Under-Replicated Partitions

Under-replicated partitions are partitions where one or more replicas have fallen behind the leader. This indicates broker health issues:

bash
# Find under-replicated partitions
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --under-replicated-partitions

Healthy output (no problems):

Specification
(empty - no under-replicated partitions)

Unhealthy output:

text
Topic: task-created  Partition: 0  Leader: 0  Replicas: 0,1,2  Isr: 0,1
Topic: task-created  Partition: 2  Leader: 1  Replicas: 0,1,2  Isr: 1,2

This shows:

  • Partition 0: Broker 2 is not in ISR (expected 3 replicas, only 2 in sync)
  • Partition 2: Broker 0 is not in ISR
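You can spot the lagging brokers mechanically by set-subtracting the Isr list from the Replicas list. A tiny illustrative helper (the name is ours):

```python
def missing_replicas(replicas: list[int], isr: list[int]) -> set[int]:
    """Brokers that hold a replica but have fallen out of the ISR."""
    return set(replicas) - set(isr)

# Partition 0 from the output above: Replicas: 0,1,2  Isr: 0,1
print(missing_replicas([0, 1, 2], [0, 1]))  # → {2}
# Partition 2: Replicas: 0,1,2  Isr: 1,2
print(missing_replicas([0, 1, 2], [1, 2]))  # → {0}
```

If the same broker ID shows up as missing across many partitions, suspect that broker rather than the topic.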

Diagnosing under-replication:

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| One broker missing from all ISRs | Broker down or slow | Check broker pod status, restart if needed |
| Random partitions under-replicated | Network issues | Check pod connectivity, cluster networking |
| All partitions under-replicated | Cluster-wide problem | Check all broker health, disk space, memory |

Reading Messages with kafka-console-consumer.sh

When debugging, you often need to see what's actually in a topic:

bash
# Read messages from the beginning
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic task-created \
  --from-beginning \
  --max-messages 5

Output:

json
{"id": "task-1", "title": "Buy groceries", "created_at": "2025-01-15T10:00:00Z"}
{"id": "task-2", "title": "Call dentist", "created_at": "2025-01-15T10:01:00Z"}
{"id": "task-3", "title": "Review PR", "created_at": "2025-01-15T10:02:00Z"}
{"id": "task-4", "title": "Deploy to staging", "created_at": "2025-01-15T10:03:00Z"}
{"id": "task-5", "title": "Write tests", "created_at": "2025-01-15T10:04:00Z"}
Processed a total of 5 messages

Useful options:

bash
# Read from a specific partition and offset
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic task-created \
  --partition 2 \
  --offset 100 \
  --max-messages 3

# Include keys and metadata
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic task-created \
  --from-beginning \
  --property print.key=true \
  --property print.timestamp=true \
  --max-messages 3

Output with keys and timestamps:

text
CreateTime:1705312800000  task-1  {"id": "task-1", "title": "Buy groceries"}
CreateTime:1705312860000  task-2  {"id": "task-2", "title": "Call dentist"}
CreateTime:1705312920000  task-3  {"id": "task-3", "title": "Review PR"}

Common Errors and Troubleshooting

The Kafka ecosystem has specific error patterns. Understanding them speeds up debugging:

| Error | Cause | Fix |
| --- | --- | --- |
| NOT_ENOUGH_REPLICAS | ISR count below min.insync.replicas | Check broker health; ensure enough brokers are up |
| COORDINATOR_NOT_AVAILABLE | Consumer group coordinator not ready | Wait and retry; usually transient during startup |
| REBALANCE_IN_PROGRESS | Consumer group is rebalancing | Wait for completion; check for flapping consumers |
| OFFSET_OUT_OF_RANGE | Requested offset doesn't exist | Adjust auto.offset.reset; offset may have been deleted by retention |
| UNKNOWN_TOPIC_OR_PARTITION | Topic doesn't exist | Create topic first; check for typos in topic name |
| REQUEST_TIMED_OUT | Broker didn't respond in time | Check broker health, network, or increase timeout |
| LEADER_NOT_AVAILABLE | Partition has no leader | Wait for leader election; check broker health |
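For on-call triage, it helps to split these into "wait and retry" versus "needs investigation". The grouping below is our judgment call based on the table above, not an official Kafka classification:

```python
# Errors that usually resolve themselves (leader election, coordinator
# startup, rebalance completion). Everything else warrants a human look.
TRANSIENT = {
    "COORDINATOR_NOT_AVAILABLE",
    "REBALANCE_IN_PROGRESS",
    "LEADER_NOT_AVAILABLE",
}

def triage(error_code: str) -> str:
    """First-pass routing for a Kafka error code string."""
    return "wait and retry" if error_code in TRANSIENT else "investigate"

print(triage("REBALANCE_IN_PROGRESS"))  # → wait and retry
print(triage("NOT_ENOUGH_REPLICAS"))    # → investigate
```

Even the "transient" errors deserve investigation if they recur: frequent REBALANCE_IN_PROGRESS, for instance, often means consumers are timing out.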

Debugging a Slow Consumer

When a consumer is falling behind, use this systematic approach:

Step 1: Confirm the lag

bash
kubectl exec -it task-events-kafka-0 -n kafka -- \
  /opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group notification-service

Step 2: Check if lag is growing

Specification
# Run the describe command twice, 30 seconds apart
# Compare LAG values - if growing, consumer is too slow
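The comparison between the two runs is just a rate calculation. A sketch (illustrative helper, not a Kafka API):

```python
def lag_growth_rate(lag_then: int, lag_now: int, interval_s: float) -> float:
    """Messages of lag gained (positive) or cleared (negative) per second,
    from two snapshots of the same partition taken interval_s apart."""
    return (lag_now - lag_then) / interval_s

# Two --describe runs, 30 seconds apart, for one partition:
rate = lag_growth_rate(1_500, 2_100, 30)
print(rate)  # → 20.0  (positive: the consumer is falling behind)
```

A positive rate means the consumer is too slow for the current produce rate; a negative rate means it is catching up and you can estimate time-to-recovery as lag divided by the absolute rate.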

Step 3: Check partition distribution

Specification
If one partition has much higher lag:
- Check message key distribution (is one key getting all traffic?)
- Check if messages on that partition are slower to process
- Consider repartitioning or rebalancing

Step 4: Check consumer performance

python
# Add timing to your consumer loop
# (consumer and process_message are your existing consumer objects)
import time

while True:
    msg = consumer.poll(1.0)
    if msg and not msg.error():
        start = time.time()
        process_message(msg)
        duration = time.time() - start
        if duration > 0.1:  # 100ms threshold
            print(f"SLOW: {duration:.2f}s for partition {msg.partition()}")

Step 5: Scale if needed

bash
# Check current consumer count
kubectl get pods -l app=notification-service -n kafka

# Scale up if you have fewer consumers than partitions
kubectl scale deployment notification-service --replicas=3 -n kafka
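Keep in mind that consumers beyond the partition count sit idle: Kafka assigns each partition to at most one consumer in the group. A quick illustrative check (the function name is ours):

```python
def effective_consumers(partitions: int, replicas: int) -> tuple[int, int]:
    """How many consumers actually receive partitions, and how many idle.

    Each partition goes to at most one consumer in a group, so scaling
    past the partition count buys nothing.
    """
    active = min(partitions, replicas)
    return active, replicas - active

# 3 partitions, scaled to 5 replicas: 2 consumers sit idle.
print(effective_consumers(3, 5))  # → (3, 2)
```

If you need more parallelism than the partition count allows, the fix is to add partitions to the topic, not consumers to the group.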

JMX Metrics for Production Monitoring

Kafka exposes detailed metrics via JMX (Java Management Extensions). In production, you'll export these to Prometheus or another monitoring system.

Key broker metrics:

| Metric | What It Measures | Alert Threshold |
| --- | --- | --- |
| ReplicaManager,name=UnderReplicatedPartitions | Count of under-replicated partitions | > 0 for 5 minutes |
| BrokerTopicMetrics,name=BytesInPerSec | Incoming bytes/second | Depends on capacity |
| BrokerTopicMetrics,name=MessagesInPerSec | Incoming messages/second | Depends on capacity |
| RequestMetrics,name=TotalTimeMs | Request latency | 99th percentile > 500ms |
| LogFlushStats,name=LogFlushRateAndTimeMs | Disk flush latency | > 100ms average |

Key consumer metrics:

| Metric | What It Measures | Alert Threshold |
| --- | --- | --- |
| fetch-manager-metrics,name=records-lag | Per-partition lag | > 10000 for 5 minutes |
| fetch-manager-metrics,name=records-consumed-rate | Consumption rate | Depends on expected rate |
| consumer-coordinator-metrics,name=rebalance-latency-avg | Average rebalance time | > 30 seconds |

Strimzi Metrics with Prometheus

Strimzi provides built-in support for Prometheus metrics. Enable them in your Kafka resource:

yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: task-events
spec:
  kafka:
    version: 4.1.1
    metadataVersion: 4.1-IV0
    # ... other config ...
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml

Then create the metrics ConfigMap:

yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-metrics
data:
  kafka-metrics-config.yml: |
    lowercaseOutputName: true
    rules:
      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          topic: "$4"
          partition: "$5"
      - pattern: kafka.server<type=(.+), name=(.+)><>Value
        name: kafka_server_$1_$2
        type: GAUGE

Building an Alert Runbook

When alerts fire, you need clear steps. Here's a template runbook:

Alert: Consumer Lag Critical (> 10,000)

text
1. CONFIRM the alert
   $ kubectl exec -it task-events-kafka-0 -n kafka -- \
       /opt/kafka/bin/kafka-consumer-groups.sh \
       --bootstrap-server localhost:9092 \
       --describe --group <consumer-group>

2. IDENTIFY the pattern
   - All partitions lagging → Processing too slow overall
   - One partition lagging → Hot partition or stuck consumer

3. CHECK consumer health
   $ kubectl get pods -l app=<consumer-app> -n <namespace>
   $ kubectl logs <consumer-pod> --tail=100

4. CHECK for rebalancing
   Look for "Revoking" or "Assigned" in logs
   Frequent rebalancing = consumers timing out

5. SCALE if needed
   $ kubectl scale deployment <consumer-app> --replicas=<N>
   (only helps if partitions > consumers)

6. IF still lagging after 15 minutes
   - Check for slow external dependencies (DB, API calls)
   - Consider increasing max.poll.records for batch processing
   - Escalate if data loss risk (lag approaching retention)

Alert: Under-Replicated Partitions > 0

text
1. IDENTIFY which partitions
   $ kubectl exec -it task-events-kafka-0 -n kafka -- \
       /opt/kafka/bin/kafka-topics.sh \
       --bootstrap-server localhost:9092 \
       --describe --under-replicated-partitions

2. CHECK broker status
   $ kubectl get pods -l strimzi.io/cluster=task-events -n kafka
   Look for pods not in Running state

3. CHECK broker logs
   $ kubectl logs task-events-kafka-<N> -n kafka --tail=200
   Look for: disk errors, OOM, connection failures

4. IF broker pod is down
   $ kubectl describe pod task-events-kafka-<N> -n kafka
   Check Events section for failure reason

5. IF broker is slow
   - Check disk usage: df -h on broker
   - Check memory: possible GC pressure
   - Check network: latency between brokers

Reflect on Your Skill

You built a kafka-events skill in Chapter 1. Test and improve it based on what you learned.

Test Your Skill

Specification
Using my kafka-events skill, diagnose a consumer lag issue and identify which consumer is falling behind.

Does my skill show how to use kafka-consumer-groups.sh and interpret lag metrics?

Identify Gaps

Ask yourself:

  • Did my skill explain consumer lag metrics and what causes lag growth?
  • Did it show how to use Kafka CLI tools for debugging (kafka-topics.sh, kafka-consumer-groups.sh)?

Improve Your Skill

If you found gaps:

Specification
My kafka-events skill is missing monitoring and debugging patterns (consumer lag, offset inspection, CLI tools).

Update it to include how to diagnose and resolve common Kafka operational issues.

Try With AI

Setup: You're on-call and receive an alert about your Kafka cluster.

Prompt 1: Interpret monitoring output

Specification
I'm debugging a Kafka consumer issue. Here's my kafka-consumer-groups output:

GROUP          TOPIC   PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
order-service  orders  0          45000           50000           5000
order-service  orders  1          49500           50000           500
order-service  orders  2          42000           50000           8000
order-service  orders  3          49800           50000           200

I have 2 consumer instances running. What patterns do you see, and what
should I check first? Walk me through a systematic diagnosis.

What you're learning: AI helps identify asymmetric lag patterns—partition 2 is significantly behind, suggesting either a hot partition, slow message processing, or an issue with the consumer assigned to it.

Prompt 2: Build a troubleshooting checklist

Specification
Create a troubleshooting checklist for this Kafka error:

"NOT_ENOUGH_REPLICAS: Messages are rejected because there are fewer in-sync
replicas than required: 2"

My cluster has 3 brokers and topics with replication.factor=3
and min.insync.replicas=2.

What are all the possible causes and how do I diagnose each one?

What you're learning: AI walks through ISR mechanics and helps you understand why this error occurs (at least one broker is not in sync), plus diagnostic steps for each scenario.

Prompt 3: Design alerting for your system

Specification
I'm setting up alerting for a Kafka-based event processing system. We have:
- 3 topics: orders (high priority), notifications (medium), analytics (low)
- SLA: orders must be processed within 5 minutes, others within 1 hour
- Retention: 7 days for all topics
- Traffic: orders 1000/min, notifications 5000/min, analytics 50000/min

Help me design alert thresholds for consumer lag on each topic.
Consider: SLA requirements, traffic rates, and what "critical" means for each.

What you're learning: AI collaborates on translating business SLAs into technical alert thresholds, showing how to differentiate alert severity based on topic priority and processing requirements.

Safety note: When running diagnostic commands on production Kafka clusters, use read-only commands (--describe, --list) rather than commands that modify state. Never reset consumer offsets or delete topics without understanding the implications.