You have a Kafka cluster running on Docker Desktop Kubernetes. Now it's time to send your first message.
In the request-response world, you call an API and wait for a response. In event-driven systems, you publish an event and move on. But "move on" doesn't mean "forget about it." You need to know whether your message actually reached Kafka. Did it land on a partition? Which offset was assigned? Did something go wrong?
The confluent-kafka-python library handles this through a pattern that might feel unusual at first: asynchronous production with delivery callbacks. You call produce(), which returns immediately. Later, you call poll() to process callbacks that tell you what happened. This pattern maximizes throughput while still giving you visibility into delivery success or failure.
By the end of this lesson, you'll have working producer code that sends messages to your Kafka cluster and confirms each delivery.
The confluent-kafka-python library is the official Confluent client for Python. It wraps the high-performance librdkafka C library, giving you the best performance available for Python Kafka clients.
Why not aiokafka? You might see aiokafka in tutorials, and it has cleaner async/await syntax. We use confluent-kafka anyway, for three reasons: it's significantly faster (thanks to librdkafka), its callback pattern is what you'll see in real Kafka jobs, and you'll need its Schema Registry support in Lesson 10. The callback pattern takes getting used to, but it pays off.
Install it with uv:
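Something like the following, assuming a uv-managed project (in a bare virtualenv, `uv pip install confluent-kafka` works too):

```shell
uv add confluent-kafka
```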
Output:
If you're using pip:
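The pip equivalent is:

```shell
pip install confluent-kafka
```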
The library requires librdkafka to be available on your system. On macOS, the pip/uv installation handles this automatically. On Linux, you may need to install it separately (apt-get install librdkafka-dev on Debian/Ubuntu).
Your Kafka cluster from Lesson 4 includes a NodePort listener on port 30092. This exposes Kafka directly on your localhost—no extra setup needed.
Verify the NodePort is working:
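A quick check, sketched here with assumed names: the `kafka` namespace comes from a typical Lesson 4 setup, so adjust it to yours. The `nc` probe needs no cluster-specific names at all.

```shell
# Namespace is an assumption from Lesson 4 -- adjust to your setup
kubectl get svc -n kafka

# Or simply confirm something is listening on the NodePort from your machine
nc -z localhost 30092 && echo "Kafka NodePort reachable"
```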
Output:
Connection reference: for this lesson you run your code locally, outside the cluster, so use localhost:30092 as the bootstrap address.
Let's start with the simplest producer that actually works:
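A minimal sketch, using this lesson's bootstrap address and the task-created topic (the payload is illustrative):

```python
from confluent_kafka import Producer

# Bootstrap address follows this lesson's NodePort setup
producer = Producer({"bootstrap.servers": "localhost:30092"})

# produce() is non-blocking: the message goes into an internal buffer
producer.produce("task-created", value=b'{"task_id": "task-1", "action": "created"}')

# flush() blocks until the buffered message is actually sent (or fails)
producer.flush()
print("Message sent")
```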
Output:
This works, but it's blind. You have no idea whether the message actually reached Kafka or where it landed. Let's add visibility.
The produce() method is non-blocking. When you call it, the message goes into an internal buffer, and the method returns immediately. The actual network transmission happens in a background thread.
This creates a problem: how do you know if delivery succeeded?
The answer is delivery callbacks. You provide a function that Kafka calls after each message is delivered (or fails). But there's a catch: callbacks don't execute automatically. You must call poll() to trigger them.
Here's the mental model:

1. produce() appends the message to a local buffer and returns immediately.
2. A background thread batches buffered messages and sends them to the broker.
3. When the broker acknowledges delivery (or the delivery fails), the result is queued inside the client.
4. Your next call to poll() or flush() drains that queue, invoking your delivery callback for each result.
A delivery callback receives two arguments: an error (a KafkaError on failure, None on success) and the message object, which carries the topic, partition, and offset that were assigned.
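A sketch of a producer with a delivery callback, reusing this lesson's bootstrap address and topic (the payload is illustrative):

```python
from confluent_kafka import Producer

def delivery_report(err, msg):
    # err is a KafkaError on failure, None on success;
    # msg carries the topic, partition, and offset that were assigned.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}] at offset {msg.offset()}")

producer = Producer({"bootstrap.servers": "localhost:30092"})
producer.produce("task-created", value=b'{"task_id": "task-1"}', callback=delivery_report)

producer.poll(0)   # serve the callback if delivery has already completed
producer.flush()   # otherwise wait, firing the callback on completion
```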
Output:
Now you can see exactly where your message landed: topic task-created, partition 0, offset 0.
So far, we've sent messages without keys. Kafka accepts this, but you lose an important guarantee.
When you provide a key, the client hashes it and maps the hash to a partition (hash modulo the partition count), so the same key always lands on the same partition.
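The mapping can be sketched in plain Python. This is not Kafka's real partitioner (librdkafka uses its own key hash); CRC32 stands in here only to show that the mapping is deterministic:

```python
import zlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Illustrative stand-in for the client's partitioner:
    # hash the key, then take it modulo the partition count.
    return zlib.crc32(key) % num_partitions

# The same key always maps to the same partition...
assert pick_partition(b"task-42", 3) == pick_partition(b"task-42", 3)

# ...while different keys spread across partitions (distinctness not guaranteed)
partitions = {pick_partition(k, 3) for k in (b"task-1", b"task-2", b"task-3")}
print(sorted(partitions))
```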
Why this matters: Kafka guarantees ordering only within a single partition. Keyed messages for the same entity always land on the same partition, so consumers see them in the order they were produced. Messages without keys are spread across partitions, with no ordering guarantee between them.
For task events, using task_id or user_id as the key ensures all events for that entity arrive in order.
Here's a production-ready producer that sends multiple messages with proper error handling:
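A sketch under this lesson's assumptions (localhost:30092, the task-created topic); the acks and message.timeout.ms settings are deliberate choices here, not defaults:

```python
import json

from confluent_kafka import Producer

def delivery_report(err, msg):
    if err is not None:
        print(f"Delivery failed for key={msg.key()}: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}] at offset {msg.offset()}")

producer = Producer({
    "bootstrap.servers": "localhost:30092",
    "acks": "all",                # wait for full acknowledgement from the cluster
    "message.timeout.ms": 10000,  # fail a delivery after 10s instead of retrying forever
})

events = [{"task_id": f"task-{i}", "action": "created"} for i in range(5)]

for event in events:
    payload = json.dumps(event).encode("utf-8")
    try:
        producer.produce("task-created", key=event["task_id"], value=payload,
                         callback=delivery_report)
    except BufferError:
        # Local queue is full: serve callbacks to drain it, then retry once
        producer.poll(1)
        producer.produce("task-created", key=event["task_id"], value=payload,
                         callback=delivery_report)
    producer.poll(0)  # fire callbacks for deliveries that have already completed

remaining = producer.flush(10)
if remaining > 0:
    print(f"{remaining} message(s) not delivered before timeout")
```

Keying by task_id gives per-task ordering while still spreading different tasks across partitions.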
Output:
Notice how messages landed on different partitions (0, 1, 2). Kafka distributed them based on the key hash.
These two methods are often confused. Here's the difference: poll(timeout) serves delivery callbacks for messages whose fate is already known, then returns; poll(0) never blocks. flush(timeout) blocks until every message in the internal buffer has been delivered or has failed, serving callbacks as it goes, and returns the number of messages still outstanding.
The pattern: call poll(0) inside your produce loop so callbacks fire as deliveries complete, then call flush() once before shutdown to wait for anything still in flight.
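In sketch form (producer config as elsewhere in this lesson):

```python
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:30092"})

# ... produce() calls with callbacks go here ...

producer.poll(0)  # non-blocking: serve callbacks that are ready, return immediately
producer.flush()  # blocking: wait until the send buffer is empty, serving callbacks as it drains
```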
What happens if you skip poll()?
Delivery results accumulate in the client's internal queue. If you never call poll() or flush(), your callback functions never execute, and delivery failures go unnoticed.
Your producer is sending messages, but let's verify they actually arrived. Use the Kafka console consumer:
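One way to run it, with assumed names: the `kafka` namespace and `kafka-0` pod come from a typical Lesson 4 setup, and the script's path varies by image, so substitute your own.

```shell
# Pod name, namespace, and script path are assumptions -- adjust to your setup
kubectl exec -it -n kafka kafka-0 -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic task-created \
    --from-beginning
```

Note the bootstrap address here is the broker's in-cluster listener, not the NodePort, because the command runs inside the pod.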
Output:
Your messages are in Kafka, persisted and ready for any consumer to read.
You built a kafka-events skill in Lesson 0. Test and improve it based on what you learned.
Ask yourself:
If you found gaps:
What you're learning: The asynchronous callback model—understanding that produce() is non-blocking and callbacks require explicit triggering.
What you're learning: Key design decisions—balancing ordering guarantees against parallelism and understanding partition assignment.
What you're learning: Production resilience patterns—moving from "it works on my machine" to handling real-world failure scenarios.
Safety note: When testing producer code, start with a development topic. Avoid producing to production topics until you've verified your error handling works correctly.