Confluent has launched “Queues for Kafka,” a new feature that brings traditional message queuing capabilities to the Apache Kafka streaming platform, enabling organizations to handle both streaming and queuing workloads on a single system. The feature, now available in Confluent Cloud and Confluent Platform 7.7, introduces “share groups” that allow multiple consumers to process messages from the same partition simultaneously—a significant departure from Kafka’s traditional one-consumer-per-partition model.
The development represents a fundamental shift for Apache Kafka, which has traditionally allowed each partition to be consumed by only one member of a consumer group at a time. The new capability, built on KIP-932 and included in Apache Kafka 4.2’s core release, lets organizations consolidate infrastructure by running both streaming and traditional queuing workloads on one platform, according to Confluent’s announcement.
The feature introduces per-message acknowledgment, replacing Kafka’s traditional offset-based batch commits. When a consumer fetches a message, the broker places a 30-second lock on it; the consumer must then explicitly acknowledge successful processing or release the message for redelivery, Confluent explained in its technical documentation. Decoupling delivery from offsets enables elastic scaling: the number of consumers can be adjusted independently of the partition count to absorb variable workloads.
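The lock-and-acknowledge flow described above can be sketched as a toy model. This is illustrative Python, not the actual Kafka client API; the class and method names are invented for the example:

```python
class ShareGroupSim:
    """Toy model of share-group delivery (not the Kafka API): each
    fetched message gets a time-limited acquisition lock, and messages
    that are neither acknowledged nor still locked are redeliverable."""

    def __init__(self, messages, lock_seconds=30):
        self.messages = messages
        self.lock_seconds = lock_seconds
        # per-offset state: "available", "acked", or ("locked", expiry_time)
        self.state = {off: "available" for off in range(len(messages))}

    def fetch(self, now):
        """Hand out the first message that is available or whose lock expired."""
        for off in range(len(self.messages)):
            st = self.state[off]
            if st == "available" or (isinstance(st, tuple) and st[1] <= now):
                self.state[off] = ("locked", now + self.lock_seconds)
                return off, self.messages[off]
        return None

    def acknowledge(self, off):
        """Successful processing: the message is never redelivered."""
        self.state[off] = "acked"

    def release(self, off):
        """Explicitly hand the message back for immediate redelivery."""
        self.state[off] = "available"
```

Two consumers fetching at the same moment receive different messages, even from a single partition. If a consumer neither acknowledges nor releases a message within the lock window, the message becomes eligible for redelivery, which is where the at-least-once behavior discussed below comes from.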
Technical Trade-offs
The most significant change involves message ordering. While traditional Kafka consumer groups guarantee in-order processing per partition, share groups sacrifice this guarantee to enable parallel processing, according to Confluent. Messages remain stored in order but can be processed concurrently, and completed out of sequence, by multiple consumers. The system currently provides at-least-once delivery semantics, with exactly-once semantics planned for a future release.
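The trade-off can be seen in a small scheduling sketch (illustrative Python with invented names, not Kafka code): messages are handed out in log order, but the order in which processing finishes depends on how long each consumer takes.

```python
import heapq

def completion_order(offsets, num_consumers, cost):
    """Hand offsets out in log order to whichever consumer becomes
    idle first; return the order in which processing completes."""
    free = [(0, c) for c in range(num_consumers)]  # (time_free, consumer)
    heapq.heapify(free)
    done = []
    for off in offsets:
        t, c = heapq.heappop(free)        # next consumer to become idle
        finish = t + cost(off)            # processing time for this message
        done.append((finish, off))
        heapq.heappush(free, (finish, c))
    return [off for _, off in sorted(done)]
```

With two consumers and one slow message at offset 0 (say, cost 5 versus cost 1 for the rest), the log stores `[0, 1, 2, 3]` but processing completes as `[1, 2, 3, 0]`; with a single consumer, the classic in-order behavior returns.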
For Enterprise and Dedicated clusters on Confluent Cloud, the feature is available immediately, with Standard cluster support planned for the second half of 2026, the company stated. The functionality comes at no additional charge beyond standard consumption costs for data ingress, egress, and uptime, according to Confluent’s pricing documentation.
Market Applications
The feature targets operational workloads that benefit from traditional message queue semantics, including task distribution to worker pools, service-to-service communication, and asynchronous job processing, Confluent noted. However, the company recommends standard Kafka consumer groups for use cases requiring strict ordering and high-throughput batch processing, such as event sourcing and streaming analytics.
Confluent Cloud enhances the open-source feature with a dedicated UI for managing share groups and introduces a “share lag” metric similar to queue depth, which can drive auto-scaling decisions through the Metrics API, according to the company’s technical blog.
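A share-lag-driven scaling policy might look like the following sketch. The function, its thresholds, and the idea of draining the backlog within a fixed window are assumptions for illustration; the article only says the metric can drive auto-scaling decisions through the Metrics API.

```python
def desired_consumers(share_lag, msgs_per_sec_per_consumer,
                      min_consumers=1, max_consumers=20):
    """Size the consumer pool so the current backlog (share lag) would
    drain within a target window. All values here are illustrative
    assumptions, not Confluent Metrics API fields or defaults."""
    target_drain_seconds = 60  # assumed SLO: clear the backlog in a minute
    capacity_per_consumer = msgs_per_sec_per_consumer * target_drain_seconds
    needed = -(-share_lag // capacity_per_consumer)  # ceiling division
    return max(min_consumers, min(max_consumers, needed))
```

For example, a share lag of 12,000 messages with consumers that each handle 50 messages per second yields a pool of 4; because share groups are not bound by partition count, the pool can shrink back to the minimum when the lag clears.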
Future enhancements include Dead Letter Queues, key-based ordering to restore some ordering guarantees, and integration with Kafka transactions for exactly-once semantics, signaling continued investment in making Kafka a comprehensive data platform beyond its streaming origins.
Sources
- Confluent
- Apache Software Foundation