
100% Compatible with Apache Kafka®

Info

The term "AutoMQ Kafka" mentioned in this text specifically refers to the open-source project called automq-for-kafka, which is hosted under the GitHub organization AutoMQ, operated by AutoMQ CO.,LTD.

Introduction

The open source community offers many products that adapt to the Kafka protocol, such as Redpanda and Kafka on Pulsar. Some rebuild Kafka from scratch, while others graft the protocol onto an existing product. Whichever approach is taken, achieving 100% compatibility with the Kafka protocol and semantics is extremely difficult, and keeping pace with Apache Kafka as it continues to evolve is another major challenge.

Protocol and semantic compatibility is an important consideration when users choose a Kafka product. Therefore, the premise of every architectural design decision in AutoMQ for Kafka (referred to as AutoMQ Kafka) is that it must be 100% compatible with the protocol and semantics of Apache Kafka®, and that it can continuously follow and stay aligned with Apache Kafka®.

100% Compatible Solution

To stay 100% compatible and keep pace with Apache Kafka®, AutoMQ Kafka slices at Apache Kafka®'s smallest storage unit, the LogSegment, and maps LogSegment data to S3Stream. The storage layer exposes the same LogSegment, LocalLog and Partition abstractions, so upper-layer functions such as KRaft metadata management, transactions and KafkaApis can reuse the original code logic.

By cutting in at the LogSegment layer, AutoMQ Kafka sinks data into shared storage through S3Stream, making the Broker a "stateless" node and realizing a cloud-native reconstruction of Kafka.

Everything above the storage layer reuses the original logic, so AutoMQ Kafka not only achieves 100% protocol and semantic compatibility with ease, but can also continuously pick up the latest features and defect fixes from Apache Kafka®.
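To make this layering concrete, here is a minimal Java sketch of the idea, using purely illustrative names (StreamBackedSegment and the S3Stream interface shown here are assumptions for this example, not AutoMQ Kafka's actual classes or API): a LogSegment-shaped object keeps the interface the upper layers expect while delegating its bytes to a stream backed by shared storage.

import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative stand-in for the S3-backed stream; the method shapes are assumptions.
interface S3Stream {
    CompletableFuture<Long> append(byte[] recordBatch);                      // returns stream offset
    CompletableFuture<List<byte[]>> fetch(long startOffset, long endOffset); // reads [start, end)
}

// A segment-shaped facade: upper layers keep calling append/read on a "LogSegment",
// while the data actually lands in (and is served from) shared storage.
final class StreamBackedSegment {
    private final long baseOffset;        // first Kafka offset covered by this segment
    private final S3Stream dataStream;    // merged data + sparse index stream

    StreamBackedSegment(long baseOffset, S3Stream dataStream) {
        this.baseOffset = baseOffset;
        this.dataStream = dataStream;
    }

    CompletableFuture<Long> append(byte[] recordBatch) {
        return dataStream.append(recordBatch);
    }

    // Naively translates Kafka offsets into stream-relative positions here; the real
    // mapping is richer (see the Slice mapping described below).
    CompletableFuture<List<byte[]>> read(long fetchOffset, long maxOffset) {
        return dataStream.fetch(fetchOffset - baseOffset, maxOffset - baseOffset);
    }
}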

Next, let's take a look at how AutoMQ Kafka maps Partition data to S3Stream in detail.

There are two categories of data under an Apache Kafka® Partition: the LogSegment dimension and the Partition dimension. The LogSegment dimension contains data files, sparse index files, transaction index files and time index files. The Partition dimension contains Producer idempotence snapshot data, leader epoch information and other metadata.

A LogSegment is a bounded data segment that rolls over by size and time. AutoMQ Kafka logically divides an S3Stream into Slices and maps LogSegments onto them: data files and sparse index files are merged and mapped to the Data S3Stream, while transaction index files and time index files are mapped to the Txn S3Stream and Time S3Stream respectively. The mapping looks roughly like this:


{
  "streamMap": {
    "log": 100,   // stream id
    "txn": 101,
    "time": 102
  },
  "segments": [
    {
      "baseOffset": 0,   // segment base offset
      "log": { "start": 233, "end": 1000 },
      "time": { "start": 200, "end": 300 },
      ...
    },
    {
      "baseOffset": 1000,   // segment base offset
      "log": { "start": 1000, "end": 2100 },
      "time": { "start": 300, "end": 400 },
      ...
    },
    ...
  ]
}
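To illustrate how such a mapping might be consulted on the read path, the sketch below (SegmentMap, SegmentSlice and SliceRange are hypothetical names for this example, not AutoMQ Kafka's actual code) binary-searches the segments' base offsets to find the slice that holds a given Kafka offset:

import java.util.List;

record SliceRange(long start, long end) {}                  // a range inside an S3Stream
record SegmentSlice(long baseOffset, SliceRange log) {}     // one entry of "segments" above

final class SegmentMap {
    private final List<SegmentSlice> segments;              // sorted ascending by baseOffset

    SegmentMap(List<SegmentSlice> segments) {
        this.segments = segments;
    }

    // Return the segment with the greatest baseOffset that is <= fetchOffset.
    SegmentSlice findSegment(long fetchOffset) {
        int lo = 0, hi = segments.size() - 1, match = -1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (segments.get(mid).baseOffset() <= fetchOffset) {
                match = mid;
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        if (match < 0) {
            throw new IllegalArgumentException("offset is below the first segment");
        }
        return segments.get(match);
    }
}

Given the mapping above, a fetch at offset 1500 would resolve to the segment with baseOffset 1000 and be served from the Data S3Stream range 1000 to 2100.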

Metadata in the Partition dimension is uniformly mapped to the Meta S3Stream as key-value pairs. When the Partition is opened, the Meta S3Stream is replayed to restore the corresponding metadata.
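As a rough sketch of that replay, the following example (MetaRecord, PartitionMetaReplay and the key constants are hypothetical names used only for illustration) folds the KV records in stream order so that the latest value written for each key is what survives:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

record MetaRecord(String key, byte[] value) {}   // one KV entry in the Meta S3Stream

final class PartitionMetaReplay {
    // Illustrative keys; the real key layout is an implementation detail.
    static final String PRODUCER_SNAPSHOT = "producer-snapshot";
    static final String LEADER_EPOCH_CHECKPOINT = "leader-epoch-checkpoint";

    // Replay records in stream order; the last value written for a key wins.
    static Map<String, byte[]> replay(List<MetaRecord> metaStreamRecords) {
        Map<String, byte[]> restored = new HashMap<>();
        for (MetaRecord record : metaStreamRecords) {
            restored.put(record.key(), record.value());
        }
        return restored;
    }
}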

System Test

Practice brings true knowledge. Beyond being 100% compatible in theory at the architectural level, AutoMQ Kafka also passes the Apache Kafka® system test case set (excluding ZooKeeper mode). This test suite covers Kafka functionality (message sending and receiving, consumer management, Topic Compaction, etc.), client compatibility (>= 0.9), and operations (partition migration, rolling restart, etc.), verifying at the practical, operational level that AutoMQ Kafka is 100% protocol and semantic compatible.

The following is a set of system test cases:

Use Case | Description
Module: kafkatest.tests.core.replica_scale_test Class: ReplicaScaleTest Method: test_clean_bounce Arguments: { "metadata_quorum": "REMOTE_KRAFT", "partition_count": 34, "replication_factor": 3, "topic_count": 1 }-
Module: kafkatest.tests.core.replica_scale_test Class: ReplicaScaleTest Method: test_produce_consume Arguments: { "metadata_quorum": "REMOTE_KRAFT", "partition_count": 34, "replication_factor": 3, "topic_count": 3 }-
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_fencing_static_consumer Arguments: { "fencing_stage": "all", "metadata_quorum": "REMOTE_KRAFT", "num_conflict_consumers": 1 }Verify correct static consumer behavior when there are conflicting consumers with same group.instance.id.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static members and wait until they've joined the group. Some conflict consumers will be configured with the same group.instance.id.

- Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_fencing_static_consumer Arguments: { "fencing_stage": "stable", "metadata_quorum": "REMOTE_KRAFT", "num_conflict_consumers": 1 }Verify correct static consumer behavior when there are conflicting consumers with same group.instance.id.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static members and wait until they've joined the group. Some conflict consumers will be configured with the same group.instance.id.

- Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_fencing_static_consumer Arguments: { "fencing_stage": "all", "metadata_quorum": "REMOTE_KRAFT", "num_conflict_consumers": 2 }Verify correct static consumer behavior when there are conflicting consumers with same group.instance.id.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static members and wait until they've joined the group. Some conflict consumers will be configured with the same group.instance.id.

- Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_fencing_static_consumer Arguments: { "fencing_stage": "stable", "metadata_quorum": "REMOTE_KRAFT", "num_conflict_consumers": 2 }Verify correct static consumer behavior when there are conflicting consumers with same group.instance.id.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static members and wait until they've joined the group. Some conflict consumers will be configured with the same group.instance.id.

- Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_consume_bench Arguments: { "metadata_quorum": "REMOTE_KRAFT", "topics": [ "consume_bench_topic[0-5]:[0-4]" ] }Runs a ConsumeBench workload to consume messages
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_consume_bench Arguments: { "metadata_quorum": "REMOTE_KRAFT", "topics": [ "consume_bench_topic[0-5]" ] }Runs a ConsumeBench workload to consume messages
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_multiple_consumers_random_group_partitions Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Runs multiple consumers to read messages from specific partitions. Since a consumerGroup isn't specified, each consumer will get assigned a random group and consume from all partitions
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_multiple_consumers_random_group_topics Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Runs multiple consumer groups to read messages from topics. Since a consumerGroup isn't specified, each consumer should read from all topics independently
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_multiple_consumers_specified_group_partitions_should_raise Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Runs multiple consumers in the same group to read messages from specific partitions. It is an invalid configuration to provide a consumer group and specific partitions.
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_single_partition Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Run a ConsumeBench against a single partition
Module: kafkatest.tests.core.consume_bench_test Class: ConsumeBenchTest Method: test_two_consumers_specified_group_topics Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Runs two consumers in the same consumer group to read messages from topics. Since a consumerGroup is specified, each consumer should dynamically get assigned a partition from group
Module: kafkatest.tests.client.client_compatibility_produce_consume_test Class: ClientCompatibilityProduceConsumeTest Method: test_produce_consume Arguments: { "broker_version": "dev", "metadata_quorum": "REMOTE_KRAFT" }These tests validate that we can use a new client to produce and consume from older brokers.
Module: kafkatest.tests.core.round_trip_fault_test Class: RoundTripFaultTest Method: test_produce_consume_with_client_partition Arguments: { "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.core.round_trip_fault_test Class: RoundTripFaultTest Method: test_produce_consume_with_latency Arguments: { "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.core.round_trip_fault_test Class: RoundTripFaultTest Method: test_round_trip_workload Arguments: { "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.core.snapshot_test Class: TestSnapshots Method: test_broker Arguments: { "metadata_quorum": "COLOCATED_KRAFT" }Test the ability of a broker to consume metadata snapshots and to recover the cluster metadata state using them. The test ensures that there is at least one snapshot created on the controller quorum during the setup phase and that at least the first log segment in the metadata log has been marked for deletion, thereby ensuring that any observer of the log needs to always load a snapshot to catch up to the current metadata state. Each scenario is a progression over the previous one. The scenarios build on top of each other by: * Loading a snapshot * Loading a snapshot and some delta records * Loading a snapshot and delta and ensuring that the most recent metadata state has been caught up. Even though a subsequent scenario covers the previous one, they are all left in the test to make debugging a failure of the test easier, e.g. if the first scenario passes and the second fails, it hints towards a problem with the application of delta records while catching up
Module: kafkatest.tests.core.snapshot_test Class: TestSnapshots Method: test_broker Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Test the ability of a broker to consume metadata snapshots and to recover the cluster metadata state using them. The test ensures that there is at least one snapshot created on the controller quorum during the setup phase and that at least the first log segment in the metadata log has been marked for deletion, thereby ensuring that any observer of the log needs to always load a snapshot to catch up to the current metadata state. Each scenario is a progression over the previous one. The scenarios build on top of each other by: * Loading a snapshot * Loading a snapshot and some delta records * Loading a snapshot and delta and ensuring that the most recent metadata state has been caught up. Even though a subsequent scenario covers the previous one, they are all left in the test to make debugging a failure of the test easier, e.g. if the first scenario passes and the second fails, it hints towards a problem with the application of delta records while catching up
Module: kafkatest.tests.core.snapshot_test Class: TestSnapshots Method: test_controller Arguments: { "metadata_quorum": "COLOCATED_KRAFT" }Test the ability of controllers to consume metadata snapshots and to recover the cluster metadata state using them. The test ensures that there is at least one snapshot created on the controller quorum during the setup phase and that at least the first log segment in the metadata log has been marked for deletion, thereby ensuring that any observer of the log needs to always load a snapshot to catch up to the current metadata state. Each scenario is a progression over the previous one. The scenarios build on top of each other by: * Loading a snapshot * Loading a snapshot and some delta records * Loading a snapshot and delta and ensuring that the most recent metadata state has been caught up. Even though a subsequent scenario covers the previous one, they are all left in the test to make debugging a failure of the test easier, e.g. if the first scenario passes and the second fails, it hints towards a problem with the application of delta records while catching up
Module: kafkatest.tests.core.snapshot_test Class: TestSnapshots Method: test_controller Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Test the ability of controllers to consume metadata snapshots and to recover the cluster metadata state using them. The test ensures that there is at least one snapshot created on the controller quorum during the setup phase and that at least the first log segment in the metadata log has been marked for deletion, thereby ensuring that any observer of the log needs to always load a snapshot to catch up to the current metadata state. Each scenario is a progression over the previous one. The scenarios build on top of each other by: * Loading a snapshot * Loading a snapshot and some delta records * Loading a snapshot and delta and ensuring that the most recent metadata state has been caught up. Even though a subsequent scenario covers the previous one, they are all left in the test to make debugging a failure of the test easier, e.g. if the first scenario passes and the second fails, it hints towards a problem with the application of delta records while catching up
Module: kafkatest.tests.client.compression_test Class: CompressionTest Method: test_compressed_topic Arguments: { "compression_types": [ "snappy", "gzip", "lz4", "zstd", "none" ], "metadata_quorum": "REMOTE_KRAFT" }Test produce => consume => validate for compressed topics Setup: 1 zk, 1 kafka node, 1 topic with partitions=10, replication-factor=1 compression_types parameter gives a list of compression types (or no compression if "none"). Each producer in a VerifiableProducer group (num_producers = number of compression types) will use a compression type from the list based on producer's index in the group.

- Produce messages in the background

- Consume messages in the background

- Stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.produce_bench_test Class: ProduceBenchTest Method: test_produce_bench Arguments: { "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.core.reassign_partitions_test Class: ReassignPartitionsTest Method: test_reassign_partitions Arguments: { "bounce_brokers": false, "metadata_quorum": "REMOTE_KRAFT", "reassign_from_offset_zero": false }Reassign partitions tests. Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3, and min.insync.replicas=3

- Produce messages in the background

- Consume messages in the background

- Reassign partitions

- If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress

- When done reassigning partitions and bouncing brokers, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.reassign_partitions_test Class: ReassignPartitionsTest Method: test_reassign_partitions Arguments: { "bounce_brokers": false, "metadata_quorum": "REMOTE_KRAFT", "reassign_from_offset_zero": true }Reassign partitions tests. Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3, and min.insync.replicas=3

- Produce messages in the background

- Consume messages in the background

- Reassign partitions

- If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress

- When done reassigning partitions and bouncing brokers, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.reassign_partitions_test Class: ReassignPartitionsTest Method: test_reassign_partitions Arguments: { "bounce_brokers": true, "metadata_quorum": "REMOTE_KRAFT", "reassign_from_offset_zero": false }Reassign partitions tests. Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3, and min.insync.replicas=3

- Produce messages in the background

- Consume messages in the background

- Reassign partitions

- If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress

- When done reassigning partitions and bouncing brokers, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.reassign_partitions_test Class: ReassignPartitionsTest Method: test_reassign_partitions Arguments: { "bounce_brokers": true, "metadata_quorum": "REMOTE_KRAFT", "reassign_from_offset_zero": true }Reassign partitions tests. Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3, and min.insync.replicas=3

- Produce messages in the background

- Consume messages in the background

- Reassign partitions

- If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress

- When done reassigning partitions and bouncing brokers, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_broker_failure Arguments: { "clean_shutdown": true, "enable_autocommit": false, "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_broker_failure Arguments: { "clean_shutdown": true, "enable_autocommit": true, "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_broker_rolling_bounce Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Verify correct consumer behavior when the brokers are consecutively restarted. Setup: single Kafka cluster with one producer writing messages to a single topic with one partition, and a set of consumers in the same group reading from the same topic.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers and wait until they've joined the group.

- In a loop, restart each broker consecutively, waiting for the group to stabilize between each broker restart.

- Verify delivery semantics according to the failure type and that the broker bounces did not cause unexpected group rebalances.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_consumer_bounce Arguments: { "bounce_mode": "all", "clean_shutdown": true, "metadata_quorum": "REMOTE_KRAFT" }Verify correct consumer behavior when the consumers in the group are consecutively restarted. Setup: single Kafka cluster with one producer and a set of consumers in one group.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers and wait until they've joined the group.

- In a loop, restart each consumer, waiting for each one to rejoin the group before restarting the rest.

- Verify delivery semantics according to the failure type.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_consumer_bounce Arguments: { "bounce_mode": "rolling", "clean_shutdown": true, "metadata_quorum": "REMOTE_KRAFT" }Verify correct consumer behavior when the consumers in the group are consecutively restarted. Setup: single Kafka cluster with one producer and a set of consumers in one group.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers and wait until they've joined the group.

- In a loop, restart each consumer, waiting for each one to rejoin the group before restarting the rest.

- Verify delivery semantics according to the failure type.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_consumer_failure Arguments: { "clean_shutdown": true, "enable_autocommit": false, "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_consumer_failure Arguments: { "clean_shutdown": true, "enable_autocommit": true, "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_group_consumption Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Verifies correct group rebalance behavior as consumers are started and stopped. In particular, this test verifies that the partition is readable after every expected rebalance. Setup: single Kafka cluster with a group of consumers reading from one topic with one partition while the verifiable producer writes to it.

- Start the consumers one by one, verifying consumption after each rebalance

- Shutdown the consumers one by one, verifying consumption after each rebalance
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_static_consumer_bounce Arguments: { "bounce_mode": "all", "clean_shutdown": true, "metadata_quorum": "REMOTE_KRAFT", "num_bounces": 5, "static_membership": false }Verify correct static consumer behavior when the consumers in the group are restarted. In order to make sure the behavior of static members are different from dynamic ones, we take both static and dynamic membership into this test suite. Setup: single Kafka cluster with one producer and a set of consumers in one group.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static/dynamic members and wait until they've joined the group.

- In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered during this process if the group is in static membership.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_static_consumer_bounce Arguments: { "bounce_mode": "rolling", "clean_shutdown": true, "metadata_quorum": "REMOTE_KRAFT", "num_bounces": 5, "static_membership": false }Verify correct static consumer behavior when the consumers in the group are restarted. In order to make sure the behavior of static members are different from dynamic ones, we take both static and dynamic membership into this test suite. Setup: single Kafka cluster with one producer and a set of consumers in one group.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static/dynamic members and wait until they've joined the group.

- In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered during this process if the group is in static membership.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_static_consumer_bounce Arguments: { "bounce_mode": "all", "clean_shutdown": true, "metadata_quorum": "REMOTE_KRAFT", "num_bounces": 5, "static_membership": true }Verify correct static consumer behavior when the consumers in the group are restarted. In order to make sure the behavior of static members are different from dynamic ones, we take both static and dynamic membership into this test suite. Setup: single Kafka cluster with one producer and a set of consumers in one group.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static/dynamic members and wait until they've joined the group.

- In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered during this process if the group is in static membership.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_static_consumer_bounce Arguments: { "bounce_mode": "rolling", "clean_shutdown": true, "metadata_quorum": "REMOTE_KRAFT", "num_bounces": 5, "static_membership": true }Verify correct static consumer behavior when the consumers in the group are restarted. In order to make sure the behavior of static members are different from dynamic ones, we take both static and dynamic membership into this test suite. Setup: single Kafka cluster with one producer and a set of consumers in one group.

- Start a producer which continues producing new messages throughout the test.

- Start up the consumers as static/dynamic members and wait until they've joined the group.

- In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered during this process if the group is in static membership.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_static_consumer_persisted_after_rejoin Arguments: { "bounce_mode": "all", "metadata_quorum": "REMOTE_KRAFT" }Verify that the updated member.id(updated_member_id) caused by static member rejoin would be persisted. If not, after the brokers rolling bounce, the migrated group coordinator would load the stale persisted member.id and fence subsequent static member rejoin with updated_member_id.

- Start a producer which continues producing new messages throughout the test.

- Start up a static consumer and wait until it's up

- Restart the consumer and wait until it is up; its member.id is supposed to be updated and persisted.

- Rolling bounce all the brokers and verify that the static consumer can still join the group and consume messages.
Module: kafkatest.tests.client.consumer_test Class: OffsetValidationTest Method: test_static_consumer_persisted_after_rejoin Arguments: { "bounce_mode": "rolling", "metadata_quorum": "REMOTE_KRAFT" }Verify that the updated member.id(updated_member_id) caused by static member rejoin would be persisted. If not, after the brokers rolling bounce, the migrated group coordinator would load the stale persisted member.id and fence subsequent static member rejoin with updated_member_id.

- Start a producer which continues producing new messages throughout the test.

- Start up a static consumer and wait until it's up

- Restart the consumer and wait until it is up; its member.id is supposed to be updated and persisted.

- Rolling bounce all the brokers and verify that the static consumer can still join the group and consume messages.
Module: kafkatest.tests.core.replication_test Class: ReplicationTest Method: test_replication_with_broker_failure Arguments: { "broker_type": "leader", "compression_type": "gzip", "failure_mode": "clean_bounce", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT", "tls_version": "TLSv1.2" }Replication tests. These tests verify that replication provides simple durability guarantees by checking that data acked by brokers is still available for consumption in the face of various failure scenarios. Setup: 1 zk/KRaft controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

- Produce messages in the background

- Consume messages in the background

- Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)

- When done driving failures, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.replication_test Class: ReplicationTest Method: test_replication_with_broker_failure Arguments: { "broker_type": "leader", "compression_type": "gzip", "failure_mode": "clean_bounce", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT", "tls_version": "TLSv1.3" }Replication tests. These tests verify that replication provides simple durability guarantees by checking that data acked by brokers is still available for consumption in the face of various failure scenarios. Setup: 1 zk/KRaft controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

- Produce messages in the background

- Consume messages in the background

- Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)

- When done driving failures, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.replication_test Class: ReplicationTest Method: test_replication_with_broker_failure Arguments: { "broker_type": "leader", "compression_type": "gzip", "failure_mode": "clean_shutdown", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT", "tls_version": "TLSv1.2" }Replication tests. These tests verify that replication provides simple durability guarantees by checking that data acked by brokers is still available for consumption in the face of various failure scenarios. Setup: 1 zk/KRaft controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

- Produce messages in the background

- Consume messages in the background

- Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)

- When done driving failures, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.replication_test Class: ReplicationTest Method: test_replication_with_broker_failure Arguments: { "broker_type": "leader", "compression_type": "gzip", "failure_mode": "clean_shutdown", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT", "tls_version": "TLSv1.3" }Replication tests. These tests verify that replication provides simple durability guarantees by checking that data acked by brokers is still available for consumption in the face of various failure scenarios. Setup: 1 zk/KRaft controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

- Produce messages in the background

- Consume messages in the background

- Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)

- When done driving failures, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.replication_test Class: ReplicationTest Method: test_replication_with_broker_failure Arguments: { "broker_type": "leader", "compression_type": "gzip", "failure_mode": "hard_bounce", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT", "tls_version": "TLSv1.2" }Replication tests. These tests verify that replication provides simple durability guarantees by checking that data acked by brokers is still available for consumption in the face of various failure scenarios. Setup: 1 zk/KRaft controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

- Produce messages in the background

- Consume messages in the background

- Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)

- When done driving failures, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.tests.core.replication_test Class: ReplicationTest Method: test_replication_with_broker_failure Arguments: { "broker_type": "leader", "compression_type": "gzip", "failure_mode": "hard_bounce", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT", "tls_version": "TLSv1.3" }Replication tests. These tests verify that replication provides simple durability guarantees by checking that data acked by brokers is still available for consumption in the face of various failure scenarios. Setup: 1 zk/KRaft controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

- Produce messages in the background

- Consume messages in the background

- Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)

- When done driving failures, stop producing, and finish consuming

- Validate that every acked message was consumed
Module: kafkatest.sanity_checks.test_bounce Class: TestBounce Method: test_simple_run Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "quorum_size": 3 }Test that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages both before and after a subsequent roll.
Module: kafkatest.sanity_checks.test_bounce Class: TestBounce Method: test_simple_run Arguments: { "metadata_quorum": "REMOTE_KRAFT", "quorum_size": 3 }Test that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages both before and after a subsequent roll.
Module: kafkatest.tests.client.consumer_test Class: AssignmentValidationTest Method: test_valid_assignment Arguments: { "assignment_strategy": "org.apache.kafka.clients.consumer.RangeAssignor", "metadata_quorum": "REMOTE_KRAFT" }Verify assignment strategy correctness: each partition is assigned to exactly one consumer instance. Setup: single Kafka cluster with a set of consumers in the same group. - Start the consumers one by one - Validate assignment after every expected rebalance
Module: kafkatest.tests.client.consumer_test Class: AssignmentValidationTest Method: test_valid_assignment Arguments: { "assignment_strategy": "org.apache.kafka.clients.consumer.RoundRobinAssignor", "metadata_quorum": "REMOTE_KRAFT" }Verify assignment strategy correctness: each partition is assigned to exactly one consumer instance. Setup: single Kafka cluster with a set of consumers in the same group. - Start the consumers one by one - Validate assignment after every expected rebalance
Module: kafkatest.tests.client.consumer_test Class: AssignmentValidationTest Method: test_valid_assignment Arguments: { "assignment_strategy": "org.apache.kafka.clients.consumer.StickyAssignor", "metadata_quorum": "REMOTE_KRAFT" }Verify assignment strategy correctness: each partition is assigned to exactly one consumer instance. Setup: single Kafka cluster with a set of consumers in the same group. - Start the consumers one by one - Validate assignment after every expected rebalance
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "0.10.0.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.10.0.1", "timestamp_type": "LogAppendTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "0.10.1.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.10.1.1", "timestamp_type": "LogAppendTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "lz4" ], "consumer_version": "0.10.2.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.10.2.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "gzip" ], "consumer_version": "0.11.0.3", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.11.0.3", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "0.9.0.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.9.0.1", "timestamp_type": "LogAppendTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "dev", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.9.0.1", "timestamp_type": null }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "dev", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "0.9.0.1", "timestamp_type": null }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "1.0.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "1.0.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "lz4" ], "consumer_version": "1.1.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "1.1.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "2.0.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.0.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "zstd" ], "consumer_version": "2.1.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.1.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.2.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.2.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.3.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.3.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.4.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.4.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.5.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.5.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.6.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.6.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.7.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.7.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "2.8.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "2.8.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "3.0.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "3.0.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "3.1.2", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "3.1.2", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "3.2.3", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "3.2.3", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "3.3.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "3.3.1", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "0.9.0.1", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "timestamp_type": "CreateTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "none" ], "consumer_version": "dev", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "timestamp_type": "LogAppendTime" }-
Module: kafkatest.tests.core.compatibility_test_new_broker_test Class: ClientCompatibilityTestNewBroker Method: test_compatibility Arguments: { "compression_types": [ "snappy" ], "consumer_version": "dev", "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "timestamp_type": "LogAppendTime" }-
Module: kafkatest.tests.core.security_test Class: SecurityTest Method: test_client_ssl_endpoint_validation_failure Arguments: { "interbroker_security_protocol": "SSL", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT" }Test that invalid hostname in certificate results in connection failures. When security_protocol=SSL, client SSL handshakes are expected to fail due to hostname verification failure. When security_protocol=PLAINTEXT and interbroker_security_protocol=SSL, controller connections fail with hostname verification failure. Since metadata cannot be propagated in the cluster without a valid certificate, the broker's metadata caches will be empty. Hence we expect Metadata requests to fail with an INVALID_REPLICATION_FACTOR error since the broker will attempt to create the topic automatically as it does not exist in the metadata cache, and there will be no online brokers.
Module: kafkatest.tests.core.security_test Class: SecurityTest Method: test_client_ssl_endpoint_validation_failure Arguments: { "interbroker_security_protocol": "PLAINTEXT", "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SSL" }Test that invalid hostname in certificate results in connection failures. When security_protocol=SSL, client SSL handshakes are expected to fail due to hostname verification failure. When security_protocol=PLAINTEXT and interbroker_security_protocol=SSL, controller connections fail with hostname verification failure. Since metadata cannot be propagated in the cluster without a valid certificate, the broker's metadata caches will be empty. Hence we expect Metadata requests to fail with an INVALID_REPLICATION_FACTOR error since the broker will attempt to create the topic automatically as it does not exist in the metadata cache, and there will be no online brokers.
Module: kafkatest.sanity_checks.test_performance_services Class: PerformanceServiceTest Method: test_version Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "version": "dev" }Sanity check out producer performance service - verify that we can run the service with a small number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
Module: kafkatest.sanity_checks.test_performance_services Class: PerformanceServiceTest Method: test_version Arguments: { "metadata_quorum": "REMOTE_KRAFT", "version": "dev" }Sanity check out producer performance service - verify that we can run the service with a small number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
Module: kafkatest.tests.tools.log4j_appender_test Class: Log4jAppenderTest Method: test_log4j_appender Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SASL_PLAINTEXT" }Tests if KafkaLog4jAppender is producing to Kafka topic :return: None
Module: kafkatest.tests.tools.log4j_appender_test Class: Log4jAppenderTest Method: test_log4j_appender Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SASL_SSL" }Tests if KafkaLog4jAppender is producing to Kafka topic :return: None
Module: kafkatest.sanity_checks.test_bounce Class: TestBounce Method: test_simple_run Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "quorum_size": 1 }Test that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages both before and after a subsequent roll.
Module: kafkatest.sanity_checks.test_bounce Class: TestBounce Method: test_simple_run Arguments: { "metadata_quorum": "REMOTE_KRAFT", "quorum_size": 1 }Test that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages both before and after a subsequent roll.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "security_protocol": "SASL_PLAINTEXT" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SASL_PLAINTEXT" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "security_protocol": "SASL_SSL" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SASL_SSL" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "sasl_mechanism": "PLAIN", "security_protocol": "SASL_SSL" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "REMOTE_KRAFT", "sasl_mechanism": "PLAIN", "security_protocol": "SASL_SSL" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_multiple_kraft_sasl_mechanisms Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Test for remote KRaft cases that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages. The inter-controller and broker-to-controller security protocols are both SASL_PLAINTEXT but the SASL mechanisms are different (we set GSSAPI for the inter-controller mechanism and PLAIN for the broker-to-controller mechanism). This test differs from the above tests -- the ones above used the same SASL mechanism for both paths.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_multiple_kraft_security_protocols Arguments: { "inter_broker_security_protocol": "PLAINTEXT", "metadata_quorum": "REMOTE_KRAFT" }Test for remote KRaft cases that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages. The inter-controller and broker-to-controller security protocols are defined to be different (which differs from the above test, where they were the same).
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_multiple_kraft_security_protocols Arguments: { "inter_broker_sasl_mechanism": "GSSAPI", "inter_broker_security_protocol": "SASL_SSL", "metadata_quorum": "REMOTE_KRAFT" }Test for remote KRaft cases that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages. The inter-controller and broker-to-controller security protocols are defined to be different (which differs from the above test, where they were the same).
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_multiple_kraft_security_protocols Arguments: { "inter_broker_sasl_mechanism": "PLAIN", "inter_broker_security_protocol": "SASL_SSL", "metadata_quorum": "REMOTE_KRAFT" }Test for remote KRaft cases that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages. The inter-controller and broker-to-controller security protocols are defined to be different (which differs from the above test, where they were the same).
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_multiple_kraft_security_protocols Arguments: { "inter_broker_security_protocol": "SSL", "metadata_quorum": "REMOTE_KRAFT" }Test for remote KRaft cases that we can start VerifiableProducer on the current branch snapshot version, and verify that we can produce a small number of messages. The inter-controller and broker-to-controller security protocols are defined to be different (which differs from the above test, where they were the same).
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "producer_version": "dev", "sasl_mechanism": "GSSAPI", "security_protocol": "SASL_SSL" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "sasl_mechanism": "GSSAPI", "security_protocol": "SASL_SSL" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "producer_version": "dev", "sasl_mechanism": "PLAIN", "security_protocol": "SASL_SSL" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "sasl_mechanism": "PLAIN", "security_protocol": "SASL_SSL" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.tests.client.consumer_rolling_upgrade_test Class: ConsumerRollingUpgradeTest Method: rolling_update_test Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Verify rolling updates of partition assignment strategies works correctly. In this test, we use a rolling restart to change the group's assignment strategy from "range" to "roundrobin." We verify after every restart that all members are still in the group and that the correct assignment strategy was used.
Module: kafkatest.tests.client.pluggable_test Class: PluggableConsumerTest Method: test_start_stop Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Test that a pluggable VerifiableConsumer module load works
Module: kafkatest.tests.core.authorizer_test Class: AuthorizerTest Method: test_authorizer Arguments: { "authorizer_class": "kafka.security.authorizer.AclAuthorizer", "metadata_quorum": "REMOTE_KRAFT" }Tests that the default Authorizer implementations work with both ZooKeeper-based and KRaft clusters. Alters client quotas, making sure it works. Rolls Kafka with an authorizer. Alters client quotas, making sure it fails. Adds ACLs with super-user broker credentials. Alters client quotas, making sure it now works again. Removes ACLs. Note that we intend to have separate test explicitly for the KRaft-based replacement for the ZooKeeper-based authorizer.
Module: kafkatest.tests.core.authorizer_test Class: AuthorizerTest Method: test_authorizer Arguments: { "authorizer_class": "org.apache.kafka.metadata.authorizer.StandardAuthorizer", "metadata_quorum": "REMOTE_KRAFT" }Tests that the default Authorizer implementations work with both ZooKeeper-based and KRaft clusters. Alters client quotas, making sure it works. Rolls Kafka with an authorizer. Alters client quotas, making sure it fails. Adds ACLs with super-user broker credentials. Alters client quotas, making sure it now works again. Removes ACLs. Note that we intend to have separate test explicitly for the KRaft-based replacement for the ZooKeeper-based authorizer.
Module: kafkatest.tests.core.elastic_log_reopen_test Class: ElasticLogReOpenTest Method: test_elastic_log_reassign Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "partition_count": 1, "replication_factor": 1 }Test that we can reassign a topic with elastic log and record time cost
Module: kafkatest.tests.core.elastic_log_reopen_test Class: ElasticLogReOpenTest Method: test_elastic_log_reassign Arguments: { "metadata_quorum": "REMOTE_KRAFT", "partition_count": 1, "replication_factor": 1 }Test that we can reassign a topic with elastic log and record time cost
Module: kafkatest.tests.core.get_offset_shell_test Class: GetOffsetShellTest Method: test_get_offset_shell_internal_filter Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Tests if GetOffsetShell handles --exclude-internal-topics flag correctly :return: None
Module: kafkatest.tests.core.get_offset_shell_test Class: GetOffsetShellTest Method: test_get_offset_shell_topic_partitions Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Tests if GetOffsetShell handles --topic-partitions argument correctly :return: None
Module: kafkatest.tests.core.get_offset_shell_test Class: GetOffsetShellTest Method: test_get_offset_shell_topic_pattern Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Tests if GetOffsetShell handles --topic argument with a pattern correctly :return: None
Module: kafkatest.tests.tools.kibosh_test Class: KiboshTest Method: test_kibosh_service-
Module: kafkatest.tests.tools.log4j_appender_test Class: Log4jAppenderTest Method: test_log4j_appender Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT" }Tests if KafkaLog4jAppender is producing to Kafka topic :return: None
Module: kafkatest.tests.tools.log4j_appender_test Class: Log4jAppenderTest Method: test_log4j_appender Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SSL" }Tests if KafkaLog4jAppender is producing to Kafka topic :return: None
Module: kafkatest.tests.tools.log_compaction_test Class: LogCompactionTest Method: test_log_compaction Arguments: { "metadata_quorum": "REMOTE_KRAFT" }-
Module: kafkatest.tests.tools.trogdor_test Class: TrogdorTest Method: test_network_partition_faultTest that the network partition fault results in a true network partition between nodes.
Module: kafkatest.tests.tools.trogdor_test Class: TrogdorTest Method: test_trogdor_serviceTest that we can bring up Trogdor and create a no-op fault.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "security_protocol": "PLAINTEXT" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "security_protocol": "SSL" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_console_consumer Class: ConsoleConsumerTest Method: test_lifecycle Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SSL" }Check that console consumer starts/stops properly, and that we are capturing log output.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "producer_version": "dev", "security_protocol": "PLAINTEXT" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "security_protocol": "PLAINTEXT" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "producer_version": "dev", "security_protocol": "SSL" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.sanity_checks.test_verifiable_producer Class: TestVerifiableProducer Method: test_simple_run Arguments: { "metadata_quorum": "REMOTE_KRAFT", "producer_version": "dev", "security_protocol": "SSL" }Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and verify that we can produce a small number of messages.
Module: kafkatest.tests.core.consumer_group_command_test Class: ConsumerGroupCommandTest Method: test_describe_consumer_group Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT" }Tests if ConsumerGroupCommand is describing a consumer group correctly :return: None
Module: kafkatest.tests.core.consumer_group_command_test Class: ConsumerGroupCommandTest Method: test_describe_consumer_group Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SSL" }Tests if ConsumerGroupCommand is describing a consumer group correctly :return: None
Module: kafkatest.tests.core.consumer_group_command_test Class: ConsumerGroupCommandTest Method: test_list_consumer_groups Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "PLAINTEXT" }Tests if ConsumerGroupCommand is listing correct consumer groups :return: None
Module: kafkatest.tests.core.consumer_group_command_test Class: ConsumerGroupCommandTest Method: test_list_consumer_groups Arguments: { "metadata_quorum": "REMOTE_KRAFT", "security_protocol": "SSL" }Tests if ConsumerGroupCommand is listing correct consumer groups :return: None
Module: kafkatest.tests.core.elastic_log_reopen_test Class: ElasticLogReOpenTest Method: test_elastic_log_create Arguments: { "metadata_quorum": "COLOCATED_KRAFT", "partition_count": 1, "replication_factor": 1, "topic_count": 1 }Test that we can create a topic with elastic log and record time cost
Module: kafkatest.tests.core.elastic_log_reopen_test Class: ElasticLogReOpenTest Method: test_elastic_log_create Arguments: { "metadata_quorum": "REMOTE_KRAFT", "partition_count": 1, "replication_factor": 1, "topic_count": 1 }Test that we can create a topic with elastic log and record time cost
Module: kafkatest.tests.core.get_offset_shell_test Class: GetOffsetShellTest Method: test_get_offset_shell_partitions Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Tests if GetOffsetShell handles --partitions argument correctly :return: None
Module: kafkatest.tests.core.get_offset_shell_test Class: GetOffsetShellTest Method: test_get_offset_shell_topic_name Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Tests if GetOffsetShell handles --topic argument with a simple name correctly :return: None
Module: kafkatest.tests.core.security_test Class: SecurityTest Method: test_quorum_ssl_endpoint_validation_failure Arguments: { "metadata_quorum": "REMOTE_KRAFT" }Test that invalid hostname in ZooKeeper or KRaft Controller results in broker inability to start.