Quick Start

In this article, the term "AutoMQ Kafka" refers to the open-source project automq-for-kafka under the GitHub AutoMQ organization, maintained by AutoMQ CO., LTD.

This quick start will help you launch AutoMQ for Kafka (also known as AutoMQ Kafka) on your local machine and briefly experience message production, message consumption, and rapid partition reassignment.

Docker-Compose

Prerequisites

  • JDK 17

  • Docker

  • Docker Compose

  • AutoMQ Kafka Release

    Download the latest tgz package from AutoMQ Kafka Release. The package contains a Docker Compose configuration file as well as the command-line tools you will use to start and interact with AutoMQ.

    Note: Please ensure that at least 8GB of memory is reserved for Docker Engine; insufficient memory can affect data reads and writes.
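
Before moving on, you can quickly verify that these prerequisites are in place (a minimal sanity check; version output will vary by environment):


# Verify JDK 17 is on the PATH
java -version
# Verify Docker and the Compose plugin are installed
docker --version
docker compose version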

Launch Based on Docker-Compose

Extract the release package and change into the extracted directory:


tar zxvf automq-1.0.0_kafka-3.4.0.tgz && cd automq-1.0.0_kafka-3.4.0

You can start an AutoMQ Kafka cluster by configuring the docker-compose.yaml file. docker/docker-compose.yaml provides an example with 1 Controller node and 2 Broker nodes.

Use the following command to start the AutoMQ Kafka cluster:


docker compose -f docker/docker-compose.yaml up -d
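
You can confirm that all containers have started successfully; the container names listed should match those defined in docker/docker-compose.yaml:


docker compose -f docker/docker-compose.yaml ps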

After the cluster starts, note the following about the example setup:

  • To simulate S3 locally, a LocalStack container provides the S3 service.

  • The aws-cli container creates a Bucket once LocalStack is up, and exits after successful creation.

  • The "10.6.0.0/16" network segment is reserved for the AutoMQ Kafka cluster. If it conflicts with one of your other Docker networks, modify the network configuration, for example to "10.7.0.0/16", and adjust the static IP of LocalStack accordingly (see the subnet check below).

  • The ports of the two brokers are mapped to the host, so you can reach broker1 at localhost:9094 and broker2 at localhost:9095.
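
If you are unsure whether "10.6.0.0/16" conflicts with an existing Docker network, you can list your networks and inspect their subnets before editing the file (<network-name> below is a placeholder):


# List existing Docker networks
docker network ls
# Print the subnet of a specific network
docker network inspect <network-name> --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'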

Message Sending

In this section, you will create a Topic named quickstart-events and experience message sending.

Create a Topic named quickstart-events with 1 partition, and produce messages to it:


# Create Topic
bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9094
# Use kafka-console-producer.sh to produce messages
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9094

You can enter multiple lines of data, then press CTRL+C to finish.
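
If you prefer a non-interactive run, you can also pipe a few lines into the producer; the message contents below are arbitrary examples:


printf 'hello automq\nhello kafka\n' | bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9094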

Message Consumption

In this section, you will experience message consumption in AutoMQ Kafka.

Continue to use kafka-console-consumer.sh to consume messages from quickstart-events:


bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9094

You will see the data you entered in the previous step. Press CTRL+C to end consumption.
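
To make the consumer exit on its own instead of waiting for CTRL+C, you can bound the run with --max-messages (set it to the number of messages you produced):


bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --max-messages 2 --bootstrap-server localhost:9094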

Partition Reassignment in Seconds

In this section, you will experience the ability of AutoMQ Kafka to reassign partitions in seconds.

Using broker1 as the bootstrap server, check the partition distribution of quickstart-events:


bin/kafka-topics.sh --topic quickstart-events --describe --bootstrap-server localhost:9094

You will see output similar to the following; on the test machine, quickstart-events-0 is managed by Node 1:

Create the following reassignment plan file move.json to move quickstart-events-0 to Node 2 (if the previous step showed quickstart-events-0 managed by Node 2, move it to Node 1 instead):


{
  "partitions": [
    {
      "topic": "quickstart-events",
      "partition": 0,
      "replicas": [
        2
      ]
    }
  ],
  "version": 1
}
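
Alternatively, instead of writing move.json by hand, kafka-reassign-partitions.sh can generate a candidate plan for you. The sketch below assumes a small helper file topics.json that names the topic to move:


# topics.json contains: {"topics": [{"topic": "quickstart-events"}], "version": 1}
bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9094 --topics-to-move-json-file topics.json --broker-list "2" --generate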

Execute the reassignment:


bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9094 --reassignment-json-file move.json --execute

You will see output similar to the following:

Check if the reassignment is successful:


bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9094 --reassignment-json-file move.json --verify

You will see output similar to the following:

You can check the distribution of the quickstart-events partition again:


bin/kafka-topics.sh --topic quickstart-events --describe --bootstrap-server localhost:9094

You can see that quickstart-events-0 is now managed by Node 2:

You can produce and consume again for further verification.
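
For example, a quick round trip confirms the partition still works after the move; adjust --max-messages to the total number of messages produced so far:


# Produce one more message, then read everything back
printf 'after-reassignment\n' | bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9094
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --max-messages 3 --bootstrap-server localhost:9094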

Auto Data Self-balancing

In this section, you will experience the automatic data self-balancing ability brought by AutoMQ Kafka's built-in Auto Balancer.

AutoMQ Kafka ships with a built-in Auto Balancer that automatically balances partition data: it detects overloaded and underloaded nodes and reassigns some partitions to the less loaded ones.

To experience automatic data self-balancing, first stop the broker2 container, then create a new Topic test-topic with 10 partitions:


# Stop broker2
docker stop broker2
# Create Topic
bin/kafka-topics.sh --create --topic test-topic --partitions 10 --bootstrap-server localhost:9094

Note that the bootstrap server here must be broker1's address (localhost:9094), since broker2 is down.
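
Before restarting broker2, you can optionally confirm that every partition of test-topic currently resides on broker1:


bin/kafka-topics.sh --topic test-topic --describe --bootstrap-server localhost:9094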

Since broker2 has been shut down, all of these partitions will be placed on broker1. Now restart broker2:


docker start broker2

Use kafka-producer-perf-test.sh to continuously produce messages:


bin/kafka-producer-perf-test.sh --topic test-topic --num-records=1024000 --throughput 5120 --record-size 1024 --producer-props bootstrap.servers=localhost:9094

After a while, the producer will print NOT_LEADER_OR_FOLLOWER WARN logs:

These WARN logs come from the producer, which automatically retries sending in scenarios like NOT_LEADER_OR_FOLLOWER, so no messages are lost.

They appear because Auto Balancer is migrating partitions. After a while, when the partition reassignment is complete, production resumes:

Use the following command to check the distribution of the partitions:


bin/kafka-topics.sh --topic test-topic --describe --bootstrap-server localhost:9094

You will see output similar to the following:

As you can see, some Partitions have been reassigned to broker2.

During the producer stress test, you may see several batches of NOT_LEADER_OR_FOLLOWER WARN messages; this can happen when Auto Balancer generates multiple reassignment plans.

You can view the detailed reassignment plans in the Controller container by running grep "Action-MOVE" /opt/kafka/kafka/logs/server.log.
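
Since that log file lives inside the Controller container, one way to run the grep from the host is via docker exec; controller1 below is a placeholder for the actual container name shown by docker ps:


docker exec controller1 grep "Action-MOVE" /opt/kafka/kafka/logs/server.log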

Destroy Based on Docker-Compose

Use the following command to destroy the AutoMQ Kafka cluster:


docker compose -f docker/docker-compose.yaml down -v
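
To verify the teardown, list the project's containers; after the command above (whose -v flag also removes the data volumes), nothing should remain:


docker compose -f docker/docker-compose.yaml ps -a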