Cluster Deployment on Linux
This article describes how to quickly deploy and start an AutoMQ cluster with 3 CONTROLLER nodes and 2 BROKER nodes in a Public Cloud environment, and how to test AutoMQ's core features.
AutoMQ supports deployment in a Private Cloud. You can choose to build your own storage system compatible with AWS EBS and AWS S3, such as Ceph, CubeFS, or MinIO.
Prerequisites
Prepare 5 hosts for deploying the AutoMQ cluster. In a Public Cloud environment, network-optimized Linux amd64 hosts with 2 CPUs and 16GB of memory are recommended; ensure the system disk has at least 10GB of storage and the data volume has at least 10GB. The configuration can be reduced for a test environment. An example layout:
Role | IP | Node ID | System Volume | Data Volume |
---|---|---|---|---|
CONTROLLER | 192.168.0.1 | 0 | EBS 20GB | EBS 20GB |
CONTROLLER | 192.168.0.2 | 1 | EBS 20GB | EBS 20GB |
CONTROLLER | 192.168.0.3 | 2 | EBS 20GB | EBS 20GB |
BROKER | 192.168.0.4 | 3 | EBS 20GB | EBS 20GB |
BROKER | 192.168.0.5 | 4 | EBS 20GB | EBS 20GB |

It is recommended to use the same subnet and IP addresses as in this example when purchasing computing resources, so the operation commands below can be copied directly.
Download the binary installation package for installing AutoMQ. Refer to Software Artifact▸.
Create two custom-named object storage buckets, such as automq-data and automq-ops.
Create an IAM user and generate an Access Key and Secret Key for this user. Then grant the IAM user full read and write permissions on the previously created object storage buckets.
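As an illustration of the bucket-permission step, the sketch below renders an AWS-style policy for arbitrary bucket names, keeping the two ARNs in sync with whatever buckets you created. The function name is hypothetical and the `arn:aws` partition is an assumption (AWS China uses `arn:aws-cn`; other platforms use their own policy formats shown below):

```shell
# Hypothetical helper: renders the S3 policy for custom bucket names.
# Assumes the global AWS partition (arn:aws); use arn:aws-cn for AWS China.
render_policy() {
  local data_bucket="$1" ops_bucket="$2"
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::${data_bucket}/*",
        "arn:aws:s3:::${ops_bucket}/*"
      ]
    }
  ]
}
EOF
}

render_policy automq-data automq-ops
```

The rendered JSON can then be attached to the IAM user through your cloud console or CLI.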
- AWS
- Azure
- GCP
- AWS China
- Alibaba Cloud
- Tencent Cloud
- Huawei Cloud
- Baidu Cloud
- Other Cloud Platforms
AWS
For more detailed information, please refer to the official website. Note that the global AWS partition uses arn:aws (the arn:aws-cn partition is for AWS China, shown separately below).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::automq-data/*",
                "arn:aws:s3:::automq-ops/*"
            ]
        }
    ]
}
Azure
Because Azure's object storage is not protocol-compatible with AWS S3, AutoMQ currently cannot run on Azure. The AutoMQ Team is developing a compatibility solution for Azure and plans to release it soon.
GCP
{
    "title": "AutomqStorageRole",
    "description": "Custom Roles for AutoMQ Store Operations",
    "stage": "GA",
    "includedPermissions": [
        "storage.multipartUploads.create",
        "storage.objects.create",
        "storage.objects.delete",
        "storage.objects.get"
    ]
}
AWS China
For more detailed information, please refer to the official website. The object-level actions below require object ARNs, so each resource ends with /*.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws-cn:s3:::automq-data/*",
                "arn:aws-cn:s3:::automq-ops/*"
            ]
        }
    ]
}
Alibaba Cloud
For more detailed information, please refer to the official website.
{
    "Version": "1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "oss:PutObject",
                "oss:AbortMultipartUpload",
                "oss:GetObject",
                "oss:DeleteObject"
            ],
            "Resource": [
                "acs:oss:*:*:automq-data/*",
                "acs:oss:*:*:automq-ops/*"
            ]
        }
    ]
}
Tencent Cloud
Please refer to the official website for more detailed information. The region (ap-nanjing) and APPID (1258965391) below are examples; substitute your own values.
{
    "statement": [
        {
            "action": [
                "cos:AbortMultipartUpload",
                "cos:GetObject",
                "cos:CompleteMultipartUpload",
                "cos:InitiateMultipartUpload",
                "cos:DeleteObject",
                "cos:PutObject",
                "cos:UploadPart"
            ],
            "effect": "allow",
            "resource": [
                "qcs::cos:ap-nanjing:uid/1258965391:automq-data-1258965391/*",
                "qcs::cos:ap-nanjing:uid/1258965391:automq-ops-1258965391/*"
            ]
        }
    ],
    "version": "2.0"
}
Huawei Cloud
Please refer to the official website for more detailed information.
{
    "Version": "1.1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "obs:object:GetObject",
                "obs:object:AbortMultipartUpload",
                "obs:object:DeleteObject",
                "obs:object:PutObject"
            ],
            "Resource": [
                "OBS:*:*:object:automq-data/*",
                "OBS:*:*:object:automq-ops/*"
            ]
        }
    ]
}
Baidu Cloud
Please refer to the official website for more detailed information.
{
    "accessControlList": [
        {
            "service": "bce:bos",
            "region": "*",
            "effect": "Allow",
            "permission": [
                "READ",
                "WRITE"
            ],
            "resource": [
                "automq-data/*",
                "automq-ops/*"
            ]
        }
    ]
}
Other Cloud Platforms
AutoMQ requires EBS and S3 services. As long as a cloud platform supports the standard protocols for these two services, AutoMQ can run on that platform. The AutoMQ Team will continuously improve the compatibility test reports for other cloud platforms.
Install and Start the AutoMQ Cluster
Step 1: Generate S3 URL
AutoMQ provides the automq-kafka-admin.sh tool for quickly launching AutoMQ. By simply providing an S3 URL containing the necessary S3 access point and authentication information, you can start AutoMQ with a single command, without manually generating a cluster ID or formatting the storage.
bin/automq-kafka-admin.sh generate-s3-url \
--s3-access-key=xxx \
--s3-secret-key=yyy \
--s3-region=cn-northwest-1 \
--s3-endpoint=s3.cn-northwest-1.amazonaws.com.cn \
--s3-data-bucket=automq-data \
--s3-ops-bucket=automq-ops
For parameters like Endpoint and Region used in the command line, please refer to the configuration instructions of each cloud provider.
Output Result
After executing the command, the process will automatically proceed through the following stages:
- Probe the basic features of S3 using the provided access key and secret key to verify compatibility between AutoMQ and the object storage.
- Generate an s3url from the provided access point and authentication information.
- Print an example command for starting AutoMQ based on the s3url. In that command, replace --controller-list and --broker-list with the actual CONTROLLER and BROKER addresses you plan to deploy.
############ Ping S3 ########################
[ OK ] Write s3 object
[ OK ] Read s3 object
[ OK ] Delete s3 object
[ OK ] Write s3 object
[ OK ] Upload s3 multipart object
[ OK ] Read s3 multipart object
[ OK ] Delete s3 object
############ String of S3url ################
Your s3url is:
s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=xxx&s3-secret-key=yyy&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA
############ Usage of S3url ################
To start AutoMQ, generate the start commandline using s3url.
bin/automq-kafka-admin.sh generate-start-command \
--s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" \
--controller-list="192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093" \
--broker-list="192.168.0.4:9092;192.168.0.5:9092"
TIPS: Please replace the controller-list and broker-list with your actual IP addresses.
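The s3url is an ordinary URL with key=value query parameters, so its fields can be inspected with standard shell tools before you start the cluster. The helper below is a hypothetical sketch (the example URL reuses the placeholder credentials from the output above):

```shell
# Hypothetical helper: extract one query parameter from an AutoMQ s3url.
s3url_get() {
  local url="$1" key="$2"
  # Drop everything up to the '?', split the query string on '&',
  # then print the value whose key matches.
  echo "${url#*\?}" | tr '&' '\n' | awk -F= -v k="$key" '$1 == k { print $2 }'
}

S3URL="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=xxx&s3-secret-key=yyy&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA"

s3url_get "$S3URL" cluster-id      # prints 40ErA_nGQ_qNPDz0uodTEA
s3url_get "$S3URL" s3-data-bucket  # prints automq-data
```

This is handy for double-checking that the buckets and region in the generated s3url match what you provisioned.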
Step 2: Generate the List of Startup Commands
Replace the --controller-list and --broker-list parameters in the commands generated in the previous step with your host information. Specifically, replace them with the IP addresses of the 3 CONTROLLER and 2 BROKER machines mentioned in the environment setup, using the default ports 9092 and 9093.
bin/automq-kafka-admin.sh generate-start-command \
--s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" \
--controller-list="192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093" \
--broker-list="192.168.0.4:9092;192.168.0.5:9092"
Parameter Description
Parameter Name | Required | Description |
---|---|---|
--s3-url | Yes | Generated by the bin/automq-kafka-admin.sh generate-s3-url command line tool, includes authentication, cluster ID, etc. |
--controller-list | Yes | At least one address is needed, used as the IP and port list of CONTROLLER hosts. The format is IP1:PORT1; IP2:PORT2; IP3:PORT3 |
--broker-list | Yes | At least one address is needed, used as the IP and port list of BROKER hosts. The format is IP1:PORT1; IP2:PORT2; IP3:PORT3 |
--controller-only-mode | No | Determines whether the CONTROLLER node only serves the CONTROLLER role. The default is false, meaning the deployed CONTROLLER node also acts as a BROKER. |
Output Result
After executing the command, a command for starting AutoMQ will be generated.
############ Start Commandline ##############
To start an AutoMQ Kafka server, please navigate to the directory where your AutoMQ tgz file is located and run the following command.
Before running the command, make sure that Java 17 is installed on your host. You can verify the Java version by executing 'java -version'.
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=1 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.2:9092,CONTROLLER://192.168.0.2:9093 --override advertised.listeners=PLAINTEXT://192.168.0.2:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=2 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.3:9092,CONTROLLER://192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.3:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker --override node.id=3 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.4:9092 --override advertised.listeners=PLAINTEXT://192.168.0.4:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker --override node.id=4 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.5:9092 --override advertised.listeners=PLAINTEXT://192.168.0.5:9092
TIPS: Start controllers first and then the brokers.
node.id is automatically generated starting from 0 by default.
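The node.id assignment above can be sketched as follows: controllers are numbered first starting from 0, then brokers continue the sequence. This is a simplified illustration of the numbering scheme only (it prints per-node settings, not the full kafka-server-start.sh lines; the function name is hypothetical):

```shell
# Sketch of the node.id numbering used by generate-start-command:
# controllers get IDs first (starting at 0), brokers continue the sequence.
gen_node_plan() {
  local controllers="$1" brokers="$2" id=0 host
  for host in ${controllers//;/ }; do
    echo "node.id=$id roles=broker,controller listener=${host}"
    id=$((id + 1))
  done
  for host in ${brokers//;/ }; do
    echo "node.id=$id roles=broker listener=${host}"
    id=$((id + 1))
  done
}

gen_node_plan \
  "192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093" \
  "192.168.0.4:9092;192.168.0.5:9092"
```

For the example hosts this yields IDs 0-2 for the controllers and 3-4 for the brokers, matching the table in the Prerequisites section.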
Step 3: Start AutoMQ
To start the cluster, execute the command list from the previous step on the designated CONTROLLER or BROKER hosts in sequence. For example, to start the first CONTROLLER process on 192.168.0.1, execute the first command from the generated startup command list.
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092
Parameter Explanation
When using the startup command, any unspecified parameters will use the default configurations of Apache Kafka. For new parameters introduced by AutoMQ, the default values provided by AutoMQ will be used. To override the default configurations, you can append additional --override key=value parameters at the end of the command.
Parameter Name | Required | Description |
---|---|---|
s3-url | Yes | Generated by the bin/automq-kafka-admin.sh generate-s3-url command-line tool, includes authentication, cluster ID, etc. |
process.roles | Yes | Set to controller, broker, or both. If a host acts as both CONTROLLER and BROKER, set the value to broker,controller. |
node.id | Yes | An integer that uniquely identifies a BROKER or CONTROLLER within the Kafka cluster. Must be unique within the cluster. |
controller.quorum.voters | Yes | The hosts participating in the KRaft election, as node id, IP, and port, e.g., 0@192.168.0.1:9093, 1@192.168.0.2:9093, 2@192.168.0.3:9093 |
listeners | Yes | The IP and port to listen on. |
advertised.listeners | Yes | The access address the BROKER advertises to clients. |
log.dirs | No | Directories storing KRaft and BROKER metadata. |
s3.wal.path | No | In production environments, it is recommended to store AutoMQ WAL data on a newly mounted raw device on a separate data volume. This setup enhances performance as AutoMQ supports writing data directly to raw devices, reducing latency. Ensure the correct path is configured to store the WAL data. |
autobalancer.controller.enable | No | The default value is false, meaning traffic self-balancing is disabled. When enabled, AutoMQ's auto balancer component will automatically reassign partitions to ensure balanced overall traffic. |
To enable self-balancing or run Example: Self-Balancing When Cluster Nodes Change▸, it is recommended to specify the parameter --override autobalancer.controller.enable=true for the Controller at startup.
Background Operation
To run in background mode, append the following code at the end of the command:
command > /dev/null 2>&1 &
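The background pattern above can be wrapped in a small helper that also records the PID, so the process can be stopped cleanly later. A generic sketch (the log and pid file paths are placeholders, and sleep stands in for the real kafka-server-start.sh command):

```shell
# Generic sketch: run a long-lived command in the background, detached from
# the terminal, with its output in a log file and its PID recorded.
start_bg() {
  local logfile="$1" pidfile="$2"; shift 2
  nohup "$@" > "$logfile" 2>&1 &
  echo $! > "$pidfile"
}

# Demo with a placeholder command; substitute the real start command, e.g.
# start_bg /var/log/automq.log /var/run/automq.pid bin/kafka-server-start.sh ...
start_bg /tmp/automq-demo.log /tmp/automq-demo.pid sleep 30
kill -0 "$(cat /tmp/automq-demo.pid)" && echo "process is running"
# Stop it later with: kill "$(cat /tmp/automq-demo.pid)"
```

Recording the PID avoids having to grep the process list when it is time to stop the node.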
Data Volume Path
Use the Linux lsblk command to view local data volumes; unpartitioned block devices are the data volumes. In the following example, vdb is an unpartitioned raw block device.
vda 253:0 0 20G 0 disk
├─vda1 253:1 0 2M 0 part
├─vda2 253:2 0 200M 0 part /boot/efi
└─vda3 253:3 0 19.8G 0 part /
vdb 253:16 0 20G 0 disk
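The "unpartitioned means data volume" rule can be automated. The sketch below parses lsblk-style text (fed in as a here-document rather than calling lsblk, so it also works on sample input) and prints disks that have no partitions:

```shell
# Sketch: print disks with no partitions from lsblk-style output.
# Assumes the default lsblk column order, where TYPE is the sixth field.
find_raw_disks() {
  awk '
    $6 == "disk" { d = $1; seen[d] = 0; order[++n] = d }
    $6 == "part" { seen[d] = 1 }
    END { for (i = 1; i <= n; i++) if (!seen[order[i]]) print order[i] }
  '
}

# On a real host, pipe lsblk into it: lsblk | find_raw_disks
find_raw_disks <<'EOF'
vda    253:0    0   20G  0 disk
├─vda1 253:1    0    2M  0 part
├─vda2 253:2    0  200M  0 part /boot/efi
└─vda3 253:3    0 19.8G  0 part /
vdb    253:16   0   20G  0 disk
EOF
# prints: vdb
```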
By default, AutoMQ stores metadata and WAL data in the /tmp directory. Note, however, that on Alibaba Cloud and Tencent Cloud /tmp is often mounted on tmpfs, making it unsuitable for production environments.
For production or formal testing environments, modify the configuration as follows: point the metadata directory log.dirs and the WAL data directory s3.wal.path (a raw device on the data disk) to other locations.
bin/kafka-server-start.sh ...\
--override s3.telemetry.metrics.exporter.type=prometheus \
--override s3.metrics.exporter.prom.host=0.0.0.0 \
--override s3.metrics.exporter.prom.port=9090 \
--override log.dirs=/root/kraft-logs \
--override s3.wal.path=/dev/vdb \
> /dev/null 2>&1 &
Please change s3.wal.path to the actual local raw device name.
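Before pointing s3.wal.path at a device, it is worth checking that the device is not already mounted (writing WAL data to a mounted filesystem's device would corrupt it). A minimal sketch, which reads a /proc/mounts-style table so it can also be run against sample data:

```shell
# Sketch: succeed if the device does not appear in the mounts table.
# Defaults to /proc/mounts; a missing or empty table counts as free.
device_is_free() {
  local device="$1" mounts="${2:-/proc/mounts}"
  ! grep -q "^${device} " "$mounts" 2>/dev/null
}

# Example guard before using /dev/vdb for s3.wal.path:
if device_is_free /dev/vdb; then
  echo "/dev/vdb looks unmounted; safe to use for s3.wal.path"
else
  echo "/dev/vdb is mounted; pick another device" >&2
fi
```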
Run the Demo
After starting the AutoMQ cluster, you can run the following demo to verify its functionality.
Stop and Uninstall the AutoMQ Cluster
After completing the tests, you can refer to the following steps to stop and uninstall the AutoMQ cluster.
- Execute the following command on each node to stop the process.
bin/kafka-server-stop.sh
- Configure lifecycle rules on the object storage to automatically clear the data in the s3-data-bucket and s3-ops-bucket, and then delete these buckets.
- Delete the created compute instances along with their corresponding system and data volumes.
- Delete the test user and their associated AccessKey and SecretKey.
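The stop step can be scripted across all five hosts. A dry-run sketch that only prints the commands; it assumes ssh access and that AutoMQ is installed under /opt/automq on each host (both assumptions), with brokers stopped before controllers, mirroring the recommended start order in reverse:

```shell
# Hosts from the example deployment; adjust to your environment.
BROKERS="192.168.0.4 192.168.0.5"
CONTROLLERS="192.168.0.1 192.168.0.2 192.168.0.3"

# Print (rather than execute) the per-host stop commands, brokers first.
print_stop_plan() {
  local host
  for host in $BROKERS $CONTROLLERS; do
    echo "ssh $host 'cd /opt/automq && bin/kafka-server-stop.sh'"
  done
}

print_stop_plan
```

Once the plan looks right, pipe it to sh (or replace echo with the real ssh invocation) to execute it.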