Cluster Deployment on Linux
This article describes how to quickly deploy and start an AutoMQ cluster with 3 CONTROLLER nodes and 2 BROKER nodes in a Public Cloud environment, and how to test AutoMQ's core features.
AutoMQ supports deployment in a Private Cloud. You can choose to build your own storage system compatible with AWS EBS and AWS S3, such as Ceph, CubeFS, or MinIO.
Prerequisites
Prepare 5 hosts for deploying the AutoMQ cluster. In a Public Cloud environment, it is recommended to choose network-optimized Linux amd64 hosts with 2 CPUs and 16GB of memory, with at least 10GB of system disk space and at least 10GB of data volume space. For a testing environment, the configuration can be reduced accordingly. Example:
| Role | IP | Node ID | System Volume | Data Volume |
|------------|-------------|---------|---------------|-------------|
| CONTROLLER | 192.168.0.1 | 0 | EBS 20GB | EBS 20GB |
| CONTROLLER | 192.168.0.2 | 1 | EBS 20GB | EBS 20GB |
| CONTROLLER | 192.168.0.3 | 2 | EBS 20GB | EBS 20GB |
| BROKER | 192.168.0.4 | 1000 | EBS 20GB | EBS 20GB |
| BROKER | 192.168.0.5 | 1001 | EBS 20GB | EBS 20GB |

It is recommended to use the same subnet and IP addresses as in this example when purchasing compute resources, so the operation commands in this article can be copied directly.
Download the AutoMQ binary package. Refer to Software Artifact.
Create two custom-named object storage buckets, such as automq-data and automq-ops.
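For example, on AWS the buckets can be created with the AWS CLI; this sketch assumes the us-east-1 region and the example bucket names above (adjust both to your environment):

```shell
# Create the data bucket and the ops bucket
aws s3api create-bucket --bucket automq-data --region us-east-1
aws s3api create-bucket --bucket automq-ops --region us-east-1
```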
Create an IAM user and generate an Access Key and Secret Key for it. Then ensure that the IAM user has full read and write permissions to the object storage buckets created above.
AWS
Please refer to the official website for more detailed information.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::automq-data/*",
        "arn:aws:s3:::automq-ops/*"
      ]
    }
  ]
}
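The policy can then be attached to the IAM user with the AWS CLI; a minimal sketch, assuming the policy JSON above is saved as automq-s3-policy.json and the user is named automq-user (both names are illustrative):

```shell
# Attach the inline policy to the IAM user
aws iam put-user-policy \
  --user-name automq-user \
  --policy-name automq-s3-access \
  --policy-document file://automq-s3-policy.json
```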
Azure
Because Azure's object storage is not compatible with AWS S3 at the protocol level, AutoMQ currently cannot run on Azure. The AutoMQ Team is developing a compatibility solution for Azure and plans to release it soon.
GCP
Please refer to the official website for more detailed information.
{
  "title": "AutomqStorageRole",
  "description": "Custom Roles for AutoMQ Store Operations",
  "stage": "GA",
  "includedPermissions": [
    "storage.multipartUploads.create",
    "storage.objects.create",
    "storage.objects.delete",
    "storage.objects.get"
  ]
}
AWS China
Please refer to the official website for more detailed information.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws-cn:s3:::automq-data/*",
        "arn:aws-cn:s3:::automq-ops/*"
      ]
    }
  ]
}
Alibaba Cloud
Please refer to the official website for more detailed information.
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:PutObject",
        "oss:AbortMultipartUpload",
        "oss:GetObject",
        "oss:DeleteObject"
      ],
      "Resource": [
        "acs:oss:*:*:automq-data/*",
        "acs:oss:*:*:automq-ops/*"
      ]
    }
  ]
}
Tencent Cloud
Please refer to the official website for more detailed information.
{
  "statement": [
    {
      "action": [
        "cos:AbortMultipartUpload",
        "cos:GetObject",
        "cos:CompleteMultipartUpload",
        "cos:InitiateMultipartUpload",
        "cos:DeleteObject",
        "cos:PutObject",
        "cos:UploadPart"
      ],
      "effect": "allow",
      "resource": [
        "qcs::cos:ap-nanjing:uid/1258965391:automq-data-1258965391/*",
        "qcs::cos:ap-nanjing:uid/1258965391:automq-ops-1258965391/*"
      ]
    }
  ],
  "version": "2.0"
}
Huawei Cloud
Please refer to the official website for more detailed information.
{
  "Version": "1.1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "obs:object:GetObject",
        "obs:object:AbortMultipartUpload",
        "obs:object:DeleteObject",
        "obs:object:PutObject"
      ],
      "Resource": [
        "OBS:*:*:object:automq-data/*",
        "OBS:*:*:object:automq-ops/*"
      ]
    }
  ]
}
Baidu Cloud
Please refer to the official website for more detailed information.
{
  "accessControlList": [
    {
      "service": "bce:bos",
      "region": "*",
      "resource": [
        "automq-data/*",
        "automq-ops/*"
      ],
      "effect": "Allow",
      "permission": [
        "READ",
        "WRITE"
      ]
    }
  ]
}
Other Cloud Platforms
AutoMQ requires EBS and S3 services. As long as a cloud platform supports the standard protocols for these two services, AutoMQ can run on it. The AutoMQ Team will continue to expand and publish compatibility test reports for other cloud platforms.
Install and Start the AutoMQ Cluster
Step 1: Create a Cluster Deployment Project
AutoMQ provides the automq-cli.sh tool for AutoMQ cluster operations. Running automq-cli.sh cluster create [project] automatically creates a cluster configuration template at clusters/[project]/topo.yaml in the current directory.
bin/automq-cli.sh cluster create poc
Success create AutoMQ cluster project: poc
========================================================
Please follow the steps to deploy AutoMQ cluster:
1. Modify the cluster topology config clusters/poc/topo.yaml to fit your needs
2. Run ./bin/automq-cli.sh cluster deploy --dry-run clusters/poc , to deploy the AutoMQ cluster
Step 2: Edit the Cluster Configuration Template
Edit the configuration template generated in Step 1. A sample configuration template is shown below:
global:
  clusterId: ''
  # Bucket URI Pattern: 0@s3://$bucket?region=$region&endpoint=$endpoint
  # Bucket URI Example:
  #   AWS    : 0@s3://xxx_bucket?region=us-east-1
  #   AWS-CN : 0@s3://xxx_bucket?region=cn-northwest-1&endpoint=https://s3.amazonaws.com.cn
  #   ALIYUN : 0@s3://xxx_bucket?region=oss-cn-shanghai&endpoint=https://oss-cn-shanghai.aliyuncs.com
  #   TENCENT: 0@s3://xxx_bucket?region=ap-beijing&endpoint=https://cos.ap-beijing.myqcloud.com
  config: |
    s3.data.buckets=0@s3://xxx_bucket?region=us-east-1
    s3.ops.buckets=1@s3://xxx_bucket?region=us-east-1
  envs:
    - name: KAFKA_S3_ACCESS_KEY
      value: 'xxxxx'
    - name: KAFKA_S3_SECRET_KEY
      value: 'xxxxx'
controllers:
  # By default, controllers are combined nodes serving both the controller and broker roles.
  # The default controller port is 9093 and the default broker port is 9092.
  - host: 192.168.0.1
    nodeId: 0
  - host: 192.168.0.2
    nodeId: 1
  - host: 192.168.0.3
    nodeId: 2
brokers:
  - host: 192.168.0.4
    nodeId: 1000
  - host: 192.168.0.5
    nodeId: 1001
- global.clusterId: a randomly generated unique ID; no modification needed.
- global.config: custom incremental configuration for all nodes in the cluster. You must change s3.data.buckets and s3.ops.buckets to actual values; you can also add new configuration items on separate lines (see the filled-in example after this list).
- global.envs: environment variables for the nodes. You must replace the values of KAFKA_S3_ACCESS_KEY and KAFKA_S3_SECRET_KEY with actual values.
- controllers: list of Controller nodes; replace with actual values.
- brokers: list of Broker nodes; replace with actual values.
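For reference, a filled-in global section for an AWS deployment in us-east-1 might look like the following; the bucket names match the examples in this article and the credential values are placeholders:

```yaml
global:
  clusterId: 'JN1cUcdPSeGVnzGyNwF1Rg'  # keep the generated value as-is
  config: |
    s3.data.buckets=0@s3://automq-data?region=us-east-1
    s3.ops.buckets=1@s3://automq-ops?region=us-east-1
  envs:
    - name: KAFKA_S3_ACCESS_KEY
      value: '<your-access-key>'
    - name: KAFKA_S3_SECRET_KEY
      value: '<your-secret-key>'
```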
Data Volume Path
The default location for AutoMQ to store metadata and WAL data is the /tmp directory. For a production or formal testing environment, it is recommended to add global configuration in the cluster configuration template, setting the metadata directory log.dirs and the WAL data directory s3.wal.path to a persistent storage location. The configuration reference is as follows:
global:
  ...
  config: |
    s3.data.buckets=0@s3://xxx_bucket?region=us-east-1
    s3.ops.buckets=1@s3://xxx_bucket?region=us-east-1
    log.dirs=/root/kraft-logs
    s3.wal.path=/root/kraft-logs/s3wal
...
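These directories must live on the data volume. A minimal sketch of formatting and mounting the data volume, assuming it is attached as /dev/vdb (the device name varies by cloud platform and instance type; check with lsblk first):

```shell
# Identify the attached data volume device
lsblk

# WARNING: mkfs destroys any existing data on the device
sudo mkfs.ext4 /dev/vdb

# Mount the volume at the directory referenced by log.dirs
sudo mkdir -p /root/kraft-logs
sudo mount /dev/vdb /root/kraft-logs
```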
Step 3: Start AutoMQ
Execute the cluster deployment command:
bin/automq-cli.sh cluster deploy --dry-run clusters/poc
This command first checks that the S3 configuration is correct and that S3 can be accessed, then outputs the startup command for each node. Example output:
Host: 192.168.0.1
KAFKA_S3_ACCESS_KEY=xxxx KAFKA_S3_SECRET_KEY=xxxx ./bin/kafka-server-start.sh -daemon config/kraft/server.properties --override cluster.id=JN1cUcdPSeGVnzGyNwF1Rg --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 --override s3.data.buckets='0@s3://xxx_bucket?region=us-east-1' --override s3.ops.buckets='1@s3://xxx_bucket?region=us-east-1'
...
To start the cluster, execute the command list from the previous step sequentially on the pre-specified CONTROLLER or BROKER hosts. For example, to start the first CONTROLLER process on 192.168.0.1, execute the corresponding command from the generated startup command list on that host.
KAFKA_S3_ACCESS_KEY=xxxx KAFKA_S3_SECRET_KEY=xxxx ./bin/kafka-server-start.sh -daemon config/kraft/server.properties --override cluster.id=JN1cUcdPSeGVnzGyNwF1Rg --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 --override s3.data.buckets='0@s3://xxx_bucket?region=us-east-1' --override s3.ops.buckets='1@s3://xxx_bucket?region=us-east-1'
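To verify that a node came up correctly, you can probe its client port with the Kafka tools bundled in the distribution or inspect its server log; a minimal check, assuming the example addresses above:

```shell
# Confirm the node responds on the client port
bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.0.1:9092

# Or watch the server log for startup errors
tail -f logs/server.log
```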
Run the Demo
After starting the AutoMQ cluster, you can run the following demo to verify its functionality.
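AutoMQ is fully compatible with the Apache Kafka protocol, so the standard Kafka command-line tools shipped in the distribution work unchanged. The following commands are a minimal sketch, assuming the example addresses from this article; the topic name demo-topic is illustrative:

```shell
# Create a test topic; any BROKER (or combined CONTROLLER) node works as the bootstrap server
bin/kafka-topics.sh --create --topic demo-topic --partitions 1 \
  --bootstrap-server 192.168.0.4:9092

# Produce a few messages (type lines, then Ctrl+C to exit)
bin/kafka-console-producer.sh --topic demo-topic \
  --bootstrap-server 192.168.0.4:9092

# Consume the messages from the beginning
bin/kafka-console-consumer.sh --topic demo-topic --from-beginning \
  --bootstrap-server 192.168.0.4:9092
```

If the consumer prints the messages you produced, the cluster is serving traffic correctly.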
Stop and Uninstall the AutoMQ Cluster
After completing the tests, you can refer to the following steps to stop and uninstall the AutoMQ cluster.
- Execute the following command on each node to stop the process.
bin/kafka-server-stop.sh
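If you manage the hosts over SSH, the stop command can be run across all nodes in one pass; a sketch assuming passwordless SSH as root, the example IPs from this article, and an illustrative installation path /opt/automq:

```shell
# Stop the AutoMQ process on every node (adjust user, hosts, and path)
for host in 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4 192.168.0.5; do
  ssh root@"$host" 'cd /opt/automq && bin/kafka-server-stop.sh'
done
```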
- Configure lifecycle rules on the object storage to automatically clear the data in the automq-data and automq-ops buckets, then delete the buckets (see the example below).
- Delete the created compute instances along with their corresponding system and data volumes.
- Delete the test IAM user and its associated Access Key and Secret Key.
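On AWS, for example, the buckets can be emptied and removed with the AWS CLI; the --force flag deletes all remaining objects before removing each bucket:

```shell
aws s3 rb s3://automq-data --force
aws s3 rb s3://automq-ops --force
```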