Deploy Direct S3 Cluster

This article describes how to quickly deploy and start a single-node AutoMQ instance in a public cloud environment and test AutoMQ's core features.

Prerequisites

  • Prepare a host for deploying the AutoMQ cluster. In a public cloud environment, a network-optimized Linux amd64 host with 2 cores and 16GB of memory is recommended. Ensure the system disk has at least 10GB of storage and the data volume has at least 10GB. For a test environment, these requirements can be relaxed.

  • Download the AutoMQ binary package that supports the Direct S3 deployment mode.

  • Create a custom-named object storage bucket, for example, automq-data.

  • Create an IAM user and generate an Access Key and Secret Key for it, then grant the user full read and write permissions on the bucket created above.
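
As a sketch of the last prerequisite, an IAM policy granting the user full read/write access to the bucket might look like the following (this assumes an AWS-style policy document and the example bucket name automq-data; adapt both to your provider):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::automq-data",
        "arn:aws:s3:::automq-data/*"
      ]
    }
  ]
}
```

S3-compatible services from other providers usually accept an equivalent policy shape; consult your provider's documentation for the exact action names.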

Install and Start the AutoMQ Cluster

  1. Modify the AutoMQ configuration

The instance configuration file is located at config/kraft/server.properties; modify the following settings:


s3.data.buckets=0@s3://<your-bucket>?region=<your-region>&endpoint=<your-s3-endpoint>
s3.ops.buckets=0@s3://<your-bucket>?region=<your-region>&endpoint=<your-s3-endpoint>
s3.wal.path=0@s3://<your-bucket>?region=<your-region>&endpoint=<your-s3-endpoint>

Fill in the configuration above with the endpoint and region of your S3-compatible service and the name of the bucket you created.
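
For illustration, the snippet below composes the `0@s3://...` URI format that the three settings expect. The bucket name, region, and endpoint are example assumptions, not defaults; substitute your own values:

```shell
# Example values only -- replace with your bucket, region, and endpoint.
BUCKET=automq-data
REGION=us-east-1
ENDPOINT=https://s3.us-east-1.amazonaws.com

# Emit one line per setting in the form key=0@s3://bucket?region=...&endpoint=...
for key in s3.data.buckets s3.ops.buckets s3.wal.path; do
  echo "${key}=0@s3://${BUCKET}?region=${REGION}&endpoint=${ENDPOINT}"
done
```

Redirect the output into config/kraft/server.properties (or paste it in) once the values are correct for your environment.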

  2. Start AutoMQ using the configuration file.


    export KAFKA_S3_ACCESS_KEY=<your-ak>
    export KAFKA_S3_SECRET_KEY=<your-sk>
    bin/kafka-server-start.sh config/kraft/server.properties

    Populate the environment variables with the access key and secret key of your S3-compatible service.

Run the Demo Program

After starting the AutoMQ cluster, you can run the following demo programs to verify its functionality:

  1. Example: Produce & Consume Message

  2. Example: Simple Benchmark

  3. Example: Partition Reassignment in Seconds

  4. Example: Self-Balancing When Cluster Nodes Change

  5. Example: Continuous Data Self-Balancing

Stop and Uninstall the AutoMQ Cluster

After completing the tests, follow the steps below to stop and uninstall the AutoMQ cluster:

  1. Execute the following command to stop the process:

bin/kafka-server-stop.sh

  2. Configure object storage lifecycle rules to automatically clear the data in the data and ops buckets used above, then delete these buckets.

  3. Delete the created compute instances along with their corresponding system and data volumes.

  4. Delete the test IAM user and its associated Access Key and Secret Key.
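
For the lifecycle rule mentioned in step 2, an S3 lifecycle configuration that expires every object in a bucket might look like the following sketch (the rule ID and the one-day expiration window are arbitrary examples, not recommendations):

```json
{
  "Rules": [
    {
      "ID": "expire-automq-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 1 }
    }
  ]
}
```

Apply it with your provider's equivalent of `aws s3api put-bucket-lifecycle-configuration`, wait for the objects to expire, and then delete the empty buckets.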