Quick Start with Docker Compose:
The easiest way to run AutoMQ with Tigris is using Docker Compose. This guide walks you through setting up a single-node AutoMQ cluster backed by Tigris storage.

Prerequisites
- Docker and Docker Compose installed
- A Tigris account - create one at https://storage.new
- Tigris credentials - create Access Key and Secret Key from your Tigris dashboard at https://console.tigris.dev/createaccesskey
Create Buckets in Tigris
AutoMQ requires two buckets: one for data storage and one for the cluster's metrics and logs. You can create them via the Tigris console or with the AWS CLI:

Configure Docker Compose
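For example, the two buckets described in the bucket-creation step might be created with the AWS CLI pointed at Tigris's S3-compatible endpoint. The bucket names `automq-data` and `automq-ops` are placeholders; substitute your own.

```shell
# Assumes the AWS CLI is installed and your Tigris credentials are exported:
export AWS_ACCESS_KEY_ID=tid_...        # your Tigris Access Key
export AWS_SECRET_ACCESS_KEY=tsec_...   # your Tigris Secret Key

# Create the data bucket and the ops bucket (names are placeholders)
aws s3api create-bucket --bucket automq-data --endpoint-url https://t3.storage.dev
aws s3api create-bucket --bucket automq-ops --endpoint-url https://t3.storage.dev

# Verify both buckets exist
aws s3 ls --endpoint-url https://t3.storage.dev
```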
Edit the `docker-compose.yaml` file and update the Tigris credentials and bucket names:
- `KAFKA_S3_ACCESS_KEY` - Your Tigris Access Key (starts with `tid_`)
- `KAFKA_S3_SECRET_KEY` - Your Tigris Secret Key (starts with `tsec_`)
- `s3.data.buckets` - Your data bucket name in the S3 URL (stores Kafka data)
- `s3.ops.buckets` - Your ops bucket name in the S3 URL (stores operational metadata)
- `s3.wal.path` - Write-Ahead Log path (typically the same as the data bucket)
- `endpoint=https://t3.storage.dev` - Tigris S3-compatible endpoint
- `region=auto` - Tigris automatically routes to the nearest region
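A sketch of what the relevant fragment of `docker-compose.yaml` might look like with these settings. The image name, service name, environment-variable keys, and bucket names are assumptions based on the settings listed above; check your actual compose file for the exact keys your AutoMQ version uses.

```yaml
# Sketch only: service/image names, env keys, and bucket names are assumptions
services:
  automq:
    image: automqinc/automq:latest   # assumed image; use the one in your compose file
    environment:
      KAFKA_S3_ACCESS_KEY: tid_xxxxxxxx    # your Tigris Access Key
      KAFKA_S3_SECRET_KEY: tsec_xxxxxxxx   # your Tigris Secret Key
      # Bucket URLs carry the endpoint and region as query parameters:
      KAFKA_S3_DATA_BUCKETS: 0@s3://automq-data?region=auto&endpoint=https://t3.storage.dev
      KAFKA_S3_OPS_BUCKETS: 1@s3://automq-ops?region=auto&endpoint=https://t3.storage.dev
      KAFKA_S3_WAL_PATH: 0@s3://automq-data?region=auto&endpoint=https://t3.storage.dev
```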
Start AutoMQ
Start the AutoMQ cluster with Docker Compose and watch the logs. The following messages indicate a healthy startup:
- `Readiness check pass! (ObjectStorageReadinessCheck)` - Connected to Tigris
- `The broker has been unfenced` - Broker is ready
- `Kafka Server started` - AutoMQ is running
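A minimal sketch of the start-and-verify commands, assuming the compose file defines a service named `automq` (the service name is an assumption; match it to your compose file):

```shell
# Start the cluster in the background
docker compose up -d

# Follow the broker logs and watch for the readiness messages listed above
docker compose logs -f automq
```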
Create a Topic
Create a Kafka topic using the AutoMQ CLI:

Produce and Consume Messages

Produce test messages, then consume them to verify the setup:

Congratulations! 🎉
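As a recap, the topic and messaging steps above might look like this using the standard Kafka CLI tools inside the broker container. The topic name, container service name, tool path, and listener address are all assumptions; adjust them to your deployment.

```shell
# Create a topic with multiple partitions (names and addresses are placeholders)
docker compose exec automq \
  /opt/kafka/bin/kafka-topics.sh --create --topic quickstart-topic \
  --partitions 3 --bootstrap-server localhost:9092

# Produce a few test messages (type lines, then Ctrl+C to stop)
docker compose exec automq \
  /opt/kafka/bin/kafka-console-producer.sh --topic quickstart-topic \
  --bootstrap-server localhost:9092

# Consume them from the beginning
docker compose exec automq \
  /opt/kafka/bin/kafka-console-consumer.sh --topic quickstart-topic \
  --from-beginning --bootstrap-server localhost:9092
```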
You’ve successfully deployed AutoMQ with Tigris as the storage backend! In this guide, you:
- Created Tigris buckets for data and operational storage
- Configured and launched a single-node AutoMQ cluster using Docker Compose
- Connected AutoMQ to Tigris using S3-compatible endpoints
- Created a Kafka topic with multiple partitions
- Produced and consumed messages through AutoMQ