Kafdrop
Introduction
Kafdrop [1] is a sleek, intuitive, and robust Web UI tool designed for Kafka. It allows developers and administrators to easily view and manage key metadata of Kafka clusters, including Topics, Partitions, Consumer Groups, and their offsets. By providing a user-friendly interface, Kafdrop significantly simplifies the monitoring and management of Kafka clusters, enabling users to quickly obtain cluster status information without relying on complex command-line tools.
Thanks to AutoMQ's full compatibility with Kafka, it can seamlessly integrate with Kafdrop. By leveraging Kafdrop, AutoMQ users can also enjoy an intuitive user interface to monitor Kafka cluster status in real-time, including key metadata such as Topics, Partitions, Consumer Groups, and their offsets. This monitoring capability not only improves the efficiency of problem diagnosis but also helps optimize cluster performance and resource utilization.
This tutorial will teach you how to start the Kafdrop service and use it alongside an AutoMQ cluster to achieve cluster status monitoring and management.
Prerequisites
Environment for Kafdrop: a running AutoMQ cluster, plus JDK 17 and Maven 3.6.3 or above.
Kafdrop can be run from a JAR package, as a Docker container, or deployed on Kubernetes. Refer to the official documentation [3].
Prepare 5 hosts for deploying the AutoMQ cluster. It is recommended to use Linux amd64 hosts with 2 CPU cores and 16GB of memory, each with two virtual storage volumes. Example as follows:
Role | IP | Node ID | System Volume | Data Volume |
---|---|---|---|---|
CONTROLLER | 192.168.0.1 | 0 | EBS 20GB | EBS 20GB |
CONTROLLER | 192.168.0.2 | 1 | EBS 20GB | EBS 20GB |
CONTROLLER | 192.168.0.3 | 2 | EBS 20GB | EBS 20GB |
BROKER | 192.168.0.4 | 3 | EBS 20GB | EBS 20GB |
BROKER | 192.168.0.5 | 4 | EBS 20GB | EBS 20GB |
Tips:
- Ensure these machines are in the same subnet and can communicate with each other.
- For non-production environments, you can deploy just 1 Controller. By default, this Controller also serves as a broker.
Download the latest official binary package from the AutoMQ GitHub Releases page to install AutoMQ.
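A hedged sketch of unpacking the downloaded release on each host (the archive name below is a placeholder; use the file name of the actual release):
tar -xzf automq-<version>.tgz
cd automq-<version>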
Below, I will first set up an AutoMQ cluster and then start Kafdrop.
Install and Start the AutoMQ Cluster
Configure the S3 URL
Step 1: Generate an S3 URL
AutoMQ provides the automq-kafka-admin.sh tool for quickly starting AutoMQ. Simply provide an S3 URL containing the required S3 endpoint and authentication information to start AutoMQ with one click, without manually generating a cluster ID or formatting storage.
Command Line Usage Example
bin/automq-kafka-admin.sh generate-s3-url \
--s3-access-key=xxx \
--s3-secret-key=yyy \
--s3-region=cn-northwest-1 \
--s3-endpoint=s3.cn-northwest-1.amazonaws.com.cn \
--s3-data-bucket=automq-data \
--s3-ops-bucket=automq-ops
Note: Ensure the AWS S3 bucket is configured in advance. If you encounter errors, verify the correctness and format of the parameters.
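If the buckets do not exist yet, they can be created with the AWS CLI. Below is a hedged sketch using the bucket names and region from this example; adjust for your own account (regions other than us-east-1 require the LocationConstraint):
aws s3api create-bucket --bucket automq-data --region cn-northwest-1 \
  --create-bucket-configuration LocationConstraint=cn-northwest-1
aws s3api create-bucket --bucket automq-ops --region cn-northwest-1 \
  --create-bucket-configuration LocationConstraint=cn-northwest-1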
Output Results
After executing the command, the process will automatically proceed through the following stages:
- Using the provided accessKey and secretKey, the basic features of S3 will be probed to verify compatibility between AutoMQ and S3.
- Based on the authentication information and endpoint details, an s3url will be generated.
- Using the s3url, an example command to start AutoMQ will be provided. In that command, replace --controller-list and --broker-list with the actual CONTROLLER and BROKER hosts to be deployed.
############ Ping S3 ########################
[ OK ] Write s3 object
[ OK ] Read s3 object
[ OK ] Delete s3 object
[ OK ] Write s3 object
[ OK ] Upload s3 multipart object
[ OK ] Read s3 multipart object
[ OK ] Delete s3 object
############ String of S3url ################
Your s3url is:
s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=xxx&s3-secret-key=yyy&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA
############ Usage of S3url ################
To start AutoMQ, generate the start commandline using s3url.
bin/automq-kafka-admin.sh generate-start-command \
--s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" \
--controller-list="192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093" \
--broker-list="192.168.0.4:9092;192.168.0.5:9092"
TIPS: Please replace the controller-list and broker-list with your actual IP addresses.
Step 2: Generate a List of Startup Commands
Replace the --controller-list and --broker-list parameters from the previous step with your host information. Specifically, replace them with the IP addresses of the 3 CONTROLLER and 2 BROKER machines mentioned in the environment setup, using the default ports 9092 and 9093.
bin/automq-kafka-admin.sh generate-start-command \
--s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" \
--controller-list="192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093" \
--broker-list="192.168.0.4:9092;192.168.0.5:9092"
Parameter Description
Parameter Name | Required | Description |
---|---|---|
--s3-url | Yes | Generated by the bin/automq-kafka-admin.sh generate-s3-url command-line tool. Contains authentication, cluster ID, and other information. |
--controller-list | Yes | At least one address is required, used as the IP and port list for CONTROLLER hosts. Format: IP1:PORT1;IP2:PORT2;IP3:PORT3 |
--broker-list | Yes | At least one address is required, used as the IP and port list for BROKER hosts. Format: IP1:PORT1;IP2:PORT2;IP3:PORT3 |
--controller-only-mode | No | Determines whether the CONTROLLER node only assumes the CONTROLLER role. The default is false, meaning the deployed CONTROLLER node also acts as a BROKER role. |
Output Result
After executing the command, a startup command for AutoMQ will be generated.
############ Start Commandline ##############
To start an AutoMQ Kafka server, please navigate to the directory where your AutoMQ tgz file is located and run the following command.
Before running the command, make sure that Java 17 is installed on your host. You can verify the Java version by executing 'java -version'.
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=1 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.2:9092,CONTROLLER://192.168.0.2:9093 --override advertised.listeners=PLAINTEXT://192.168.0.2:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=2 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.3:9092,CONTROLLER://192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.3:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker --override node.id=3 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.4:9092 --override advertised.listeners=PLAINTEXT://192.168.0.4:9092
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker --override node.id=4 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.5:9092 --override advertised.listeners=PLAINTEXT://192.168.0.5:9092
TIPS: Start controllers first and then the brokers.
Note: The node.id is automatically generated starting from 0 by default.
Step 3: Start AutoMQ
To start the cluster, execute the command list from the previous step sequentially on the specified CONTROLLER or BROKER hosts. For example, to start the first CONTROLLER process on 192.168.0.1, execute the first command template from the generated startup command list.
bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092
Parameter Description
When using the startup command, any unspecified parameters will use the default Apache Kafka configuration. For new parameters introduced by AutoMQ, the default values provided by AutoMQ will be used. To override the default configuration, you can add additional --override key=value parameters at the end of the command.
Parameter Name | Required | Description |
---|---|---|
s3-url | Yes | Generated by the bin/automq-kafka-admin.sh generate-s3-url command-line tool, containing authentication, cluster ID, and other information. |
process.roles | Yes | Valid values are broker and controller. If a host serves as both CONTROLLER and BROKER, set the value to broker,controller. |
node.id | Yes | An integer uniquely identifying a BROKER or CONTROLLER in a Kafka cluster; must be unique within the cluster. |
controller.quorum.voters | Yes | Information of hosts participating in KRaft election, including node ID, IP, and port information, e.g., 0@192.168.0.1:9093 , 1@192.168.0.2:9093 , 2@192.168.0.3:9093 . |
listeners | Yes | The IP and port to listen on. |
advertised.listeners | Yes | The access address provided by the BROKER for Clients. |
log.dirs | No | Directories storing KRaft and BROKER metadata. |
s3.wal.path | No | In production environments, it is recommended to store AutoMQ WAL data on a new, independently mounted raw device. This setup can provide better performance because AutoMQ supports writing data to raw devices, thereby reducing latency. Make sure to configure the correct path to store the WAL data. |
autobalancer.controller.enable | No | The default value is false, which does not enable traffic self-balancing. When auto-balancer is enabled, the AutoMQ auto balancer component will automatically reassign partitions to ensure overall traffic is balanced. |
Tips:
- To enable continuous traffic self-balancing or to run Example: Self-Balancing When Cluster Nodes Change, it is recommended to explicitly specify the parameter --override autobalancer.controller.enable=true at startup for the Controller.
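For example, extra configuration can be appended to any generated command as additional --override flags. Below is a hedged sketch for the first CONTROLLER; the s3-url placeholder stands for the string generated in Step 1, and /dev/vdb is a hypothetical raw-device path for the data volume:
bin/kafka-server-start.sh --s3-url="<s3url-from-step-1>" \
  --override process.roles=broker,controller --override node.id=0 \
  --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 \
  --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 \
  --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 \
  --override s3.wal.path=/dev/vdb \
  --override autobalancer.controller.enable=true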
Run in the Background
To run in background mode, append the following to the end of the startup command:
command > /dev/null 2>&1 &
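For example, a hedged sketch for the first BROKER, with nohup added so the process keeps running after the shell exits (the s3-url placeholder stands for the string generated in Step 1):
nohup bin/kafka-server-start.sh --s3-url="<s3url-from-step-1>" \
  --override process.roles=broker --override node.id=3 \
  --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 \
  --override listeners=PLAINTEXT://192.168.0.4:9092 \
  --override advertised.listeners=PLAINTEXT://192.168.0.4:9092 \
  > /dev/null 2>&1 &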
Start Kafdrop Service
In the process above, we have already set up the AutoMQ cluster and noted the addresses and ports that all broker nodes are listening on. Next, we will proceed to start the Kafdrop service.
Note: Ensure that the address where the Kafdrop service is located can access the AutoMQ cluster; otherwise, it will result in connection timeouts and other issues.
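A quick way to verify connectivity from the Kafdrop host is to query the brokers with the Kafka CLI shipped in the AutoMQ package, for example (a hedged sketch using this tutorial's broker addresses):
bin/kafka-broker-api-versions.sh --bootstrap-server 192.168.0.4:9092,192.168.0.5:9092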
In this example, I use the JAR package method to start the Kafdrop service. The steps are as follows:
- Clone the Kafdrop source code from the Kafdrop GitHub repository:
git clone https://github.com/obsidiandynamics/kafdrop.git
- Use Maven to locally compile and package Kafdrop to generate a JAR file. Execute the following in the root directory:
mvn clean compile package
- To start the service, you need to specify the addresses and ports of the AutoMQ cluster brokers:
java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
-jar target/kafdrop-<version>.jar \
--kafka.brokerConnect=<host:port,host:port>,...
- Replace kafdrop-<version>.jar with the actual version, such as kafdrop-4.0.2-SNAPSHOT.jar.
- Set --kafka.brokerConnect=<host:port,host:port> to the actual hosts and ports of the cluster's broker nodes.
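For instance, with the two BROKER nodes from this tutorial and the JAR built above, the start command might look like the following (assuming version 4.0.2-SNAPSHOT):
java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
  -jar target/kafdrop-4.0.2-SNAPSHOT.jar \
  --kafka.brokerConnect=192.168.0.4:9092,192.168.0.5:9092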
After startup, the console output confirms that Kafdrop has connected to the cluster. If not specified, kafka.brokerConnect defaults to localhost:9092.
Note: Starting from Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka management API.
Open your browser and navigate to http://localhost:9000. You can override the port by adding the following configuration:
--server.port=<port> --management.server.port=<port>
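For example, a hedged sketch that serves the UI on port 9001 instead of the default 9000 (the port choice here is arbitrary):
java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
  -jar target/kafdrop-4.0.2-SNAPSHOT.jar \
  --kafka.brokerConnect=192.168.0.4:9092,192.168.0.5:9092 \
  --server.port=9001 --management.server.port=9001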
Final Results
- Full Interface
Displays the number of partitions, topics, and other cluster status information.
- Create New Topic Feature
- Broker Node Details
- Topic Details
- Message Information under Topic
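To see topics and messages appear in Kafdrop, you can create a topic and produce a few test records with the Kafka CLI shipped with AutoMQ, for example (a hedged sketch; the topic name kafdrop-test is hypothetical):
bin/kafka-topics.sh --create --topic kafdrop-test --partitions 3 --replication-factor 1 \
  --bootstrap-server 192.168.0.4:9092
bin/kafka-console-producer.sh --topic kafdrop-test --bootstrap-server 192.168.0.4:9092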
Summary
In this tutorial, we have explored the key features and functionalities of Kafdrop, along with its integration with the AutoMQ cluster, demonstrating how to easily monitor and manage AutoMQ clusters. Using Kafdrop not only helps teams better understand and control their data flows but also enhances development and operational efficiency, ensuring an efficient and stable data processing workflow. We hope this tutorial provides valuable insights and assistance as you use Kafdrop with AutoMQ clusters.
References
[1] Kafdrop: https://github.com/obsidiandynamics/kafdrop
[2] AutoMQ: https://www.automq.com/zh
[3] Kafdrop Deployment Method: https://github.com/obsidiandynamics/kafdrop/blob/master/README.md#getting-started
[4] Kafdrop Project Repository: https://github.com/obsidiandynamics/kafdrop