Manage Integrations
With AutoMQ Cloud's integration features, users can exchange data between instances in the environment and external systems. This article introduces how to use these features.
In this article, references to the AutoMQ product service provider, AutoMQ service provider, and AutoMQ specifically refer to AutoMQ HK Limited.
Integration Types
AutoMQ Cloud currently supports the following integration features:
Integration Type | Data Type | Applicable Scenarios |
---|---|---|
Prometheus OTLP | Metrics | Write instance Metrics data to a user-specified Prometheus service via the OpenTelemetry protocol for monitoring and alerting. |
Apache Kafka | Messages | Synchronize messages in real time between AutoMQ instances and external Kafka clusters via Kafka Connector. |
CloudWatch | Metrics | Write instance Metrics data to a user-specified CloudWatch Namespace via the CloudWatch API. |
Creating Prometheus Integration
Prerequisites
The Prometheus integration in AutoMQ Cloud works by having each data node in the cluster write Metrics data directly to the Prometheus HTTP interface using the OpenTelemetry protocol (OTLP). Therefore, the Prometheus service provided by the user must be compatible with OTLP.
Integrating with a Self-Built Prometheus Service
Refer to the Prometheus release notes. This feature requires the user's Prometheus service to meet the following conditions:
The Prometheus version must be 2.47 or later.
The startup command must enable the flag --enable-feature=otlp-write-receiver.
Configuration example:
Start the Prometheus service: ./prometheus --config.file=xxxx.yml --enable-feature=otlp-write-receiver
When using a self-hosted Prometheus service, set the OTLP endpoint to: http://${your_ip}:9090/api/v1/otlp/v1/metrics
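As a quick sanity check, the endpoint above can be assembled from your deployment's address and probed; the host and port below are placeholders for your self-hosted Prometheus service:

```shell
# Hypothetical host/port; substitute your self-hosted Prometheus address.
PROM_HOST="127.0.0.1"
PROM_PORT="9090"
OTLP_ENDPOINT="http://${PROM_HOST}:${PROM_PORT}/api/v1/otlp/v1/metrics"
echo "OTLP endpoint: ${OTLP_ENDPOINT}"

# Uncomment to probe the receiver (an empty POST should not return 404 when
# --enable-feature=otlp-write-receiver is active; requires curl):
# curl -s -o /dev/null -w '%{http_code}\n' -X POST "${OTLP_ENDPOINT}"
```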
If you are using a self-hosted Prometheus service based on the commercial version of VictoriaMetrics, ensure that the VictoriaMetrics version is at least 1.92.0 and configure the endpoint according to VictoriaMetrics' documentation.
Integration with Cloud Providers' Prometheus Services
If you are using a managed Prometheus service from a public cloud provider, it is recommended to consult the provider's technical support. For example, Alibaba Cloud's Prometheus service supports OTLP out of the box; refer to its documentation on reporting data via OTLP.
Steps to Follow
You can create a Prometheus integration by following these steps:
- Navigate to the integration list page and create a new integration. Access the environment console, click on the Integration option in the left-hand navigation bar to enter the integration list page. Click Create Integration and follow the prompts to enter the necessary information to complete the creation.
Parameter | Description |
---|---|
Integration Name | Provide a unique alias for the integration configuration item. For specific restrictions, please refer to Restrictions▸. |
Integration Type | Select Prometheus Service |
Prometheus OpenTelemetry Write Interface | The AutoMQ Cloud environment writes each instance's Metrics data directly to the user-specified Prometheus cluster using the OpenTelemetry protocol. Provide the OTLP write endpoint here. |
Username | If the Prometheus service has ACL authentication enabled, configure the username. |
Password | If the Prometheus service has ACL authentication enabled, configure the password. |
- Navigate to the instance details page and reference the integration configuration. For instances that require integration configuration, navigate to the respective instance details page, reference the integration item created in the first step, enable the configuration, and subsequently check if data is being reported in the Prometheus service.
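To confirm that data is being reported, you can query the Prometheus HTTP API directly; the host, port, and metric selector below are placeholders for your deployment:

```shell
# Hypothetical host/port of the target Prometheus service.
PROM_HOST="127.0.0.1"
PROM_PORT="9090"
QUERY_URL="http://${PROM_HOST}:${PROM_PORT}/api/v1/query"
echo "query endpoint: ${QUERY_URL}"

# Uncomment to list any series currently stored (requires curl):
# curl -sG "${QUERY_URL}" --data-urlencode 'query={__name__=~".+"}' | head
```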
For the metrics data definitions provided by AutoMQ Cloud, please refer to Monitoring & Alert via Prometheus▸.
Creating Apache Kafka Integration
Prerequisites
AutoMQ Cloud's Apache Kafka integration uses the Kafka Connector component to connect to external Kafka clusters (including other Kafka-compatible distributions), enabling real-time synchronization and migration of messages and other data. Therefore, ensure that:
The external Apache Kafka cluster version is higher than 0.9.x.
The network between the external Kafka cluster and AutoMQ is unimpeded.
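The network requirement can be checked with a simple TCP probe from the AutoMQ side; the bootstrap address below is a hypothetical placeholder for your external cluster:

```shell
# Hypothetical bootstrap address of the external Kafka cluster.
BOOTSTRAP="kafka.example.com:9092"
KAFKA_HOST="${BOOTSTRAP%%:*}"
KAFKA_PORT="${BOOTSTRAP##*:}"
echo "checking ${KAFKA_HOST} on port ${KAFKA_PORT}"

# Uncomment to test TCP reachability (requires netcat):
# nc -vz -w 5 "${KAFKA_HOST}" "${KAFKA_PORT}"
```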
Operating Steps
Users can create an Apache Kafka integration by following these steps:
- Navigate to the Integration List page and create a new integration. Users access the environment console, click Integration on the left navigation bar to enter the Integration List page. Click Create Integration and follow the prompts to enter the required information to complete the creation.
Parameter | Description |
---|---|
Integration Name | Provide a distinctive alias for the integration configuration item. For specific restrictions, refer to Restrictions▸. |
Integration Type | Select Apache Kafka Service. Cloud provider-managed Kafka and other Kafka-compatible distributions are also supported as source clusters. |
Access Protocol | Select the client access protocol used to connect to the external Kafka cluster from the currently supported options. |
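The access protocol you select determines the client properties used to connect to the external cluster. As an illustration, a SASL-authenticated plaintext configuration (the credentials here are hypothetical placeholders) might look like:

```shell
# Write a sample client configuration for SASL_PLAINTEXT access.
# The username and password are placeholders, not real credentials.
cat > /tmp/kafka-client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="demo-user" password="demo-secret";
EOF
grep '^security.protocol' /tmp/kafka-client.properties
```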
Creating CloudWatch Integration
Prerequisites
The CloudWatch integration feature in AutoMQ Cloud works by writing the AutoMQ cluster's Metrics data to the user-specified CloudWatch Namespace via the CloudWatch API.
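The write path can be illustrated with the AWS CLI; the namespace and metric name below are hypothetical, and the actual integration performs the equivalent calls through the CloudWatch API:

```shell
# Hypothetical namespace; must follow CloudWatch naming conventions.
NAMESPACE="AutoMQ/Demo"
echo "metrics will be written to namespace: ${NAMESPACE}"

# Uncomment to write and then list a test metric (requires AWS credentials):
# aws cloudwatch put-metric-data --namespace "${NAMESPACE}" \
#   --metric-name ConnectionTest --value 1 --unit Count
# aws cloudwatch list-metrics --namespace "${NAMESPACE}"
```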
Operation Steps
Users can create a CloudWatch integration by following these steps:
- Navigate to the integration list page and create a new integration. Users should access the environment console, click on Integrations in the left navigation bar to enter the integration list page. Click on Create Integration, and follow the prompts to enter the following information to complete the creation.
Parameter | Description |
---|---|
Integration Name | Enter a distinctive alias for the integration configuration item. For specific restrictions, refer to Restrictions▸. |
Integration Type | Select CloudWatch Service |
Namespace | Follow CloudWatch naming conventions and enter a custom Namespace. Metrics data will be written to the specified Namespace. |