Deploy to Huawei Cloud CCE
As described in Overview▸, AutoMQ supports deployment on Kubernetes. This article describes the process of installing AutoMQ on the Huawei Cloud CCE platform.
In this article, AutoMQ Product Service Provider, AutoMQ Service Provider, and AutoMQ specifically refer to AutoMQ HK Limited and its subsidiaries.
Procedure
Step 1: Install Environment Console
As described in Overview▸, AutoMQ supports deployment to CCE clusters. In the CCE deployment mode, you first install the AutoMQ console and then use the console interface to deploy AutoMQ clusters onto CCE.
On Huawei Cloud, both cloud marketplace and Terraform-based installations of the environment console are supported.
- Install the console via the Cloud Marketplace (recommended): see Install Env via Huawei Marketplace▸.
- Install the console via Terraform: see Install Env via Terraform Module▸.
Note:
When installing the environment console as described in Install Env via Huawei Marketplace▸, the cluster deployment type must be set to Kubernetes. This is required for Steps 2-4 below, which install the AutoMQ cluster on CCE.
After the AutoMQ console installation completes, obtain the environment console address, initial username, password, and the permission delegation required by AutoMQ from the console interface or the Terraform output. This delegation will be used in Step 4 when creating the dedicated CCE node pool.
Step 2: Create CCE Cluster
As described in Overview▸, you need to create a dedicated CCE cluster in advance for AutoMQ. Go to the Huawei Cloud CCE product console and follow the steps below.
- Log in to the Huawei Cloud CCE Console. Click Purchase Cluster.
- Select CCE Turbo as the cluster type, and choose the billing mode and version as recommended. A cluster scale of 200-1000 nodes is recommended.
The network configuration must meet the following requirements:
Node subnet: Select a subnet with a sufficient IP range (a /20 or larger is recommended) to avoid running out of addresses when creating nodes later.
Container subnet and service subnet: Likewise, choose subnets with sufficient IPs (a /20 or larger is recommended) to avoid running out of addresses when creating Pods later.
Service forwarding mode: Make sure to select IPVS mode.
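To gauge whether a subnet is large enough, the address count of an IPv4 CIDR prefix can be computed as 2^(32 - prefix length). A quick shell sketch (standard CIDR arithmetic, not specific to CCE):

```shell
# Number of IPv4 addresses available in a given CIDR prefix,
# computed as 2^(32 - prefix) via a bit shift.
prefix=20
addresses=$(( 1 << (32 - prefix) ))
echo "A /${prefix} subnet provides ${addresses} addresses"
```

A /20 yields 4096 addresses; a longer prefix such as /24 leaves only 256, which can exhaust quickly as nodes and Pods scale out.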
Note:
When creating a CCE cluster, it is recommended to deselect the "Observability and local domain name resolution acceleration plugins". This is to avoid excessive consumption of node resources that may cause elastic scaling anomalies.
- Click Create Cluster and wait a few minutes for the creation to complete.
- Once the cluster is created, go to the cluster details page, open the Plugin Center, and install the CCE Cluster Elastic Engine plugin.
Note:
During the elastic plugin deployment, select "Small Scale". This prevents the elastic scaling components of CCE from occupying too many node resources, which could cause the installation to fail.
- Go to the cluster's Configuration Center, Network Configuration Tab, and enable Pod Access Metadata. Confirm and submit.
- Go to the cluster's Configuration Center, Cluster Auto-scaling Tab, enable Elastic Shrinkage, and check Ignore CPU and Memory Pre-allocation for DaemonSet Containers. Confirm and submit.
Step 3: Create a Public Node Pool for the CCE Cluster
As described in Overview▸, you need to create a public node pool for deploying CCE system components. Follow the steps below to create a compliant node pool.
- Go to the details page of the CCE cluster created in Step 2, click the Node Management menu, and click Create Node Pool. For the public node pool, at least 2 machines with 2 vCPUs and 8 GB memory are recommended for deploying CCE system components.
Step 4: Create a Dedicated CCE Node Pool for AutoMQ and Grant Delegated Authorization
As described in Overview▸, you need to create a dedicated node pool from which AutoMQ will request machines when deploying instances. Follow the steps below to create a compliant node pool and complete the delegation authorization.
- Go to the details page of the CCE cluster created in Step 2, click the Node Management menu, and click Create Node Pool.
- Refer to the following documentation to set custom parameters and complete the node pool creation. For parameters not specified in the table, please use the default recommended values.
When creating a node pool, only a single availability zone or three availability zones are supported. If any other number of availability zones is selected, instances cannot be created later.
| Parameter Settings | Value Description |
|---|---|
| Node Pool Name | Custom node pool name. |
| Node Type | Note: AutoMQ must run on the specified VM models. If a model outside the pre-defined list is selected when creating the node pool, the node pool cannot be used subsequently. |
| Availability Zone | Note: AutoMQ requires the availability zones of subsequently created clusters to match the node pool exactly. For a single-AZ AutoMQ cluster, create a single-AZ node pool here; for a three-AZ AutoMQ cluster, create a three-AZ node pool here. Mixing the two is not allowed. |
| Entrusted Name | The delegation obtained from the console installation output in Step 1. |
| Taint | Key `dedicated`, value `automq`, effect `NO_SCHEDULE`. |
- Bind the entrusted information to the node pool. The entrusted information comes from the output parameters of the console installation in Step 1 (Deploy to Huawei Cloud CCE▸). Additionally, add a taint to the node pool with key 'dedicated', value 'automq', and effect 'NO_SCHEDULE'.
- After the node pool is created, click on Elastic Scaling and enable the elastic scaling rules for the specified availability zone.
When setting the elastic scaling rules for the node pool, ensure the following two configurations are correct:
Node count range: It is recommended to retain at least 1 node. Assess the range based on the planned AutoMQ cluster scale; if it is set too small, there will not be enough nodes for deployment.
Specification selection: Make sure to enable all machine types that meet the requirements in all availability zones.
- Click the node pool's scaling menu to scale the initial node capacity. Scaling out 1 node per availability zone is recommended.
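The taint set on the dedicated node pool above, and the toleration a Pod must carry to be scheduled onto those nodes, look as follows in standard Kubernetes YAML (a generic sketch for reference; note that CCE's `NO_SCHEDULE` effect corresponds to `NoSchedule` in Kubernetes manifests, and AutoMQ's own workloads declare the matching toleration for you):

```yaml
# Taint applied to the dedicated node pool (configured via the CCE console):
#   key: dedicated, value: automq, effect: NoSchedule
# Only Pods with a matching toleration can be scheduled onto these nodes:
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "automq"
    effect: "NoSchedule"
```

This keeps unrelated workloads off the AutoMQ node pool, so its capacity stays reserved for AutoMQ instances.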
Step 5: Enter the Environment Console and Configure the Kubernetes Cluster Information
When entering the AutoMQ BYOC console for the first time, you must configure the Kubernetes cluster information and Kubeconfig before it can be used. Follow the console's guided setup page and fill in the cluster ID of the CCE cluster created in Step 2 along with its Kubeconfig to complete environment initialization.
- Copy the cluster ID of the CCE cluster created in Step 2.
- Find the Kubectl configuration menu and obtain the Kubeconfig configuration file.
Click on Kubectl configuration, set it to intranet access, and download the Kubeconfig configuration file.
Log in to the console, enter the cluster ID and Kubeconfig, and complete the initialization.
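Before entering the Kubeconfig into the console, you can optionally verify it from a local machine with kubectl installed (a generic sketch; the file path below is a placeholder, not a CCE or AutoMQ convention, and the commands require network access to the cluster's intranet address):

```shell
# Placeholder path to the Kubeconfig file downloaded from the CCE console.
KUBECONFIG_FILE="./kubeconfig.yaml"

# Confirm the control plane is reachable with this Kubeconfig.
kubectl --kubeconfig "$KUBECONFIG_FILE" cluster-info

# Confirm the node pools created in Steps 3 and 4 are visible and Ready.
kubectl --kubeconfig "$KUBECONFIG_FILE" get nodes -o wide
```

If both commands succeed, the same Kubeconfig content can be pasted into the console's initialization page.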