
Deploy to AWS EKS

As described in Overview▸, AutoMQ supports deployment on Kubernetes. This article describes how to deploy AutoMQ on the AWS EKS platform.

In this article, the terms AutoMQ product service provider, AutoMQ service provider, and AutoMQ specifically refer to AutoMQ HK Limited and its affiliated companies.

Operation Process

Step 1: Install the Environment Console

As described in Overview▸, AutoMQ supports deployment on EKS clusters. In this deployment mode, the AutoMQ console must be installed first; the console interface is then used to operate EKS and deploy the AutoMQ cluster on it.

On AWS, the environment console can be installed either from the Marketplace or with Terraform.

Note:

When installing the environment console with either of the methods above (see Install Env from Marketplace▸), you must set the cluster deployment type to Kubernetes. This is required for the subsequent steps, which install the AutoMQ cluster on EKS.

After installing the AutoMQ console, obtain the environment console address, the initial username and password, and the IAMRoleforAutoMQEnvironmentConsole and IAMRoleforAutoMQDedicatedNodeGroup roles from the console interface or the Terraform outputs. The console role is used to configure access permissions for the console, while the node group role is used to create the AutoMQ dedicated node group.
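If the console was installed with Terraform, these values can usually be read back from the Terraform outputs; the exact output names depend on the template you used.

# print the outputs defined by the environment console template
terraform output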

Step 2: Create an IAM Role for the EKS Cluster

As described in the AWS EKS documentation, each EKS cluster must be associated with an IAM Role that grants it access to AWS cloud resources. Therefore, create a dedicated IAM Role for the EKS cluster before creating the cluster itself.

  1. Access the IAM console and click on Create Role. Select the following parameters:
  • Trusted entity type: choose AWS Service.

  • Service Use Case: Select EKS and EKS-Cluster.

  1. Click "Next", input a custom role name, and Create IAM Role.

Step 3: Create EKS Cluster

As described in Overview▸, users need to create an independent EKS cluster in advance to allocate to AutoMQ. Go to the AWS EKS product console and follow the steps below.

  1. Log in to the AWS EKS console and click Create cluster.
  2. Configure the basic cluster information. Focus on the configuration items listed below and keep the other settings at their defaults.

    1. Select Custom configuration mode.

    2. Disable EKS Auto Mode.

    3. Attach the IAM role created in Step 2 to the EKS cluster.

  3. Configure the VPC network. Select the target VPC and subnets.

It is recommended to choose the VPC default security group and select all required private subnets for deploying the cluster.

  4. Keep the other default configurations and create the EKS cluster.
  5. After the EKS cluster is created, add the security group where the AutoMQ console resides to the inbound rules of the EKS cluster security group. This allows the AutoMQ console to call and access the EKS cluster.

Edit the inbound rules of the EKS cluster security group, add the security group where the AutoMQ console resides, and set the protocol to all traffic.
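For reference, the same inbound rule can also be added with the AWS CLI; the two security group IDs below are placeholders for the EKS cluster security group and the AutoMQ console security group.

# allow all traffic from the AutoMQ console security group into the EKS cluster security group
aws ec2 authorize-security-group-ingress \
  --group-id <eks-cluster-security-group-id> \
  --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=<automq-console-security-group-id>}]'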

Step 4: Create an EKS Public Node Group

As described in Overview▸, users need to create a public node group for the EKS cluster to deploy the EKS system components. Follow the steps below to create a compliant node group.

  1. Go to the details of the EKS cluster created in Step 3, click the Compute menu, and then Create Node Group.
  2. Select the node group IAM Role. You can reuse the IAMRoleforAutoMQDedicatedNodeGroup created by the AutoMQ console, or create a new Role as recommended by EKS.
  3. Select the instance type, quantity, and zone-aware subnets for the default node group, then complete the creation of the node group. A CLI sketch follows the note below.

    1. Instance type: It is recommended to select a 2-core, 4 GiB (2C4G) instance type.

    2. AMI Type: Change to Amazon Linux 2 (AL2_x86_64).

    3. Quantity: It is recommended to have 2-3 instances to meet the requirements of the EKS system components.

    4. Subnet: It is recommended to specify the subnets needed for EKS deployment.

Note: Ensure that AMI Type is selected as Amazon Linux 2; the default Amazon Linux 2023 is not supported yet.
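For reference, a hedged AWS CLI sketch of an equivalent public node group; the cluster name, node group name, role ARN, subnet IDs, and the t3.medium (2C4G) instance type are illustrative placeholders.

# 2-3 small Amazon Linux 2 nodes for EKS system components
aws eks create-nodegroup \
  --cluster-name <cluster-name> \
  --nodegroup-name system-nodegroup \
  --node-role <node-group-iam-role-arn> \
  --subnets <subnet-id-1> <subnet-id-2> \
  --instance-types t3.medium \
  --ami-type AL2_x86_64 \
  --scaling-config minSize=2,maxSize=3,desiredSize=2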

Step 5: Create a Dedicated AutoMQ EKS Node Group

As described in Overview▸, users need to create a dedicated node group that meets AutoMQ's requirements, from which AutoMQ requests machines for deployment. Follow the steps below to create a compliant node group and complete the IAM authorization.

  1. Go to the details of the EKS cluster created in Step 3, click the Compute menu, and then Create Node Group.
  2. Select the node group IAM Role: choose the IAMRoleforAutoMQDedicatedNodeGroup created by the AutoMQ console. Configure the taint with the key dedicated, the value automq, and the effect NO_SCHEDULE.
  3. Select the instance type, quantity, and availability zone subnets for the AutoMQ dedicated node group, then complete the node group creation. The key parameters are described below, followed by a CLI sketch.

When creating a node group, only one or three availability zones are supported. If any other number of availability zones is selected, instances cannot be created later.

The key configuration parameters for the dedicated node group are as follows:

Model Configuration
  • Description: Specify the model type for the node group. Refer to the documentation Overview▸ and fill in a supported model type.

Note: AutoMQ must run on the specified VM models. If you select a non-preset model when creating the node group, that node group cannot be used later.

AMI Type
  • Description: Select the AMI for the node group.

Note: Be sure to select Amazon Linux 2 (AL2_x86_64) for the AMI Type. The default Amazon Linux 2023 is currently unsupported.

Subnet
  • Description: Based on the actual needs of the AutoMQ cluster, select one or three zone-aware subnets.

Note: AutoMQ requires that the availability zones of subsequently created clusters exactly match those of the node group. If you need a single-zone AutoMQ cluster, create a single-zone node group here; if you need a three-zone AutoMQ cluster, create a three-zone node group. The two cannot be mixed.

Number
  • Description: It is recommended to start with 3 nodes. Set the minimum number of nodes to 3 and evaluate the maximum based on the expected scale of the AutoMQ cluster.
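If you create the dedicated node group with the AWS CLI instead of the console, a minimal sketch of an equivalent call follows; the cluster name, node group name, role ARN, subnet IDs, instance type, and maximum size are placeholders to fill in according to the parameters above.

# dedicated AutoMQ node group with the required taint
aws eks create-nodegroup \
  --cluster-name <cluster-name> \
  --nodegroup-name automq-dedicated-nodegroup \
  --node-role <IAMRoleforAutoMQDedicatedNodeGroup-arn> \
  --subnets <subnet-az1> <subnet-az2> <subnet-az3> \
  --instance-types <supported-instance-type> \
  --ami-type AL2_x86_64 \
  --scaling-config minSize=3,maxSize=<max-size>,desiredSize=3 \
  --taints key=dedicated,value=automq,effect=NO_SCHEDULE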

Step 6: Initialize the Local Kubectl and AWS CLI Tools

After the AWS EKS cluster is created, some system plugins such as the CSI and NetworkPolicy components are not installed by default and need to be configured manually. To do so, install the AWS CLI, kubectl, and Helm tools locally.

  1. Install AWS CLI and Kubectl tools. Refer to the following documentation:

    1. Install AWS CLI tools.

    2. Install Kubectl tools.

    3. Install Helm tools.

  2. Run the command below to generate the kubeconfig file. The file is written to the default path (~/.kube/config). You can also generate it at a custom path, but make sure to set the KUBECONFIG environment variable accordingly.


# Replace the region and cluster-name parameters
aws eks update-kubeconfig --region <region> --name <cluster-name>
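A quick connectivity check once the kubeconfig has been generated (assuming the node groups from steps 4 and 5 are already running):

# list the cluster nodes to confirm kubectl can reach the EKS API server
kubectl get nodes -o wide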

Step 7: Configure EKS AutoScaler

To automatically scale EKS nodes when creating AutoMQ instances and during scaling operations, you need to configure the EKS AutoScaler for on-demand node scaling. Follow the configuration steps below:

  1. Download the AutoScaler configuration file from this link.

  2. Modify the EKS cluster name parameter in the configuration file and save it.

  3. Execute the installation command below to install the AutoScaler.

# Adjust the path to the configuration YAML if needed
kubectl apply -f cluster-autoscaler-autodiscover.yaml

After the installation is complete, check the running status of the EKS AutoScaler components. If the corresponding Deployment is Running, the installation succeeded.
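For example, a hedged check; the Deployment name cluster-autoscaler and the app label match the default manifest, so adjust them if your configuration file differs.

# confirm the autoscaler Deployment is available in kube-system
kubectl -n kube-system get deployment cluster-autoscaler
# inspect recent autoscaler logs
kubectl -n kube-system logs -l app=cluster-autoscaler --tail=20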

Step 8: Configure EKS CSI Storage Plugin

When creating an EKS cluster, the EKS CSI storage plugin is not created by default and needs to be configured manually. Configuration documentation can be found here.

  1. Refer to the EKS documentation and copy the OpenID Connect (OIDC) provider URL of the EKS cluster.
  2. Go to the IAM console and create an OIDC Provider for EKS to obtain an IAM identity. Fill in the configuration information as follows:

    1. Provider Type: Select OpenID Connect.

    2. Provider URL: Enter the URL copied from the previous step.

    3. Audience: Fill in sts.amazonaws.com.

  3. Create a CSI-specific IAM Role based on this Web Identity. Fill in the configuration items as follows:

    1. Trusted entity type: Choose Web identity.

    2. Identity Provider: Select the Identity Provider created in the previous step.

    3. Audience: Fill in sts.amazonaws.com.

    4. Policy: Select AmazonEBSCSIDriverPolicy.

  4. After creating the IAM Role, go to the details page of the Role and click Edit trust policy. Add the following line to the existing JSON document.

Note: Replace RegionCode and ProviderID with your own values; the ProviderID looks something like EXAMPLED539D4633E53DE1B7XXXXAMPLE.


"oidc.eks.{RegionCode}.amazonaws.com/id/{ProviderID}:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"

  5. Go to the EKS cluster, select the Add-ons tab, and add the Amazon EBS CSI Driver. Make sure to select the IAM Role created in the previous step.
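Alternatively, the add-on can be installed with the AWS CLI; the cluster name and role ARN below are placeholders.

# install the EBS CSI driver add-on bound to the CSI IAM Role
aws eks create-addon \
  --cluster-name <cluster-name> \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn <csi-iam-role-arn>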

Step 9: Enable EKS NetworkPolicy

AutoMQ supports controlling access to the cluster by restricting clients from specific IP sources. This functionality is based on NetworkPolicy, so you need to enable EKS NetworkPolicy.

  1. In the EKS console, select the AWS VPC CNI add-on under the Add-ons tab and click Edit.
  2. Expand the Optional configuration settings and add the following JSON under Configuration values. Select Override mode and save.

{
  "enableNetworkPolicy": "true"
}
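The same change can also be applied from the command line; the cluster name below is a placeholder, and vpc-cni is the add-on name of the AWS VPC CNI.

# enable NetworkPolicy enforcement in the VPC CNI add-on
aws eks update-addon \
  --cluster-name <cluster-name> \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}' \
  --resolve-conflicts OVERWRITE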

Step 10: Install AWS Load Balancer Controller

EKS does not install the AWS Load Balancer Controller by default, so users need to install the plugin manually. The installation steps are as follows:

  1. Add the Helm repository and install the AWS Load Balancer Controller CRDs.

helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"

  2. Modify the command below to replace eks-cluster-id with the actual cluster ID, then execute it.

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<eks-cluster-id>

  3. Verify the installation result, for example with the check sketched below.
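A minimal check; the Deployment name follows the Helm release name used above.

# the controller Deployment should report ready replicas in kube-system
kubectl -n kube-system get deployment aws-load-balancer-controller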

Step 11: Access the Environment Console and Configure Kubernetes Cluster Information

The first time you enter the AutoMQ BYOC console, you need to configure Kubernetes cluster information and authorization before it can be used normally. Follow the instructions on the console guide page and enter the Cluster ID of the EKS cluster created in Step 3 and the Kubeconfig generated in Step 6 to complete the environment initialization.

  1. Navigate to the EKS cluster, click the Access menu, and create an Access Entry.
  2. Choose the AutoMQ-specific IAM Role (IAMRoleforAutoMQEnvironmentConsole) generated by the environment console installation in Step 1, and set the type to Standard. (A CLI sketch of this step follows after this list.)

Select the authorized Policy as AmazonEKSClusterAdminPolicy, with the scope set to Cluster, and create it.

  3. Copy the Cluster Name of the EKS cluster created in Step 3. Enter the AutoMQ console, input the Cluster Name, and complete the initialization.
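For reference, the access entry and policy association from steps 1-2 can also be created with the AWS CLI; the cluster name and the console role ARN are placeholders.

# create a STANDARD access entry for the AutoMQ console role
aws eks create-access-entry \
  --cluster-name <cluster-name> \
  --principal-arn <IAMRoleforAutoMQEnvironmentConsole-arn> \
  --type STANDARD

# grant cluster-wide admin access to that role
aws eks associate-access-policy \
  --cluster-name <cluster-name> \
  --principal-arn <IAMRoleforAutoMQEnvironmentConsole-arn> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster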