
Performance White Paper

Info

In this article, AutoMQ refers specifically to the open-source automq-for-kafka project published by AutoMQ CO., LTD. under the AutoMQ organization on GitHub.

AutoMQ: Fast and Economical

100x Efficiency Improvement

  • 300x partition migration efficiency compared to Apache Kafka®: AutoMQ migrates partitions about 300 times faster than Apache Kafka®, turning Kafka's high-risk routine operations into low-risk ones that can be automated and are essentially invisible.
  • Extreme elasticity from zero to 1 GiB/s in 4 minutes: AutoMQ's automatic emergency scaling takes a cluster from 0 MiB/s to 1 GiB/s in just 4 minutes, allowing the system to expand quickly in response to burst traffic.
  • 200x cold-read efficiency compared to Apache Kafka®: AutoMQ separates reads from writes. Compared with Apache Kafka®, send latency during catch-up reads is reduced 200x and catch-up throughput is increased 5x, so AutoMQ easily handles online message peak shaving and offline batch processing.

10x Cost Savings

  • 2x the throughput limit of Apache Kafka®: On the same hardware, AutoMQ's maximum throughput is 2 times that of Apache Kafka®, and its send latency P999 is 1/4 of Apache Kafka®'s. In real-time stream computing scenarios, AutoMQ delivers results faster at lower cost.
  • 1/11 the billing cost of Apache Kafka®: AutoMQ makes full use of Auto Scaling and Object storage to achieve an 11x cost reduction compared to Apache Kafka®. With AutoMQ you no longer need to provision capacity for peaks, and compute and storage become truly pay-as-you-go.

Test Preparation

The benchmark is based on an enhanced version of the Linux Foundation's OpenMessaging Benchmark that simulates real user scenarios and provides dynamic workloads. All test configurations and workloads can be found in the GitHub repository.

Configuration

By default, AutoMQ flushes data to disk before responding, using the following configuration:


acks=all
flush.message=1

Info

AutoMQ ensures high data reliability through the underlying multi-copy mechanism of EBS, and does not require multi-copy configuration on the Kafka side.

Apache Kafka® uses version 3.6.0. Following Confluent's recommendation, flush.message = 1 is not set; instead, three replicas with in-memory asynchronous flushing ensure data reliability (a data-center power failure can cause data loss). The configuration is as follows:


acks=all
replicationFactor=3
min.insync.replicas=2
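
For reference, the acks=all setting above corresponds to the producer-side acknowledgement mode; a minimal kafka-python sketch is shown below (the broker address and topic name are placeholders and are not part of the benchmark harness):

from kafka import KafkaProducer

# Producer that waits for acknowledgement from all in-sync replicas (acks=all),
# matching the reliability setting used in both benchmark configurations.
producer = KafkaProducer(
    bootstrap_servers="broker-0:9092",   # placeholder address
    acks="all",
)
producer.send("test-topic", b"hello")    # placeholder topic
producer.flush()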

Machine Specifications

In terms of cost-effectiveness, small instance types + EBS are a better choice than large instance types with local SSDs.

Take the small r6in.large + EBS versus the large i3en.2xlarge + SSD as an example:

  • i3en.2xlarge: 8 cores, 64 GB memory, network baseline bandwidth 8.4 Gbps, two bundled 2.5 TB NVMe SSDs with a maximum disk throughput of 600 MB/s; price $0.9040/h.
  • r6in.large × 5 + 5 TB EBS: 10 cores, 80 GB memory, network baseline bandwidth 15.625 Gbps, EBS baseline bandwidth 625 MB/s; price (compute) 0.1743 × 5 + (storage) 0.08 × 5 × 1024 / 24 / 60 = $1.156/h.

At first glance, the price and performance of the two are similar. But in real production, data usually needs to be retained for longer: with i3en.2xlarge, the only way to add storage is to scale out compute nodes, which wastes compute resources, whereas with r6in.large + EBS you only need to grow the EBS capacity.

Therefore, considering both cost and performance, r6in.large is chosen as the minimum elastic Broker unit for both AutoMQ and Apache Kafka®, with GP3 EBS and Standard S3 chosen for storage.

  • r6in.large: 2 cores, 16 GB memory, network baseline bandwidth 3.125 Gbps, EBS baseline bandwidth 156.25 MB/s; price $0.1743/h.
  • GP3 EBS: free quota of 3000 IOPS and 125 MB/s bandwidth; price: storage $0.08 per GB-month, additional bandwidth $0.040 per MB/s-month, additional IOPS $0.005 per IOPS-month.

AutoMQ and Apache Kafka® use EBS in different roles:

  • AutoMQ uses EBS only as a write buffer, so each volume needs just 3 GB of storage, and IOPS and bandwidth stay within the free quota.
  • Apache Kafka® stores its data on EBS, so the EBS capacity is determined by the traffic and retention time of each test scenario. An additional 31 MB/s of EBS bandwidth is purchased to further increase Apache Kafka®'s throughput per unit cost (a small cost helper is sketched below).
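
For reference, a tiny helper that applies the GP3 price list above (the 500 GB / 31 MB/s example mirrors the volumes used later in the fixed-scale test; this is an illustrative sketch, not part of the benchmark tooling):

# Monthly cost of a GP3 EBS volume under the price list above
def gp3_monthly_cost(size_gb, extra_bw_mb_s=0, extra_iops=0):
    return size_gb * 0.08 + extra_bw_mb_s * 0.040 + extra_iops * 0.005

# Example: a 500 GB Apache Kafka data volume with 31 MB/s of extra bandwidth
print(gp3_monthly_cost(500, extra_bw_mb_s=31))   # 41.24 USD per month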

100x Efficiency Improvement

Second-level Partition Migration

In a production environment, a Kafka cluster usually serves multiple businesses. Traffic fluctuations and partition placement can leave the cluster short of capacity or create machine hotspots, so Kafka operators have to expand the cluster and migrate hotspot partitions to idle nodes to keep the cluster's service available.

Partition migration time determines how efficiently emergencies and routine operations can be handled:

  • The shorter the partition migration time, the sooner a cluster expansion meets the capacity requirement and the shorter the period of degraded service.
  • The faster the partition migration, the less time operators spend watching it and the sooner they get feedback to plan their next action.
Info

300x efficiency improvement: compared with Apache Kafka®, AutoMQ reduces the migration time of a 30 GiB partition from 12 minutes to 2.2 seconds.

Test

This test measures, for both AutoMQ and Apache Kafka®, the time taken and the impact of migrating a partition holding 30 GiB of data to a node that has no replica of it, under everyday send and consume traffic. The specific test scenario is:

  1. 2 r6in.large instances as brokers; on them create:
    • Topic A with a single partition and a single replica, read and written continuously at 40 MiB/s.
    • Topic B with 4 partitions and a single replica, read and written continuously at 10 MiB/s as background traffic.
  2. After 13 minutes, migrate the only partition of Topic A to another node, with a migration throughput limit of 100 MiB/s.
Info

Each Apache Kafka® broker mounts an additional 320GB 156MiB/s gp3 EBS for data storage.

Driver file: apache-kafka-driver.yaml, automq-for-kafka-driver.yaml

Payload file: partition-reassignment.yaml

AutoMQ installation configuration file: partition-reassignment.yaml

| Comparison | AutoMQ | Apache Kafka® |
| Migration time | 2.2 s | 12 min |
| Impact of migration | Maximum send latency of 2.2 s at the moment of the switch | Send latency jitter of 1 ms ~ 90 ms throughout the 12 min |

Analysis

AutoMQ partition migration only requires uploading the data buffered on EBS to S3, after which the partition can be safely opened on the new node; 500 MiB of buffered data can usually be uploaded within 2 seconds. The migration time of an AutoMQ partition is therefore independent of the amount of data in the partition, and the average migration takes about 1.5 seconds. During the migration, AutoMQ returns the NOT_LEADER_OR_FOLLOWER error code to clients; once the migration completes, clients fetch the new Topic routing table and retry the send against the new node, so the partition's send latency rises briefly at the moment of migration and returns to its normal level afterwards.

Apache Kafka® partition migration requires copying the partition replica to the new node; while copying historical data it must also catch up with newly written data. The migration time is roughly: partition data volume / (migration throughput limit − partition write throughput). In real production environments, partition migration often takes hours; the 30 GiB partition migration in this test took 12 minutes. Besides the long migration time, Apache Kafka® migration has to read cold data from the hard disk, and even with a throttle set, the resulting page-cache contention still causes send-latency jitter during the migration window and degrades service quality.
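
As a rough check of that formula, plugging in this test's parameters gives an idealized estimate (a minimal sketch; the 12 minutes actually measured is somewhat higher than this lower bound):

# Idealized Apache Kafka partition migration time from the formula above
partition_size_mib = 30 * 1024        # 30 GiB partition
migration_limit_mib_s = 100           # migration throughput limit
partition_write_mib_s = 40            # ongoing writes to the partition

catch_up_rate = migration_limit_mib_s - partition_write_mib_s
migration_minutes = partition_size_mib / catch_up_rate / 60
print(f"~{migration_minutes:.1f} min")   # ~8.5 min lower bound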

0 -> 1 GiB/s Extreme Elasticity

Kafka operators usually provision cluster capacity based on historical experience, but unexpected hot events and campaigns can cause a sudden surge in cluster traffic. When that happens, the cluster must be expanded quickly and partitions rebalanced to absorb the burst.

Info

Extreme elasticity: AutoMQ's automatic emergency scaling takes the cluster from 0 MiB/s to 1 GiB/s in just 4 minutes.

Test

This test measures how quickly AutoMQ's Auto Scaling emergency policy automatically scales the cluster from 0 MiB/s to 1 GiB/s. The specific test scenario is:

  1. The cluster initially has only one Broker. Set the Auto Scaling emergency capacity to 1 GiB/s and create a Topic with 1000 partitions.
  2. Start OpenMessaging and directly set the send traffic to 1 GiB/s.

Driver file: apache-kafka-driver.yaml, automq-for-kafka-driver.yaml

Payload file: emergency-scaling.yaml

AutoMQ installation configuration file: emergency-scaling.yaml

| Analysis item | Monitoring alarm | Batch expansion | Auto Balancing | Total |
| 0 -> 1 GiB/s elasticity time | 70 s | 80 s | 90 s | 4 min |

Analysis

AutoMQ normally keeps the cluster water level at the configured target through Auto Scaling's target-tracking policy. When traffic surges unexpectedly, target tracking cannot meet the capacity requirement in time, so Auto Scaling also provides an emergency policy: when the cluster water level exceeds 90%, the cluster is scaled directly to the target capacity.
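
As a rough illustration of such a policy pair, the target-tracking half could be expressed with boto3 roughly as follows; the Auto Scaling group name, metric name, namespace, and thresholds are assumptions for illustration only and do not describe AutoMQ's actual implementation:

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: keep a published "water level" metric near the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="automq-brokers",        # hypothetical ASG name
    PolicyName="water-level-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ClusterWaterLevel",    # hypothetical custom metric
            "Namespace": "AutoMQ",                # hypothetical namespace
            "Statistic": "Average",
        },
        "TargetValue": 80.0,
    },
)

# The emergency path would be a separate CloudWatch alarm at a 90% water level
# that triggers a scale-out straight to the configured emergency capacity
# (1 GiB/s in this test).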

In this test, the Auto Scaling emergency policy scaled the cluster to the target capacity in 4 minutes:

  • 70s: The finest granularity of AWS CloudWatch monitoring is 1 minute. Monitoring detected that the cluster water level exceeded 90% and raised an alarm.
  • 80s: AWS scaled out nodes in batches to the target capacity, and the Brokers completed node registration.
  • 90s: AutoMQ's Auto Balancing detected the traffic imbalance between nodes and completed automatic traffic rebalancing.
  • The cluster capacity then met the 1 GiB/s requirement, and send latency returned to the baseline.

Catch-up Read

Catch-up reading is a common scenario in messaging and streaming systems:

  • For messaging, queues are typically used to decouple services and smooth out traffic peaks. Peak shaving requires the message queue to accumulate the data sent by upstream producers and let downstream consumers drain it slowly; the data the downstream catches up on is cold data that is no longer in memory.
  • For streams, periodic batch-processing tasks need to scan and compute over data that is several hours or even a day old.
  • There are also failure scenarios: a consumer goes down and comes back online hours later, or a consumer logic bug is fixed and historical data has to be replayed (a minimal replay sketch follows this list).
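
Replaying history simply means consuming from the earliest retained offset; a minimal kafka-python sketch (the topic name and bootstrap address are placeholders):

from kafka import KafkaConsumer

# Re-consume a topic from the earliest retained offset (a catch-up read).
consumer = KafkaConsumer(
    "test-topic",                         # placeholder topic name
    bootstrap_servers="broker-0:9092",    # placeholder bootstrap address
    group_id="replay-group",
    auto_offset_reset="earliest",         # a new group starts from the oldest data
)

for record in consumer:
    print(record.partition, record.offset, len(record.value))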

Catch-up reads mainly concern two points:

  • Catch-up read speed: The faster the catch-up read speed, the faster consumers can recover from failures and the faster batch processing tasks can produce analysis results.
  • Isolation of reads and writes: Catch-up reads need to minimize impact on production speed and latency.
Info

200x efficiency improvement: with AutoMQ's read/write separation, send latency in the catch-up read scenario drops from 800 ms with Apache Kafka® to 3 ms, and the catch-up time is shortened from 215 minutes to 42 minutes.

Test

This test measures the catch-up read performance of AutoMQ and Apache Kafka® under the same cluster size. The test scenario is as follows:

  1. Deploy 20 Brokers in the cluster and create a Topic with 1000 partitions.
  2. Send continuously at 800 MiB/s throughput.
  3. After sending 4 TiB data, start the consumer and start consuming from the earliest position.
Info

Each Apache Kafka® broker mounts an additional 1000GB 156MiB/s gp3 EBS for data storage.

Driver file: apache-kafka-driver.yaml, automq-for-kafka-driver.yaml

Payload file: catch-up-read.yaml

AutoMQ installation configuration file: catch-up-read.yaml

| Comparison item | Send latency during catch-up read | Impact on send traffic during catch-up read | Peak catch-up read throughput |
| AutoMQ | Less than 3 ms | Reads and writes isolated, send traffic holds at 800 MiB/s | 2500 ~ 2700 MiB/s |
| Apache Kafka® | About 800 ms | Reads and writes interfere, send traffic drops to 150 MiB/s | 2600 ~ 3000 MiB/s (at the expense of writes) |


Analysis

  • Under the same cluster size, AutoMQ's send traffic was not affected at all during catch-up reads, while Apache Kafka®'s send traffic dropped by 80%. Apache Kafka® reads from the hard disk during catch-up and has no IO isolation, so catch-up reads consume AWS EBS read/write bandwidth, reduce the bandwidth left for writes, and drag down send traffic. AutoMQ, by contrast, separates reads from writes: catch-up reads go to Object storage rather than the hard disk, consume no hard-disk bandwidth, and do not affect send traffic.

  • Under the same cluster size, the average send latency of AutoMQ during catch-up reads rose by only about 0.4 ms compared with sending alone, while Apache Kafka®'s soared by about 800 ms. Apache Kafka®'s send latency rises for two reasons: first, as noted above, catch-up reads consume AWS EBS read/write bandwidth, lowering write throughput and raising latency; second, catch-up reads pull cold data from the hard disk into the page cache, polluting it and further increasing write latency.

  • It is worth noting that catching up on 4 TiB of data took AutoMQ 42 minutes and Apache Kafka® 29 minutes. Apache Kafka® took less time for two reasons:

    • When catching up on reads, Apache Kafka® 's send traffic dropped by 80%, leaving it with less data to catch up on.
    • Apache Kafka® does not have IO isolation and sacrifices the sending rate to increase the reading rate.
  • If we assume that Apache Kafka® has IO isolation, that is, reading is performed while ensuring the sending rate as much as possible, the calculation is as follows:

    • Assume that the sending rate of Apache Kafka® is 700 MiB/s during catch-up reading. Consider that the EBS bandwidth occupied by three replicas for writing is 700 MiB/s * 3 = 2100 MiB/s.
    • And the total EBS bandwidth in the cluster is 156.25 MiB/s * 20 = 3125 MiB/s.
    • The bandwidth available for reading is 3125 MiB/s - 2100 MiB/s = 1025 MiB/s.
    • In the catch-up read scenario of sending and reading at the same time, reading 4TiB data takes 4 TiB * 1024 GiB/TiB * 1024 MiB/GiB / (1025 MiB/s - 700 MiB/s) / 60 s/min = 215 min.
  • With sending protected as much as possible, Apache Kafka® would therefore need 215 minutes to catch up on 4 TiB of data, about five times as long as AutoMQ (the arithmetic is reproduced in the sketch below).
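
The same arithmetic as a runnable sketch (all figures are taken from the list above):

# Hypothetical "Apache Kafka with IO isolation" catch-up time estimate
backlog_mib = 4 * 1024 * 1024        # 4 TiB backlog, in MiB
send_rate = 700                      # assumed protected send rate, MiB/s
replicas = 3
broker_ebs_bw = 156.25               # per-broker EBS bandwidth, MiB/s
brokers = 20

total_ebs_bw = broker_ebs_bw * brokers           # 3125 MiB/s in the cluster
write_bw = send_rate * replicas                  # 2100 MiB/s consumed by writes
read_bw = total_ebs_bw - write_bw                # 1025 MiB/s left for reads
catch_up_minutes = backlog_mib / (read_bw - send_rate) / 60
print(f"~{catch_up_minutes:.0f} min")            # ~215 min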

10x Cost Savings

Kafka's cost consists mainly of two parts: compute and storage. By optimizing both, AutoMQ approaches the theoretically optimal cost on the cloud, a 10x saving compared to Apache Kafka®:

  • Compute
    • Spot instances save up to 90%: because AutoMQ Brokers are stateless, Spot instances can be used at scale to cut per-node compute costs.
    • EBS multi-replica reliability saves up to 66%: AutoMQ relies on EBS's multi-replica mechanism for data reliability; compared with three ISR replicas, only 1 compute instance is needed instead of 3.
    • Auto Scaling: AutoMQ uses a target-tracking policy to scale the cluster in real time based on cluster traffic.
  • Storage
    • Object storage saves up to 93%: AutoMQ keeps almost all data in Object storage, which saves up to 93% of storage costs compared with 3-replica EBS.

Fixed Scale

Info

The maximum throughput of AutoMQ is 2x that of Apache Kafka®, and its send latency P999 is 1/4 that of Apache Kafka®.

Test

This test measures the performance and throughput upper limits of AutoMQ and Apache Kafka® under the same cluster size and different traffic sizes. The test scenario is as follows:

  1. Deploy 23 Brokers in the cluster and create a Topic with 1000 partitions.
  2. Start 1:1 read/write traffic at 500 MiB/s and 1 GiB/s respectively; additionally, test the maximum throughput of each (AutoMQ 2200 MiB/s, Apache Kafka® 1100 MiB/s).

Each Apache Kafka® broker mounts an additional 500GB 156MiB/s gp3 EBS for data storage.

Driver file: apache-kafka-driver.yaml, automq-for-kafka-driver.yaml

Payload file: tail-read-500m.yaml, tail-read-1024m.yaml, tail-read-1100m.yaml, tail-read-2200m.yaml

AutoMQ installation configuration file: tail-read.yaml

| Comparison item | Maximum throughput | 500 MiB/s send latency P999 | 1 GiB/s send latency P999 |
| AutoMQ | 2200 MiB/s | 13.829 ms | 25.492 ms |
| Apache Kafka® | 1100 MiB/s | 55.401 ms | 119.033 ms |


Detailed send latency and end-to-end (E2E) latency data:

| Pub Latency (ms) | AutoMQ 500 MiB/s | Apache Kafka 500 MiB/s | AutoMQ 1 GiB/s | Apache Kafka 1 GiB/s | AutoMQ cap 2200 MiB/s | Apache Kafka cap 1100 MiB/s |
| AVG | 2.116 | 1.832 | 2.431 | 3.901 | 5.058 | 4.591 |
| P50 | 1.953 | 1.380 | 2.118 | 2.137 | 3.034 | 2.380 |
| P75 | 2.271 | 1.618 | 2.503 | 3.095 | 3.906 | 3.637 |
| P95 | 2.997 | 2.618 | 3.859 | 8.254 | 9.555 | 10.951 |
| P99 | 6.368 | 12.274 | 8.968 | 50.762 | 37.373 | 60.207 |
| P999 | 13.829 | 55.401 | 25.492 | 119.033 | 331.729 | 134.814 |
| P9999 | 32.467 | 76.304 | 65.242 | 33.898 | 13.415 | 220.280 |

| E2E Latency (ms) | AutoMQ 1 GiB/s | Apache Kafka 1 GiB/s | AutoMQ cap 2200 MiB/s | Apache Kafka cap 1100 MiB/s |
| AVG | 4.405 | 4.786 | 6.858 | 5.477 |
| P50 | 3.282 | 3.044 | 4.828 | 3.318 |
| P75 | 3.948 | 4.108 | 6.270 | 4.678 |
| P95 | 10.921 | 9.514 | 12.510 | 11.946 |
| P99 | 26.610 | 51.531 | 34.350 | 60.272 |
| P999 | 53.407 | 118.142 | 345.055 | 133.056 |
| P9999 | 119.254 | 226.945 | 825.883 | 217.076 |

Analysis

  1. Under the same cluster size, the maximum throughput of AutoMQ is 2 times that of Apache Kafka®.

AutoMQ relies on EBS's underlying multi-replica mechanism for data reliability and adds no extra replication at the Kafka layer, while Apache Kafka® needs three ISR replicas to achieve the same. Setting aside CPU and network bottlenecks, if both run the hard disks at full bandwidth, AutoMQ's theoretical throughput limit is three times that of Apache Kafka®.

In this test, AutoMQ has to upload data to S3, so its CPU usage is higher than Apache Kafka®'s and its CPU becomes the bottleneck first. The total hard-disk bandwidth of 23 r6in.large instances is 3588 MB/s, so the theoretical send limit of Apache Kafka® with three replicas is 1196 MB/s, and its hard disks hit the bottleneck first. In the final stress test, AutoMQ's maximum throughput was 2 times that of Apache Kafka®.
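
The bandwidth arithmetic behind that limit, using the figures above:

# Theoretical three-replica send limit for Apache Kafka on this cluster
brokers = 23
disk_bw_mb_s = 156        # per-broker hard-disk bandwidth used in the estimate
replicas = 3

total_bw = brokers * disk_bw_mb_s     # 3588 MB/s of total disk bandwidth
send_limit = total_bw / replicas      # ~1196 MB/s theoretical send limit
print(total_bw, round(send_limit))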

  2. Under the same cluster size and traffic (500 MiB/s), AutoMQ's send latency P999 is 1/4 that of Apache Kafka®. Even at twice the traffic (1024 MiB/s versus 500 MiB/s), AutoMQ's send latency P999 is still 1/2 that of Apache Kafka®.
  • AutoMQ uses Direct IO to bypass the file system and write directly to the raw EBS device, avoiding file-system overhead and yielding a relatively stable send latency.
  • Apache Kafka® writes data into the page cache via mmap and returns success once the page cache is written; the operating system flushes dirty pages to disk in the background. File-system overhead, cold-read contention, and the unpredictability of page-cache faults can all cause send-latency jitter.
  3. Converted to a throughput of 1 GiB/s, AutoMQ offers up to 20x lower compute cost and at least 10x lower storage cost than Apache Kafka®.
  • Compute: AutoMQ uses EBS only as a write buffer in front of S3; on shutdown it uploads the buffered data to S3 and completes decommissioning within 30 seconds, so AutoMQ can make full use of Spot instances, which are up to 90% cheaper than On-Demand instances. In addition, AutoMQ's single-machine throughput is 2 times that of Apache Kafka®. Overall, AutoMQ can reduce compute costs by up to 20x compared to Apache Kafka®.
  • Storage: Almost all AutoMQ data lives in S3, which charges for the data actually stored, while Apache Kafka® keeps data on three hard-disk replicas and production environments usually reserve at least 20% extra disk space. Per GB stored, AutoMQ's cost is lower by a factor of (3 replicas × $0.08 EBS unit price / 80% disk utilization) / $0.023 S3 unit price ≈ 13x; after adding S3 API call costs, AutoMQ still reduces storage costs by at least 10x compared to Apache Kafka® (see the sketch below).
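
Both ratios as a quick calculation (unit prices from the machine specifications above; the compute factor simply combines the Spot discount with the 2x single-machine throughput):

# Per-GB storage cost ratio: 3-replica EBS at 80% utilization vs. S3
ebs_price_gb_month = 0.08
s3_price_gb_month = 0.023
replicas, disk_utilization = 3, 0.8

storage_ratio = (replicas * ebs_price_gb_month / disk_utilization) / s3_price_gb_month
print(f"~{storage_ratio:.0f}x")       # ~13x before S3 API call costs

# Compute factor: up to 90% Spot discount combined with 2x per-node throughput
compute_ratio = (1 / (1 - 0.9)) * 2   # up to ~20x
print(f"~{compute_ratio:.0f}x")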

In addition, the Apache Kafka® maximum-throughput stress test fully occupied the hard-disk bandwidth. In real production, hard-disk bandwidth must be reserved for partition migration and cold-data catch-up reads, so the write water-level ceiling is set lower. AutoMQ, on the other hand, reads catch-up data from S3 over the network, separating reads from writes; all hard-disk bandwidth can go to writes, so the production water-level ceiling matches the stress-test result.

Auto Scaling

Info

11x cost reduction: AutoMQ makes full use of Auto Scaling and Object storage to achieve true pay-as-you-go computing and storage.

Test

This test simulates production peak and valley loads and measures the cost and performance of AutoMQ under the Auto Scaling target tracking strategy. The test scenario is as follows:

  1. Install the cluster on AWS through AutoMQ Installer.
  2. Create a Topic with 256 partitions and a retention time of 24 hours.
  3. Conduct stress testing with the following 1:1 read and write dynamic traffic:
    1. The normal traffic is 40 MiB/s.
    2. From 12:00 to 12:30, the traffic increases to 800 MiB/s, and by 13:00, the traffic returns to 40 MiB/s.
    3. From 18:00 to 18:30, the traffic increases to 1200 MiB/s, and by 19:00, the traffic returns to 40 MiB/s.

Driver file: apache-kafka-driver.yaml, automq-for-kafka-driver.yaml

Payload file: auto-scaling.yaml

AutoMQ installation configuration file: auto-scaling.yaml

Cost comparison between AutoMQ and Apache Kafka® under the same load:

| Cost category | Apache Kafka® (USD/month) | AutoMQ (USD/month) | Ratio |
| Compute | 3,054.26 | 201.46 | 15.2 |
| Storage | 2,095.71 | 257.38 | 8.1 |
| Total | 5,149.97 | 458.84 | 11.2 |

Analysis

This cost bill was measured in the AWS US East region; both compute and storage are billed pay-as-you-go:

  • Compute: Under Auto Scaling's target-tracking policy, compute nodes expand and contract dynamically with cluster traffic, holding the cluster water level around 80% (AWS monitoring and alarms work at one-minute granularity; including monitoring delay, tracking accuracy is about 2 minutes).
  • Storage: Almost all data is stored on S3, so storage fees consist mainly of S3 storage charges and S3 API call charges, both pay-as-you-go. S3 storage cost scales with the volume of data written and its retention time; S3 API cost scales with the write volume.

If Apache Kafka® were used to build a three-replica cluster sized for the daily peak of 1 GiB/s, then under the same traffic model it would cost at least the following (the arithmetic is reproduced in the sketch after this list):

  • Compute: Apache Kafka® is difficult to scale dynamically, so capacity must be provisioned for the peak. Broker cost: r6in.large unit price $0.17433/hour × 730 hours/month × 23 instances = $2,927.00.
  • Storage:
    • Storage cost: (total data volume 6,890.625 GB × 3 replicas / 80% disk utilization) × EBS unit price $0.08 per GB-month = $2,067.19.
    • Storage bandwidth cost: bandwidth unit price $0.04 per MB/s-month × 31 MB/s of additionally purchased bandwidth × 23 EBS volumes = $28.52.
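
The same monthly-bill arithmetic as a runnable sketch (all unit prices and volumes are taken from the list above):

# Monthly cost of a peak-provisioned 3-replica Apache Kafka cluster
broker_price_hour = 0.17433
hours_per_month = 730
brokers = 23
compute = broker_price_hour * hours_per_month * brokers                  # ~ $2,927.00

data_gb = 6890.625
replicas, disk_utilization = 3, 0.8
ebs_price_gb_month = 0.08
storage = data_gb * replicas / disk_utilization * ebs_price_gb_month     # ~ $2,067.19

extra_bw_price = 0.04      # $ per MB/s-month
extra_bw_mb_s = 31
bandwidth = extra_bw_price * extra_bw_mb_s * brokers                     # ~ $28.52

print(round(compute, 2), round(storage, 2), round(bandwidth, 2))
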
| Cost category | Item | Apache Kafka® (USD/month) | AutoMQ (USD/month) | Ratio |
| Compute | Controller | 127.26 | 127.26 | 1.0 |
| Compute | Broker | 2,927.00 | 74.20 | 39.4 |
| Compute | Total | 3,054.26 | 201.46 | 15.2 |
| Storage | EBS Storage | 2,067.19 | 0.43 | - |
| Storage | EBS Throughput | 28.52 | - | - |
| Storage | S3 Storage | - | 158.48 | - |
| Storage | S3 API | - | 98.47 | - |
| Storage | Total | 2,095.71 | 257.38 | 8.1 |
| Total | Total | 5,149.97 | 458.84 | 11.2 |

Summary

This benchmark shows that by reshaping Kafka for the cloud, AutoMQ delivers both efficiency improvements and cost savings compared to Apache Kafka®:

  1. 100x efficiency improvement:
    1. In the partition migration scenario, the migration time for a 30 GiB partition drops from 12 minutes with Apache Kafka® to 2.2 seconds with AutoMQ, a 300x efficiency improvement.
    2. Extreme elasticity: AutoMQ can automatically scale out from 0 to 1 GiB/s of target capacity in just 4 minutes.
    3. In the historical-data catch-up read scenario, AutoMQ's read/write separation not only reduces the average send latency 200x, from 800 ms to 3 ms, but also delivers 5x the catch-up read throughput of Apache Kafka®.
  2. 10x cost savings:
    1. Under a fixed cluster size, AutoMQ's throughput limit reaches 2200 MiB/s, 2 times that of Apache Kafka®, while its send latency P999 is only 1/4 of Apache Kafka®'s.
    2. In a dynamic load scenario of 40 MiB/s ~ 1200 MiB/s, AutoMQ saves substantial compute resources during off-peak periods through Auto Scaling; measured bills show an 11x cost reduction compared to Apache Kafka®.

Multi-cloud Compatibility

| Compatibility item | AWS | Baidu Cloud | Alibaba Cloud | Huawei Cloud | Tencent Cloud |
| Broker startup | Pass | Pass | Pass | Pass | Pass |
| Server startup | Pass | Pass | Pass | Pass | Pass |
| Produce | Pass | Pass | Pass | Pass | Pass |
| Consume | Pass | Pass | Pass | Pass | Pass |
| Broker scale-out | Pass | Pass | Pass | To be tested | To be tested |
| Broker scale-in | Pass | Pass | Pass | To be tested | To be tested |
| Cloud disk read/write | Pass | Pass | Pass | Pass | Pass |
| Object storage read/write | Pass | Pass | Pass | Pass | Pass |