# Timeplus

Timeplus is a data analytics platform focused on stream processing. Built on the open-source streaming database Proton, it offers robust end-to-end capabilities that let teams process both streaming and historical data quickly and intuitively. It serves organizations of various scales and industries, empowering data engineers and platform engineers to fully harness the value of streaming data using SQL.

This article describes how to import AutoMQ data into Timeplus using the Timeplus console. Given that AutoMQ is fully compatible with Apache Kafka, you can also create a Kafka external stream to analyze AutoMQ data without moving the data.
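For the external stream approach, a minimal sketch looks like the following (the stream name is hypothetical, and the broker address and topic match the test setup used later in this article; check the Timeplus documentation for the full list of settings):

```sql
-- Sketch: expose an AutoMQ topic as a Kafka external stream in Timeplus.
-- `automq_example_stream` is a hypothetical name; broker and topic are placeholders.
CREATE EXTERNAL STREAM automq_example_stream (raw string)
SETTINGS type = 'kafka',
         brokers = '10.0.96.4:9092',
         topic = 'example_topic';

-- Query the AutoMQ data in place, without copying it into Timeplus.
SELECT raw FROM automq_example_stream;
```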

## Prepare AutoMQ and Test Data

Refer to Deploy Locally▸ for deploying AutoMQ, ensuring network connectivity between AutoMQ and Timeplus.

If IP whitelisting is required, add the static IP of the Timeplus service to the whitelist:

52.83.159.13 for cloud.timeplus.com.cn

Follow the steps below to quickly create a topic named example_topic in AutoMQ and write a test JSON message to it.

### Create Topic

To create a topic using Apache Kafka® command-line tools, you must have access to the Kafka environment and ensure that the Kafka service is running. Below is an example of the command to create a topic:


./kafka-topics.sh --create --topic example_topic --bootstrap-server 10.0.96.4:9092 --partitions 1 --replication-factor 1

When executing the command, replace the topic name and the bootstrap-server address with the actual values for your AutoMQ deployment.

After creating the topic, you can use the following command to verify that the topic has been successfully created.


./kafka-topics.sh --describe --topic example_topic --bootstrap-server 10.0.96.4:9092

### Generate Test Data

Generate a JSON-formatted test message for the topic created above.


{
  "id": 1,
  "name": "Test User",
  "timestamp": "2023-11-10T12:00:00",
  "status": "active"
}

### Write Test Data

Use Kafka's command-line tools or a programmatic approach to write the test data into the topic named example_topic. Below is an example using the command-line tool:


echo '{"id": 1, "name": "Test User", "timestamp": "2023-11-10T12:00:00", "status": "active"}' | sh kafka-console-producer.sh --broker-list 10.0.96.4:9092 --topic example_topic

Use the following command to view the recently written topic data:

sh kafka-console-consumer.sh --bootstrap-server 10.0.96.4:9092 --topic example_topic --from-beginning


<Admonition type="tip">

When executing the commands, make sure to replace the `topic` name and the `bootstrap-server` address with the actual values for your AutoMQ deployment.

</Admonition>


## AutoMQ Data Source

1. In the left navigation menu, click "Data Ingestion", then click the "Add Data" button in the upper right.

2. In the pop-up window, review the available data sources and other ways to add data. Since AutoMQ is fully compatible with Apache Kafka, select Apache Kafka directly.

3. Enter the broker URL, and disable TLS and authentication.

4. Enter the AutoMQ topic name and choose the data format for "Read As". Supported formats include JSON, AVRO, and Text.

   1. Choosing Text is recommended: it saves the entire JSON document as a string, which makes schema changes easier to handle.

   2. When choosing AVRO, you can enable the "auto-extract" option to store top-level attributes as separate columns; you also need to specify the schema registry address, API key, and secret.

5. In the "Preview" step, at least one event is displayed. By default, Timeplus creates a new stream for the new data source.

6. Name the stream and verify the column information. You can set the event time column; if it is not set, the system uses the ingestion time. You can also choose an existing stream instead.

7. After previewing the data, name the source, add a description, and review the configuration. Click "Finish", and the stream data will be immediately available in the specified stream.
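Once the source is created, you can query the stream with SQL. A minimal sketch, assuming the stream was named `automq_stream`:

```sql
-- Streaming query: continuously returns new events as they arrive.
SELECT * FROM automq_stream;

-- Historical query: scans the data already stored in the stream.
SELECT * FROM table(automq_stream);
```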

## AutoMQ Source Description

When using AutoMQ data sources, the following constraints must be adhered to:

  1. Currently, only JSON and AVRO message formats in AutoMQ Kafka topics are supported.

  2. Top-level JSON properties will be converted into stream columns. Nested properties will be stored as String columns and can be queried with the JSON functions.

  3. Numeric or Boolean types in JSON messages will be converted to corresponding types in the stream.

  4. DateTime or timestamp will be stored as String columns. They can be converted back to DateTime using the to_time function.
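As an illustration of points 2 and 4, the sketch below assumes a stream named `automq_stream` created from this source; the `timestamp` field from the test message arrives as a String column and is converted with `to_time`, while `address` is a purely hypothetical nested property (the test message has none) stored as a String column and queried with a JSON function:

```sql
-- `automq_stream` and the `address` column are assumptions for illustration.
SELECT
    id,
    name,
    to_time(timestamp)                   AS event_time,  -- String -> DateTime (constraint 4)
    json_extract_string(address, 'city') AS city          -- nested property stored as String (constraint 2)
FROM automq_stream;
```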