In this issue, we will explain in detail production and consumption, resource application, production/consumption examples, and monitoring and alarming in Didi Logi-KafkaManager.

Overview of production and consumption: a brief explanation of the production/consumption process;

Resource application: the meaning of application, cluster, and topic resources, and the application process;

Production/consumption example: the meaning of the production/consumption client and the main configuration items in the client code;

Monitoring and alarming: topic core indicator monitoring and security alarm indicator reporting.

1. Overview of production and consumption

Messages in Kafka are organized by topic: producers send messages to a specific topic (every message sent to a Kafka cluster specifies a topic), and consumers subscribe to a topic and consume from it. In both the production and consumption paths, clients authenticate their identity with Topic+AppID.

Application: that is, the AppID. It can be understood as an account in Kafka and is identified by AppID+password;

Cluster: Users can use the shared cluster provided by the platform, and if users have higher requirements for stability, isolation, and data transmission rate, they can also apply for a separate cluster for an application.

Topic: you can apply to create a topic, or apply for production/consumption permission on existing topics. When producing/consuming, you authenticate with Topic+AppID.

On the right side of the figure below is the production/consumption flow chart. First, check whether your user name has an application (AppID); if not, apply for one. Next, check whether you have cluster permissions; if not, apply for a cluster. Then, check whether you have production/consumption permissions on the topic; if not, apply for the corresponding permissions or create a topic. Once you have the permissions, you can produce/consume through the client.

2. Resource application
01 Application (AppID)

The AppID is an account in Kafka, identified by AppID+password; when producing/consuming a topic, you authenticate with Topic+AppID.

An application request must be approved by O&M personnel; after approval, the user obtains the AppID and key.

Go to Topic Management – Application Management – Apply for Application, fill in the application information, and submit it for approval; O&M personnel will then review and approve it.

After approval, you can view the newly created application in the application list and click Details to view the AppID and key.

02 Cluster application

Users can use the shared cluster provided by the platform, and if they have higher requirements for isolation, stability, and production and consumption rate, they can apply for a separate cluster for an application.

As shown in the flow chart on the right side of the figure below, to submit a cluster application the user fills in a cluster application form, which must specify the application (AppID) for which the separate cluster is being requested.

O&M personnel deploy and create the corresponding cluster based on the cluster type, peak traffic, reason for the application, and actual usage specified in the application form.

Cluster Management – Submit a cluster application

03 Topic application

Users can create a topic under an application they have applied for. After a topic is created, the application owner has production/consumption and management permissions on that topic by default, and can also apply for production/consumption permissions on other topics.

On the right side of the figure above is the form for applying for topic permissions; the user fills in the bound application, the permission being requested, and the reason. After submission, the permission is granted once the owner of the application to which the topic belongs approves the request.

1. Go to Topic Management – My Topics; the list shows my topics, including the ones I created and the ones I applied for.

2. Click the button in the upper right corner and fill in the configuration information, cluster, and other application information in the pop-up window. After the application is submitted, O&M personnel create a topic with the corresponding configuration based on the information filled in by the user and the actual situation; for example, a topic with 3 partitions and 2 replicas, a data retention time of 12 hours, placed on the xxx region or on the xxx broker.
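For reference, at the Kafka level such a configuration roughly corresponds to the AdminClient call sketched below, which creates a topic with 3 partitions, 2 replicas, and a 12-hour retention. The bootstrap address and topic name are placeholders, and in Didi Logi-KafkaManager this step is performed by O&M personnel through the platform rather than by users directly.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; in practice this is the cluster assigned by O&M.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 2, retention 12 hours (in milliseconds).
            NewTopic topic = new NewTopic("example_topic", 3, (short) 2)
                    .configs(Map.of("retention.ms", String.valueOf(12 * 60 * 60 * 1000L)));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```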

On the Topic details page:

Basic information: Includes the topic name, application, person in charge, region, and bootstrap address, as well as real-time traffic and real-time latency monitoring.

Status chart: Displays the historical traffic and historical latency of the current topic.

Connection Information: Displays the applications that have recently connected to the current topic, including AppID, host name, client version, etc.

Consumer group information: Displays the consumer group information of the current topic, including the consumer group ID and AppID.

Partition information: Displays the partition information of the current topic, including partition ID, beginningOffset, endOffset, msgNum, etc. (the sketch after this list shows how these offsets can be read with the standard Kafka client).

Broker information: Displays the relevant information of the broker where the current topic is located, including BrokerID, host, number of leaders, partition leader ID, number of partitions, etc.

Application information: Displays the applications that have permissions on the current topic, including the application name, permission type, and related quotas.
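As a side note, the beginning/end offsets and message count shown in the partition information can be approximated with the standard Kafka AdminClient. The sketch below assumes a placeholder bootstrap address and a hypothetical topic, and simply treats msgNum as the difference between endOffset and beginningOffset for each partition.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;

public class PartitionOffsetSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("example_topic", 0); // hypothetical topic, partition 0

            long begin = admin.listOffsets(Map.of(tp, OffsetSpec.earliest()))
                    .partitionResult(tp).get().offset();
            long end = admin.listOffsets(Map.of(tp, OffsetSpec.latest()))
                    .partitionResult(tp).get().offset();

            // msgNum is approximated as the offset span of the partition.
            System.out.printf("partition=%d beginningOffset=%d endOffset=%d msgNum=%d%n",
                    tp.partition(), begin, end, end - begin);
        }
    }
}
```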

3. Production/consumption example

When these resources are ready, real production/consumption can be carried out. Production/consumption in Kafka is done through clients; this example covers the client that produces to and consumes from a topic, and users need to write client code to implement the production and consumption actions.

The client code on the left side of the figure above contains configuration items such as the topic name, application (AppID), key, message details, consumer group, and compression format.
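As a rough sketch of what such producer code can look like with the standard Java client, the example below assumes the cluster authenticates clients via SASL/PLAIN, with the AppID as the username and the key as the password; the actual security mechanism, gateway address, AppID, and key depend on how your cluster is deployed, so treat this only as an illustration of the configuration items mentioned above.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder bootstrap address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compression format

        // Assumed authentication: AppID as username, key as password over SASL/PLAIN.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"appId_000001\" password=\"your-app-key\";"); // hypothetical AppID/key

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Message details: topic name, optional message key, and message body.
            producer.send(new ProducerRecord<>("example_topic", "msg-key", "hello from appId_000001"));
            producer.flush();
        }
    }
}
```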

As can be seen from the flow chart on the right side of the figure above, when a client produces/consumes it must have the corresponding topic production/consumption permission, and this authentication is done with Topic+AppID. Many similar products on the market, however, have no notion of an AppID.

Without identity verification and authentication, a client only needs to know the topic information in order to produce and consume, which poses serious data security risks such as data leakage and data tampering.
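For completeness, a matching consumer sketch under the same SASL/PLAIN assumption is shown below; the bootstrap address, consumer group, AppID, and key are again placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example_consumer_group");  // consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // Assumed authentication, same as the producer: AppID + key over SASL/PLAIN.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"appId_000001\" password=\"your-app-key\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("example_topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}
```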

After data is sent to the topic, we can also collect sample data from the topic through data sampling.

Didi Logi-KafkaManager also provides a platform interface that makes it easy for O&M personnel to configure sampling parameters.

For example: the amount of data to sample, the timeout, the partition to sample, and the offset to sample from.

The sampled data above can be copied with one click.
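Conceptually, sampling a topic at a given partition and offset maps to seeking a consumer to that position and polling a bounded number of records. The sketch below illustrates this with the standard Java client; the partition, offset, sample size, and timeout are placeholder values, and the platform's own sampling implementation may of course differ.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SampleSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // sampling does not commit offsets

        int sampleSize = 10;                      // amount of data to sample
        Duration timeout = Duration.ofSeconds(3); // timeout period
        TopicPartition tp = new TopicPartition("example_topic", 0); // partition to sample
        long offset = 100L;                       // offset to start sampling from

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            consumer.seek(tp, offset);

            int collected = 0;
            while (collected < sampleSize) {
                ConsumerRecords<String, String> records = consumer.poll(timeout);
                if (records.isEmpty()) break; // stop when the timeout elapses with no data
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    if (++collected >= sampleSize) break;
                }
            }
        }
    }
}
```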

4. Monitoring and alarming

01 Topic indicator monitoring

Didi Logi-KafkaManager can measure the latency of each stage of topic production and consumption, which makes it easy for users to troubleshoot problems on their own. For key indicators, it can monitor performance at different quantiles and retrospectively diagnose historical problems.

On the topic details page, users can see the topic's real-time traffic and real-time latency, and the real-time latency can be switched between 75th and 99th percentile data.

In the status chart, Didi Logi-KafkaManager displays the topic's information over each time period, each application's traffic, and the historical latency, so users can troubleshoot problems on their own with rich monitoring metrics. Users can also submit requests for quota adjustments and additional partitions to O&M personnel for approval.

02 Security alarm

For the core indicators of Kafka production and consumption, Didi Logi-KafkaManager lets users customize the indicator reporting channels. This helps you build a Kafka monitoring and alarm system by creating security alert policies, setting security response levels, and configuring alert notification groups.

1. Go to Monitoring Alarms – Create Alarm Rule; the basic information of the alarm is its name and the application to which it belongs.

2. Select the metrics, clusters, and topics to monitor.

3. Select the alarm strategy (for example, the value in the most recent cycle equals 1 per second), the effective time of the alarm, the content of the alarm message, the alarm period, the number of alarms per period, and the alarm receiving group.

Since Didi Logi-KafkaManager integrates with the Didi Nightingale monitoring and alarm system by default, users can use it together with Didi Nightingale; for the configuration method, refer to the Didi Nightingale configuration documentation.

Of course, it can also be integrated with an enterprise's internal monitoring and alarm system, but Didi Logi-KafkaManager pairs best with Didi Nightingale, so Nightingale is still the recommended choice~

This article is reposted from the official account: Obsuite