Many people subconsciously avoid what is unknown, uncertain, or beyond their control. I felt the same way when I first encountered Prometheus: it involves too many concepts, and the bar to entry is high.

Concepts: Instance, Job, Metric, Metric Name, Metric Label, Metric Value, Metric Type (Counter, Gauge, Histogram, Summary), Data Type (Instant Vector, Range Vector, Scalar, String), Operator, and Function.

Jack Ma said: “Although Alibaba is the world’s largest retail platform, Alibaba is not a retail company; it is a data company.” The same is true of Prometheus: at its core, it is a data-centric monitoring system.

Daily monitoring

Suppose we need to monitor the request volume of every API of WebServerA. The dimensions to monitor include: service name (job), instance IP (instance), API name (handler), HTTP method (method), return code (code), and request count (value).

Taking SQL as an analogy, we can express some common query operations.

Query the request volume where method=put and code=200 (red box):

SELECT * from http_requests_total WHERE code="200" AND method="put" AND created_at BETWEEN 1495435700 AND 1495435710; 

Query the request volume where handler=prometheus and method=post (green box):

SELECT * from http_requests_total WHERE handler="prometheus" AND method="post" AND created_at BETWEEN 1495435700 AND 1495435710;

Query the request volume where instance=10.59.8.110 and handler=query (green box):

SELECT * from http_requests_total WHERE handler="query" AND instance="10.59.8.110" AND created_at BETWEEN 1495435700 AND 1495435710;
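For comparison, the same three queries in Prometheus' own query language, PromQL, select by label matchers rather than by SQL WHERE clauses (instant-vector selectors are shown; the time range comes from the query's evaluation time rather than an explicit BETWEEN):

```promql
http_requests_total{method="put", code="200"}
http_requests_total{handler="prometheus", method="post"}
http_requests_total{instance="10.59.8.110", handler="query"}
```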

As the examples above show, day-to-day monitoring queries mostly combine filters on the monitored dimensions with a time range. Now consider the scale: suppose we monitor 100 services, each service deploys 10 instances on average, each service exposes 20 APIs with 4 methods, data is collected every 30 seconds, and it is retained for 60 days. The total number of data points is then 100 (services) × 10 (instances) × 20 (APIs) × 4 (methods) × 86400 (seconds per day) × 60 (days) / 30 (seconds per sample) = 13.824 billion. Writing, storing, and querying data at this magnitude is impossible on a relational database such as MySQL.
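The capacity estimate above can be reproduced with a few lines of Python (the figures are exactly those assumed in the text):

```python
# Capacity estimate for the monitoring scenario described above.
services = 100        # number of services (job)
instances = 10        # instances per service
apis = 20             # APIs per service (handler)
methods = 4           # HTTP methods per API
seconds_per_day = 86400
retention_days = 60
scrape_interval = 30  # one sample every 30 seconds

series = services * instances * apis * methods
samples = series * seconds_per_day * retention_days // scrape_interval

print(series)   # 80,000 distinct time series
print(samples)  # 13,824,000,000 samples, i.e. 13.824 billion
```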

Therefore, Prometheus uses TSDB as its storage engine.

Storage engine

TSDB as Prometheus' storage engine perfectly fits the characteristics of monitoring data:

  • The amount of data stored is very large.
  • Most operations are writes.
  • Writes are almost always appends, and most of the time data arrives in chronological order.
  • Writes rarely touch data from long ago and rarely update existing data; in most cases, data is written to the database seconds or minutes after it is collected.
  • Deletes are usually block deletes: a starting point in history is chosen and the subsequent blocks are dropped; data is rarely deleted at an isolated point in time or at scattered times.
  • The dataset is large and generally exceeds memory. A query typically selects only a small, irregular portion of it, so caching is of little help.
  • Reads are very typically in ascending or descending time order.
  • Highly concurrent reads are very common.

So how does TSDB achieve all this?


"labels": [{ "latency": "500" }]
"samples": [{ "timestamp": 1473305798, "value": 0.9 }]

The raw data is divided into two parts: labels and samples. The former records the monitored dimensions (label name: label value); the metric name together with the optional label key-value pairs uniquely determines a time series (identified by a series_id). The latter contains a timestamp and a value.
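A minimal Python sketch of this split (the series_id derivation here is purely illustrative; Prometheus' actual implementation differs):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Series:
    """A time series is uniquely identified by its metric name and label set."""
    name: str
    labels: tuple  # sorted (key, value) pairs

    @property
    def series_id(self) -> int:
        # Illustrative only: hash the name and sorted label set to get an id.
        return hash((self.name, self.labels))

def make_series(name: str, **labels: str) -> Series:
    # Sort labels so the same label set always yields the same series.
    return Series(name, tuple(sorted(labels.items())))

# Samples are stored separately as (timestamp, value) pairs keyed by series_id.
s = make_series("server", latency="500")
samples = {s.series_id: [(1473305798, 0.9)]}
```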

series
  ^
  │ . . . . . . . . . . . .   server{latency="500"}
  │ . . . . . . . . . . . .   server{latency="300"}
  │ . . . . . . . . . .   .   server{}
  │ . . . . . . . . . . . .
  v
    <-------- time -------->

TSDB stores each value under a key of the form timeseries:doc::. To speed up the most common query pattern, combining label filters with a time range, TSDB builds three additional indexes: Series, Label Index, and Time Index.

Take the label latency as an example:

Series

stores two parts of data. One part is the series of all label key-value pairs, arranged lexicographically. The other part is an index from each time series into the data file: it records the position of data-block records per time window, so that a query can quickly skip the large number of records outside its window.

Label Index

Each label pair is indexed with index:label: as the key, storing the list of all values of that label, each pointing by reference to the starting position of that value in Series.

Time Index

Data is indexed with index:timeseries:: as the key, pointing into the data file at the blocks of the corresponding time period.
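Pulling the three indexes together, here is a toy in-memory model in Python (the key formats and structures are simplified for illustration and do not match Prometheus' on-disk layout):

```python
# Toy model of TSDB's three auxiliary indexes (simplified, illustrative).

# Series: lexicographically sorted label pairs per series, plus per-window
# offsets into the data file so off-window blocks can be skipped quickly.
series_index = {
    "series-1": {
        "labels": [("__name__", "server"), ("latency", "500")],
        "windows": {(1473305700, 1473305999): {"offset": 0, "length": 128}},
    }
}

# Label Index: for each label name, the list of all its values, each
# pointing back to where that value's entries start in the Series index.
label_index = {
    "index:label:latency": {"values": ["300", "500"]},
}

# Time Index: maps a series and time window to the matching data block.
time_index = {
    ("series-1", (1473305700, 1473305999)): "block-0001",
}
```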

This powerful storage engine provides the horsepower for data computation and makes Prometheus fundamentally different from other monitoring systems. Prometheus can query different data series and then apply basic operators and powerful functions to perform matrix-style operations across metric series (see figure below).
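As a rough illustration in Python, combining two metric series means aligning samples by timestamp and applying an operator element-wise. This very loosely mimics what PromQL's binary operators do; the data and function names are made up:

```python
def combine(a, b, op):
    """Align two series (timestamp -> value dicts) and apply op element-wise.
    Timestamps present in only one series are dropped, as in PromQL matching."""
    return {ts: op(a[ts], b[ts]) for ts in a.keys() & b.keys()}

# Two hypothetical series sampled at (mostly) the same timestamps.
errors = {100: 2.0, 130: 4.0, 160: 3.0}
total  = {100: 50.0, 130: 80.0, 190: 60.0}

# Element-wise division gives an error ratio per aligned timestamp;
# only timestamps 100 and 130 are present in both series.
error_ratio = combine(errors, total, lambda x, y: x / y)
```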

In this respect, the Prometheus system is no weaker than the “data warehouse” + “computing platform” stack of the monitoring community, and, at this early stage of big data adoption in the industry, it can be seen as the future direction of monitoring.

Compute once, query everywhere

Of course, such computing power comes at a considerable resource cost. Querying a precomputed result is usually much faster than evaluating the original expression every time it is needed, especially for dashboards and alerting rules: the same expression is re-evaluated on every dashboard refresh, and likewise on every run of an alerting rule.

Therefore, Prometheus provides recording rules, which precompute expressions that are needed frequently or are expensive to evaluate, and save the results as a new set of time series — computing once and querying many times.
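For example, a recording rule in Prometheus' rule-file format might look like the following (the metric, group, and rule names are illustrative):

```yaml
groups:
  - name: http_precompute
    interval: 30s
    rules:
      # Precompute the per-job request rate once, so dashboards and alerts
      # can query the cheap stored result instead of re-evaluating it.
      - record: job:http_requests_total:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```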
