Have you run into pain points like these in your business architecture?

(1) The data volume is too large, pushing capacity complexity up into the service layer;

(2) The concurrency is too high, pushing performance complexity up into the service layer;

(3) Frontend and backend storage are heterogeneous, to serve different query needs;

(4) Online and offline storage are heterogeneous, to serve big-data needs;

(5) Storage system migration costs are high, making refactoring difficult;

(6)…

For the fifteen years of my career, I have basically used MySQL to store online business data. In recent years the problems have slowly piled up, seriously hurting development efficiency. TiDB has become very popular lately, so I recently did some research to share with you.

In my usual style, this article focuses on: what problems TiDB solves, why it is designed the way it is, and what architectural ideas it embodies.

So the question is: what are the problems with the MySQL architecture?

Many people have seen the MySQL architecture diagram above:

(1) Upstream: MySQL client;

(2) Downstream: the MySQL server, which includes connection pooling, parsing and semantic analysis, query optimization, the buffer pool, storage engines, physical storage, management functions… and many other modules;

Voice-over: So complicated you can hardly read the labels.

Here is a simplified view of the MySQL architecture:

As shown in the figure above:

(1) Upstream: MySQL client;

(2) In between: communication via the MySQL protocol;

(3) Downstream: MySQL server;

Voice-over: Sorry, please put up with the ugly picture I drew.

The core of the server side is divided into two layers:

One layer is the computing layer;

The other is the storage layer;

What inherent deficiency does MySQL have?

[1] Computing and storage are naturally coupled.

Since the computing layer and the storage layer live in the same MySQL process, they share all CPU and memory resources, so coupling through resource contention is inevitable.

Beyond this inherent deficiency, what are the pain points of using MySQL in typical Internet scenarios with large data volumes and high concurrency?

As we all know, when read and write traffic grows, MySQL is usually deployed as a master-slave cluster:

As shown in the figure above: master-slave replication plus read-write separation scales the system's read performance linearly by adding slave replicas.

Voice-over: For most businesses reads dominate, so read performance becomes the main bottleneck.
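The read-write separation described above can be sketched as a tiny routing layer. This is a minimal illustration; the endpoint names and the SELECT-prefix check are made up for the example, not any real driver's API:

```python
import itertools

# Illustrative endpoints; in a real deployment these would be full DSNs.
MASTER = "master:3306"
REPLICAS = ["replica-1:3306", "replica-2:3306", "replica-3:3306"]

_replica_cycle = itertools.cycle(REPLICAS)

def route(sql: str) -> str:
    """Send writes to the master; round-robin reads across replicas.

    Adding one more entry to REPLICAS linearly adds read capacity,
    which is the scaling property described above.
    """
    if sql.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return MASTER

print(route("SELECT * FROM orders"))   # some replica
print(route("UPDATE orders SET paid = 1"))  # the master
```

Note that this routing logic lives in the application (or its data-access layer), which is precisely the kind of storage detail leaking upward that is discussed below.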

As we know, when storage capacity grows, MySQL is usually clustered by horizontal sharding:

As shown in the figure above: data is sharded by a single key to expand storage capacity.
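Single-key sharding can be sketched like this; the hash function and shard count are illustrative choices, not any particular middleware's behavior:

```python
import hashlib

N_SHARDS = 4  # illustrative shard count

def shard_of(user_id: int) -> int:
    """Map a single key (e.g. user_id) to one of N_SHARDS MySQL clusters.

    Every query against this table must carry user_id; otherwise the
    caller has to scatter-gather across all shards. This is one way the
    storage details leak upward into the application.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

print(shard_of(123))  # always routes user 123 to the same shard
```

Resharding (e.g. going from 4 to 8 shards) changes almost every key's mapping, which is why capacity changes are so painful with this scheme.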

So an online MySQL cluster actually looks like this:

(1) Horizontal sharding, with multiple shards;

(2) Master-slave grouping, with each shard being a master-slave cluster;

Both sharding and grouping are details that calling microservices must be aware of, which leads to the next pain point:

[2] The caller must be aware of storage details; the complexity of the underlying storage is transferred to upper-level applications.

Besides online applications, most Internet companies also have various big-data processing needs:

(1) Offline analysis: for example, daily operations reports;

(2) Online analysis: for example, ad-hoc queries by analysts;

(3) Real-time processing: for example, real-time reports;

To meet such requirements, the data in MySQL must be synchronized to the clusters of various big-data systems:

A whole stack of big-data technologies is then used to serve these various processing needs.

This leads to another pain point:

[3] The technical side must handle data synchronization, data consistency, and the complexity of big-data clusters.

Of course, many technical managers also evaluate alternative products to solve problems 1-3 above, such as MongoDB, a representative of NoSQL. Unfortunately, [4] upgrading and migrating requires extensive system changes, so after a comprehensive evaluation the migration plan is often abandoned, and the team continues to endure the problems caused by MySQL.

The pain points of history are often opportunities for innovation.

TiDB, here it comes!

How is TiDB designed to solve the following problems?

[1] The coupling of computing and storage;

[2] The complexity transferred from the underlying storage layer;

[3] The complexity transferred from big-data systems;

[4] High system migration costs;

…and so on.

From the very beginning, TiDB settled on two broad design directions:

(1) Reuse the MySQL protocol;

(2) Separate computing from storage.

As shown in the figure above:

(1) Upstream: no changes needed at all; existing MySQL drivers can access TiDB directly;

(2) In between: communication via the MySQL protocol;

(3) Downstream: the computing layer and the storage layer are split into two separate processes to decouple resource contention; communication between the two layers uses an internal protocol that is transparent to callers;

In this way, problems [1] and [4] are solved.

How are the "underlying complexity transfer" problems solved, such as scaling read/write traffic and storage capacity?

For the computing layer, modules such as connection pooling, syntactic analysis, semantic analysis, and query optimization are implemented as stateless services that scale out by clustering; this is the “compute engine tidb-server” cluster in the TiDB architecture. For callers, the tidb-server cluster is the single entry point, and the complexity behind it is invisible to the upstream. In the figure above it is labeled simply as the access layer (computing layer).

Voice-over: In a microservices architecture, site applications and the microservices layer must likewise be stateless so that clusters can be expanded easily.

For the storage layer, modules such as the consensus algorithm, distributed transactions, MVCC concurrency control, and operator pushdown are implemented to provide atomic KV storage, which also scales automatically by clustering; this is the “storage engine TiKV-server” cluster in the TiDB architecture. In the figure above it is labeled simply as the storage layer.

Voice-over: This is much like the chunk-server in GFS; with it, manual horizontal sharding is no longer required.
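How a relational row becomes an ordered KV pair can be sketched as follows. The zero-padded text keys here are a simplification assumed for readability; TiDB's documentation describes a comparable binary encoding along the lines of t{TableID}_r{RowID}:

```python
def row_key(table_id: int, row_id: int) -> bytes:
    """Encode a row as an ordered KV key (simplified, textual stand-in
    for TiDB's comparable binary encoding).

    Zero-padding makes byte-wise ordering match numeric ordering.
    """
    return f"t{table_id:08d}_r{row_id:016d}".encode()

# All keys of one table are contiguous and ordered by row id, so the
# storage layer can split them into ranges and move ranges between
# nodes to rebalance; no manual sharding by the application.
assert row_key(1, 2) < row_key(1, 10) < row_key(2, 1)
```

This ordered-range property is what lets capacity scale by splitting and migrating key ranges automatically, instead of by the hash-sharding scheme shown earlier.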

In addition, a master with a global view is needed to implement metadata storage, ID allocation (key IDs, transaction IDs), timestamp generation, heartbeat detection, cluster information collection, and other modules; this is the “PD-server” cluster in the TiDB architecture. In the figure above it is labeled simply as “management”.

Voice-over: This is a lot like the master-server in GFS.
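PD's timestamp generation can be sketched as a centralized timestamp oracle (TSO). This is a deliberately simplified toy: the real PD combines a physical clock with a logical counter and persists an allocation ceiling, none of which is shown here.

```python
import threading

class Tso:
    """Toy timestamp oracle: a single source of strictly increasing
    timestamps, used to order distributed transactions globally."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._ts = 0

    def next_ts(self) -> int:
        # The lock guarantees no two callers ever see the same value.
        with self._lock:
            self._ts += 1
            return self._ts

tso = Tso()
a, b = tso.next_ts(), tso.next_ts()
assert a < b  # every allocation is strictly greater than the last
```

Because every transaction start and commit fetches its timestamp from one logical place, all nodes agree on the order of events without synchronized clocks.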

In this way, problem [2], the transfer of the underlying read/write and storage-capacity complexity, is also solved.

TiDB also shields the complexity of big-data systems internally:

(1) Extend the access layer so that big data has its own entry point, such as TiSpark in the figure above;

(2) Extend the storage layer with storage suited to big data, such as TiFlash in the figure above;

Voice-over: TiKV and TiFlash store data separately and are asynchronously decoupled from each other.

(3) Extend the management layer to manage the big-data components as well;
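The split between a row store (TiKV, for transactions) and a column store (TiFlash, for analytics) can be illustrated by laying out the same toy data both ways; the records here are invented for the example:

```python
# Row layout: each record kept together. Good for point reads and
# writes by key, the online-transaction pattern.
rows = [
    {"id": 1, "city": "SH", "amount": 10},
    {"id": 2, "city": "BJ", "amount": 25},
    {"id": 3, "city": "SH", "amount": 7},
]

# Columnar layout derived from the same data: each column kept
# together. Good for scanning one column over many rows, the
# analytical pattern.
columns = {k: [r[k] for r in rows] for k in rows[0]}

# An analytical aggregate touches only the 'amount' column and can
# skip the rest of every record entirely.
total = sum(columns["amount"])
print(total)  # 42
```

Keeping both layouts, with asynchronous replication between them, is how one system can serve transactional and analytical queries without the caller managing a separate sync pipeline.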

In this way, problem [3], big-data synchronization, data consistency, and the complexity of big-data clusters, is also solved.

TiDB's architecture embodies one design principle throughout: keep things simple and easy for users, and shield the complex, troublesome parts inside TiDB.

The hackathon has two tracks:

(1) Application track: use TiDB to build games, e-commerce, finance, public-welfare, and other applications; more scenarios are waiting for you to explore;

(2) Kernel track: improve the performance, stability, and usability of the TiDB kernel and its upstream/downstream tools, or even add new features;

Voice-over: The kernel track is harder, but reading the source code and documentation is certainly no small gain.

Each track has several stages:

Today – 10.17: sign up, find ideas, team up with teammates, and submit a preliminary RFC (no coding required at this stage)

10.17 – 10.19: preliminary round; the top 30 teams in each track advance to the finals

10.22 – 10.23: finals, coding (code must be open source under the Apache 2.0 license)

10.23: final defense and prize awards

Generous prizes aside, a hackathon lets you make more friends, understand TiDB's source code and applications, and have in-depth technical exchanges with TiDB's authors; it is a rare learning opportunity.

A hackathon is a very worthwhile experience in an engineer's career. Sometimes, if you don't push yourself, you never know how much potential you have.

To ask the organizers for a shortcut to participate, scan the QR code to add the assistant (or add WeChat directly: billmay), join the activity group, meet more peers, and receive TiDB technical materials.

Read the original article to learn more. I hope you found this helpful.