TCTP instant incentive sharing

skyjiang (Jiang Junzhao)

Basic Technology Products Department/Middleware Platform Room

WeRedis is a foundational technical service within the bank built on Redis. It mainly provides low-latency, high-performance, and easy-to-access data caching for the bank's various subsystems. WeRedis already hosts data storage for many in-bank subsystems, including GNS. Going forward, WeRedis will gradually evolve toward an in-bank KV persistent storage service, so as to meet the data storage requirements of each business scenario and reduce machine costs.

Architecture selection

The simplified architecture of WeRedis is shown above. All traffic is proxied to the backend Redis instances through a self-developed proxy component, with the Observer acting as service discovery. Besides the required UM account and password, an accessing party only needs to provide the cluster name to complete cluster access.

Why did we choose this architecture instead of directly providing native Redis clusters? That is a good question, and it was also my own doubt when I first took over WeRedis. After continued study, the answer comes down to resource convergence, which can be analyzed from the following two perspectives:

Cluster size

Since Redis 3.0, the official native Redis Cluster solution has given Redis the ability to scale horizontally. Many engineers therefore assume that no matter how large the data volume is, the cluster can simply be scaled out indefinitely and capacity is no longer a concern. However, the Redis maintainers themselves suggest that a Redis Cluster should not grow too large. So what limits cluster size?

The key lies in the communication overhead between instances: Redis instances propagate node state via the Gossip protocol (to avoid redundancy, I will not describe Gossip itself here). Each gossip entry about another node takes roughly 104 bytes of payload, and each PING message carries entries for about a tenth of the cluster plus a 2 KB slot bitmap. For a cluster of 1,000 instances, that works out to roughly 12 KB per PING, or about 24 KB for a single PING/PONG exchange between two nodes. As the cluster grows, network bandwidth becomes the first performance bottleneck. We therefore chose to maintain multiple fixed-specification Redis clusters and serve them through the proxy.
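As a rough back-of-the-envelope sketch (assuming the commonly cited approximations above, not exact protocol sizes), the bandwidth estimate looks like this:

```python
# Rough estimate of Redis Cluster gossip overhead for a large cluster.
# Figures are commonly cited approximations, not exact protocol sizes.

GOSSIP_ENTRY_BYTES = 104          # approximate size of one gossip entry about another node
SLOT_BITMAP_BYTES = 16384 // 8    # 16384 slots, 1 bit each -> 2 KB

def ping_message_bytes(cluster_size: int) -> int:
    # Each PING carries gossip entries for roughly 1/10 of the cluster.
    entries = max(cluster_size // 10, 3)
    return entries * GOSSIP_ENTRY_BYTES + SLOT_BITMAP_BYTES

cluster_size = 1000
ping = ping_message_bytes(cluster_size)
exchange = 2 * ping  # a PING/PONG exchange between two nodes costs roughly twice one message
print(f"PING ~{ping / 1024:.1f} KB, PING+PONG exchange ~{exchange / 1024:.1f} KB")
# -> PING ~12.2 KB, exchange ~24.3 KB for a 1,000-instance cluster
```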

Client access

Redis's design pushes as much logic as possible to the client side, including, in cluster mode, the mapping between instances and slots. This inevitably increases client complexity when accessing a Redis Cluster and requires the client to maintain extra stateful data. Under WeRedis's architecture, the proxy looks like a standalone Redis instance from the outside while maintaining the routing relationships of all clusters internally, which reduces client-side complexity. In addition, the proxy also reduces the connection overhead on the Redis instances to some extent.
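As an illustration (the hostname and credentials here are hypothetical placeholders), accessing a cluster through the proxy looks to the client like talking to a single standalone Redis instance, so no cluster-aware client is needed:

```python
import redis

# Hypothetical proxy endpoint and credentials; the client treats the proxy
# as a single standalone Redis instance -- no cluster-aware client required.
client = redis.Redis(
    host="weredis-proxy.example.internal",  # hypothetical proxy address
    port=6379,
    password="um-account-password-placeholder",
    decode_responses=True,
)

# The proxy resolves which backend cluster and slot the key lives in.
client.set("order:10086", "pending", ex=300)
print(client.get("order:10086"))
```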

New features

Over the past year, in addition to improving the quality and maintainability of the subsystems within WeRedis, we have also delivered a number of new features for common needs across business scenarios. For example:

RedLock – A more reliable distributed locking scheme

WeRedis's architecture satisfies the prerequisites envisaged in the official Redis RedLock scheme, so we implemented this more reliable distributed lock according to our actual needs. It avoids the risk of lock loss in extreme cases and is better suited to the bank's financial scenarios. In addition, a safer unlock path is provided to prevent a distributed lock from being released by mistake.
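To illustrate the "safer unlock" idea (key and token names are illustrative, and this sketch shows only the token-checked unlock on a single endpoint, not the full multi-instance RedLock quorum): the lock value stores a unique token, and the unlock is a compare-and-delete performed atomically in a Lua script, so a client can only release a lock it still holds.

```python
import uuid
import redis

client = redis.Redis(host="weredis-proxy.example.internal", port=6379)

# Atomically delete the lock only if it still holds our token, so we never
# release a lock that has already expired and been taken over by someone else.
UNLOCK_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""
unlock = client.register_script(UNLOCK_SCRIPT)

def acquire(lock_key: str, ttl_ms: int = 30000):
    token = uuid.uuid4().hex
    # NX + PX: set only if absent, with an expiry so a crashed holder cannot block forever.
    if client.set(lock_key, token, nx=True, px=ttl_ms):
        return token
    return None

def release(lock_key: str, token: str) -> bool:
    return unlock(keys=[lock_key], args=[token]) == 1
```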

Redis module

Thanks to Redis's healthy community ecosystem, a wide range of modules built on the Redis module mechanism extend Redis's functionality. We have loaded some of these modules for development use. The TairString, TairHash and BloomFilter modules are currently loaded, providing, respectively, a string with a version number, a hash with field-level version numbers and expiration times, and a native Bloom filter.
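A few hedged examples of what these modules expose, issued through redis-py's generic execute_command since module commands are not part of the core client API; exact options and return shapes may differ by module version:

```python
import redis

client = redis.Redis(host="weredis-proxy.example.internal", port=6379, decode_responses=True)

# TairString: a string that carries a version for optimistic concurrency control.
client.execute_command("EXSET", "balance:1001", "100")
value, version = client.execute_command("EXGET", "balance:1001")  # [value, version]
client.execute_command("EXSET", "balance:1001", "90", "VER", version)  # write only if version matches

# TairHash: a hash whose fields can each carry an expiration (and a version).
client.execute_command("EXHSET", "session:abc", "token", "t-123", "EX", 60)

# BloomFilter (RedisBloom): a native Bloom filter.
client.execute_command("BF.RESERVE", "seen:urls", 0.01, 100000)
client.execute_command("BF.ADD", "seen:urls", "https://example.com")
print(client.execute_command("BF.EXISTS", "seen:urls", "https://example.com"))
```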

Ecological integration

This part of the work focuses on improving the monitoring of the components along the WeRedis request link. Both the proxy and the Redis clusters are now connected to the in-bank WePS, and WeRedis Client 2.0 has built-in integration with WeAPM. This greatly improves the observability of the subsystems within WeRedis: rich metrics and tracing data enable full-link tracing and give us a better basis for troubleshooting.

Persistent storage

KV persistent storage is the focus of our next phase of work on WeRedis. We surveyed the vast majority of open-source KV storage systems available today; although plenty of wheels have already been invented, none fully meets our needs. So what exactly do we need? That brings us back to why we want to support KV storage in the first place.

Redis has always been positioned as a caching component: all data lives in memory, and the high cost of memory makes large-capacity storage impractical, which restricts how the business side can use Redis. Our KV storage requirements are therefore clear: large-capacity data storage at low cost (relative to memory) is the primary demand. In addition, WeKV should be positioned as a database rather than a cache, so strong data consistency is a must (by default a distributed, multi-replica implementation).

Our eventual plan is to introduce and proxy two KV stores, TiKV and KvRocks, abstract them into the concept of partitions within WeRedis, integrate them into our architecture, and keep serving them over the Redis protocol. A unified API ensures that in the future the business side can switch from Redis to KV storage with minimal retrofit cost. A brief analysis of these two KV stores:

TiKV

Advantages: based on the Raft algorithm, it provides strong data consistency guarantees and natively supports horizontal scaling. Since it exposes a low-level KV operation interface (Put, Get), multi-command batched transaction capability can be built on top of it.

Disadvantages: TiKV's Java client has weak support for transactional KV and does not yet support pessimistic locks; data backup and export are not yet supported for transactional KV; and Redis data structures must be mapped onto KV to support the various commands, which is a large development effort (see the sketch below).
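To give a feel for why that mapping effort is non-trivial, here is an illustrative encoding (not WeRedis's actual scheme) of a Redis hash onto a flat Put/Get/Scan interface, roughly the shape of TiKV's raw KV API; a real implementation would also need per-key metadata, TTL handling and transactional atomicity:

```python
from typing import Iterable, Optional, Protocol

class KvStore(Protocol):
    """Minimal flat KV interface, roughly what a raw KV API offers."""
    def put(self, key: bytes, value: bytes) -> None: ...
    def get(self, key: bytes) -> Optional[bytes]: ...
    def scan_prefix(self, prefix: bytes) -> Iterable[tuple[bytes, bytes]]: ...

# Illustrative encoding: one KV pair per hash field, keyed by "h:{key}:{field}".
# A real proxy would also keep per-key metadata (type, TTL, field count) and
# wrap multi-field commands such as HMSET in a transaction for atomicity.
def hset(kv: KvStore, key: str, field: str, value: str) -> None:
    kv.put(f"h:{key}:{field}".encode(), value.encode())

def hget(kv: KvStore, key: str, field: str) -> Optional[str]:
    raw = kv.get(f"h:{key}:{field}".encode())
    return raw.decode() if raw is not None else None

def hgetall(kv: KvStore, key: str) -> dict:
    prefix = f"h:{key}:".encode()
    return {k[len(prefix):].decode(): v.decode() for k, v in kv.scan_prefix(prefix)}
```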

KvRocks

Advantages: it speaks the Redis protocol natively, so the proxy only needs to forward traffic and the development workload is small.

Disadvantages: master-replica replication is asynchronous, so there is a risk of data loss after a master node failure (the community plans to support a semi-synchronous mechanism similar to MySQL's in the future); cold backup of data is available, but hot backup is missing.

Summary and outlook

Imagine that, for most projects, developers only need to introduce one client and face one protocol to satisfy a variety of storage needs, with data consistency guaranteed and the system highly available and low-cost; that in itself is very attractive to developers. At the same time, we know there is no silver bullet in software development. As products iterate rapidly, we, as members of the middleware platform, will keep exploring and optimizing, and strive to provide more stable and easier-to-use basic services for everyone.

Praise from Brother Dao

Jiang Junzhao, in the spirit of "thinking far ahead and walking a thousand miles", works diligently in the middleware team. He takes the initiative to tackle complex work, willingly shares his results, actively participates in open source projects, and is good at learning new technologies and combining them with in-bank scenarios. He has built good working relationships with teammates and with testing and operations colleagues, working together to keep the WeRedis system running stably. Well worth learning from!