I. Overview
As one of the core systems of any e-commerce platform, the commodity system's importance is self-evident: the high performance, high concurrency, and high availability demanded of Internet services are all on full display in it. Beyond introducing a distributed cache and sharding databases and tables, this article explains how we optimized the commodity system from the perspective of the data itself to improve its concurrency and performance.
II. Current State of the Commodity Service
In the business architecture, the platform adopts a "large middle office, small front office" approach to enable rapid business iteration, and the commodity system, as one of the core systems of the business middle office, carries the commodity business of all business parties.
In the database design, the commodity system adopts a sharding strategy of 16 databases with 16 tables each to improve the database's concurrent processing capacity. In addition, the commodity table is split vertically by business type, which reduces the index tree height to improve query performance and lowers the probability of lock contention to improve update performance.
At the same time, a distributed cache is introduced to improve service concurrency. For cache usage, the Cache Aside Pattern is adopted.
Cache Aside Pattern
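The read and write flow of the Cache Aside Pattern can be sketched as follows. This is a minimal illustration only: the class name is hypothetical, and in-memory maps stand in for Redis and MySQL.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the Cache Aside Pattern. The maps below are
// stand-ins for Redis (cache) and MySQL (db); names are illustrative.
public class CacheAsideDemo {
    private final Map<Long, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    private final Map<Long, String> db = new ConcurrentHashMap<>();    // stand-in for MySQL

    // Read path: try the cache first; on a miss, load from the DB and backfill the cache.
    public String read(long productId) {
        String value = cache.get(productId);
        if (value == null) {
            value = db.get(productId);
            if (value != null) {
                cache.put(productId, value);
            }
        }
        return value;
    }

    // Write path: update the DB first, then invalidate (not update) the cache entry.
    public void write(long productId, String value) {
        db.put(productId, value);
        cache.remove(productId);
    }
}
```

Invalidating rather than updating the cache on writes avoids stale data when two writes race; the next read repopulates the entry from the database.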
III. Background and Existing Problems
With the growth of the business, more and more business parties access the commodity service, and the rising QPS puts increasing pressure on the system. At the same time, as an all-category second-hand trading platform, it covers a variety of business models including C2C, B2C, C2B, B2B, and C2B2C. In this scenario, the commodity data model must be general enough to carry the different business models, and in some modes more information needs to be displayed, which makes a single commodity record relatively large.
This makes three contradictions increasingly prominent, namely:
The contradiction between ever-rising QPS and the system's high availability and high performance
The contradiction between GC pressure on the calling side and large commodity payloads
The contradiction between the cost of adding machines and the drive to cut costs and improve efficiency
In short, the core contradiction is how to provide better and faster service at the lowest possible cost.
IV. Locating the Optimization Points
With the core contradiction clarified, the following principles guide the search for optimization points:
Grasp the big, let go of the small. The service governance platform shows that the call volume of commodity reads far exceeds that of commodity writes, so this round of optimization targets reads only.
Analyze the complete path. Every point along the complete path of a commodity read is a candidate optimization point.
Analyze feasibility. Assess the feasibility of each optimization point to ensure the optimization delivers the expected effect.
The complete path of a commodity read is an RPC call chain for querying commodity information; its simplified flow is shown in the figure below:
An RPC call is essentially the acquisition and flow of data. For data acquisition, a cache is already used to cut latency and improve performance. For data flow, then, can we reduce the packet size to cut serialization and deserialization time, data transfer time, and the data's memory footprint?
As shown in the figure above, after the server receives a request from the client, it fetches the commodity data stream from Redis, deserializes it into an object, serializes it back into a data stream, and transmits it to the client, which deserializes it into an object in turn. If the packet size can be reduced, the whole chain of client, Redis, and server benefits.
How do we reduce the packet size? There are two levers: compressing the data and reducing the transmission of invalid data.
On data compression there is little room left to maneuver; the serialization protocol has already done most of the work. So can we instead reduce the transmission of invalid data? In other words, does the client use all of the field data the interface returns on every call, or only part of it? If only part, can we return only the fields the caller actually uses, shrinking the packet by cutting out the invalid data?
Based on the above analysis, we settled on the optimization idea of reducing the return of invalid data; its feasibility is analyzed next.
V. Feasibility Analysis of the Optimization Points
The figure below lists, for several callers, the number of fields they actually use when querying commodity information against the number of fields the interface returns. For most callers, the number of fields used is far smaller than the number returned. In other words, a caller receives a large string such as the commodity description but never uses it, which needlessly adds pressure across many systems.
Based on this analysis, two optimization schemes are proposed below.
VI. Optimization Schemes
The first scheme: as shown in the figure above, provide a separate query interface for each of the TOP 5 callers by call volume, filter out the fields that caller does not need inside the interface, and return only the data the caller requires.
There are two issues that need to be clarified here:
1) Why provide a separate interface only for the TOP 5 callers by call volume?
2) Does the optimization meet expectations?
On the first question: this also follows the principle of grasping the big and letting go of the small. The TOP 5 callers account for more than 50% of total call volume, so providing separate query interfaces only for them strikes a balance between cost and effect.
For the second question, whether expectations are met is considered from two perspectives:
Is the invalid data filtered out?
Since the commodity information in Redis is stored as a String, which can only be written and read as a whole, the data fetched from Redis is not all valid. The data the commodity service returns to the caller, however, has been filtered, so everything the caller receives is valid.
What about generality and extensibility?
As the commodity business middle office, while satisfying each business side's needs, the system must also keep its capabilities general and extensible, to avoid strong coupling with business sides and the fatigue of endless passive modification. Providing a business side its own interface, with the returned fields agreed upon with that caller, is clearly strong coupling: whenever the caller needs additional fields, the interface must be modified.
The advantage of this scheme is that it is simple to implement, merely wrapping a layer over the original interface; but its later maintenance cost is high, and invalid data is not filtered across the full link.
The second scheme draws on GraphQL. GraphQL is a query language for APIs that returns exactly the data you ask for, no more and no less, and can fetch multiple resources in a single request, so applications using GraphQL can stay fast even on slow mobile networks.
Drawing on this design concept, the commodity system's design scheme is as follows.
1) Scheme Overview
As shown in the figure above, the caller marks the fields it needs, and these fields can span tables. The commodity system then queries Redis or MySQL according to the marked fields. Which fields are returned is decided entirely by the caller; the commodity system only provides the common query capability.
The advantages of this scheme are:
Reduces the cost of developing customized interfaces
Queries on demand and returns on demand, cutting the transfer time of invalid data and the GC pressure across the link
Routes fields across tables, so callers no longer need to call multiple interfaces and stitch the data together
The disadvantage is that the requested fields must be marked, which increases the request packet size to some extent. Next, the implementation details of the scheme.
2) Marking the Request Fields
The more readable way to mark a field is to pass its name as a string, but strings occupy relatively more memory and lose some performance during transfer and serialization. We therefore take a different approach: marking fields with bits.
As shown in the figure below, the long type has 64 bits in total. The first 2 bits carry the group information and the remaining 62 bits carry the field information, which can represent 4 × 62 = 248 fields, fully covering the interface's current and future needs.
The commodity system currently uses 57 marker bits in total.
For example, if the rightmost bit of the long represents the commodity status and the second bit from the right represents the commodity title, the fields are represented as follows:
If the caller wants to request the commodity status and commodity title fields, it computes:
This way, the first and second bits from the right of the long are both 1, and when the commodity system receives this combined long value it knows exactly which fields the caller requires.
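The two-bit example above can be sketched in Java. The constants and helper names here are illustrative, not the system's actual API:

```java
// Sketch of marking fields with bits in a long, following the example above:
// the rightmost bit is the commodity status, the second bit is the title.
public class FieldMark {
    public static final long STATUS = 1L;      // bit 1: commodity status
    public static final long TITLE  = 1L << 1; // bit 2: commodity title

    // The caller combines the fields it wants with bitwise OR.
    public static long combine(long... fields) {
        long mask = 0L;
        for (long f : fields) {
            mask |= f;
        }
        return mask;
    }

    // The server checks whether a field was requested with bitwise AND.
    public static boolean isRequested(long mask, long field) {
        return (mask & field) != 0;
    }
}
```

For example, `FieldMark.combine(FieldMark.STATUS, FieldMark.TITLE)` yields 3, the long value whose two rightmost bits are set.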
Of course, marking fields with bits brings a certain complexity. To ensure correctness and ease of use, the builder design pattern can be used to encapsulate and isolate that construction complexity, making the marking more convenient for callers, as follows:
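A minimal sketch of such a builder, assuming hypothetical field methods and bit positions (only the first three bits are shown):

```java
// Builder that hides the bit manipulation from callers; the class name,
// field methods, and bit positions are illustrative assumptions.
public class QueryFieldsBuilder {
    private long mask = 0L;

    public QueryFieldsBuilder withStatus() { mask |= 1L;      return this; }
    public QueryFieldsBuilder withTitle()  { mask |= 1L << 1; return this; }
    public QueryFieldsBuilder withPrice()  { mask |= 1L << 2; return this; }

    // Produce the combined long value to send in the request.
    public long build() { return mask; }
}
```

A caller then writes `new QueryFieldsBuilder().withStatus().withTitle().build()` instead of manipulating bits directly, which keeps the bit layout a private detail of the commodity system.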
3) Implementation of on-demand queries
Querying data by the marked fields needs to be discussed in two parts along the commodity information request link: on-demand queries against Redis and on-demand queries against MySQL.
On-demand queries for Redis
At present, the data is stored in Redis as a String: the key is the commodity ID and the value is the serialized commodity information as a whole. The String structure is written and read in its entirety and cannot be queried on demand.
The Redis String structure suits scenarios where most fields are needed on each access or the stored structure has multiple levels of nesting, while the Hash structure suits scenarios where only a few fields are needed most of the time and the caller knows which fields it wants.
In our business scenario the Hash structure is clearly more appropriate, so changing the Redis storage of commodities to the Hash type enables on-demand queries of field information.
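To illustrate why the Hash structure enables on-demand reads, here is a stand-in sketch in which a nested map plays the role of Redis. With a real client the operations would be HSET and HMGET; the key layout product:{id} is an assumption for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for a Redis Hash: only the requested fields are returned,
// so only they travel over the wire and get deserialized.
public class HashStoreDemo {
    private final Map<String, Map<String, String>> store = new HashMap<>();

    // Equivalent of: HSET product:1 status on_sale
    public void hset(String key, String field, String value) {
        store.computeIfAbsent(key, k -> new HashMap<>()).put(field, value);
    }

    // Equivalent of: HMGET product:1 status title
    // Returns values in the order the fields were requested.
    public List<String> hmget(String key, String... fields) {
        Map<String, String> hash = store.getOrDefault(key, Collections.emptyMap());
        List<String> result = new ArrayList<>();
        for (String f : fields) {
            result.add(hash.get(f));
        }
        return result;
    }
}
```

A caller that needs only status and title never receives the large description field, which is exactly the invalid-data reduction the scheme targets.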
On-demand queries for MySQL
According to statistics, Redis serves more than 98.5% of requests, and only about 1.5% fall through to MySQL. With such a small share, on-demand queries against MySQL would yield little benefit.
Moreover, MySQL's query performance is no match for Redis's, so we forgo on-demand queries on MySQL to avoid lowering the Redis hit rate and increasing the pressure on the database.
4) Table routing
In the request marking, fields from different tables can be marked, enabling cross-table queries. In the logic that batch-queries commodity information, commodity IDs must be routed according to cache hits: an ID whose data misses the cache is routed to the pending queue of the corresponding table for retrieval from the database.
As shown in the figure below, status lives in the basic information table, price in the price table, and stock in the inventory table. When querying the status, price, and stock of three commodity IDs, suppose that in Redis product1 misses the status data, product2 misses the price data, and product3 hits entirely. Then product1 is routed to the basic information table's pending queue and product2 to the price table's pending queue; separate threads then query the corresponding tables concurrently, and the results are assembled and returned.
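The routing step described above can be sketched as follows. The table names and the field-to-table mapping are illustrative, and the concurrent querying and result assembly are omitted:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Routes cache-missed product IDs to per-table pending queues.
// Field and table names are hypothetical examples.
public class TableRouter {
    // Which table owns each field, e.g. "status" -> "basic_info".
    private final Map<String, String> fieldToTable;

    public TableRouter(Map<String, String> fieldToTable) {
        this.fieldToTable = fieldToTable;
    }

    // missedFields: productId -> the fields that missed in Redis.
    // Returns: table -> the productIds that must be fetched from that table.
    public Map<String, Set<Long>> route(Map<Long, List<String>> missedFields) {
        Map<String, Set<Long>> queues = new HashMap<>();
        for (Map.Entry<Long, List<String>> e : missedFields.entrySet()) {
            for (String field : e.getValue()) {
                String table = fieldToTable.get(field);
                queues.computeIfAbsent(table, t -> new HashSet<>()).add(e.getKey());
            }
        }
        return queues;
    }
}
```

In the example, product1 (missing status) lands only in the basic information table's queue and product2 (missing price) only in the price table's queue; product3, a full hit, is routed nowhere.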
5) Extensibility
Refer to Spring’s BeanFactoryPostProcessor, which provides some extensibility points to provide scalability without changing the main process. As shown in the following figure, after receiving the request parameters, the validation of the parameters and the parsing of the bit-bit request fields are implemented in the extension class
Extension points
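A minimal sketch of such interface-based extension points, with all class and method names hypothetical: the pipeline (the main process) stays fixed, while validation and bit-field parsing plug in as extensions.

```java
import java.util.ArrayList;
import java.util.List;

// Extension point: each extension works on the shared query context.
interface RequestExtension {
    void apply(QueryContext ctx);
}

// Shared state flowing through the main process.
class QueryContext {
    long fieldMask;                           // bit-marked request fields
    List<String> fields = new ArrayList<>();  // parsed field names
    boolean valid;
}

// Extension 1: validate the parameters (here: reject an empty field mask).
class ParamValidateExtension implements RequestExtension {
    public void apply(QueryContext ctx) {
        ctx.valid = ctx.fieldMask != 0;
    }
}

// Extension 2: parse the bit-marked fields (bit positions follow the earlier example).
class BitFieldParseExtension implements RequestExtension {
    public void apply(QueryContext ctx) {
        if ((ctx.fieldMask & 1L) != 0) ctx.fields.add("status");
        if ((ctx.fieldMask & (1L << 1)) != 0) ctx.fields.add("title");
    }
}

// The main process only iterates registered extensions; adding behavior
// means registering a new extension, not editing this class.
class QueryPipeline {
    private final List<RequestExtension> extensions = new ArrayList<>();
    QueryPipeline register(RequestExtension e) { extensions.add(e); return this; }
    void handle(QueryContext ctx) {
        for (RequestExtension e : extensions) e.apply(ctx);
    }
}
```

This mirrors the BeanFactoryPostProcessor idea at a small scale: the framework calls every registered hook at a fixed point, so behavior is added by registration rather than by modifying the main flow.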
VII. Optimization Results
We examine the effect of the optimization from four angles: GC on the calling side, GC count and GC time on the server side, NIC traffic, and interface latency.
The data below was collected at a TPS of 3500, with the optimized interface returning 14 fields of data versus the 50 fields returned before optimization.
Note that the 14 fields used in this scenario span 3 different tables, and completing one transaction previously required calling 3 original interfaces.
Before optimization, the calling side saw 547 GCs per unit time for this interface, taking 1.74s in total
After optimization, 176 GCs per unit time, taking 561ms
Roughly a 3x improvement
Calling-side GC before optimization
Calling-side GC after optimization
When the caller uses the pre-optimization interface, the server side has 10 YGCs per unit time
When the caller uses the optimized interface, the server side has 3 YGCs per unit time (4 at peaks, 2 at troughs)
Roughly a 3x improvement
Server-side GC count before optimization
Server-side GC count after optimization
With the pre-optimization interface, server-side GC time is about 120ms per unit time
With the optimized interface, server-side GC time is about 40ms per unit time
Roughly a 3x improvement
GC time before optimization
GC time after optimization
With the pre-optimization interface, the traffic through the server machine's NIC is 90.62MB per unit time
With the optimized interface, the traffic through the server machine's NIC is 11.95MB per unit time
Roughly an 8x improvement
NIC traffic before optimization
NIC traffic after optimization
Before optimization, the three original interfaces averaged 1.17ms, 1.52ms, and 1.23ms per call
After optimization, the single interface averages 1.30ms, replacing all three calls
Latency before optimization
Latency after optimization
VIII. Summary
This article has described how we optimized the commodity system from the perspective of the data itself, briefly covering everything from the analysis approach to the concrete implementation.
References
https://codeahoy.com/2017/08/11/caching-strategies-and-how-to-choose-the-right-one/
http://hessian.caucho.com/doc/hessian-serialization.html/
https://graphql.cn