About the author
Kenny is a senior front-end engineer at Ctrip. He joined Ctrip in 2021 and works on Mini Program and H5 development.
1. Background
As the project continues to iterate and grow in scale, the shortcomings of the Taro3-based runtime have become increasingly prominent, especially on complex list pages, where they significantly affect the user experience. This article focuses on performance optimization for complex lists: we establish measurement metrics, identify performance bottlenecks, and share techniques validated by experiment, including preloading, caching, reducing component nesting, and optimizing data structures, in the hope of giving you some ideas.
2. Current Situation and Analysis
Taking a multi-function hotel list page as an example (below), we set measurement criteria, using the number of setData calls and the setData response time as indicators. The measurements are as follows:
For historical reasons, the code of this page was converted from native WeChat code to Taro 1 and later migrated to Taro 3, so the project carries problems that native Mini Program development might avoid. Based on repeated measurements of the metrics above, as well as the perceived experience, the page has the following problems:
2.1 The first load of the list page is slow, with a long white-screen time
The API requests made by the list page take too long;
During initialization, setData is called too many times with too much data;
The page has too many nodes, resulting in long rendering time.
2.2 Filter item updates freeze and the pull-down animation stutters
The filter items contain too many nodes, and updates carry a large setData payload;
Updating a single filter item component causes the whole page to update along with it.
2.3 Infinite-list updates stutter, and swiping too fast shows a white screen
The next page's request is issued too late;
setData carries a large payload and responds slowly;
When scrolling too fast, there is no transition between the white screen and completed rendering, which hurts the experience.
3. Optimization Attempts
3.1 Navigation preload API
By observing the Mini Program's network requests, we can see that two of the list page's requests take a long time.
The Taro3 upgrade introduced an official preload mechanism. In a Mini Program there is a delay between calling a routing API such as Taro.navigateTo and the target page's onLoad firing (about 300ms, longer if a new subpackage has to be downloaded first), so some network requests can be initiated in advance, together with the navigation. We therefore use Taro.preload to fire the complex list's requests before jumping:
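As a sketch of the flow (the helper `fetchHotelList`, the preload key, and the inline `Taro` stand-in are illustrative; in a real project `Taro` comes from `@tarojs/taro`):

```javascript
// Minimal stand-in for the parts of the Taro API used here, so the snippet is self-contained.
const preloadData = {};
const Taro = {
  preload(key, value) { preloadData[key] = value; },
  getCurrentInstance() { return { preloadData }; },
};

// Hypothetical request helper standing in for the real list request.
function fetchHotelList(params) {
  return Promise.resolve({ hotels: ['Hotel A', 'Hotel B'], params });
}

// Before navigating: fire the request and stash the in-flight promise, then jump.
function goToHotelList(params) {
  Taro.preload('hotelListPromise', fetchHotelList(params)); // not awaited
  // Taro.navigateTo({ url: '/pages/hotelList/index' });
}

// In the list page's onLoad: consume the stashed promise instead of issuing a new request.
async function onLoad() {
  const promise = Taro.getCurrentInstance().preloadData.hotelListPromise;
  return promise ? promise : fetchHotelList({});
}
```

This way the request's network time overlaps with the roughly 300ms navigation delay instead of starting only after onLoad.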
After repeated tests with the same measurement method, preload delivers the hotel list data 300 to 400ms earlier.
On the left is the old list without preload; on the right is the preloaded list, which is visibly faster.
In actual use, however, we found that preload has some drawbacks: on the receiving page, if the interfaces are complex, it intrudes into the business-flow code to a certain extent. In essence, preload just starts the network request early, so the same effect can be achieved by adding a caching policy to the network request layer, which greatly reduces the integration cost.
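One way to sketch such a caching layer (the key names and TTL are illustrative assumptions, not the project's real implementation):

```javascript
// A small promise cache keyed by request name; returning the cached promise means a second
// caller shares the first caller's in-flight request instead of re-issuing it.
function createRequestCache(ttlMs) {
  const cache = new Map(); // key -> { promise, expires }
  return function cached(key, request) {
    const hit = cache.get(key);
    const now = Date.now();
    if (hit && hit.expires > now) return hit.promise;
    const promise = request();
    cache.set(key, { promise, expires: now + ttlMs });
    return promise;
  };
}

// The pre-navigation code and the page's onLoad both call `cached` with the same key,
// so the page gets the preload effect without any preload-specific business code.
const cached = createRequestCache(5000);
let requests = 0;
const fetchList = () => { requests += 1; return Promise.resolve(['A', 'B']); };
cached('hotel-list', fetchList);
cached('hotel-list', fetchList); // shares the first request; requests stays 1
```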
3.2 Rational use of setData
setData is the most frequently used API in Mini Program development and the one most likely to cause performance problems. The setData process can be roughly divided into several stages:
Traversal and update of the logical layer virtual DOM tree, triggering component lifecycles and observers, etc.;
Transfer data from the logical layer to the view layer;
Updates to the view-layer virtual DOM tree, updates to real DOM elements, and triggers page rendering updates.
The time spent transferring data is positively correlated with the payload size. When the old list page loads for the first time, it requests 4 interfaces and calls setData 6 times within a short period, twice with large payloads. The optimization we tried was to keep the two large payloads separate and merge the remaining calls, which only carry scattered states and data, into a single setData.
This step saves about 200ms on average, a modest effect: the number of page nodes has not changed, and most of setData's time is spent on rendering.
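The merge can be sketched as a small batcher that coalesces scattered payloads into one setData call (the `setData` callback and field names are illustrative):

```javascript
// Collect several small state patches and send them to setData in a single call.
function createBatcher(setData) {
  let pending = null;
  return {
    update(patch) {
      pending = Object.assign(pending || {}, patch);
    },
    flush() {
      if (pending) { setData(pending); pending = null; }
    },
  };
}

const calls = [];
const batcher = createBatcher(payload => calls.push(payload));
batcher.update({ loading: false });
batcher.update({ cityName: 'Shanghai' });
batcher.update({ filterVisible: true });
batcher.flush(); // one setData carrying all three fields
```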
3.3 Optimize the number of nodes on the page
According to the official WeChat documentation, an overly large node tree increases memory usage and lengthens style recalculation. It recommends fewer than 1000 nodes per page, a node-tree depth under 30 layers, and no more than 60 child nodes per node.
Analyzing the page in WeChat developer tools shows a large number of nodes concentrated in two modules: the filter items and the long list. Because the filter module is feature-rich and structurally complex, we adopted selective rendering: while the user is browsing the list, the filter items generate no concrete nodes; only when the user taps to expand a filter are its nodes rendered, trading a small render cost there for a better list experience. For the overall layout, we also consciously avoid deeply nested markup, for example avoiding RichText in some places in favor of images.
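The selective-rendering idea, sketched in Taro JSX (the component and prop names are illustrative):

```jsx
import { View } from '@tarojs/components';

// While the user browses the list, the heavy filter panel generates no nodes at all;
// it is only mounted after the user taps to expand it.
function FilterBar({ expanded, onToggle }) {
  return (
    <View>
      <View onClick={onToggle}>Filters</View>
      {expanded && <HeavyFilterPanel />}
    </View>
  );
}
```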
3.4 Optimizing the filter items
3.4.1 Change the animation method
While refactoring the filter items, we found that on some devices the Mini Program's animation effect was not ideal. For example, opening a filter tab should play a pull-down effect, but the early implementation had two problems:
The animation flashes once before appearing;
When the filter panel has too many nodes, the tap response is too slow and the experience is poor.
The old filter item used a keyframes fadeIn animation applied to the outermost layer, but it flashed no matter which frame the animation started from. Analysis showed the stutter came from performing the animation with keyframes:
So we tried a different implementation: transform driven by transition:
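The two approaches side by side (class names and timings are illustrative):

```css
/* Before: a keyframes fadeIn on the outermost layer, which could flash on some devices */
@keyframes fadeIn {
  from { opacity: 0; transform: translateY(-100%); }
  to   { opacity: 1; transform: translateY(0); }
}
.filter-panel--keyframes {
  animation: fadeIn 200ms ease-out;
}

/* After: toggle a class and let a transition interpolate the transform */
.filter-panel {
  transform: translateY(-100%);
  transition: transform 200ms ease-out;
}
.filter-panel--open {
  transform: translateY(0);
}
```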
3.4.2 Maintain a concise state
When operating on a filter item, each operation had to loop through the filter data structure by the item's unique id to find the corresponding item, change its state, and then set the whole structure back into State. The official documentation notes that setState should avoid handling payloads so large that they affect page update performance.
Our approach to this problem:
Flatten the complex object in advance into a flat filter-item data structure;
Without altering the original data, maintain a dynamic selection list based on the flattened structure:
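A minimal sketch of both steps (field names such as `id` and `children` are illustrative, not the project's real schema):

```javascript
// Step 1: flatten the nested filter tree into an id -> item lookup map,
// so finding an item no longer requires traversing the whole structure.
function flatten(items, map = {}) {
  for (const item of items) {
    map[item.id] = item;
    if (item.children) flatten(item.children, map);
  }
  return map;
}

// Step 2: selection state becomes a tiny object keyed by id;
// toggling adds or removes a property and never touches the big tree.
function toggle(selected, id) {
  const next = { ...selected };
  if (next[id]) delete next[id]; else next[id] = true;
  return next;
}

const tree = [
  { id: 'price', children: [{ id: 'price-low' }, { id: 'price-high' }] },
  { id: 'star', children: [{ id: 'star-5' }] },
];
const byId = flatten(tree);
let selected = {};
selected = toggle(selected, 'price-low'); // { 'price-low': true }
selected = toggle(selected, 'price-low'); // {}
```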
The above is a simple implementation. For each operation we now only need to maintain a very small object, adding or removing its properties; performance improved slightly and the code is simpler and cleaner. There are many places in business code where a data-structure transformation like this improves efficiency.
Comparing the averaged measurements for the filter items, the time drops by 200ms to 300ms:
3.5 Optimizing the long list
The hotel list page introduced a virtual list early on, rendering only a fixed number of hotels from the long list. The core idea is to render only the data visible on screen: listen for scroll events, recompute the data that needs rendering, and leave an empty placeholder element for the data that does not.
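For a fixed item height, the recomputation on scroll can be sketched as follows (the parameter names and overscan margin are illustrative):

```javascript
// Compute which slice of the list should be rendered for the current scroll position;
// everything outside [start, end) is represented only by empty placeholder height.
function getWindow(scrollTop, viewportHeight, itemHeight, total, overscan = 2) {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan);
  return {
    start,
    end,
    padTop: start * itemHeight,            // placeholder height above the window
    padBottom: (total - end) * itemHeight, // placeholder height below the window
  };
}
```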
Loading the next page stutters slightly:
Measurements show that loading the next page of the list takes about 1900ms on average:
Our solution is to request the next page's data in advance and keep it in a memory variable. When scroll-loading triggers, the data is taken directly from the memory variable and merged into the page data with setData.
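A sketch of the pattern with synchronous stand-ins for clarity (the real `fetchPage` is an async network request; `setData` stands in for the page update):

```javascript
// Keep the next page in a memory buffer so scroll-loading never waits on the network
// in the common case; refill the buffer immediately after each load.
function createPager(fetchPage, setData) {
  let buffer = null;
  let pageNo = 1;
  return {
    prefetch() {
      buffer = fetchPage(pageNo + 1);
    },
    loadMore(current) {
      const page = buffer !== null ? buffer : fetchPage(pageNo + 1); // fallback if prefetch missed
      buffer = null;
      pageNo += 1;
      setData({ list: current.concat(page) });
      this.prefetch(); // start fetching the page after this one right away
    },
  };
}
```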
Scrolling too fast produces a white screen (the faster the scroll, the longer the white screen lasts; left image below):
The virtual list works by using empty Views as placeholders. When the user scrolls back quickly and the nodes are complex to render, especially hotel cards with images, rendering slows down and a white screen appears. We tried three schemes:
1) Use a dynamic skeleton screen instead of the plain View placeholder (below right):
2) Use CustomWrapper. To improve performance, the official docs recommend CustomWrapper, which isolates the wrapped component from the page: when the component renders it does not update the whole page, turning page.setData into component.setData.
Custom components are implemented on top of Shadow DOM, which encapsulates the component's DOM and CSS and keeps the component's internals separate from the main page's DOM. The #shadow-root in the image is the root node, called the shadow root, and is rendered separately from the main document. #shadow-root can be nested, forming a Shadow Tree.
Wrapped components are isolated so that their internal data updates do not affect the whole page. The effect is easy to see on a low-performance client: tapping at the same moment, the right-hand popup appears 200ms to 300ms faster on average (measured on the same model in the same environment), and the lower-end the device, the more obvious the gain.
(The right side uses CustomWrapper)
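Usage is a one-line wrap in Taro JSX (component names other than CustomWrapper are illustrative):

```jsx
import { CustomWrapper, View } from '@tarojs/components';

// Each heavy list item is wrapped in CustomWrapper, so its updates go through
// component.setData instead of page.setData and stay isolated from the page.
function HotelList({ hotels }) {
  return (
    <View>
      {hotels.map(hotel => (
        <CustomWrapper key={hotel.id}>
          <HotelCard hotel={hotel} />
        </CustomWrapper>
      ))}
    </View>
  );
}
```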
3) Use native Mini Program components
We implemented the list item as a native Mini Program component. A native component bypasses the Taro3 runtime: when the user operates on the page, a Taro3 component must diff the data before and after the change, produce the node data required by the new virtual DOM, and then call Mini Program APIs to manipulate the nodes. A native component skips this series of operations and lets the underlying Mini Program update the data directly, which saves time. The implemented effect looks like this:
Native performance improves considerably, shortening the average list update by about 1s, but using native components also has drawbacks, mainly in two areas:
All styles in the component must be written according to Mini Program specifications and are isolated from Taro's styles;
Taro APIs such as createSelectorQuery cannot be used inside native components.
Comparing the three schemes, the performance gains increase step by step. Since the whole point of using Taro is cross-platform support, which going native defeats, we are exploring whether a compile-time plugin can generate the corresponding native Mini Program component code, solving this problem while keeping the best result.
When a complex page has many subcomponents, rendering the parent causes all of its children to re-render as well. React.memo can apply a shallow comparison to prevent unnecessary renders:
React.memo is a higher-order component. It is very similar to React.PureComponent, but it works with function components rather than class components.
If your function component renders the same result given the same props, you can improve performance by wrapping it in React.memo to memoize the result: React will skip rendering the component and reuse the most recent render result.
By default it only does a shallow comparison of complex objects; to control the comparison, pass a custom comparison function as the second argument.
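What the default comparison does can be illustrated with a simplified stand-in (this is not React's source, just the idea):

```javascript
// Shallow comparison: same keys, and each value identical by reference (Object.is).
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every(key => Object.is(prev[key], next[key]));
}

shallowEqual({ id: 1, name: 'A' }, { id: 1, name: 'A' }); // true: render skipped
shallowEqual({ hotel: { id: 1 } }, { hotel: { id: 1 } }); // false: new object reference forces a render
```

In a component this becomes `export default React.memo(HotelCard)`, optionally with a custom comparator as the second argument.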
We have spent a long time optimizing the performance of this complex list and have tried every likely optimization point: preloading for the list page, changes to the filter items' data structures and animation implementation, and experience optimization plus native components for the long list. The page's update and rendering efficiency has improved, and we continue to watch it closely and keep exploring.
The following is the final comparison of the effect (the optimized version is on the right):
Taro cross-end solution of Ctrip Mini Program ecosystem
Ctrip’s front-end “openness” construction and exploration of the activity construction platform
Ctrip’s GraphQL-based front-end BFF service development practices
How Ctrip WeChat Mini Program Conducts Size Governance
“Ctrip Technology” public account