Today, a former colleague reached out with a performance-optimization question, and I gave him the usual answers: lazy loading components, lazy loading routes, enabling gzip, serving common third-party packages from a CDN, the basic Yahoo 35 performance rules, and so on.

He didn't seem satisfied with that and wanted me to go deeper.

So I remembered the vue-cli@2 project at my previous company, which I had only just finished optimizing.

Since I could no longer find that project's code, I quickly scaffolded a project with vue-cli@2 and wrote 10+ pages based on mint-ui to use as an example, then did a pixel-level breakdown and demonstration of both the packaging stage and the go-live optimization stage.

Fasten your seat belt; we're departing.

The following is the basic table of contents for this article:

This project is based on webpack@3.6.0[1], so pay attention to versions when installing the various packages below. I have thoughtfully added the corresponding version to each install command, so you can use them with confidence.

The role of this package: it lets us accurately measure the time spent on each step of the packaging process, so we can then optimize the parts that take the longest.

The following are the configurations:
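The exact configuration isn't reproduced here, so as a minimal sketch: a common choice for per-loader and per-plugin timing in webpack 3 projects is speed-measure-webpack-plugin, and the following assumes that's the tool in use.

```js
// build/webpack.prod.conf.js (sketch, assuming speed-measure-webpack-plugin)
const SpeedMeasurePlugin = require('speed-measure-webpack-plugin')

const smp = new SpeedMeasurePlugin()

const webpackConfig = {
  // ...the existing production config (entry, output, module, plugins)...
}

// Wrapping the config makes the build print how long each loader and plugin took.
module.exports = smp.wrap(webpackConfig)
```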

After running npm run build, we can see the following result.

Enable build caching for sass-loader, postcss-loader, and vue-loader to shorten the packaging time.

Here’s the code for caching vue-loader and url-loader:
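A rough sketch of one way to do this in a vue-cli@2 / webpack 3 setup, assuming cache-loader is placed in front of the two loaders (the cache directory paths are illustrative):

```js
// build/webpack.base.conf.js (sketch) — put cache-loader in front of vue-loader and
// url-loader so unchanged modules are served from the on-disk cache on the next build.
const path = require('path')
const utils = require('./utils')
const vueLoaderConfig = require('./vue-loader.conf')

module.exports = {
  // ...rest of the base config...
  module: {
    rules: [
      {
        test: /\.vue$/,
        use: [
          {
            loader: 'cache-loader',
            options: { cacheDirectory: path.resolve(__dirname, '../node_modules/.cache/vue-loader') }
          },
          { loader: 'vue-loader', options: vueLoaderConfig }
        ]
      },
      {
        test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
        use: [
          {
            loader: 'cache-loader',
            options: { cacheDirectory: path.resolve(__dirname, '../node_modules/.cache/url-loader') }
          },
          {
            loader: 'url-loader',
            options: { limit: 10000, name: utils.assetsPath('img/[name].[hash:7].[ext]') }
          }
        ]
      }
      // ...other rules unchanged...
    ]
  }
}
```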

The following is the code for adding caching to cssLoader and postcssLoader:
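Again as a sketch, the same idea applied to the cssLoaders helper in build/utils.js, assuming cache-loader (the ExtractTextPlugin branch of the original helper is omitted for brevity):

```js
// build/utils.js (sketch) — prepend cache-loader to the css-loader / postcss-loader chain.
const path = require('path')

exports.cssLoaders = function (options) {
  options = options || {}

  const cacheLoader = {
    loader: 'cache-loader',
    options: { cacheDirectory: path.resolve(__dirname, '../node_modules/.cache/css') }
  }

  const cssLoader = {
    loader: 'css-loader',
    options: { sourceMap: options.sourceMap }
  }

  const postcssLoader = {
    loader: 'postcss-loader',
    options: { sourceMap: options.sourceMap }
  }

  function generateLoaders (loader, loaderOptions) {
    const loaders = options.usePostCSS
      ? [cacheLoader, cssLoader, postcssLoader]
      : [cacheLoader, cssLoader]
    if (loader) {
      loaders.push({
        loader: loader + '-loader',
        options: Object.assign({}, loaderOptions, { sourceMap: options.sourceMap })
      })
    }
    // The ExtractTextPlugin handling from the original template would go here.
    return ['vue-style-loader'].concat(loaders)
  }

  return {
    css: generateLoaders(),
    postcss: generateLoaders(),
    sass: generateLoaders('sass', { indentedSyntax: true }),
    scss: generateLoaders('sass')
  }
}
```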

After running npm run build, we can see the following result:

Comparing the results: plugin time is 0.8 seconds shorter, loader time is 2.94 seconds shorter, and the overall packaging time is 2.7 seconds shorter. Since this project is relatively small, the savings are not particularly dramatic.

Multithreaded packaging can also be enabled at build time. Usage is described here: www.npmjs.com/package/hap… [2]
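For reference, a minimal sketch of how happypack is typically wired into a vue-cli@2 base config (the id, thread pool size, and the choice of babel-loader as the target are illustrative):

```js
// build/webpack.base.conf.js (sketch) — hand babel-loader work to a happypack thread pool.
const os = require('os')
const path = require('path')
const HappyPack = require('happypack')

const happyThreadPool = HappyPack.ThreadPool({ size: os.cpus().length })

module.exports = {
  // ...rest of the base config...
  module: {
    rules: [
      {
        test: /\.js$/,
        use: 'happypack/loader?id=babel', // delegate .js files to the happypack instance below
        include: [path.resolve(__dirname, '../src')]
      }
      // ...other rules unchanged...
    ]
  },
  plugins: [
    new HappyPack({
      id: 'babel',
      threadPool: happyThreadPool,
      loaders: ['babel-loader?cacheDirectory']
    })
  ]
}
```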

Since sass-loader accounted for most of the packaging time above and caching is already enabled, there is little time left to squeeze out. I still enabled happypack for multithreaded packaging anyway; probably because the project is too small, the build was not only no faster but actually 0.2 seconds slower. If it isn't needed, don't draw legs on the snake. Friends who are curious about this feature can try it out themselves.

Slightly larger projects can turn on this multithreaded packaging mode; it depends on the project.

As you can see in the figure above, UglifyJsPlugin takes too long, so I figured I could use the webpack-parallel-uglify-plugin package to enable multi-core parallel compression and improve compression efficiency.

Just do it and start configuring.

Replace the places previously configured with UglifyJsPlugin with the ParallelUglifyPlugin configuration, then run npm run build to compare the packaging speed before and after.
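A sketch of what the replacement can look like; the compression options here are illustrative rather than the article's exact values:

```js
// build/webpack.prod.conf.js (sketch) — swap UglifyJsPlugin for webpack-parallel-uglify-plugin.
const ParallelUglifyPlugin = require('webpack-parallel-uglify-plugin')

module.exports = {
  // ...rest of the production config...
  plugins: [
    // replaces the original `new UglifyJsPlugin({ ... })` entry
    new ParallelUglifyPlugin({
      cacheDir: 'node_modules/.cache/parallel-uglify/', // reuse compressed output across builds
      uglifyJS: {
        output: {
          beautify: false, // compact output
          comments: false  // drop comments
        },
        compress: {
          drop_console: true, // strip console.* calls
          collapse_vars: true,
          reduce_vars: true
        }
      }
    })
  ]
}
```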

The packaging time is reduced by 0.9 seconds, while the plugin's own compression time does not seem to change much (it may just be my machine's performance; on last year's admin-side project this optimization saved several seconds).

For this small project, the entire packaging process dropped from 9.95 seconds to 6.52 seconds, a 35% improvement in packaging efficiency; the admin project I optimized last year improved by about 43%, so the gap is not large.

Start by installing a package analysis tool to view the size of the generated package.

Method 1 feels odd during development: every time you start the dev server the analysis page opens automatically, leaving you with a question mark over your head. Method 2 is friendlier: it runs the analysis and generates the report page only when you actually want it.

Make the following configurations:
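A sketch of how "method 2" (analyze only on demand) is usually done in a vue-cli@2 template, gating webpack-bundle-analyzer behind a flag:

```js
// build/webpack.prod.conf.js (sketch) — only attach the analyzer when requested.
if (config.build.bundleAnalyzerReport) {
  const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin
  webpackConfig.plugins.push(new BundleAnalyzerPlugin())
}

// config/index.js — the flag is driven by `npm run build --report`
// build: {
//   ...
//   bundleAnalyzerReport: process.env.npm_config_report
// }
```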

View how much space each package occupies:

Let’s start with optimization.

Going by Yahoo's 35 performance rules, check whether your project still has anything to optimize. The link is here: www.jianshu.com/p/4cbcd202a… [3]

Here are some of the commonly used ones:

Because only a few common components are used, on-demand imports are adopted instead of serving the entire package from a CDN.
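As a rough idea of what on-demand importing looks like for mint-ui, using babel-plugin-component (mint-ui's documented mechanism for this); the components named here are just examples:

```js
// .babelrc excerpt (assumed): babel-plugin-component rewrites the named imports below
// into per-component requires, so only the used components and their styles are bundled:
//   "plugins": [["component", { "libraryName": "mint-ui", "style": true }]]

// src/main.js — register only the mint-ui pieces the project actually uses
import Vue from 'vue'
import { Button, Cell, Toast } from 'mint-ui'

Vue.component(Button.name, Button)
Vue.component(Cell.name, Cell)
Vue.prototype.$toast = Toast // Toast is a function-style API, not a component
```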

Let's see how much this improves things:

Optimization effect: 0.17 MB smaller; not very noticeable.

This is what the bundle looks like at this point:

First, notice that 0.js, 1.js, and 2.js all pull in moment.js; can we merge those references? Also, moment.js takes up too much space; can we bundle only the parts of it that are actually used?

The first optimization is to have moment.js include only the Chinese locale, configured as follows:
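A minimal sketch of the standard webpack 3 way to do this, ContextReplacementPlugin restricted to the zh-cn locale (IgnorePlugin plus a manual locale import also works):

```js
// build/webpack.prod.conf.js (sketch) — bundle only the zh-cn locale of moment.js
// instead of every locale file it ships with.
const webpack = require('webpack')

module.exports = {
  // ...rest of the production config...
  plugins: [
    new webpack.ContextReplacementPlugin(/moment[\/\\]locale$/, /zh-cn/)
  ]
}
```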

At the same time, some components import moment.js without actually using it; those imports still get bundled, so the dead code has to be removed as well.

This is what it looks like after removing the unused code.

Only 1.js and 3.js still reference the entire package (these are asynchronously loaded components).

At this point the size of the entire bundle is 1.34 MB.

Extract moment.js from the bundle and introduce it globally instead, so it is not duplicated in each chunk. The following configuration can be used:
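A sketch of the usual externals approach; the CDN URL in the comment is an example, not necessarily the one used here:

```js
// build/webpack.base.conf.js (sketch) — keep moment out of the bundle and load it once globally.
module.exports = {
  // ...rest of the base config...
  externals: {
    // `import moment from 'moment'` now resolves to the global `moment` variable
    // provided by a <script> tag in index.html, e.g.
    // <script src="https://unpkg.com/moment/min/moment.min.js"></script>
    moment: 'moment'
  }
}
```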

The result of the optimization is as follows:

On reflection, moment.js is only referenced inside asynchronously loaded components, and nothing on the first screen pulls it in, so whether to take this step depends on your actual loading performance.

Later, if too many shared packages end up in app.js, you can extract the common parts and serve them from a CDN to reduce the size of app.js.

The bundle size is now 1.17 MB.

Images are losslessly compressed on the tinyjpg.com[4] website and then uploaded to Tencent Cloud Object Storage. Code files such as CSS and JS are compressed and then uploaded to the CDN.

After going live, the final package size is 123.3 KB.

From the analysis tool's point of view: the bundle went from an initial 3.14 MB down to 1.17 MB, a reduction of 1.97 MB, or 62.7%.

After the four steps above, the final size of the entire package is 123.3 KB; it opens fast, very nice.

After the above operations, we can see the results.

Everyone is welcome to give better solutions and suggestions for improvement in the comment area.

About this article

https://juejin.cn/post/7136453274948861983

The End


Finally, don’t forget to give it a thumbs up!

Happy 2022! May you get rich, get gorgeous, and get fit!