In operational practice, caching is introduced to improve access performance; it is a technique that trades space for time. Previously, to get data you went directly to the data source. With a cache in place, a request first checks the cache: if the data is there, it is returned directly; if not, the request goes to the back-end data source, and the retrieved data is put into the cache so that the next request can be served from it.
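The read path described above is commonly called the cache-aside pattern. A minimal sketch in Python, where slow_data_source is a hypothetical stand-in for an expensive database query and a plain dict stands in for the cache:

```python
# Cache-aside sketch: check the cache first, fall back to the data
# source on a miss, then populate the cache for the next request.

cache = {}

def slow_data_source(key):
    # Hypothetical stand-in for an expensive database lookup.
    return f"value-for-{key}"

def get(key):
    if key in cache:               # cache hit: return directly
        return cache[key]
    value = slow_data_source(key)  # cache miss: go to the back end
    cache[key] = value             # populate the cache for next time
    return value

print(get("user:42"))  # first call misses and fills the cache
print(get("user:42"))  # second call is served from the cache
```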
The types of cache are mainly divided into front-end cache and back-end cache. The typical front-end application is static caching, of which the well-known CDN is a representative example. Back-end caching is divided into dynamic request caching (such as ASP, PHP, JSP) and caching of database hotspot data.
Static caching generally refers to caching technology in web applications that serves static files and resources such as HTML, JS, CSS, images, audio, and video from disk or memory, improving resource response speed and reducing server load and resource overhead.
The main implementation technologies for static caching are Squid, Varnish, and Nginx. Squid is a proxy caching server that supports the FTP, Gopher, HTTPS, and HTTP protocols. Varnish is also a proxy caching server, but it only supports HTTP. Compared with Squid, Varnish has fewer features, but that leanness is also its biggest performance advantage. Nginx’s static caching requires a third-party module and on its own only meets basic caching requirements; however, the combination of Nginx + Memcached gives it static caching capabilities comparable to Squid and Varnish.
Browser caching, also known as client-side caching, is the most common and straightforward form of static caching. Setting expires in Nginx does not cache static content inside Nginx; it sets the caching time for the client browser.
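A minimal illustration of this, assuming a typical set of static file extensions: the expires directive only adds Expires/Cache-Control headers to the response so the browser caches the files, and nothing is cached inside Nginx itself.

```nginx
# Hypothetical example: let client browsers cache static assets for 30 days.
location ~* \.(jpg|jpeg|png|gif|css|js)$ {
    expires 30d;
}
```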
Server-side static caching technology falls into two main categories: disk caching and memory caching. Disk caching serves static resource files from cache files on disk. Memory caching keeps static files in server-side memory; in this mode, a cache hit returns data from memory, which performs much better than returning cached data from disk.
In small applications, Nginx serves as a front-end reverse proxy: the main business domain name resolves to the proxy server, and static resources are also stored on it. At layer 7, Nginx inspects each request (for example with rewrite or location rules); static resource requests are served directly from local files, with static caching configured as needed, while dynamic requests are forwarded to the back-end server for processing. This is the most mature conventional solution for separating dynamic and static content.
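The small-application setup above might look like the following sketch, in which the upstream address, domain name, and static root path are all assumptions for illustration:

```nginx
# Hypothetical dynamic/static separation on one front-end proxy.
upstream backend {
    server 10.0.0.10:8080;   # assumed back-end application server
}

server {
    listen 80;
    server_name www.example.com;

    # Static requests: serve from local disk and let browsers cache them.
    location ~* \.(html|css|js|png|jpg|gif)$ {
        root /data/static;
        expires 7d;
    }

    # Everything else is treated as a dynamic request and proxied.
    location / {
        proxy_pass http://backend;
    }
}
```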
In medium-sized applications, static resources are typically deployed together with the application code. To accelerate static content, the front end only needs to add a CDN; the CDN fetches from the origin on a miss and caches static resources for client requests.
In large applications, the domain name of the primary service is bound directly to the load-balanced IP of the service servers. Static resources are deployed on separate web servers, bound to their own second-level domain name for request triage. When a user requests a page (containing both static and dynamic requests), the static requests are implicitly redirected to the static servers (the URL after the jump is not displayed), so the redirect is not perceptible to the client. Some code modification is required here.
Dynamic caching is used to cache dynamic data in the business: temporary data produced by the service, hot data frequently queried from the database, user session data, dynamic page data, and so on. Even frequently accessed static page data can be cached through dynamic caching technology (Redis, Memcached).
The dynamic cache on the back end mainly refers to caching dynamic data, in the following two forms:
- Caching of dynamic pages, such as .do, .jsp, .asp/.aspx, .php, and .js (Node.js) pages.
- Caching of hot data that is frequently queried from the database.
Technically, a CDN is achieved through intelligent DNS resolution plus static caching. Functionally, it distributes the content provider’s static resource data as close to the user as possible, interacting with the user in the fastest way and saving the time of repeatedly transmitting the same content.
Dynamic caching is generally implemented with Memcached or Redis, both of which are memory-based key/value stores.
- Redis uses a single-process, single-threaded model and cannot take full advantage of multi-core CPUs. Memcached uses a single-process, multi-threaded model that can.
- Redis not only supports simple k/v types of data, but also provides storage for data structures such as LIST, SET, HASH, and so on.
- Memcached does not support persistence, while Redis supports RDB/AOF for saving in-memory data to disk.
- Memcached must be distributed at the application level through a hashing algorithm, while Redis ships with master-slave replication and a sharded cluster strategy.
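The last point, client-side sharding for Memcached, can be sketched as follows. The server addresses are hypothetical; real clients often use consistent hashing instead of a plain modulus so that adding a node remaps only a fraction of the keys.

```python
# Sketch of client-side sharding for Memcached: the client hashes each
# key and picks a server, since Memcached has no built-in cluster strategy.
import hashlib

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def pick_server(key):
    # Stable hash (md5) so every client maps a given key to the same server.
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_server("user:42"))  # always the same server for this key
```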
Regarding database caching:
1. It improves performance: the data cached for the database is kept in memory, so access returns far faster than disk I/O reads and writes.
2. Of the database’s create, delete, query, and update operations, most application scenarios of database caching target the “query” case.
3. In the vast majority of applications, some temporary inconsistency between the cache and the database is tolerated.
4. The cache relieves a great deal of pressure on the database while providing the application with high access speed.
In a distributed architecture, introducing load balancing means file storage must be centrally managed, and session persistence issues arise. Among session persistence techniques, centrally storing and managing sessions in a dynamic cache (Redis/Memcached) is a mature architecture.
A session is a technique a program uses to maintain state across a client’s requests. PHP stores session data in local disk files by default, while Tomcat keeps session data in JVM-allocated memory by default.
Session persistence based on source IP means the load balancer identifies the source IP address of client requests and forwards requests from the same IP address to the same back-end server. Because a client’s requests are bound to one back-end server rather than being polled across servers, the session problem does not occur.
To address the imbalance in back-end traffic caused by source-IP session persistence, cookie-based session persistence emerged and solves the issue well: the load balancer forwards requests based on the cookie the client carries, so many users sharing one IP address is no longer a concern. Layer-7 SLB session persistence is cookie-based.
Layer-7 SLB cookie processing mainly falls into the following two types.
1) Cookie insertion: on the client’s first visit, the load balancer implants a cookie in the response; when the client carries this cookie on subsequent visits, the load balancer directs the request to the back-end server recorded earlier.
2) Cookie rewriting: the cookie is inserted into the HTTP/HTTPS response by the back-end server as needed and rewritten by the load balancer; the cookie’s expiration time and lifetime are maintained on the back-end server.
In addition to local memory, sessions can also be stored in local disk files. With either approach, server downtime or loss of in-memory data leads to the loss of sessions; by contrast, dynamic caching technologies such as Redis/Memcached allow sessions to be centrally managed and shared.
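Centralized session management can be sketched as follows. To keep the example self-contained, an in-memory dict stands in for the shared Redis/Memcached instance; in production the same get/set calls would go to a cache reachable by every application server, so any server behind the load balancer can resolve the session.

```python
# Sketch of centralized session storage (dict stands in for Redis/Memcached).
import uuid

session_store = {}  # stand-in for a shared cache instance

def create_session(user_data):
    session_id = uuid.uuid4().hex       # value handed to the client as a cookie
    session_store[session_id] = user_data
    return session_id

def load_session(session_id):
    # Any app server can look the session up in the shared store, so
    # requests no longer need to stick to one back end.
    return session_store.get(session_id)

sid = create_session({"user": "alice"})
print(load_session(sid))  # {'user': 'alice'}
```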
Nginx’s dynamic page caching is mainly implemented through proxy_cache in the built-in proxy module used for HTTP reverse proxying (load balancing). It can cache essentially all dynamic pages, and of course static pages as well. With proxy_cache, cache files are stored on disk, so in scenarios with high concurrent access, disk I/O can become a performance bottleneck.
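A minimal proxy_cache setup might look like the following; the cache path, zone name, upstream address, and validity times are assumptions for illustration.

```nginx
# Hypothetical proxy_cache configuration: back-end responses are cached
# in disk files under /data/nginx/cache.
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=dyn_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache dyn_cache;
        proxy_cache_valid 200 302 10m;   # cache successful responses for 10 min
        proxy_cache_valid 404 1m;
        proxy_pass http://127.0.0.1:8080;
    }
}
```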
Nginx’s built-in ngx_http_memcached_module enables reading from Memcached, but storing (putting) data still has to be done in application code. An efficient and transparent caching mechanism can be built with the third-party memc-nginx and srcache-nginx modules.
PHP’s dynamic page caching can be implemented through FastCGI caching itself, without configuring a cache through the Nginx reverse proxy. FastCGI caching also stores cache files on disk, so in scenarios with high concurrent access, disk I/O can likewise become a performance bottleneck.