Hello everyone, I’m xy👨🏻‍💻. It’s the “Golden September, Silver October” hiring season again, and recently I’ve been going through various interviews. I have summarized and organized front-end interview questions across many topics myself. Over the National Day holiday I’m sharing these topics with everyone, and I hope friends who are interviewing can land satisfactory offers💪
PUT and POST are two HTTP request methods, with the following differences:
A PUT request sends data to the server to modify the content of an existing resource, without creating new resources; no matter how many times the same PUT is performed, the result is the same (it is idempotent). It can be understood as “updating data”.
A POST request sends data to the server to create new content; repeating the same POST creates additional resources, so it is not idempotent. It can be understood as “creating data”.
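The idempotency contrast above can be sketched with a toy in-memory store (all names here are illustrative, not a real server API):

```javascript
// Toy in-memory "server" illustrating idempotency (illustrative names only).
const store = new Map();
let nextId = 1;

// PUT: the client names the resource; repeating the call leaves the
// same final state ("updating data").
function put(id, data) {
  store.set(id, data);
  return id;
}

// POST: the server allocates a new resource on every call ("creating data").
function post(data) {
  const id = nextId++;
  store.set(id, data);
  return id;
}

put('user/1', { name: 'a' });
put('user/1', { name: 'a' }); // repeated PUT: still one resource
post({ name: 'b' });
post({ name: 'b' });          // repeated POST: two new resources, ids 1 and 2
console.log(store.size);      // 3
```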
1. The first request is an OPTIONS preflight request, which returns status code 204.
2. The second is the real POST request.
“There are four common values for the Content-Type header:”
(1) application/x-www-form-urlencoded: the browser’s native form submission format. If a form’s enctype attribute is not set, the data is ultimately submitted as application/x-www-form-urlencoded. The data is placed in the request body, encoded as key1=val1&key2=val2, with both keys and values URL-encoded.
(2) multipart/form-data: This method is also a common POST submission method, which is usually used when the form uploads the file.
(3) application/json: the request message body is a serialized JSON string.
(4) text/xml: This method is mainly used to submit data in XML format.
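As a quick illustration, here is how the same data looks under the first and third encodings (plain JavaScript, runnable in a browser or Node):

```javascript
// The same data under two of the Content-Type encodings above.
const data = { key1: 'val1', key2: 'val2' };

// application/x-www-form-urlencoded: key1=val1&key2=val2, URL-encoded
const formBody = new URLSearchParams(data).toString();

// application/json: a serialized JSON string
const jsonBody = JSON.stringify(data);

console.log(formBody); // key1=val1&key2=val2
console.log(jsonBody); // {"key1":"val1","key2":"val2"}
```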
“Why does 304 occur?”
To improve website access speed, the server specifies a caching mechanism for some previously visited pages. When the client requests one of these pages again, the server judges whether the page is the same as before according to the cached content; if it is, it directly returns 304, and the client uses its cached copy, with no need to download the page a second time.
Status code 304 should not be considered an error; it is the server’s response to the client in the case of a cache hit.
Search engine spiders favor websites whose content is updated frequently, and they adjust crawl frequency based on the status codes a site returns over a period of time. If a site keeps returning 304 for a while, the spider may reduce how often it crawls it; conversely, if the site changes frequently and every crawl yields new content, the visit rate will increase over time.
“The reason for the large number of 304 status codes:”
“Too many 304 status codes can cause the following problems:”
“In one sentence: send asynchronous network requests with JS.”
A: Asynchronous
X: XML, with XMLHttpRequest
XML: Addresses cross-platform data transfer.
「How to use」
“1. Instantiate an ajax object”
“2. open()”: creates an HTTP request. The first parameter specifies the submission method (POST or GET); the second specifies the address to submit to; the third specifies asynchronous or synchronous (true means asynchronous, false means synchronous); the optional fourth and fifth parameters are used for HTTP authentication.
“3. Set the request header”
“setRequestHeader(header, value)” (needed when submitting a body with the POST method; a plain GET request does not need to call it)
“4. Send a request”
“send(content)”: sends the request to the server. With the GET method, pass no argument or null; with the POST method, pass the parameters to be submitted.
“5. Registering Callback Functions”
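The five steps above can be put together as a sketch; '/api/data' is a placeholder URL, and the code is wrapped in a function because XMLHttpRequest only exists in browsers:

```javascript
// Sketch of the five steps; '/api/data' is a placeholder URL, and the
// whole thing is wrapped in a function because XMLHttpRequest only
// exists in browsers.
function sendPost(params, onSuccess) {
  // 1. Instantiate an ajax object
  const xhr = new XMLHttpRequest();
  // 2. open(): method, address, async flag
  xhr.open('POST', '/api/data', true);
  // 3. Set the request header (needed for a POST body)
  xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  // 5. Register the callback before sending
  xhr.onreadystatechange = function () {
    // readyState 4 = done, status 200 = OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      onSuccess(xhr.responseText);
    }
  };
  // 4. Send the request with the POST body
  xhr.send(params);
}
```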
“1. Native xhr Cancellation Request”
“2.axios Cancellation Request”
“1. Create a cancel token using the CancelToken.source factory method”
“2. Pass an executor function to the constructor of CancelToken to create a cancel token”
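A minimal sketch of both cancellation styles; the URLs are placeholders, and axios is injected as a parameter here since it may not be installed:

```javascript
// 1. Native XHR cancellation: xhr.abort() (browser-only, hence wrapped).
function cancellableXhr(url) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.send();
  return () => xhr.abort(); // call this to cancel the in-flight request
}

// 2. axios cancellation via the CancelToken.source factory method;
//    axios is injected as a parameter since it may not be installed.
function cancellableAxios(axios, url) {
  const source = axios.CancelToken.source();
  axios.get(url, { cancelToken: source.token }).catch((err) => {
    if (axios.isCancel(err)) console.log('cancelled:', err.message);
  });
  return () => source.cancel('cancelled by caller');
}
```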
“The Significance of Cancellation of Ajax Requests”
OPTIONS is one of the HTTP request methods, alongside GET and POST. (The browser sends it automatically.)
The OPTIONS method queries which options are available for the resource identified by the Request-URI during request/response communication. It lets the client decide what action is necessary for that resource, or learn the server’s capabilities, before making a specific request. The response to an OPTIONS request cannot be cached.
The “main purposes” of the OPTIONS request method are twofold:
Head-of-line blocking is caused by HTTP’s basic request-reply model. HTTP stipulates that messages must be “one send, one receive”, which forms a first-in, first-out serial queue. Requests in the queue have no priority, only an order of arrival, and the earliest requests are processed first. If the request at the head of the queue is delayed because it is processed too slowly, then all the requests behind it must wait as well; the other requests bear an undue time cost, and this is the head-of-line blocking phenomenon.
(1) Concurrent connections: allow multiple long-lived connections for a single domain name. This is equivalent to adding task queues, so that one queue’s tasks do not block all the others. (2) Domain sharding: split the site across many second-level domain names, all pointing to the same server. The number of concurrent long-lived connections increases, which mitigates head-of-line blocking.
The main differences between HTTP and the HTTPS protocol are as follows:
In fact, the HTTP protocol specification does not limit the length of the URL in a GET request; this restriction comes from specific “browsers” and “servers”. IE limits URL length to “2083” bytes (2K + 35). Since IE’s allowed URL length is the smallest among browsers, URLs no longer than 2083 bytes will work in all browsers during development.
Let’s take a look at the range of length limits that major browsers have on URLs in the get method:
Mainstream servers limit the range of URLs in the get method:
Based on the data above, keeping the URL in a GET request within 2083 characters ensures that all browsers and servers can work properly.
(1) “Parse URL:” The URL is first parsed to determine the transport protocol to use and the path of the requested resource. If the protocol or host name in the URL is not legitimate, the text entered in the address bar is passed to the search engine instead. If there is no problem, the browser checks the URL for illegal characters and, if any are found, escapes them before proceeding to the next step.
(2) “Cache judgment:” The browser determines whether the requested resource is in the cache; if it is cached and has not expired, it is used directly; otherwise a new request is initiated to the server.
(3) “DNS resolution:” The next step is to obtain the IP address for the domain name in the URL. First, the browser checks whether the IP address is cached locally; if so, it is used directly. If not, a request is made to the local DNS server. The local DNS server also checks its cache first; if there is no entry, it first queries a root name server, obtains the address of the responsible top-level-domain name server, then queries that top-level name server, obtains the address of the responsible authoritative name server, and finally queries the authoritative name server. Having obtained the domain name’s IP address, the local DNS server returns it to the requesting host. The user’s request to the local DNS server is a recursive request, while the local DNS server’s requests to the name servers at each level are iterative requests.
(4) “Get MAC address (optional):” Once the browser has the IP address, data transmission also needs the destination host’s MAC address. The application layer hands data to the transport layer, where TCP adds the source and destination port numbers; the network layer then uses the local address as the source address and the obtained IP address as the destination address; the data link layer, in turn, must add the MAC addresses of both communicating parties, with the local MAC address as the source MAC address. The destination MAC address is determined as follows: by matching the destination IP address against the local subnet mask, the host determines whether the destination is on the same subnet. If it is, the ARP protocol is used to obtain the destination host’s MAC address; if it is not, the request must be forwarded via the gateway, whose MAC address is likewise obtained through ARP, and in that case the destination MAC address used is the gateway’s.
(5) “TCP three-way handshake:” This confirms the receiving and sending capabilities of both client and server. First the client sends the server a SYN connection-request segment containing a random sequence number. The server receives the request and replies to the client with a SYN ACK segment, confirming the connection request and carrying its own random sequence number. After the client receives the server’s confirmation, it enters the connection-established state and sends an ACK confirmation segment to the server; when the server receives this acknowledgement, it also enters the connection-established state, and the connection between the two parties is established.
(6) “HTTPS handshake (optional):” If the HTTPS protocol is used, a TLS handshake also takes place before communication. First the client sends the server the protocol version number it uses, a random number, and the encryption methods it supports. The server confirms the encryption method and sends the client a random number of its own together with its digital certificate. The client checks whether the certificate is valid; if it is, the client generates a third random number, encrypts it with the public key in the certificate, and sends it to the server, along with a hash value for the server to verify. The server decrypts with its own private key and sends back a hash of everything preceding for the client to verify. At this point both parties hold three random numbers; following the previously agreed encryption method, they use these three random numbers to generate a key, and subsequent communication between the two parties is encrypted with this key before transmission.
(7) “Send HTTP request”
“The server processes the request and returns an HTTP response” (the requested file).
(8) “Page rendering:” The browser first builds a DOM tree from the HTML file in the response and a CSSOM tree from the parsed CSS files. If it encounters a script tag, it checks whether the tag carries a defer or async attribute; otherwise, loading and executing the script blocks page rendering. “Once the DOM tree and CSSOM tree are built, the render tree is constructed from them.” The page is then laid out according to the render tree, and after layout completes, the page is finally painted via the browser’s UI. At this point the entire page is displayed.
(9) “TCP four waves:” The last step is TCP’s four-wave disconnection process. When the client considers the data transmission complete, it sends a connection-release request to the server. On receiving it, the server tells the application layer to release the TCP link, sends an ACK packet, and enters the CLOSE_WAIT state; the connection from client to server is now released, and the server no longer receives data from the client. But because a TCP connection is bidirectional, the server can still send data to the client: if it has unfinished data, it continues sending, and once done it sends its own connection-release request to the client and enters the LAST-ACK state. After the client receives the release request, it sends an acknowledgement to the server and enters the TIME-WAIT state. This state lasts 2MSL (MSL, the maximum segment lifetime, is the time a segment can survive in the network before being discarded); if no retransmitted request arrives from the server during this period, the client enters the CLOSED state. When the server receives the acknowledgement, it also enters the CLOSED state.
Under HTTP/1.x, a browser opens at most 6 TCP connections per domain name, so it requests resources in multiple rounds. This can be addressed by “multi-domain deployment”, which increases the number of simultaneous requests and speeds up fetching page images.
Under HTTP 2, many resources can be loaded in an instant, because HTTP2 supports multiplexing and can send multiple HTTP requests in a single TCP connection.
HTTP/2’s header compression uses the HPACK algorithm: a “dictionary” is maintained on both the client and the server, index numbers represent duplicate strings, and Huffman coding compresses integers and strings, achieving a high compression ratio of 50% to 90%.
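A toy illustration of HPACK’s core dictionary-plus-index idea (greatly simplified; real HPACK also has a static table and Huffman coding):

```javascript
// Toy version of HPACK's core idea: both ends keep the same table, so
// a repeated header can be sent as a small index instead of the full
// string. (Real HPACK adds a static table and Huffman coding.)
const table = []; // the sender's dynamic table

function encodeHeader(header) {
  const i = table.indexOf(header);
  if (i !== -1) return { index: i }; // repeat: send only the index
  table.push(header);                // first time: send the literal
  return { literal: header };
}

function decodeHeader(msg, peerTable) {
  if ('index' in msg) return peerTable[msg.index];
  peerTable.push(msg.literal); // keep the receiver's table in sync
  return msg.literal;
}

const msg1 = encodeHeader('user-agent: demo'); // { literal: ... }
const msg2 = encodeHeader('user-agent: demo'); // { index: 0 }
```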
The request message consists of 4 parts: the request line, the request headers, a blank line, and the request body.
(1) The request line includes the request method field, the URL field, and the HTTP protocol version field, separated by spaces. For example: GET /index.html HTTP/1.1.
(2) Request header: The request header consists of keyword/value pairs, one pair per line, and the keywords and values are separated by a colon “:”
(3) Request body: the data carried by requests such as POST and PUT.
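On the wire, the parts above look like this (the host and body values are made up for illustration):

```javascript
// What the four parts look like on the wire ("\r\n" ends each line);
// the host and body values are made up for illustration.
const request =
  'POST /index.html HTTP/1.1\r\n' +                       // request line
  'Host: www.example.com\r\n' +                           // request headers
  'Content-Type: application/x-www-form-urlencoded\r\n' +
  '\r\n' +                                                // blank line
  'key1=val1&key2=val2';                                  // request body
```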
HTTP is a hypertext transfer protocol that defines the format and mode of exchange of messages between the client and the server, and uses port 80 by default. It uses TCP as the transport layer protocol to ensure the reliability of data transmission.
The HTTP protocol has the following “advantages”:
The HTTP protocol has the following “disadvantages”:
(1) Communication uses clear text (not encrypted), and the content may be eavesdropped; (2) The identity of the communicating party is not verified, so it is possible to encounter disguise; (3) The integrity of the message cannot be proved, so it may have been tampered with;
HTTP/3.0 is also known as HTTP over QUIC. Its core is the QUIC (pronounced “quick”) protocol, a new protocol evolved from SPDY v3 and proposed by Google in 2015. Traditional HTTP is based on TCP at the transport layer, whereas QUIC is based on UDP; HTTP/3.0 can be summarized as a secure, reliable HTTP/2.0 over UDP.
The HTTP protocol is TCP/IP based and uses a request-reply communication mode.
“The HTTP protocol has two connection modes: persistent and non-persistent.” (1) A non-persistent connection means the server must establish and maintain a brand-new connection for each requested object. (2) With a persistent connection, the TCP connection is not closed by default and can be reused by multiple requests. The benefit of a persistent connection is avoiding the time spent on the three-way handshake each time a new TCP connection would otherwise be established.
Take the URL below as an example: www.aspxfans.com:8080/news/index?ID=246188#name
As you can see from the URL above, a complete URL consists of the following parts:
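These parts can be checked with the WHATWG URL API; an http:// scheme is prepended here so the parser accepts the example:

```javascript
// Checking the parts with the WHATWG URL API; an http:// scheme is
// prepended so the parser accepts the example.
const u = new URL('http://www.aspxfans.com:8080/news/index?ID=246188#name');

console.log(u.protocol); // 'http:'            (protocol)
console.log(u.hostname); // 'www.aspxfans.com' (domain)
console.log(u.port);     // '8080'             (port)
console.log(u.pathname); // '/news/index'      (path)
console.log(u.search);   // '?ID=246188'       (query)
console.log(u.hash);     // '#name'            (fragment / anchor)
```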
“1. Strong cache:” No request is sent to the server; the resource is read directly from the cache. In the Network panel of the Chrome console, such a request shows status code 200, and its Size column shows “from disk cache” or “from memory cache” (displayed in gray).
“2. Negotiated cache:” A request is sent to the server, which judges from certain request headers whether the negotiated cache is hit. If it is, the server returns a 304 status code along with fresh response headers, notifying the browser to read the resource from its cache.
What they have in common: both read the resource from the client’s cache. The difference: a strong cache sends no request, while a negotiated cache does.
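The decision flow can be sketched as a small function (an assumed simplification; real browsers consult Cache-Control, Expires, ETag, Last-Modified, and more):

```javascript
// Assumed simplification of the browser's decision: strong cache first
// (max-age), then the negotiated cache (ETag / Last-Modified).
function cacheDecision(cached, now, requestEtagMatches) {
  if (!cached) return 'request';                 // nothing cached yet
  const fresh = now - cached.storedAt < cached.maxAgeMs;
  if (fresh) return '200 (from cache)';          // strong cache hit, no request
  // Stale: send a revalidation request (If-None-Match / If-Modified-Since)
  return requestEtagMatches ? '304 (use cache)' : '200 (new resource)';
}
```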
“keep-alive is off by default in HTTP/1.0 and must be enabled manually; it is enabled by default since HTTP/1.1.”
“Role:” keep-alive avoids establishing (or re-establishing) a connection for each subsequent request to the same server.
“How to use:” add Connection: keep-alive to the request header.
“Disadvantages:” resources that could have been released remain occupied; some requests have ended, but the connection stays open.
“Solution:” the server sets an expiration time and a request limit, and disconnects once either is exceeded.
In 1978, ISO developed a standard architecture, the OSI reference model, which is used to illustrate the structure and functionality of data communication protocols.
OSI can be functionally divided into two groups:
Network groups: physical layer, data link layer, network layer
Consumer groups: transport layer, session layer, presentation layer, application layer
The upper layers (7, 6, 5, and 4) define application functionality, while the lower three layers (3, 2, and 1) mainly handle end-to-end data flow through the network.
Hypertext Transfer Protocol Secure (HTTPS) is a transport protocol for secure communication over a computer network. “HTTPS communicates via HTTP and uses SSL/TLS to encrypt packets.” The main purpose of HTTPS is to provide authentication to the website server and protect the privacy and integrity of the exchange data. The HTTP protocol uses “clear text transmission” of information, which has the risk of “information eavesdropping”, “information tampering” and “information hijacking”, while the protocol TLS/SSL has the functions of “authentication”, “information encryption” and “integrity check” to avoid such problems.
The main responsibilities of the security layer are to “encrypt the data of the HTTP request initiated” and “decrypt the content of the received HTTP”.
The full name of “TLS” is “Transport Layer Security” and its predecessor “Secure Sockets Layer” (abbreviated as “SSL”) is a layer of security between TCP and HTTP, which does not affect the original TCP protocol and HTTP protocol, so the use of HTTPS basically does not require much modification of HTTP pages.
The functional implementation of TLS/SSL mainly relies on three basic algorithms: “hash function hash”, “symmetric encryption”, and “asymmetric encryption”. The functions of these three types of algorithms are as follows:
“Symmetric encryption and asymmetric encryption are encryption algorithms in the secure transport layer”
Symmetric encryption uses the same key for encryption and decryption, i.e. the encryption key also serves as the decryption key.
In cryptography this is called a symmetric encryption algorithm. Symmetric algorithms are simple and fast to use, have short keys, and are hard to crack.
“Both parties to the communication use the same key for encryption and decryption.” For example, the secret code agreed upon by two people in advance is symmetric encryption.
The amount of computing is small, the encryption speed is fast, and the encryption efficiency is high.
“Before data is transmitted, the sender and receiver must agree on the key, and then both parties save the key.”
“If one party’s key is compromised, then the encrypted information is not secure.”
The most insecure step is the initial key agreement: the key itself has to be transmitted!
Use cases: local data encryption, HTTPS communication, network transmission, etc.
The two parties to the communication use different keys for encryption and decryption, that is, key pairs (private key + public key).
Features: The private key can decrypt the contents of the public key encryption, and the public key can decrypt the contents of the private key encryption
The characteristics of asymmetric encryption are:
Usage scenarios: https pre-session, CA digital certificate, information encryption, login authentication, etc
A hash algorithm is applied to the public key and other information to generate a message digest; a trusted certificate authority (CA) then encrypts the digest with its private key, forming a signature. The original information combined with the signature is called a “digital certificate”. When the receiver gets the digital certificate, it first generates a digest from the original information using the same hash algorithm, then decrypts the digest in the certificate with the CA’s public key, and finally compares the decrypted digest with the generated one to discover whether the information has been altered.
Without certificates, the scheme above is not necessarily safe, because there is no way to be sure that the public key obtained really belongs to the other party. A man-in-the-middle may intercept the public key the other party sends us and substitute his own; when we encrypt our messages with his public key, he can decrypt them with his private key, then impersonate us toward the other party in the same way, so our information is stolen without either side noticing. “Digital certificates” solve this problem.
A digital signature is formed by first using the CA’s own hash algorithm to compute a digest of the certificate content, then encrypting that digest with the CA’s private key.
When someone sends over his digital certificate, the receiver generates the digest again with the same algorithm and decrypts the signature with the CA’s public key to obtain the digest generated by the CA. Comparing the two reveals whether the certificate was tampered with in transit. In this way, the security of communication can be guaranteed to the greatest extent.
The communication process of HTTPS is as follows:
The “advantages” of HTTPS are as follows:
The “disadvantages” of HTTPS are as follows:
HTTPS combines the two encryption methods, “symmetric encryption” and “asymmetric encryption”: the symmetric key is encrypted with the asymmetric public key and sent; the receiver decrypts it with the private key to obtain the symmetric key, and the two parties can then communicate using symmetric encryption.
At this point, a certificate issued by a trusted third party (a CA) is also needed to prove the server’s identity and prevent man-in-the-middle attacks.
To prevent a middleman from tampering with the certificate, the technique of “digital signatures” is required.
A digital signature uses the CA’s own hash algorithm to hash the certificate contents into a digest, which is then encrypted with the CA’s private key to form the signature. When someone sends his certificate over, the receiver uses the same hash algorithm to generate the message digest again, then decrypts the digital signature with the CA’s public key to obtain the digest created by the CA; comparing the two guarantees the security of communication to the greatest extent.
“(3) 4XX client error”
“(4) 5XX Server Error”
“302” is an HTTP/1.0 status code; in HTTP/1.1, 303 and 307 were introduced to refine it.
“303” explicitly states that the client should use the GET method to obtain the resource; it turns a POST request into a GET during redirection.
“307” follows the standard strictly and will not change POST to GET.
“Concept”: DNS is an abbreviation for “Domain Name System”, which provides hostname-to-IP-address translation; it is what we usually call the domain name system. It is a distributed database implemented by a hierarchy of DNS servers, and an application-layer protocol that defines how hosts query this distributed database. It lets people access the Internet conveniently without having to remember machine-readable strings of IP digits.
“Role”: The domain name is resolved to an IP address, the client sends a domain name query request to the DNS server (the DNS server has its own IP address), and the DNS server tells the client the IP address of the Web server.
The process of DNS server resolving domain names:
“First handshake:” The client sends a connection-request segment to the server. This segment contains the client’s initial sequence number for data communication. After the request is sent, the client enters the SYN-SENT state.
“Second handshake:” After the server receives the connection-request segment, if it agrees to the connection, it sends a reply that also contains its own initial sequence number for data communication, and enters the SYN-RECEIVED state after sending.
“Third handshake: ” When the client receives the answer of the connection consent, it also sends a confirmation message to the server. After the client sends this packet, it enters the ESTABLISHED state, and the server also enters the ESTABLISHED state after receiving this reply, and the connection is established successfully.
Consider why the third handshake is needed. Suppose the client sends a connection request but receives no acknowledgement because the segment is delayed, so it retransmits; this time an acknowledgement arrives and a connection is established. After the data transfer completes, the connection is released. The client has then sent two connection-request segments in total: the first was not lost, but was stuck at some network node for a long time and only reaches the server after the connection has been released. The server mistakes it for a new connection request and sends the client a confirmation segment, agreeing to establish a connection. Without the three-way handshake, the server’s acknowledgement alone would establish a new connection; but the client ignores this acknowledgement and sends no data, so the server would wait for data that never comes, wasting resources.
“First wave:” If the client thinks that the data transmission is complete, it needs to send a connection release request to the server.
“Second wave”: After the server receives the connection release request, it tells the application layer to release the TCP link. The ACK packet is then sent and enters the CLOSE_WAIT state, indicating that the connection from the client to the server has been released and no longer receives data from the client. However, because the TCP connection is bidirectional, the server can still send data to the client.
“Third wave:” If the server still has unfinished data at this point, it continues sending; once complete, it sends a connection-release request to the client and enters the LAST-ACK state.
“Fourth wave:” After the client receives the release request, it sends an acknowledgement to the server and enters the TIME-WAIT state. This state lasts for 2MSL (MSL, the maximum segment lifetime, is the time a segment can survive in the network before being discarded); if no retransmitted request arrives from the server during this period, the client enters the CLOSED state. When the server receives the acknowledgement, it also enters the CLOSED state.
Because when the server receives the client’s SYN connection request, it can send the SYN and ACK in a single segment (the ACK answers, the SYN synchronizes). When the connection is being closed, however, receiving a FIN does not mean the server will close its socket immediately, so it can only reply with an ACK first, telling the client “I received the FIN you sent”. Only after all remaining data on the server side has been sent can the server send its own FIN. The two therefore cannot be combined, which is why four waves are needed.
By default, TCP connections enable the “delayed transfer algorithm” (the Nagle algorithm), which buffers data before sending it. If several pieces of data are sent within a short period, they are buffered together into a single send (buffer size: see socket.bufferSize), which reduces IO overhead and improves performance.
If you are transferring a file, there is no need to handle the sticky-packet problem at all: the chunks are simply concatenated one after another. But if there are multiple messages, or data meant for different purposes, sticky packets must be handled.
For the problem of dealing with sticky packets, the solutions often are:
“Wait an interval between sends”: simply wait a while before the next send. This suits scenarios with a particularly low interaction frequency and requires almost no extra handling, but the obvious drawback is that transmission efficiency is far too low for more frequent scenarios.
“Disable the Nagle algorithm”: in Node.js, the Nagle algorithm can be disabled via the socket.setNoDelay() method, so each send goes out directly without buffering. This suits scenarios where each send carries a fair amount of data (though not file-sized) and the frequency is not especially high. If each send is small and very frequent, disabling Nagle is counterproductive. The method is also unsuitable for poor networks: Nagle merges packets on the sending side, but if the client’s network is briefly bad, or the application layer cannot recv the TCP data in time for some reason, multiple packets will still be buffered on the client and arrive stuck together. (Within a stable data-center network this probability is small and can usually be ignored.)
“Packing/unpacking”: packing and unpacking is the common industry solution today. Before sending, some characteristic data is placed before or after each packet; on receipt, the stream is divided back into packets according to that characteristic data.
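A minimal length-prefix packing/unpacking sketch in Node.js, one common form of the “characteristic data” approach:

```javascript
// Length-prefix framing, one common form of "characteristic data":
// each message goes out as [4-byte length][payload], so the receiver
// can split a merged byte stream back into messages.
function pack(message) {
  const body = Buffer.from(message, 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0); // 4-byte big-endian length
  return Buffer.concat([header, body]);
}

function unpack(buffer) {
  const messages = [];
  let offset = 0;
  while (offset + 4 <= buffer.length) {
    const len = buffer.readUInt32BE(offset);
    if (offset + 4 + len > buffer.length) break; // partial packet: wait for more
    messages.push(buffer.subarray(offset + 4, offset + 4 + len).toString('utf8'));
    offset += 4 + len;
  }
  return messages;
}

// Two messages "stuck together" in one TCP chunk still split cleanly:
const stream = Buffer.concat([pack('hello'), pack('world')]);
console.log(unpack(stream)); // [ 'hello', 'world' ]
```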
On the client side, tokens are generally stored in localStorage, cookies, or sessionStorage; on the server side, they generally live in a database.
The authentication process for tokens
Tokens can resist CSRF; cookie + session cannot.
Sessions are stateful, usually stored in server memory or on disk. When servers are distributed or clustered, sessions face load-balancing problems: with multiple load-balanced servers, it is hard to confirm whether the current user is logged in, because the servers do not share session state.
The client sends login information to the server; the server encrypts/signs the user information into a token and returns it; the client stores the token in a container such as localStorage and sends it along on every request; the server verifies the token to know who the user is. By spending CPU on signing and verification, the server no longer needs sessions taking up storage space, which neatly solves the multi-server load-balancing problem. This scheme is called JWT (JSON Web Token).
“What is Sensorless Refresh”
The token returned by the backend is time-limited (for security). Once the time is up, the backend checks on every interaction whether your token has expired, and if it has, it forces you to log in again!
“The essence of sensorless token refresh is to optimize the user experience: when the token expires, the user should not be bounced back to the login page to log in again. Instead, the invalid-token response is intercepted, an ajax request for a refreshed token is sent, and the latest token overwrites the old one, so the user never notices that the token expired.”
“Achieve sensorless refresh”
1. The backend returns the expiration time; the front end checks it and calls the refresh-token interface when the token is about to expire.
Disadvantages: the backend must provide an extra field for the token’s expiration time, and because the check relies on local time, it fails if the local clock is tampered with, especially if local time runs slower than server time.
2. Write a timer that calls the refresh-token interface periodically. Disadvantages: wastes resources and hurts performance; not recommended.
3. Intercept in the response interceptor: when a response indicates the token has expired, call the refresh-token interface and replay the request.
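Option 3 can be sketched as a wrapper around the request function. This is a simplified sketch: `request`, `refreshToken`, and the use of status 401 to signal expiry are assumptions, and production code (e.g. an axios response interceptor) must also queue concurrent requests while one refresh is in flight:

```javascript
let accessToken = 'expired-token';

// Assumed backend convention: 401 means the access token has expired.
async function fetchWithRefresh(request, refreshToken, url) {
  let res = await request(url, accessToken);
  if (res.status === 401) {
    accessToken = await refreshToken();    // silently obtain a fresh token
    res = await request(url, accessToken); // replay the original request
  }
  return res; // the user never sees a login redirect
}

// --- tiny in-memory stand-ins to demonstrate the flow ---
const fakeRequest = async (url, token) =>
  token === 'fresh-token' ? { status: 200, data: 'ok' } : { status: 401 };
const fakeRefresh = async () => 'fresh-token';

fetchWithRefresh(fakeRequest, fakeRefresh, '/api/user')
  .then((res) => console.log(res.status, res.data)); // 200 ok
```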
There are two types of network hijacking:
(1) “DNS hijacking”: e.g. you type JD.com and are forcibly redirected to Taobao.
(2) “HTTP hijacking”: e.g. you visit Google but Blue Moon ads keep playing. Because HTTP is transmitted in plaintext, the carrier can modify your HTTP response content (i.e. insert ads).
DNS hijacking has been regulated as a suspected illegal act and is now rare, but HTTP hijacking is still very common. The most effective countermeasure is site-wide HTTPS: with HTTP encrypted, carriers cannot obtain the plaintext and therefore cannot hijack your response content.
Essentially, processes and threads are both descriptions of how the CPU’s working time slices are used:
“A process is the smallest unit of resource allocation, and a thread is the smallest unit of CPU scheduling.”
Communication between multiple tabs is essentially achieved through the mediator pattern. Since tabs have no way to communicate with each other directly, we introduce a mediator: each tab talks to the mediator, and the mediator forwards the messages. The communication methods are as follows:
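One mediator the browser provides out of the box is BroadcastChannel, which forwards a message to every other same-origin context listening on the same channel name. The channel name and message shape below are invented for the demo; other mediator options include the storage event on localStorage, a SharedWorker, or a shared WebSocket server:

```javascript
// In a real page, each channel would live in a different tab;
// here two channel objects in one script stand in for two tabs.
const channelA = new BroadcastChannel('auth'); // "tab A": the sender
const channelB = new BroadcastChannel('auth'); // "tab B": the listener

channelB.onmessage = (event) => {
  console.log('received:', event.data.type); // received: login
  channelA.close();
  channelB.close(); // close so the channels do not keep the process alive
};

// "Tab A" announces a login to every other same-origin tab:
channelA.postMessage({ type: 'login', user: 'xy' });
```

BroadcastChannel is available in all modern browsers and in Node.js 18+; delivery is asynchronous, and a channel never receives its own messages.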
The browser cache mainly targets the front end’s static resources: after a request is made, the corresponding static resources are pulled down and saved locally. If the server’s static resources have not been updated, the next request can read them straight from local storage; if they have been updated, the browser fetches the new resources from the server and saves them locally again. This greatly reduces the number of requests and improves website performance, and it is what the browser’s caching strategy is for.
The so-called “browser cache” means that the browser stores the static resources a user requests on the local disk; when the browser visits again, it can load them directly from local storage without requesting the server.
Using browser caching has the following advantages:
In other words, the faster you want the first screen to render, the less JS you should load for it, which is why it is recommended to put script tags at the bottom of the body. That said, script tags no longer have to sit at the bottom, because you can add a defer or async attribute to them.
Cookies were the first local-storage mechanism to be proposed. Before them, the server could not tell whether two requests on the network came from the same user; cookies appeared to solve this problem. “A cookie is only 4 KB in size”, it is a plain-text file, and “it is carried along on every HTTP request.”
“Characteristics of cookies:”
“If you need to share cookies across domains, there are two ways:
LocalStorage is a new feature introduced in HTML5. Sometimes we need to store a large amount of information that cookies cannot accommodate, and that is where LocalStorage comes in handy.
“Advantages of LocalStorage:”
“Disadvantages of LocalStorage:”
SessionStorage and LocalStorage are both storage solutions proposed in HTML5. SessionStorage is mainly used to temporarily save data for a single window (or tab); refreshing the page does not delete the data, but closing the window or tab does.
“Cookie vs. sessionStorage vs. localStorage:”
“Cookies” are really a way for the server side to record user state: they are set by the server, stored on the client, and then sent back to the server with every same-origin request. A cookie can store at most 4 KB of data, its lifetime is specified by the expires attribute, and it can only be shared by page visits of the same origin.
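Because cookies travel as a single `key1=value1; key2=value2` header string, code on both sides often parses them by hand; a minimal sketch (the `parseCookies` helper name is invented for this example):

```javascript
// Parse a "Cookie:" header / document.cookie string into a plain object.
function parseCookies(cookieString) {
  const out = {};
  for (const pair of cookieString.split(';')) {
    const idx = pair.indexOf('=');
    if (idx === -1) continue; // skip malformed fragments
    const key = pair.slice(0, idx).trim();
    out[key] = decodeURIComponent(pair.slice(idx + 1).trim());
  }
  return out;
}

console.log(parseCookies('sid=abc123; theme=dark; name=xy%20dev'));
// { sid: 'abc123', theme: 'dark', name: 'xy dev' }
```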
“sessionStorage:” a browser-local storage method provided by HTML5 that borrows the concept of a server-side session and represents data saved within one session. It can generally store 5 MB or more, it is invalidated when the current window closes, and it can only be accessed and shared by same-origin pages within the same window.
“localStorage:” a browser-local storage method provided by HTML5, also generally able to store 5 MB or more. Unlike sessionStorage, it does not expire unless manually deleted, and it can only be shared by same-origin pages.
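localStorage and sessionStorage expose the same Storage interface (setItem / getItem / removeItem / clear), and both store values only as strings. The in-memory stand-in below (the `MemoryStorage` class is invented for illustration, since the real objects exist only in browsers) shows the shared API and the string-only gotcha:

```javascript
// Minimal in-memory stand-in for the web Storage interface.
class MemoryStorage {
  #data = new Map();
  setItem(key, value) { this.#data.set(String(key), String(value)); }
  getItem(key) { return this.#data.has(key) ? this.#data.get(key) : null; }
  removeItem(key) { this.#data.delete(key); }
  clear() { this.#data.clear(); }
  get length() { return this.#data.size; }
}

const storage = new MemoryStorage();

// Values are always strings, so objects must be serialized explicitly:
storage.setItem('user', JSON.stringify({ name: 'xy', uid: 42 }));
console.log(JSON.parse(storage.getItem('user')).uid); // 42

storage.setItem('count', 1);
console.log(typeof storage.getItem('count')); // string (not number)
```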
“Same-origin policy: protocol, domain, port must be consistent.”
“The same-origin policy mainly restricts three aspects:
The purpose of the same-origin policy is mainly to keep user information secure. It is only a restriction on JS scripts, not on the browser as a whole: ordinary img or script requests are not subject to cross-origin restrictions, because those operations cannot read the response, and it is reading the response that could create security problems.
“What is Cross-Domain?”
“The cross-domain problem is actually caused by the browser’s same-origin policy.”
CORS requires support from both the browser and the server, and the entire CORS exchange is completed automatically by the browser without user involvement. The key to implementing CORS is therefore the server: as long as the server implements CORS, cross-origin communication works.
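Concretely, “the server implements CORS” mostly means setting a few Access-Control-* response headers and answering the OPTIONS preflight with 204, as mentioned earlier. A minimal Node.js-style sketch; the helper name, allowed origin, and header values are illustrative choices:

```javascript
// Apply CORS headers to an http.ServerResponse-like object.
// Returns true when the request was an OPTIONS preflight and has been
// answered with 204, so the caller can skip its normal handler.
function applyCors(req, res, allowedOrigin) {
  res.setHeader('Access-Control-Allow-Origin', allowedOrigin);
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  if (req.method === 'OPTIONS') { // preflight: reply 204 with no body
    res.statusCode = 204;
    res.end();
    return true;
  }
  return false;
}

// Demonstrate with a tiny mock response object:
const mockRes = {
  headers: {},
  statusCode: 200,
  setHeader(k, v) { this.headers[k] = v; },
  end() { this.ended = true; },
};
applyCors({ method: 'OPTIONS' }, mockRes, 'https://example.com');
console.log(mockRes.statusCode); // 204
```

In a real server you would call such a helper at the top of the `http.createServer` request handler, before your routing logic.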
The principle of “jsonp” is to use the <script> tag, which is not restricted by the same-origin policy.