Hello everyone, I’m Brother Code.

This time I'm bringing you sixty-two computer network interview questions, covered in detail across 30,000 words and seventy figures, probably the most complete set of network interview questions on the whole internet.

I recommend bookmarking it and working through it slowly.

There are generally three kinds of computer network architecture: the OSI seven-layer model, the TCP/IP four-layer model, and the five-layer architecture.

To put it simply, OSI is a theoretical network communication model, TCP/IP is the model actually used in practice, and the five-layer structure is a compromise between the two, used for teaching network principles.

OSI seven-layer model

The OSI seven-layer model is a standard system developed by the International Organization for Standardization for interconnection between computers or communication systems.

TCP/IP four-layer model

Application layer: Corresponds to the application layer, presentation layer, and session layer of the OSI reference model.

Transport layer: Corresponds to the OSI transport layer; it provides end-to-end communication for application-layer entities, ensuring that segments arrive in order and that data stays intact.

Internet layer: The network layer corresponding to the OSI reference model that primarily addresses host-to-host communication issues.

Network interface layer: Corresponds to the data link layer and physical layer of the OSI reference model.

Five-layer architecture

Application layer: Corresponds to the application layer, presentation layer, and session layer of the OSI reference model.

Transport layer: The transport layer that corresponds to the OSI reference model.

Network layer: The network layer that corresponds to the OSI reference model.

Data link layer: The data link layer that corresponds to the OSI reference model.

Physical layer: The physical layer that corresponds to the OSI reference model.

A table summarizes common network protocols:

On the sending side, data is wrapped layer by layer from the top layer down; on the receiving side, it is unwrapped layer by layer from the bottom up.

This process is like mailing a letter: at each layer an envelope is added with some address information written on it. After the letter reaches its destination, the envelopes are opened layer by layer and the contents are passed on to the next stop.

For this question, the overall process is fairly simple, but there are many points that can be dug into: DNS resolution, the TCP three-way handshake, the HTTP message format, the TCP four-way wave, and so on.

Let’s take the input www.baidu.com as an example:

What protocols are used in each process?

DNS, whose full English name is Domain Name System, is the domain name resolution system. Its role is clear: it maps domain names and IP addresses to each other.

The DNS resolution process is as follows:

Suppose you want to query the IP address of www.baidu.com:
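As a quick aside, you can trigger this whole resolution chain from code; here is a minimal sketch using Python's standard library (the OS resolver does the cache / local-DNS / authoritative walking for you):

```python
import socket

# Ask the OS resolver for www.baidu.com; under the hood it checks the local
# cache, then the configured DNS server, which recurses through the root/TLD/
# authoritative servers as needed.
addrs = socket.getaddrinfo("www.baidu.com", 80, proto=socket.IPPROTO_TCP)
for family, _type, _proto, _canon, sockaddr in addrs:
    print(family.name, sockaddr[0])
```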

Specifically, a Socket is a set of standards that highly encapsulates TCP/IP, masking network details so that developers can do network programming more easily.
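As a minimal sketch of that encapsulation (Python's standard socket module; this echo server is illustrative only):

```python
import socket

# Server side: socket() -> bind() -> listen() -> accept() -> recv()/sendall()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9000))
server.listen(5)                 # kernel now queues incoming handshakes

conn, addr = server.accept()     # returns once a client's handshake completes
data = conn.recv(1024)           # read up to 1024 bytes from the byte stream
conn.sendall(data)               # echo it back
conn.close()
server.close()
```

A client only needs socket(), connect(), sendall(), and recv(); all the TCP details (handshakes, retransmission, flow control) stay hidden behind these calls.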

HTTP status codes should first know a rough classification:

A few commonly used ones should be memorized, and not just for interviews:

I wrote a story about this before: a programmer got dragged to a blind date on May 1 and came away completely understanding the common HTTP status codes. It's fun, and worth a read.

What is the difference between 301 and 302?

To use an analogy, 301 (moved permanently) is Aragaki Yui getting married, and 302 (moved temporarily) is Masami Nagasawa having a boyfriend.

Among them, POST, DELETE, PUT, and GET correspond to the create, delete, update, and read operations we are most familiar with.

The difference between GET and POST can be explained from the following aspects:

The GET method in HTTP passes data through the URL, but the URL itself does not actually limit the length of the data; the real limit on GET length comes from the browser.

For example, Internet Explorer caps URLs at a little over 2,000 characters, about 2 KB, while browsers like Chrome and Firefox support more: Firefox's maximum URL length is 65,536 characters, and Chrome's is 8,182 characters.

This length limit applies not only to the data portion but to the entire URL.

The HTTP protocol defines how the browser requests documents from the server and how the server passes the documents to the browser.

The interaction of requests and responses between browser and server must follow the prescribed format and obey certain rules, and those rules are the Hypertext Transfer Protocol, HTTP.

PS: This question is not much different from the question that happened to the browser input URL above.

There are two kinds of HTTP messages, HTTP request messages and HTTP response messages:

HTTP request message

The format of the HTTP request message is as follows:

The first line of an HTTP request message is called the request line; the lines after it are called header lines, and the headers can be followed by an entity body. The blank line after the request headers cannot be omitted: it separates the headers from the entity.

The request line contains three fields:
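As an illustration (an assumed example, not a captured request), an annotated request message looks like this:

```
GET /index.html HTTP/1.1        <- request line: method, URL, protocol version
Host: www.example.com           <- header lines start here
Connection: keep-alive
                                <- mandatory blank line separating headers from entity
(optional entity body, e.g. form data for POST)
```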

HTTP response message

The format of the HTTP response message is as follows:

The first line of an HTTP response message is called the status line, followed by the header lines, and finally the entity body.
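And a matching illustrative response message:

```
HTTP/1.1 200 OK                 <- status line: version, status code, reason phrase
Content-Type: text/html         <- header lines
Content-Length: 13

Hello, world!                   <- entity body
```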

URI, Uniform Resource Identifier: identifies every resource available on the Web; HTML documents, images, video clips, programs, and so on are all identified by URIs.

URL, Uniform Resource Locator: a subset of URI whose main role is to provide the path for accessing a resource.

The main difference between them is that a URL, besides identifying a resource, also provides a way to access it. By analogy, a URI is like an ID number, which uniquely identifies a person, while a URL is more like an address through which the person can be found: human-address-protocol://Earth/China/Beijing/Haidian-District/xx-Vocational-and-Technical-College/Dormitory-Building-14/Room-525/Zhang-San.

The key is to remember: HTTP/1.0 uses short connections by default (long connections can be explicitly enabled), HTTP/1.1 defaults to long connections, and HTTP/2.0 uses multiplexing.

HTTP/1.0

HTTP/1.1

Persistent connections are introduced, that is, TCP connections are not closed by default and can be reused by multiple requests.

Chunked transfer encoding: the server sends each piece of data as soon as it produces it, replacing "buffer mode" with "stream mode".

The pipelining mechanism: on the same TCP connection, a client can send multiple requests back to back without waiting for each response.

HTTP/2.0

There are two main changes in HTTP/3: the transport layer is based on UDP, and QUIC is used to make UDP reliable.

Some problems in HTTP/2, such as retransmission, stem from the characteristics of TCP itself, so HTTP/3 was built on top of QUIC. QUIC (Quick UDP Internet Connections) literally means "fast UDP internet connections", and it uses UDP underneath for data transmission.

HTTP/3 mainly has these characteristics:

Let’s take a graph and look at the evolution of the HTTP protocol:

What are HTTP long connections?

HTTP connections are divided into long and short connections, which are essentially TCP long and short connections. A TCP connection is a two-way channel that can be kept open for a period of time, so it is TCP connections that are truly long or short.

A TCP long connection can be reused for multiple HTTP requests, which reduces resource consumption. For example, after requesting an HTML page once, the subsequent JS/CSS requests can go over the same connection; with short connections, each of those would need its own connection.

How do I set up a long connection?

Set the Connection field in the header (both request and response headers) to keep-alive. HTTP/1.0 supports this but has it off by default; since HTTP/1.1, connections are long by default.
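In code, connection reuse is usually handled by the HTTP client library. A small sketch, assuming the third-party requests package and a reachable example.com:

```python
import requests

# A Session keeps the underlying TCP connection alive (Connection: keep-alive),
# so the second request skips the TCP handshake entirely.
with requests.Session() as s:
    r1 = s.get("http://example.com/")
    r2 = s.get("http://example.com/about")
    print(r1.status_code, r2.status_code)
```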

When will it time out?

HTTP servers generally run an httpd daemon in which you can set a keep-alive timeout: when a TCP connection has been idle longer than this, it is closed. The timeout can also be set in the HTTP Keep-Alive header.

TCP's keep-alive involves three parameters, settable under net.ipv4 in the system kernel: after a TCP connection has been idle for tcp_keepalive_time, a probe packet is sent; if no ACK comes back from the peer, another probe is sent every tcp_keepalive_intvl; after tcp_keepalive_probes probes go unanswered, the connection is dropped.
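On Linux these three parameters can also be set per socket. A sketch (the TCP_KEEP* constants are Linux-specific):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn probing on

# Per-socket equivalents of the net.ipv4 kernel parameters above:
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)   # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # unanswered probes before dropping
```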

HTTP is the Hypertext Transfer Protocol; information is transmitted in plain text, which poses security risks. HTTPS fixes HTTP's insecurity by adding the SSL/TLS security protocol between HTTP and TCP, so that packets are transmitted encrypted.

Establishing an HTTP connection is relatively simple: transmission can begin after the TCP three-way handshake. HTTPS, after the TCP three-way handshake, additionally requires the SSL/TLS handshake before encrypted transmission can begin.

The port number for HTTP is 80 and the port number for HTTPS is 443.

The HTTPS protocol requires a digital certificate from a CA (Certificate Authority) to ensure that the identity of the server is trustworthy.

Because HTTP is a clear text transport, there are security risks:

Eavesdropping risks: for example, communication content can be captured on the link and user accounts stolen.

Tampering risks: for example, forced insertion of spam ads, polluting what the user sees.

Impersonation risks: for example, fake Taobao websites that cause users to lose money.

So the introduction of HTTPS, which adds the SSL/TLS protocol between the HTTP and TCP layers, solves these risks very well:

Information encryption: Interactive information cannot be stolen.

Verification mechanism: The communication content cannot be tampered with, and it cannot be displayed normally if it is tampered with.

Identity Certificate: Can prove that Taobao is the real Taobao.

Therefore, the SSL/TLS protocol can ensure that communication is secure.

There are several points to this problem: public-private keys, digital certificates, encryption, symmetric encryption, asymmetric encryption.

HTTPS main workflow:

Here’s a more detailed picture:

First of all, where do the certificates on the server come from?

To make the server's public key trusted by everyone, the server's certificate is signed by a CA (Certificate Authority). The CA is the public security bureau and notary office of the online world: it is highly credible, so it does the signing of public keys, and a certificate issued by a trusted party is in turn trusted.

The process by which a CA issues a certificate, as shown in the left part of the preceding figure:

First, the CA packages the holder's public key, purpose, issuer, validity period and other information together, and then hashes this information to get a hash value;

Then the CA encrypts the hash value with its own private key to produce the Certificate Signature, i.e. the CA signs the certificate;

Finally, the Certificate Signature is added to the certificate file, forming a digital certificate;

The process of validating the server’s digital certificate by the client, as shown in the right part of the preceding figure:

If, during HTTPS communication, a middleman tampers with the certificate contents, then because he does not have the CA's private key, the hash the client computes will not match the value decrypted with the CA's public key, and the certificate fails validation.
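The sign-then-verify idea can be sketched in a few lines. This is a toy model of certificate issuance, assuming the third-party cryptography package (a real certificate is an X.509 structure, not a plain byte string):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Stand-in for the CA's key pair (a real CA keeps its private key offline).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

cert_info = b"subject=www.example.com; pubkey=...; issuer=DemoCA; expires=..."

# "Issuing": the CA hashes the certificate info and signs the digest with its private key.
signature = ca_key.sign(cert_info, padding.PKCS1v15(), hashes.SHA256())

# "Validating": the client checks the signature with the CA's public key.
# Any tampering with cert_info makes verify() raise InvalidSignature.
ca_key.public_key().verify(signature, cert_info, padding.PKCS1v15(), hashes.SHA256())
print("certificate signature OK")
```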

What does stateless mean here? It refers to the state of the client: literally, the server keeps no information about the client within the HTTP protocol itself.

For example, when a browser sends a request to the server for the first time, the server responds; if the same browser sends a second request, the server still responds, but it has no idea that you are the same browser as before.

So what is the way to record the status?

There are two main ways, Session and Cookie.

Let’s take a look at what Session and cookies are:

What exactly is the difference between Session and Cookie?

The storage locations differ: Cookies are saved on the client side, while Sessions are saved on the server side.

The data types stored differ: a Cookie can only hold ASCII strings, while a Session can store any data type. Generally, we keep commonly used variables such as UserId in the Session.

The validity periods differ: Cookies can be set to persist for a long time, as with the default-login feature we often use; Sessions are generally valid for a short time and expire when the client closes or the session times out.

The privacy implications differ: Cookies are stored on the client and are more vulnerable to illegal access; in the early days, storing the user's login name and password in Cookies led to information theft. Sessions are stored on the server, so they are more secure than Cookies.

The storage sizes differ: a single Cookie cannot hold more than 4 KB of data, while a Session can store far more.

What is the relationship between Session and cookies?

A cookie can be used to record the identity of the session.

How to deal with Sessions in a distributed environment?

In a distributed environment, client requests pass through load balancing and may be dispatched to different servers. If a user's two requests do not land on the same server, the new server has no Session recording that user's state.

What to do then?

Sessions can be stored in a distributed cache such as Redis, which is shared across all the servers.
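A minimal sketch of that idea, assuming the third-party redis package and a Redis instance on localhost:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    # Any app server behind the load balancer can read this entry.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc123", {"user_id": 42})
print(load_session("abc123"))
```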

What if the client cannot use cookies?

It is possible that the client cannot use Cookies, for example when the browser disables Cookies, or when the client is Android, iOS, and so on.

What to do then? How to save the SessionID? How to pass it on to the server?

The first is the storage of the SessionID, which can use the client’s local storage, such as the browser’s sessionStorage.

What's next? The SessionID then has to be carried to the server on every request, for example as a URL parameter or in a custom request header.

PS: The TCP three-way handshake is the most important knowledge point; you must know it inside out, because it is a question that hands out free points.

TCP provides a connection-oriented service: a connection must be established before data can be transferred, and a TCP connection is established through a three-way handshake.
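From application code you never see the three packets; the kernel performs them inside connect(). A minimal sketch:

```python
import socket

# connect() sends SYN, waits for SYN+ACK, replies ACK, and only then returns,
# leaving the socket in the ESTABLISHED state.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("www.baidu.com", 80))
print("connected to", client.getpeername())
client.close()
```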

The process of three handshakes:

TCP three-way handshake popular analogy:

In the countryside twenty years ago, the telephone was not popular, let alone the mobile phone, so communication basically relied on roaring.

Lao Zhang and Lao Wang were neighbors. One day Lao Zhang went out to the fields, and then something came up at home, so the enthusiastic neighbor Lao Wang quickly ran to the village entrance and started shouting for Lao Zhang.

Lao Wang: Lao Zhang! I'm Lao Wang, can you hear me?

Lao Zhang listened; it was Lao Wang's voice: Lao Wang, Lao Wang, I'm Lao Zhang. I can hear you, can you hear me?

Lao Wang listened; yes, that was Lao Zhang: Lao Zhang, I can hear you. I have something to tell you.

"Your wife is about to give birth, hurry home!"

Lao Zhang rushed home in a flurry, and his wife successfully gave birth to a big fat boy. The handshake story is full of joy and contentment.

Why can’t it be twice?

Network transmission has delays (packets pass through fiber and various intermediate proxy servers). Suppose, during transmission, the client initiates the first handshake by sending SYN=1.

If the server created the connection directly and returned a packet containing SYN, ACK, Seq and so on to the client, but that packet was lost in transit, the client would never receive the server's reply.

Without a third handshake telling the server that the client received its transmission, the server has no way of knowing whether the client got the information it returned.

The server would consider the connection usable and keep the port open, and when the client re-sent its request after a timeout, the server would open yet another port for it. Many invalid ports would thus stay open in vain, wasting resources.

There is another case: a connection request from the client that has already expired is, for some reason, delivered to the server late; the server takes it for a valid new request from the client, accepts it, and an error results.

So we need a “third handshake” to confirm the process:

The third handshake tells the server whether the client received the data the server sent in the second handshake, and whether the connection's sequence numbers are valid. If the message sent says "received, no problem", the server establishes the TCP connection normally on receiving it; otherwise the connection fails and the server closes the port. This reduces server overhead and the errors caused by accepting invalid requests.

Why not four times?

Simply put, three handshakes are enough to create a reliable connection; one more handshake would only mean more time spent establishing it.

The first handshake: the server does not receive the SYN packet

The server takes no action. The client, receiving no acknowledgment from the server for a while, waits and then resends the SYN packet; if there is still no response, this repeats until the number of sends exceeds the maximum retransmission count, at which point connection establishment fails.

The second handshake: the client does not receive the server's SYN+ACK packet

The client keeps retransmitting the SYN until its retry limit is reached; meanwhile the server blocks in accept(), waiting for the client's ACK packet.

The third handshake: the server does not receive the client's ACK packet

The server also uses a timeout retransmission mechanism similar to the client's; if its retries exceed the limit, the accept() call returns -1 and the server fails to establish the connection. At this point the client believes it has connected successfully and starts sending data, but the server's accept() has already returned and the connection is no longer being set up, so when the server receives the client's data it replies with RST packets, clearing the client's unilaterally established connection.

The ACK is there to tell the client that the data it sent was received without error.

The SYN sent back tells the client that the server is also establishing the connection in response to the client's message, and synchronizes the server's initial sequence number.

The third handshake may carry data.

At this point the client is already in the ESTABLISHED state. For the client, it has established a successful connection and confirms that the receive and send capabilities of the server are normal.

The first handshake cannot carry data for security reasons: if it could, an attacker could stuff a large amount of data into every SYN packet, forcing the server to spend more time and space processing these packets and burning CPU and memory.

What is a semi-connected queue?

Before the TCP three-way handshake begins, the server moves from the CLOSED state to the LISTEN state and internally creates two queues: the half-connection queue (SYN queue) and the full-connection queue (accept queue).

As the names suggest, the half-connection queue holds connections whose three-way handshake is not yet complete, and the full-connection queue holds connections that have completed the three-way handshake.

What is SYN Flood?

SYN Flood is a typical DDoS attack: in a short period it spoofs nonexistent IP addresses and sends a large number of SYN packets to the server. When the server replies with SYN+ACK, it never receives the ACK responses, so those connections never leave the SYN queue; over time they fill the server's SYN (half-connection) queue, making the server unable to serve normal users.

So what’s the solution?

The main defenses are SYN cookies and SYN proxy firewalls.

SYN cookie: after receiving a SYN packet, the server computes a cookie value from the packet's source address, port, and other information using a fixed method, and uses it as the sequence number of its own SYN+ACK packet. It replies with SYN+ACK without immediately allocating resources. When the sender's ACK packet arrives, the server recomputes, from that packet's source address and port, whether its acknowledgment number is correct; if so, the connection is established, otherwise the packet is discarded.

SYN proxy firewall: the server-side firewall proxies and responds to every SYN packet it receives and maintains the half-connection itself. Only after the sender returns the ACK does the firewall reconstruct the SYN and send it to the server to establish the real TCP connection.
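On Linux, SYN cookies and the queue sizes are exposed as kernel parameters; commonly tuned values look like this (the numbers are illustrative):

```
# /etc/sysctl.conf
net.ipv4.tcp_syncookies = 1          # answer with SYN cookies when the SYN queue overflows
net.ipv4.tcp_max_syn_backlog = 8192  # enlarge the half-connection (SYN) queue
net.ipv4.tcp_synack_retries = 2      # give up sooner on unanswered SYN+ACKs
```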

PS: After asking about the three-way handshake, interviewers often follow up with the four-way wave, so you must master those points too.

TCP Four Waving Process:

After data transfer ends, either party to the communication can initiate a disconnect request; assume here that the client initiates it:

The client sends a connection-release packet, the first wave (FIN=1, seq=u), and after sending it the client enters the FIN_WAIT_1 state.

The server sends an acknowledgment packet, the second wave (ACK=1, ack=u+1, seq=v); after sending it, the server enters the CLOSE_WAIT state, and the client, on receiving the acknowledgment, enters the FIN_WAIT_2 state.

The server sends a connection-release packet, the third wave (FIN=1, ACK=1, seq=w, ack=u+1); after sending it, the server enters the LAST_ACK state and waits for the client's final ACK.

The client sends an acknowledgment packet, the fourth wave (ACK=1, seq=u+1, ack=w+1). On receiving the server's close request, the client sends this acknowledgment, enters the TIME_WAIT state, and waits a fixed period (2MSL, twice the Maximum Segment Lifetime). If it receives no retransmitted FIN from the server during that time, it concludes the server has closed normally, closes its own side, and enters the CLOSED state. The server, on receiving the acknowledgment packet, closes the connection and enters the CLOSED state.

The big vernacular waved four times:

Suppose this single-dog blogger had a girlfriend: because the blogger worked all day and then ground away at blog posts, he had no time to keep her company, and she finally couldn't take it anymore.

The silly blogger carefully packed up his beloved blue-switch mechanical keyboard.

The story of the four-way wave is always full of sadness and regret!

Looking back at the process above, in which each side sends its own FIN during the four waves, you can see why four steps are needed.

As the process shows, the server usually has to wait until it finishes sending and processing its remaining data, so its ACK and FIN are generally sent separately, which adds one step compared with the three-way handshake.

Why wait?

1. To ensure that the last ACK packet sent by the client can reach the server. This ACK packet may be lost, so that the server in the LAST-ACK state will not receive confirmation of the FIN + ACK packet that has been sent. The server will time out to retransmit this FIN+ACK packet, and the client can receive this retransmitted FIN+ACK packet within 2MSL time (timeout + 1MSL transmission). The client then retransmits the acknowledgment and restarts the 2MSL timer. Finally, both the client and the server normally enter the CLOSED state.

2. Prevent invalidated connection request packets from appearing in this connection. After the client sends the last ACK packet segment, and then elapses 2 MSL, all the packet segments generated during the duration of the connection can disappear from the network. This makes it possible that this old connection request message segment will not appear in the next connection.

Why is the waiting time 2MSL?

MSL (Maximum Segment Lifetime) is the maximum time any packet may exist on the network; beyond that, the packet is discarded.

A reasonable explanation for waiting 2 MSL in TIME_WAIT: packets from the sender may still be in the network, and when the receiver processes them it will send responses back, so we must allow one MSL each way, i.e. twice the MSL.

For example, if the passive closer does not receive the final ACK of the disconnect, it will time out and resend its FIN; when the other side receives that FIN, it resends the ACK to the passive closer. One such round trip is exactly 2 MSL.

In addition to the time wait timer, TCP has a keepalive timer.

Imagine a scenario: a client has actively established a TCP connection to the server, but then the client's host suddenly fails. Obviously the server can never receive data from that client again, so there must be a measure to stop the server from waiting in vain. This is what the keepalive timer is for.

Each time the server receives data from the client, it resets the keepalive timer, typically to two hours. If no data arrives from the client for two hours, the server sends a probe packet, then another every 75 seconds. If there is still no response from the client after 10 consecutive probes, the server considers the client to have failed and closes the connection.

What does the CLOSE-WAIT state mean?

After the server receives the client's request to close the connection and acknowledges it, it enters the CLOSE-WAIT state. At this point the server may still have data left to send, so it cannot close the connection immediately; the CLOSE-WAIT state ensures the server finishes sending its pending data before closing the connection.

What does TIME-WAIT mean?

The TIME-WAIT state occurs at the fourth wave: after the client sends the ACK acknowledgment packet to the server, it enters the TIME-WAIT state.

The significance of its existence is mainly two:

Prevents packets for old connections

If the client closed the connection immediately after receiving the server's FIN packet while the corresponding port on the server was not yet closed, then a new connection the client establishes on the same port might receive leftover packets from the old connection, causing unexpected anomalies.

Ensure that the connection is closed correctly

Suppose the last ACK packet the client sends is lost in transit. Because of TCP's timeout retransmission mechanism, the server will retransmit its FIN packet. If the client did not hold the TIME-WAIT state and closed directly, then on receiving the retransmitted FIN it would reply with an RST packet, which the server would interpret as an error even though the close was actually proceeding normally.

What can I do with too many TIME_WAIT states?

If a server has many TCP connections in the TIME-WAIT state, it means the server side initiated the disconnections.

There are two main hazards to too much TIME-WAIT status:

The first is memory resource consumption;

The second is the occupation of port resources, a TCP connection consumes at least one local port;

How to solve the problem of too many TIME_WAIT states?
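On the server side, the commonly cited remedies are Linux kernel knobs plus SO_REUSEADDR in the application. A hedged sketch of the sysctl side:

```
# Linux knobs often cited for hosts with many TIME_WAIT sockets
net.ipv4.tcp_tw_reuse = 1             # reuse TIME_WAIT sockets for new outgoing connections
                                      # (only takes effect with tcp_timestamps = 1)
net.ipv4.tcp_max_tw_buckets = 180000  # cap the total number of TIME_WAIT sockets
```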

First, take a look at the format of the TCP header:

16-bit port numbers: the source port tells which process on the host the segment comes from; the destination port tells which upper-layer protocol or application it should be delivered to.

32-bit sequence number: numbers each byte of the byte stream in one transmission direction during a TCP communication (from connection establishment to teardown).

32-bit acknowledgment number: used as the response to a TCP segment sent by the other side. Its value is the sequence number of the received TCP segment plus 1.

4-bit header length: indicates how many 32-bit words (4 bytes each) the TCP header contains. Since 4 bits can represent at most 15, the TCP header is at most 60 bytes long.

6-bit flags: URG (whether the urgent pointer is valid), ACK (whether the acknowledgment number is valid), PSH (prompts the receiver to hand the data to the application as soon as possible), RST (asks the other side to re-establish the connection), SYN (marks a connection-establishment segment), FIN (tells the other side the connection is being closed).

16-bit window size: a means of TCP flow control. The window here is the advertised receive window: it tells the other side how many bytes of space remain in this side's TCP receive buffer, so the other side can control how fast it sends data.

16-bit checksum: filled in by the sender; the receiver recomputes the checksum over the TCP segment to verify whether it was corrupted in transit. Note that the check covers not only the TCP header but also the data portion. This is an important part of TCP's reliable transmission.

16-bit urgent pointer: a positive offset. Added to the value of the sequence number field, it gives the sequence number of the byte after the last byte of urgent data. Strictly speaking, this field is the offset of the urgent data from the current sequence number, which may be called the urgent offset. TCP's urgent pointer is a way for the sender to deliver urgent data to the receiver.
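To make the layout concrete, here is a sketch that unpacks the fixed 20-byte header from raw bytes (field order as described above):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    # All fields are big-endian ("network order"); the fixed header is 20 bytes.
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "header_len": (offset_flags >> 12) * 4,  # 4-bit data offset, counted in 32-bit words
        "flags": offset_flags & 0x3F,            # URG / ACK / PSH / RST / SYN / FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }
```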

TCP mainly relies on checksums, sequence numbers/acknowledgments, timeout retransmission, the maximum segment size, sliding-window control, and similar mechanisms to achieve reliable transmission.

Connection Management: TCP uses three handshakes and four waves to ensure reliable connections are established and released, so needless to say.

Checksum: TCP maintains a checksum over its header and data. This is an end-to-end check whose purpose is to detect any change to the data in transit. If the receiver detects a checksum error, TCP discards the segment and does not acknowledge it.

TCP provides a mechanism for the sender to control the amount of data sent based on the actual reception capacity of the receiver, which is flow control.

TCP controls traffic through sliding windows, let’s take a look at the brief flow:

If TCP sent one piece of data and had to wait for its acknowledgment before sending the next, there would be a drawback: efficiency would be low.

“To use an analogy: when we chat on WeChat, you type a sentence, I reply, and then you type the next one. What if I don't reply in time? Do you hold your words in and stupidly wait until I reply before sending the next sentence?”

To solve this problem, TCP introduces a window, which is a cache space opened up by the operating system. The window size value represents the maximum value at which data can continue to be sent without waiting for an acknowledgment reply.

There is a field in the TCP header called win, that is, the 16-bit window size, which tells the other party how many bytes of data can be accommodated in the TCP receive buffer on the other side, so that the other party can control the speed of sending data, so as to achieve the purpose of flow control.

“In layman's terms: every time the receiver receives a packet, when it sends the acknowledgment it tells the sender how much free space is left in its receive buffer. That free buffer space is what we call the receive window size, and that is the win field.”

TCP sliding windows are divided into two types: the send window and the receive window. The sliding window on the sender side contains four main parts, as follows:

In the dark blue box is the send window.

SND.WND: the size of the send window; the dashed box in the figure above contains 10 cells, i.e. the send window size is 10.

SND.NXT: the next send position, pointing to the sequence number of the first byte that has not been sent but may be sent.

SND.UNA: an absolute pointer to the sequence number of the first byte that has been sent but not yet acknowledged.

The receiver’s sliding window consists of three main sections, as follows:

What does Nagle algorithm and delayed acknowledgment do?

When a TCP segment carries very little data, say a few bytes, overall network efficiency is very low, because every segment still carries a 20-byte TCP header and a 20-byte IP header; with only a few bytes of data, the proportion of useful data in the whole packet is tiny.

It's like a courier driving a big truck to deliver one small parcel.

So there are two strategies to reduce the transmission of these small segments, namely:

Nagle algorithm

Nagle algorithm: at any time there can be at most one unacknowledged small segment in flight. A "small segment" is a data block smaller than the MSS; "unacknowledged" means that after the block was sent, no ACK confirming its receipt has come back from the other side.

Strategy for the Nagle algorithm:

As long as neither of the above conditions is met, the sender keeps hoarding data until one of the sending conditions is satisfied.

Delay the acknowledgement

In fact, an ACK that carries no data is also very inefficient for the network, because it still costs 40 bytes of IP and TCP headers while carrying no payload.

In order to solve the problem of inefficient ACK transmission, TCP delay acknowledgment is derived.

Policy for TCP Delay Acknowledgment:

When there is response data to send, the ACK is immediately sent to the other party along with the response data

When there is no response data to send, the ACK is delayed for a period of time, waiting to see whether response data appears that it can travel with

If, while waiting for the ACK to be sent, the second data packet of the other party arrives again, the ACK is sent immediately

In general, the Nagle algorithm and delayed acknowledgment should not be used together: Nagle means delaying the send, delayed acknowledgment means delaying the reply, and combining the two produces even larger delays and causes performance problems.
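Both behaviors can be switched off per socket when latency matters more than throughput. A sketch (TCP_QUICKACK is Linux-only):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable the Nagle algorithm for latency-sensitive traffic (games, RPC, etc.):
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Disable delayed ACK for upcoming ACKs on this socket (Linux only):
if hasattr(socket, "TCP_QUICKACK"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
```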

What is congestion control? Isn’t there flow control?

The flow control above prevents the sender's data from overfilling the receiver's buffer, but the sender knows nothing about what is happening in the network in between.

In general, computer networks are in a shared environment. Therefore, it is also possible that the network is congested due to communication between other hosts.

When the network is congested, continuing to send large numbers of packets may cause packet delay and loss. TCP then retransmits the data, but retransmission puts an even heavier burden on the network, leading to greater delays and more packet loss, and the situation spirals into an ever-worsening vicious circle...

So TCP cannot ignore what is happening across the whole network. It is designed as a selfless protocol: when the network becomes congested, TCP sacrifices itself and reduces the amount of data it sends.

Thus, there is congestion control, the purpose of which is to prevent the sender’s data from filling the entire network.

Just like a water pipe, you can’t let too much water (data flow) flow into the water pipe, if you exceed the capacity of the water pipe, the water pipe will be burst (lost packet).

The sender maintains a variable for the congestion window, cwnd(congestion window), to adjust the amount of data to be sent.

What is a congestion window? What does it have to do with the send window?

The congestion window cwnd is a state variable maintained by the sender that changes dynamically depending on the congestion of the network.

Previously the send window swnd was taken to be roughly the receive window rwnd; now that the concept of a congestion window is added, the send window becomes swnd = min(cwnd, rwnd), the minimum of the congestion window and the receive window.

Rules for congestion window cwnd changes:

What are the common algorithms for congestion control?

There are several commonly used algorithms for congestion control:

Slow start

Congestion avoidance

Congestion occurs

Fast recovery

Slow start algorithm, start slowly.

It means that after the TCP connection is established, the sender does not blast out a large amount of data at first; instead it probes the network's congestion level, increasing the congestion window gradually from small to large. Concretely, as long as no loss occurs, for each ACK received the congestion window cwnd increases by 1 (in units of MSS). The send window thus doubles each round, growing exponentially; if loss occurs, the congestion window is halved and TCP enters the congestion avoidance phase.

For example:

The number of packets sent grows exponentially.

To prevent cwnd from growing so large that it congests the network, there is also a slow-start threshold, the ssthresh (slow start threshold) state variable. When cwnd reaches this threshold, it is like turning the tap down so less water flows, easing congestion: once cwnd > ssthresh, the congestion avoidance algorithm takes over.

In general, the slow-start threshold ssthresh is 65535 bytes. After cwnd reaches the slow-start threshold:

For each ACK received, cwnd = cwnd + 1/cwnd

For every RTT that passes, cwnd = cwnd + 1

Obviously this is a linear upward algorithm that avoids causing network congestion problems too quickly.

Following on from the slow-start example above, suppose ssthresh is 8:

When packet loss occurs from network congestion, there are two scenarios:

RTO timeout retransmission

Fast retransmission

If an RTO timeout retransmission occurs, the congestion generation algorithm is used

This approach is like slamming on the brakes mid-race and then reversing at full speed; it is rather drastic...

In fact there is a gentler way to handle it: fast retransmission. When the sender receives 3 consecutive duplicate ACKs, it retransmits immediately, without waiting for the RTO to expire.

Congestion occurrence algorithm where fast retransmission occurs:

Congestion window size cwnd = cwnd/2

Slow start threshold ssthresh = cwnd

Enter the fast recovery algorithm

Fast retransmission and fast recovery are generally used together. The fast recovery algorithm takes the view that if 3 duplicate ACKs were still received, the network is not that bad, so there is no need to react as severely as for an RTO timeout.

As mentioned earlier, cwnd and ssthresh have already been updated before entering fast recovery:

– cwnd = cwnd/2

– ssthresh = cwnd

Then, enter the fast recovery algorithm as follows:
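Putting the phases together, here is a toy simulation of how cwnd evolves (in units of MSS, assuming ssthresh starts at 8; real TCP stacks are considerably more involved):

```python
# Toy model of cwnd evolution across the congestion-control phases.
cwnd, ssthresh = 1, 8

def on_rtt_without_loss():
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2            # slow start: exponential growth per RTT
    else:
        cwnd += 1            # congestion avoidance: linear growth per RTT

def on_triple_duplicate_ack():
    global cwnd, ssthresh
    cwnd //= 2               # fast retransmit happened: halve the window
    ssthresh = cwnd          # then fast recovery takes over from here

def on_rto_timeout():
    global cwnd, ssthresh
    ssthresh = cwnd // 2     # congestion occurrence: severe reaction
    cwnd = 1                 # back to slow start

for _ in range(6):
    on_rtt_without_loss()
    print(cwnd)              # 2, 4, 8, 9, 10, 11
```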

Retransmission comes in four forms: timeout retransmission, fast retransmission, retransmission with selective acknowledgment (SACK), and duplicate SACK (D-SACK).

Timeout retransmission is another important mechanism by which TCP ensures data reliability. The principle: after sending some data, start a timer; if no ACK for that data arrives within a certain period, resend it, and repeat until transmission succeeds.

What should the timeout be set to?

Let’s take a look at what RTT (Round-Trip Time) is.

RTT is the time from when data is sent to when its acknowledgment is received, that is, the round-trip time of a packet.

The timeout retransmission time is RTO (Retransmission Timeout). So, how big is the RTO setting?

If the RTO is set too large and lost data is not resent for a long time, that is clearly unacceptable.

If the RTO is set too small, data that was not actually lost may get retransmitted, adding network load and leading to a vicious circle of ever more timeouts.

In general, the RTO should be slightly larger than the RTT; that gives the best results.

In fact, RTO has a standard method of calculation, also called Jacobson/Karels algorithm.
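The usual form of the update (a standard reconstruction, consistent with the parameters cited below) is:

SRTT = SRTT + α × (RTT − SRTT)

DevRTT = (1 − β) × DevRTT + β × |RTT − SRTT|

RTO = μ × SRTT + ∂ × DevRTT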

Under Linux, α = 0.125, β = 0.25, μ = 1, ∂ = 4. Don’t ask how these parameters come from, they are the optimal parameters that are called up in a lot of practice.

Timeout retransmission is not a perfect retransmission scheme, it has these drawbacks:

Also, for TCP, if a timeout retransmission occurs, the interval doubles the next time.

TCP has another mechanism, Fast Retransmit, which is driven not by time but by data: it initiates retransmission based on feedback from the receiver.

It can be used to solve the time waiting problem of timeout retransmission, and the fast retransmission process is as follows:

In the image above, the sender sent out 1,2,3,4,5 data:

The first Seq1 was delivered first, so Ack back to 2;

As a result, Seq2 was not received for some reason, while Seq3 arrived, so the ACK was still 2;

Then Seq4 and Seq5 arrived, but the ACK stayed at 2, because Seq2 had still not arrived;

The sender received three Ack = 2 acknowledgments, and knowing that Seq2 had not yet been received, it retransmitted the lost Seq2 before the timer expired.

Finally, Seq2 was received, and because Seq3, Seq4, and Seq5 were all received, Ack returned to 6.

The fast retransmission mechanism solves only the timeout problem; another question remains: when retransmitting, should the sender resend just the missing segment, or everything after it?

For the example above: retransmit only Seq2, or retransmit Seq2, Seq3, Seq4, and Seq5 together? The sender cannot tell which segments triggered those three consecutive ACK 2s.

Depending on the implementation of TCP, both of these scenarios are possible. But this is a double-edged sword.

To solve the problem of not knowing which TCP segments to retransmit, and how many, TCP provides retransmission with Selective Acknowledgment (SACK).

The SACK mechanism: on top of fast retransmission, the receiver returns the sequence-number ranges of the segments it has recently received, so the sender knows exactly which segments the receiver is missing and therefore which ones to retransmit.

As shown in the figure above, when the sender receives three identical ACKs, fast retransmission is triggered; the SACK information shows that only the 200~299 range was lost, so on retransmission only that TCP segment is resent.

D-SACK (Duplicate SACK) is an extension built on SACK; its main purpose is to tell the sender which packets the receiver has received more than once.

The purpose of D-SACK is to help the sender determine whether packet reordering, ACK loss, packet duplication, or spurious retransmission has occurred, so that TCP can do better network flow control.

For example, packet duplication caused by ACK loss:


TCP's sticky packets and packet splitting are really more of an application-level concept!

What are TCP sticky packets and packet splitting?

TCP is stream-oriented: data is an unbounded stream of bytes. The TCP layer does not understand the meaning of the upper-layer business data; it divides segments according to the actual state of its buffers. So what the application regards as one complete message may be split by TCP into several packets, and several small messages may also be packed together into one large packet. These are the so-called TCP sticky-packet and packet-splitting problems.

Why do sticky packets and packet splitting occur?

If the data to be written is smaller than the TCP send buffer and TCP sends the contents of several writes out in one segment, sticky packets occur;

If the receiving side's application layer does not read data out of the receive buffer in time, sticky packets occur;

If the data to be sent is larger than the remaining space in the TCP send buffer, splitting occurs;

If the data to be sent is larger than the MSS (maximum segment size), TCP splits it before transmission, i.e. whenever TCP segment length minus TCP header length exceeds the MSS.

So how to solve it?
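The classic fixes are fixed-length messages, delimiters, or a length field. A minimal sketch of length-prefixed framing in Python:

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix every message with its 4-byte big-endian length.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:                  # recv() may return a partial chunk
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)      # read exactly one application message
```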

UDP doesn't get asked about much on its own; it is basically compared against TCP.

The most fundamental difference: TCP is connection-oriented, while UDP is connectionless.

A vivid description: TCP is a phone call, and UDP is a loudspeaker.

What about TCP and UDP scenarios?

PS: This is an old question from many years ago; consider it nostalgia.

To sum it up simply: UDP is a connectionless protocol; it is efficient, fast, uses few resources, and puts little pressure on the server. However, its transmission is unreliable, and it must rely on auxiliary algorithms for transmission control. The communication protocol QQ uses is mainly UDP, supplemented by TCP.

UDP does not need to establish a connection before transmitting data, and the remote host's transport layer does not acknowledge received UDP packets: it provides unreliable delivery. To sum up, four points:

More precisely, DNS uses both TCP and UDP.

TCP is used for zone transfers (when the primary name server transmits part of its data to secondary name servers), because the amount of data transferred is much larger than in a single request and reply, and TCP allows longer messages, so reliable, connection-based TCP is used to ensure the data is correct.

When a client queries a DNS server for a domain name (domain name resolution), the returned content generally does not exceed the maximum UDP packet length of 512 bytes. Transmitting over UDP requires no connection setup, which greatly improves response speed, but it requires the resolver and the name server to handle timeouts and retransmission themselves to ensure reliability.

What is the IP protocol?

The IP protocol (Internet Protocol) is a packet protocol that supports interconnection between networks. It works at the internet layer, and its main purpose is to improve the scalability of the network.

Through the Internet Protocol, interconnected networks of different kinds and capabilities can be treated as a single unified network.

Compared with transport-layer TCP, the IP protocol provides a connectionless, unreliable, best-effort packet delivery service; together with TCP, it forms the core of the TCP/IP protocol suite.

What does the IP protocol do?

The IP protocol mainly has the following functions:

What is the difference between a transport layer protocol and a network layer protocol?

The network layer protocol is responsible for providing logical communication between hosts; Transport layer protocols are responsible for providing logical communication between processes.

An IP address is unique across the whole Internet, and it is generally considered that IP address = {<network number>, <host number>}.

Network number: marks which network on the Internet the host's address belongs to.

Host number: marks which host within that network the address belongs to.

IP addresses are divided into five categories: A, B, C, D, E:

Class A address (1~126): Starts with 0, with the network number in the first 8 bits and the host number in the last 24 bits.

Class B address (128~191): Starts with 10, with the network number occupying the first 16 bits and the host number occupying the next 16 bits.

Class C address (192~223): Starts with 110, with the network number occupying the first 24 bits and the host number occupying the next 8 bits.

Class D address (224~239): Starts with 1110 and is reserved as a multicast address.

Class E addresses (240~255): start with 1111 and are reserved for future use.

If you have several nicknames, your friends can call you by any of them, but your ID number is unique. At the same time, your nickname may clash with someone else's; if you're not around when someone calls your nickname, someone else might answer.

A domain name can map to multiple IPs, but in that case DNS is doing load balancing; during a single user access, the domain name resolves to only one IP address.

An IP can correspond to multiple domain names, which is a one-to-many relationship.

We know that an IP address has 32 bits and can label 2^32 addresses. That sounds like a lot, but the number of networked devices in the world has long since exceeded it, so IPv4 addresses are no longer enough. How is this solved? Mainly through NAT, which lets many devices in a private network share one public address, and through migration to 128-bit IPv6 addresses.

The ARP protocol (Address Resolution Protocol) implements the mapping from IP addresses to MAC addresses.

First, each host maintains an ARP table in its own ARP cache, representing the correspondence between IP addresses and MAC addresses.

When a source host needs to send a packet to a destination host, it first checks whether its ARP table has a MAC address for the destination IP. If so, it sends the packet directly to that MAC address. If not, it broadcasts an ARP request to the local subnet, asking for the destination host's MAC address. The ARP request packet contains the source host's IP address, its hardware (MAC) address, and the destination host's IP address.

When the hosts on the network receive the ARP request, they check whether the destination IP in the packet matches their own. If not, they ignore the packet. If it does, the host first adds the sender's MAC and IP addresses to its own ARP table (overwriting any existing entry for that IP), then sends an ARP reply to the source host, telling it that it is the MAC address being sought.

After the source host receives the ARP reply, it adds the destination host's IP and MAC addresses to its ARP table and uses this information to start transmitting data. If the source host never receives an ARP reply, the ARP query has failed.

What do MAC addresses and IP addresses do?

Why do I need an IP address to have a MAC address?

If we used only MAC addresses for addressing, routers would need to remember which subnet every MAC address belongs to; otherwise, every time a router received a packet it would have to search the whole world for the destination MAC address. A MAC address is 48 bits long, i.e. there can be up to 2^48 MAC addresses, which would require each router to have 256 TB of memory, clearly unrealistic.

Unlike MAC addresses, IP addresses are tied to regions: devices in the same subnet are assigned IP addresses with the same prefix, so a router can tell from the prefix which subnet a device belongs to, and the remaining addressing is handled inside the subnet. This greatly reduces the memory routers need.

Why do I need a MAC address when I have an IP address?

A device can only be assigned an IP address, according to the subnet it joins, after it connects to the network. When a device does not yet have an IP address, or while an IP is being assigned, we need the MAC address to distinguish different devices.

An IP address can be compared to a mailing address, and a MAC address to the recipient's name; in a single communication, both are indispensable.

ICMP (Internet Control Message Protocol), the Internet Control Message Protocol.

The ICMP protocol is a connectionless protocol used to transmit error-report control messages.

It is a very important protocol, and it is extremely important for network security. It is a network layer protocol that is primarily used to pass control information between hosts and routers, including reporting errors, exchanging restricted control and status information, and so on.

ICMP messages are sent automatically when IP data cannot reach its destination, when an IP router cannot forward packets at the current transmission rate, and in similar situations.

For example, the ping we use more daily is based on ICMP.

Ping (Packet Internet Groper) is an Internet packet explorer, a program used to test network connectivity. Ping is a service command that works at the application layer of the TCP/IP architecture: it sends ICMP (Internet Control Message Protocol) echo request packets to a specific destination host to test whether that destination is reachable and to learn about its status.

In general, ping is used to detect network faults. It works on top of the ICMP protocol. Suppose machine A pings machine B; the process works as follows:
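Programmatically, raw ICMP sockets require root privileges, so a simple approach is to shell out to the system ping. A sketch:

```python
import platform
import subprocess

# The system ping sends ICMP echo requests and prints round-trip statistics.
count_flag = "-n" if platform.system() == "Windows" else "-c"
result = subprocess.run(
    ["ping", count_flag, "4", "www.baidu.com"],
    capture_output=True, text=True,
)
print(result.stdout)
```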

There are two main types of cybersecurity attacks, passive attacks and active attacks:

DNS hijacking, also called domain hijacking, is an attack that replaces the IP address a domain name originally resolves to, sending users to the wrong website or preventing them from accessing the site at all.

Domain hijacking is usually only possible within a specific network range; DNS servers outside that range return the correct IP addresses. An attacker may also impersonate the original domain registrant, modify the organization's domain registration information via e-mail, transfer the domain to another host, and store the new domain information on a designated DNS server, so that users resolving the original domain can no longer reach the intended address.

What are the steps of DNS hijacking?

How to deal with DNS hijacking?

What is a CSRF attack?

CSRF (Cross-Site Request Forgery) is an attack that coerces a user's browser into performing unintended actions on a web application in which the user is currently logged in.

How does the CSRF attack?

Let’s look at an example:

The user logs in to the bank and does not log out; the browser holds the user's authentication information for the bank.

The attacker forges a transfer request and embeds it in a forum post.

The user browses the post while the bank website's session is still logged in.

The forged transfer request is sent to the bank's website together with the authentication information.

The bank website sees valid authentication information, assumes this is a legitimate operation by the user, and in the end the user's funds are lost.

How do you deal with CSRF attacks?

Check the Referer field

The Referer field in the HTTP header records the source address of the HTTP request. Normally, requests for security-sensitive pages come from the same website; a hacker mounting a CSRF attack can usually only construct the request from his own site. Therefore, CSRF attacks can be defended against by validating the Referer value.

Add a validation token

Add a randomly generated token as a parameter in the HTTP request, and set up an interceptor on the server side to validate it; if a request carries no token or the token is wrong, it may be a CSRF attempt, and the request is rejected (see the sketch after this list).

Multiple checks for sensitive operations

For some sensitive operations, in addition to verifying the user's authentication information, you can also require confirmation by e-mail, a verification code, and so on for extra checking.
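A sketch of the token approach from the list above (names like SECRET_KEY and issue_csrf_token are hypothetical; real frameworks ship this built in):

```python
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical application secret

def issue_csrf_token(session_id: str) -> str:
    # Bind the token to the session so a stolen token can't be replayed elsewhere.
    return hmac.new(SECRET_KEY, session_id.encode(), "sha256").hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)  # constant-time comparison

token = issue_csrf_token("abc123")
assert verify_csrf_token("abc123", token)
assert not verify_csrf_token("abc123", secrets.token_hex(32))
```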

DoS (Denial of Service): any attack that causes a denial of service is called a DoS attack. The most common DoS attacks are network bandwidth attacks and connectivity attacks.

DDoS (Distributed Denial of Service): multiple attackers in different locations launch attacks on one or more targets simultaneously, or one attacker controls multiple machines in different locations and uses them to attack the victim at the same time.

The main forms are traffic attacks and resource-exhaustion attacks. Common DDoS attacks include SYN Flood, Ping of Death, ACK Flood, UDP Flood, and so on.

DRDoS (Distributed Reflection Denial of Service): the attacker sends large numbers of packets whose source IP is forged as the victim's address to reflector hosts, and the reflectors' replies then flood that source IP, producing a denial of service.

How to Protect Against DDoS?

For DDoS traffic attacks, the most direct approach is to increase bandwidth: in theory it only has to exceed the attack traffic, but this is very costly. Given sufficient bandwidth, we should also improve the configuration of routers, network cards, switches, and other hardware.

For resource-exhaustion attacks, we can upgrade the host's hardware so that, with enough network bandwidth, the server can withstand massive numbers of SYN attack packets, and we can install a professional anti-DDoS firewall to counter traffic attacks such as SYN Flood. Techniques such as load balancing and CDN can also be effective against DDoS attacks.

XSS attacks are also common. XSS stands for cross-site scripting; because its natural abbreviation would clash with Cascading Style Sheets (CSS), cross-site scripting attacks are abbreviated XSS. It refers to a malicious attacker inserting malicious HTML code into a web page; when a user browses the page, the embedded code is executed, achieving the attacker's malicious purpose against the user.

XSS attacks are generally divided into three types: stored, reflected, and DOM-based XSS.

How does XSS attack?

Simply put, XSS works by finding a way to "trick" the user's browser into executing front-end code that does not belong to the page itself.

Take the reflective type as an example, the flow chart is as follows:

How do I respond to XSS attacks?
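The core defense is to escape untrusted input before it reaches the page. A minimal sketch with Python's standard library:

```python
import html

user_input = '<script>alert("xss")</script>'

# Escaping turns markup characters into entities, so the browser renders
# the payload as text instead of executing it.
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```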

Symmetric encryption: the same key is used for encryption and decryption. The advantage is speed; the disadvantage is safely delivering the key to the other party. Common symmetric encryption algorithms: DES, AES, etc.

Asymmetric encryption: Refers to the use of different keys (i.e. public and private keys) for encryption and decryption. The public key and the private key exist in pairs, and if the data is encrypted with the public key, only the corresponding private key can be decrypted. A common asymmetric encryption algorithm is RSA.

RSA

It adopts asymmetric encryption: the public key encrypts and the private key decrypts. Its keys are generally long, and because the operations require multiplication and modular arithmetic on very large numbers, it is slow and not suitable for encrypting large data files.

AES

It uses symmetric encryption, with key lengths of at most 256 bits; encryption and decryption are fast and easy to implement in hardware. Because the encryption is symmetric, both communicating parties must know the key before data can be transmitted.
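A small symmetric-encryption sketch, assuming the third-party cryptography package (Fernet is an AES-based recipe it provides):

```python
from cryptography.fernet import Fernet

# Symmetric: the same key both encrypts and decrypts, so the hard part
# is delivering this key safely (which is where asymmetric crypto helps).
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"transfer 100 yuan to Zhang San")
print(f.decrypt(token))  # b'transfer 100 yuan to Zhang San'
```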


I've built a technical exchange group where people from big companies gather; discussion of technology, interviews, and life is in full swing, and I share high-quality technical material from time to time. Add me on WeChat at MageByte102 and I'll pull you into the group.