Since most computers now have multi-core CPUs, multithreading is generally faster than single threading and better at increasing concurrency. But increasing concurrency does not simply mean starting more threads: more threads means higher thread creation and destruction overhead and very frequent context switching, and in the end your program cannot sustain a higher TPS.
Multitasking systems often need to perform multiple jobs at once, and the number of jobs is usually greater than the number of CPUs in the machine. Yet a CPU can execute only one task at a time, so how do we make users feel that these tasks are running simultaneously? The designers of the operating system cleverly exploited time-slice rotation.
A time slice is the amount of time the CPU allocates to each individual task (thread).
“Think: Why do single-core CPUs also support multi-threading?”
The thread context refers to the contents of the CPU registers and the program counter at a given point in time. The CPU rotates through tasks (threads) using a time-slice allocation algorithm; because each time slice is very short, the CPU executes by constantly switching between threads.
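As a minimal sketch of time-slicing in action (the class and variable names below are illustrative, not from the article): a machine can run far more threads than it has cores, and all of them still complete, because the OS rotates the cores among the runnable threads.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class TimeSliceDemo {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        int threads = cores * 4;                  // deliberately oversubscribe the CPU
        CountDownLatch done = new CountDownLatch(threads);
        AtomicInteger finished = new AtomicInteger();

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                long sum = 0;
                for (int j = 0; j < 1_000_000; j++) sum += j; // CPU-bound busywork
                finished.incrementAndGet();
                done.countDown();
            }).start();
        }
        done.await();  // every thread ran to completion despite threads > cores
        System.out.println(finished.get() == threads);
    }
}
```

Each of these threads got the CPU only in slices, yet from the program's point of view they all "ran at the same time".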
In other words, switching is this frequent even on a single CPU; multi-core CPUs can reduce context switching to a certain extent.
In addition to processor cores, modern CPUs also include registers, L1/L2 caches, floating-point units, integer units and other auxiliary computing devices, as well as internal buses. A multi-core CPU places multiple processor cores on a single chip; in a multi-CPU system, by contrast, different threads of a program must communicate frequently over the external bus between CPUs, and must also deal with data inconsistencies caused by the separate caches of the different CPUs.
The concept of hyper-threading was proposed by Intel. Simply put, it lets a single CPU run two threads truly concurrently, because threads use the CPU's resources in a time-shared way: if thread A is using the processor core while thread B is using a cache or another device, A and B can execute concurrently; but if A and B need the same device, one must wait until the other finishes with it. This concurrency is achieved by adding a coordinating auxiliary unit to the CPU. According to figures published by Intel, such a unit increases die area by about 5% but improves performance by 15% to 30%.
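One observable consequence of hyper-threading (a sketch, assuming HT is enabled on the machine): the JVM reports *logical* processors, which on a hyper-threaded CPU is typically twice the physical core count.

```java
public class CpuInfo {
    public static void main(String[] args) {
        // availableProcessors() counts logical processors (hardware threads),
        // not physical cores; with hyper-threading these usually differ by 2x.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("logical processors: " + logical);
        System.out.println(logical >= 1);
    }
}
```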
Thread switching is a switch between two threads within the same process.
Before switching, the CPU saves the state of the current task so that the task's state can be loaded again the next time the CPU switches back to it; the CPU then loads the state of the next task and executes it. This saving and reloading of task state is called context switching.
Each thread has a program counter (which records the next instruction to execute), a set of registers (which hold the current thread's working variables), and a stack (which records the execution history: each frame holds a procedure that has been called but has not yet returned).
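The per-thread stack is directly observable from Java. The sketch below (method names are illustrative) prints the call frames of the current thread; these frames are exactly the kind of per-thread state that must be preserved across a context switch.

```java
public class StackDemo {
    static void a() { b(); }
    static void b() {
        // Each frame represents a method that has been called but not yet returned.
        StackTraceElement[] frames = Thread.currentThread().getStackTrace();
        for (StackTraceElement f : frames) {
            System.out.println(f.getMethodName());
        }
    }
    public static void main(String[] args) { a(); }
}
```

The output lists the pending calls of this thread only; another thread running concurrently would have its own, independent stack.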
Registers are small but fast storage inside the CPU (as opposed to the relatively slow RAM main memory outside it). Registers speed up program execution by providing quick access to commonly used values, typically the intermediate results of operations.
A program counter is a specialized register that indicates where the CPU is in the instruction sequence; it holds the address of the instruction being executed or of the next instruction to be executed.
Context switching incurs extra overhead, which often shows up as concurrent execution running slower than serial execution; reducing the number of context switches can therefore improve the efficiency of multithreaded programs.
On Linux systems, you can use the vmstat command to view the number of context switches; the cs column shows context switches per second (on an idle system this is generally around 1500 or fewer).
Preemptive scheduling means that both the execution time of each thread and the switching between threads are controlled by the system. Under the system's scheduling mechanism, each thread may be given equal time slices, some threads may be given longer time slices, and some threads may be given no time slice at all. Under this mechanism, one thread blocking does not cause the entire process to block.
Java uses preemptive scheduling for its threads. Java threads are allocated CPU time slices according to priority: the higher the priority, the earlier a thread tends to run. But a higher priority does not mean a thread monopolizes the execution time; it may simply receive more time slices. Conversely, a lower-priority thread receives less execution time, but it is not denied execution time altogether.
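This "hint, not guarantee" behavior of priority can be sketched as follows (thread names are illustrative): both the low- and high-priority threads finish; the lower priority does not starve the thread of CPU time.

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            long s = 0;
            for (int i = 0; i < 5_000_000; i++) s += i; // CPU-bound busywork
        };
        Thread low = new Thread(work, "low");
        Thread high = new Thread(work, "high");
        low.setPriority(Thread.MIN_PRIORITY);   // 1: a hint to the scheduler
        high.setPriority(Thread.MAX_PRIORITY);  // 10: more time slices, not a monopoly
        low.start(); high.start();
        low.join(); high.join();                // both complete regardless of priority
        System.out.println("both finished");
    }
}
```

How strongly priority influences scheduling is platform-dependent; on many systems the effect is small, which is why priority should never be used for correctness.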
Cooperative scheduling means that a thread, after it finishes executing, actively notifies the system to switch to another thread. This mode is like a relay race: one runner finishes their leg and hands the baton to the next, who keeps running. The execution time of a thread is controlled by the thread itself, so thread switching is predictable and there are no multithread synchronization problems. But it has a fatal weakness: if a badly written thread gets stuck partway through, it can bring down the entire system.
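The baton-passing idea can be imitated in Java with an explicit handoff (a sketch only; Java's own scheduler remains preemptive, and the names below are illustrative): the second thread cannot run its leg until the first one releases the "baton".

```java
public class RelayDemo {
    public static void main(String[] args) throws InterruptedException {
        Object baton = new Object();
        Thread second = new Thread(() -> {
            synchronized (baton) {          // blocks until main releases the monitor
                System.out.println("second runs");
            }
        });
        synchronized (baton) {
            second.start();
            System.out.println("first runs, then hands over");
        }                                   // leaving the block hands over the baton
        second.join();
    }
}
```

Note the weakness the article describes: if the first thread never left its synchronized block, the second thread would wait forever.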
After the time slice of the currently executing task (thread) is exhausted, the CPU normally schedules the next task.
Setting the number of threads appropriately can maximize CPU utilization while reducing the overhead of thread switching.
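A common sizing heuristic (an assumption on my part, not stated in the article) is to size a CPU-bound pool at roughly the core count, and an IO-bound pool larger, since waiting threads free up cores. A minimal sketch of the CPU-bound case:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    public static void main(String[] args) throws InterruptedException {
        // One worker per core: enough to keep every core busy,
        // few enough to keep context switching low.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < cores * 2; i++) {   // more tasks than threads is fine;
            cpuPool.submit(() -> {              // excess tasks queue instead of
                long s = 0;                     // spawning extra threads
                for (int j = 0; j < 1_000_000; j++) s += j;
            });
        }
        cpuPool.shutdown();
        System.out.println(cpuPool.awaitTermination(10, TimeUnit.SECONDS));
    }
}
```

The fixed pool decouples "how much work there is" from "how many threads exist", which is precisely how oversubscription and its switching overhead are avoided.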