As we all know, before MySQL 5.5 the default storage engine was MyISAM. From MySQL 5.5 onward, the default became InnoDB, which is now the usual choice when creating tables.
So, here’s the problem: how exactly does InnoDB organize data on disk, and how does it keep reads and writes fast? If you want the answers, read on.
Data is critically important to a system: users’ ID card numbers, phone numbers, bank account numbers, membership expiration dates, points, and so on. Losing it would seriously affect users.
So the question is: how do we ensure this important data is not lost?
Answer: Store the data on disk.
Of course, some people will say, what if the disk is broken?
Then you need backups, or master-slave replication…
Well, stop, that’s not the point of the day.
Back to the point.
As we all know, updating data on disk requires at least two I/O operations: one read and one write.
I/O requests are time-consuming, and frequent I/O inevitably hurts database performance.
So, how can you solve the performance problem of the database?
Storing data in registers?
That’s right, registers are the fastest storage there is, because they sit inside the CPU itself.
But registers have a fatal limitation: they hold only a tiny amount of data, and they are designed primarily to stage instructions and addresses, not to store large amounts of user data.
That leaves memory.
Memory also satisfies our need to read and write data quickly, and its performance is very impressive, only slightly slower than registers.
One annoying thing, though, is that memory is a much more expensive resource than disk. A 500 GB or 1 TB disk is commonplace, but have you ever heard of 500 GB of memory? People would think you’re crazy. Memory sizes are usually discussed in terms of 16 GB or 32 GB.
Memory can hold some of the user data, but not necessarily all of it: if the data set is too large, it simply won’t fit.
Besides, even if the user data does happen to fit in memory, what happens if one day the database server or its host crashes or restarts? Wouldn’t the data be lost?
So: how do we avoid losing data in such abnormal situations, while still keeping reads and writes fast?
The answer is to handle data in batches.
On a write, the data is first written to a batch in memory, and that whole batch is later flushed to disk in one go. This is shown in the following figure:
During a read, a whole batch of data is read from disk at once, loaded into memory, and then operated on in memory. This is shown in the following figure:
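To make the batching idea concrete, here is a minimal Python sketch, not InnoDB's actual code: `PageCache`, `load`, and `flush` are hypothetical names, and the "disk" is just a bytearray. The point it illustrates is that reads and writes always move whole fixed-size pages between disk and memory:

```python
# Simplified sketch (not InnoDB's real implementation): all I/O moves
# whole fixed-size pages rather than individual records.
PAGE_SIZE = 16 * 1024  # InnoDB's default data page size


class PageCache:
    """Toy buffer: pages are loaded from 'disk' on demand, modified
    in memory, then flushed back with one write per page."""

    def __init__(self, disk: bytearray):
        self.disk = disk
        self.pages = {}  # page_no -> in-memory copy of the page

    def load(self, page_no: int) -> bytearray:
        if page_no not in self.pages:  # one read I/O brings in a full page
            start = page_no * PAGE_SIZE
            self.pages[page_no] = bytearray(self.disk[start:start + PAGE_SIZE])
        return self.pages[page_no]

    def flush(self, page_no: int) -> None:  # one write I/O per page
        start = page_no * PAGE_SIZE
        self.disk[start:start + PAGE_SIZE] = self.pages[page_no]


disk = bytearray(4 * PAGE_SIZE)  # pretend disk holding 4 pages
cache = PageCache(disk)
page = cache.load(1)
page[0:5] = b"hello"  # operate on the in-memory copy
cache.flush(1)        # write the whole page back in one go
```

Real InnoDB manages this through its buffer pool, with far more machinery (dirty-page tracking, LRU eviction, background flushing); the sketch only shows the page-at-a-time granularity.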
Flushing data from memory to disk, and loading data from disk into memory, both happen in these batches, and such a batch is what we commonly call a data page.
Of course, there are many different types of pages in innodb, data pages are just one of them, and we will focus on data pages here.
So the question is, what is a data page?
Data pages mainly store the records of a table. On disk, data pages are connected in a doubly linked list, which makes it easy to navigate very quickly from one data page to another.
Often, because a table holds a lot of data, many data pages end up stored on disk.
One day, when we query by some condition and have to move from one data page to another, this doubly linked list comes in handy. The overall layout of data pages on disk is shown in the following figure:

By default, a single data page is 16 KB. You can change this with the innodb_page_size parameter (it can only be set when the data directory is first initialized), but in general the default value is sufficient.
Well, the overall structure of the data page is figured out.
So, what does a single data page contain?
As can be seen from the figure above, a data page mainly consists of the following parts:
- File Header
- Page Header
- Infimum + Supremum records
- User Records
- Free Space
- Page Directory
- File Trailer
For a newly allocated data page, the user records area is empty. As rows are inserted, InnoDB carves space out of the free space area and turns it into user records.
User records are the heart of InnoDB: the data we save to the database lives there. So what exactly is inside a user record? Aren’t you curious?
In fact, InnoDB supports four row formats: Redundant, Compact, Dynamic, and Compressed.
Let’s take the Compact row format as an example: a user record mainly consists of three parts: record extra information, hidden columns, and the real data.
Let’s take a look at these together.
The extra information is not user data itself; it exists to support storing the data.
Some data would be problematic if stored directly. For example, a varchar or text column has no fixed length; it varies with the data stored in it.
If the actual length were not recorded somewhere, InnoDB would not know how much space to allocate. And allocating a fixed maximum length for every value would waste space whenever the actual data is short.
Therefore, the number of bytes each variable-length field occupies is recorded in a variable-length field list, so that space can be allocated on demand.
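The idea can be sketched in a few lines of Python. This is a hypothetical illustration, not InnoDB's on-disk encoding (real Compact rows store the lengths in reverse column order and use 1 or 2 bytes per length; `encode_row` and `decode_row` are made-up names):

```python
# Hypothetical sketch of the variable-length field list: store each
# value's byte length up front, so space is allocated on demand rather
# than at a fixed maximum width.
def encode_row(values):
    """values: list of bytes objects for varchar columns.
    Returns (lengths, payload)."""
    lengths = [len(v) for v in values]  # the "variable-length field list"
    payload = b"".join(values)          # values packed back to back
    return lengths, payload


def decode_row(lengths, payload):
    """Recover the original values using the recorded lengths."""
    out, pos = [], 0
    for n in lengths:
        out.append(payload[pos:pos + n])
        pos += n
    return out


lengths, payload = encode_row([b"alice", b"shanghai"])
# lengths == [5, 8]; decode_row recovers the original column values
```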
Some columns in a table allow NULL values. If an actual NULL value were stored in every such field of every user record, it would obviously waste storage space.
Is there a way to simply mark a column as NULL without storing anything?
Answer: record it in a NULL values list.
In this list, a binary 1 means the column’s value is NULL, and 0 means it is not NULL. One bit per nullable column is enough to indicate whether its value is NULL, which indeed saves a lot of storage space.
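A toy version of this bitmap idea in Python (simplified: real InnoDB packs the bits in reverse column order and only includes nullable columns; `null_bitmap` and `is_null` are made-up names):

```python
# Toy NULL values list: one bit per column, 1 = value is NULL,
# 0 = value is present. Actual values of NULL columns are not stored.
def null_bitmap(row):
    """row: list of column values (None means NULL). Returns the bitmap."""
    bits = 0
    for i, value in enumerate(row):
        if value is None:
            bits |= 1 << i  # set bit i when column i is NULL
    return bits


def is_null(bits, i):
    """Check bit i of the bitmap."""
    return bool(bits >> i & 1)


bits = null_bitmap(["bob", None, 30, None])
# columns 1 and 3 are NULL, so bits == 0b1010
```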
The record header describes some special properties of the record. It mainly contains:
- delete_mask: whether the record has been deleted
- min_rec_mask: whether this is the smallest record on a non-leaf level of the B+ tree
- n_owned: how many records the page-directory slot pointing at this record owns
- heap_no: the record’s position in the page’s record heap
- record_type: the record’s type (ordinary record, node pointer, Infimum, or Supremum)
- next_record: the position of the next record
If the table has a primary key, it is used as the row id and no extra column is created. If there is no primary key but there is a NOT NULL unique key, that is used as the row id instead, again without creating anything extra.
Only if the table has neither a primary key nor such a unique key does the database automatically generate a hidden row id.
In other words, among InnoDB’s hidden columns, the transaction id and the rollback pointer are always created, while the row id depends on the situation.
The real data part stores the user’s actual data and can contain many columns. This part is straightforward, so there is not much to say.
Through the above introduction, everyone should have a certain understanding of how a user record is stored.
But here’s the question: how is one user record connected to another? How does InnoDB know which record follows a given record?
The answer: the next_record field in the record header mentioned earlier stores the position of the next record.
Through these next-record positions, multiple user records form a singly linked list, so all records can be found from front to back.
As you can see from the above, if there are multiple user records in a data page, they are connected by the location of the next record.
But there is a question: how can the largest and smallest records be found quickly?
For this, two more records are stored alongside the user records:
Supremum, a pseudo-record treated as larger than every user record, and
Infimum, a pseudo-record treated as smaller than every user record.
When user records are saved, the database automatically creates these two extra records, Supremum and Infimum. Their relationship is shown in the following figure:
As the figure shows, traversal starts from the Infimum (smallest) record and follows the next-record positions from small to large, step by step, until it reaches the Supremum (largest) record.
So to query for a record, the database would start from the smallest record and examine the records one by one. If the target is found along the way, it is returned immediately; if the largest record is reached without a match, null is returned.
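The record chain and the record-by-record lookup just described can be modeled in Python. This is a simplified sketch (`Rec`, `build_page`, and `find` are made-up names; real records link by byte offset within the page, not object references):

```python
# Simplified model of a page's record chain:
# Infimum -> user records in ascending key order -> Supremum,
# linked by a next-record pointer.
class Rec:
    def __init__(self, key):
        self.key = key
        self.next = None


def build_page(keys):
    """Build the chain for a set of user-record keys."""
    infimum, supremum = Rec("infimum"), Rec("supremum")
    prev = infimum
    for k in sorted(keys):  # user records are kept in key order
        prev.next = Rec(k)
        prev = prev.next
    prev.next = supremum    # the chain always ends at Supremum
    return infimum, supremum


def find(infimum, supremum, key):
    """Scan from the smallest record; return the match or None."""
    rec = infimum.next
    while rec is not supremum:
        if rec.key == key:
            return rec      # found partway through: return immediately
        rec = rec.next
    return None             # reached Supremum without a match


infimum, supremum = build_page([30, 10, 20])
find(infimum, supremum, 20)  # walks 10 -> 20 and stops
```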
At first glance, no problem.
But if you think about it.
Will the efficiency be a little low?
Isn’t this a full scan of the page’s user records?
Is there a more efficient way?
This requires the use of the page directory.
To put it bluntly, the user records in a page are divided into several groups, and the address of each group’s largest record is collected in one place: the page directory. Each of these recorded addresses is called a slot.
So the page directory consists of multiple slots, as shown in the following figure:
Suppose a page of data is divided into 4 groups; the page directory then has 4 slots, each storing the address of its group’s largest record.
This allows a binary search over the slots, comparing the record each slot points to with the target. If the target is smaller than the record in the current slot, search the earlier slots; if it is larger, search the later slots.
In this way, binary search quickly narrows the target down to one small group, which is then scanned.
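The slot lookup can be sketched as follows, a simplified model rather than InnoDB's actual layout (real slots hold byte offsets and groups vary between 4 and 8 records; `build_directory` and `lookup` are made-up names):

```python
# Page-directory sketch: records are split into groups, each slot
# remembers the largest key in its group. Binary search over the slots
# narrows the target to one small group, which is then scanned.
import bisect


def build_directory(sorted_keys, group_size=4):
    """Split sorted keys into fixed-size groups and record each max."""
    groups = [sorted_keys[i:i + group_size]
              for i in range(0, len(sorted_keys), group_size)]
    slots = [g[-1] for g in groups]  # each slot: the group's largest key
    return groups, slots


def lookup(groups, slots, key):
    """Binary-search the slots, then scan only the matching group."""
    i = bisect.bisect_left(slots, key)  # first slot with max >= key
    if i == len(slots):
        return False                    # larger than every record
    return key in groups[i]             # short scan within one group


groups, slots = build_directory(list(range(1, 21)))  # keys 1..20
lookup(groups, slots, 7)  # slot search picks the group [5, 6, 7, 8]
```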
With the next-record positions and the page directory described above, InnoDB can locate a record very quickly, but only if that record lives in the same data page.
What if there are many records and the target is not on the first data page, so we have to look in another page?
This is where you need to use the file header.
The file header contains quite a few fields, but I’ve listed only the 4 most critical ones:
- page number
- previous page number
- next page number
- page type
As the names suggest, InnoDB uses the page number, previous page number, and next page number to string different data pages together, as shown in the following figure:

Through the previous and next page numbers, the data pages form a doubly linked list. This makes it possible to find all the data from front to back, page by page.
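A tiny sketch of that linkage, purely illustrative (the page numbers 3, 7, 12 are made up, deliberately non-consecutive, since linked pages need not be adjacent on disk; `walk_forward` is a made-up name):

```python
# Toy file-header linkage: each page records its previous and next page
# numbers, forming a doubly linked list of pages.
pages = {
    3:  {"prev": None, "next": 7},
    7:  {"prev": 3,    "next": 12},
    12: {"prev": 7,    "next": None},
}


def walk_forward(pages, start):
    """Follow next-page numbers from 'start' to the end of the list."""
    order, page_no = [], start
    while page_no is not None:
        order.append(page_no)
        page_no = pages[page_no]["next"]
    return order


walk_forward(pages, 3)  # visits pages 3 -> 7 -> 12
```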
In addition, the page type is a very important field. It has many values; the better-known ones are data pages, index (directory entry) pages, overflow pages, and undo log pages.
As mentioned earlier, data is loaded into memory page by page, and updated data must be flushed back to disk.
But suppose the program hits an exception mid-flush one unlucky day, say the process is killed or the server restarts.
The page may then be only partially written. How can we tell whether the last flush completed?
This is what the file trailer is for.
It records the page’s checksum.
Before a page is flushed to disk, its checksum is calculated; whenever the data is updated, a new checksum is computed. The checksum is also recorded in the file header, and since the header sits at the front of the page, it is flushed to disk first.
If an exception occurs partway through flushing the user records, the checksum in the file trailer is still the old value. The database then sees that the trailer’s checksum does not match the header’s new value, which indicates the data page is incomplete.
With all the above, data pages can reference one another easily, but one important problem remains: the page’s status information.
For example: how many records does this page hold? How many slots does its page directory use? Is such information computed in real time, or computed in advance and stored somewhere?
For performance, of course, these statistics are computed ahead of time and stored in one place, to be read later when needed. That place is the page header.
Of course, the page header does not only store the number of slots, the number of records, and similar counters.
It also records things like the position of the first unused space, the amount of space held by deleted records, and the direction of recent inserts.
In fact, there are many more fields, and I will not list them all here.
Multiple data pages are linked by page numbers into a doubly linked list, and the rows within each data page form a singly linked list through their next-record positions. The overall architecture is shown in the following diagram:
Well, that’s it for this article first.
Reference: *How MySQL Works*.