Disclosure of Invention
The embodiments of the present application provide a data processing method, an apparatus, an electronic device, and a storage medium, which solve the problem of delays in processing data read-write tasks caused by excessively high CPU load in a multi-CPU device.
In a first aspect, an embodiment of the present application provides a data processing method implemented on a first CPU side, including:
the first CPU acquiring a first data read-write task;
when the load of the first CPU is greater than or equal to a first load threshold, determining, from at least one other CPU, a second CPU whose load is smaller than a second load threshold;
storing the first data read-write task in an asynchronous dispatch management structure of the second CPU, where the asynchronous dispatch management structure is used for storing data read-write tasks of the first CPU;
and waking up an asynchronous dispatch thread of the second CPU, so that the second CPU processes the first data read-write task.
In the data processing method provided by the embodiments of the present application, in an electronic device with a plurality of CPUs, when a first CPU acquires a first data read-write task, it first determines whether its load is greater than or equal to a first load threshold. If so, a second CPU whose load is smaller than a second load threshold is determined from the other CPUs, the first data read-write task is stored in an asynchronous dispatch management structure of the second CPU, and an asynchronous dispatch thread of the second CPU is woken up, so that the second CPU processes the first data read-write task based on the asynchronous dispatch thread. Because the asynchronous dispatch management structure, which stores the data read-write tasks of CPUs other than the second CPU, is established in advance in the second CPU, the data read-write tasks migrated from other CPUs to the second CPU are managed uniformly; and because the asynchronous dispatch thread for processing those migrated tasks is also established in advance for the second CPU, the second CPU can asynchronously dispatch the data read-write tasks migrated from other CPUs based on that thread. In this way, data read-write tasks on a high-load CPU are migrated to a low-load CPU for execution, the task volume on the high-load CPU is reduced, and the delay in processing data read-write tasks is reduced. Meanwhile, because its tasks are shared out, the high-load CPU can restore its load level as soon as possible, which alleviates the performance loss of the electronic device caused by the excessively high load of one or more CPUs and improves the overall performance of the electronic device.
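For illustration only, the CPU-selection step summarized above can be sketched as a userspace Python simulation. The threshold values and the helper name `pick_second_cpu` are hypothetical; a real implementation would read per-CPU load from the kernel rather than from a dictionary.

```python
# Hypothetical thresholds; the application only requires that the first CPU
# migrates when at/above its threshold and the target is below the second one.
FIRST_LOAD_THRESHOLD = 80   # load at/above which the first CPU migrates work
SECOND_LOAD_THRESHOLD = 50  # a candidate second CPU must be below this load

def pick_second_cpu(first_cpu, loads):
    """Return a CPU whose load is below SECOND_LOAD_THRESHOLD, or None.

    loads maps a CPU id to its current load; the first CPU itself is
    never selected as the migration target.
    """
    if loads[first_cpu] < FIRST_LOAD_THRESHOLD:
        return None  # first CPU is not overloaded: process the task locally
    candidates = [c for c, load in loads.items()
                  if c != first_cpu and load < SECOND_LOAD_THRESHOLD]
    # Prefer the least-loaded candidate so migrated work spreads evenly.
    return min(candidates, key=lambda c: loads[c], default=None)
```

Picking the least-loaded candidate is one reasonable tie-breaking policy; the text only requires that some CPU below the second load threshold be chosen.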
In one possible implementation manner, the acquiring the first data read-write task specifically includes:
acquiring an IO request generated by a target application, and generating a corresponding block device IO operation structure bio based on the IO request;
and the storing the first data read-write task in the asynchronous dispatch management structure of the second CPU specifically includes:
mounting the bio into the asynchronous dispatch management structure.
In the implementation manner, after the first CPU obtains the IO request generated by the target application and converts the IO request into a corresponding bio (block device IO operation structure), the first CPU mounts the bio to an asynchronous dispatch management structure of the second CPU for storage, so that storage of the bio corresponding to the IO request migrated by the first CPU is realized through the asynchronous dispatch management structure of the second CPU.
In one possible implementation manner, the asynchronous dispatch management structure includes linked lists corresponding to a plurality of other CPUs except the structure's own CPU;
and the mounting the bio into the asynchronous dispatch management structure specifically includes:
mounting the bio to a linked list corresponding to the first CPU in the asynchronous dispatch management structure.
In this implementation manner, the asynchronous dispatch management structure corresponding to the second CPU includes linked lists corresponding to a plurality of other CPUs except the second CPU, and the first CPU may mount the bio corresponding to the IO request to the linked list corresponding to the first CPU in the asynchronous dispatch management structure, so that storage of the bio corresponding to the IO request migrated by the first CPU is implemented through the linked list corresponding to the first CPU set in the asynchronous dispatch management structure of the second CPU.
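A minimal userspace sketch of this per-CPU list arrangement follows; the class and method names are hypothetical, and Python lists stand in for the kernel linked lists.

```python
class AsyncDispatchManager:
    """Illustrative stand-in for the asynchronous dispatch management
    structure: one list per source CPU other than the owning CPU."""

    def __init__(self, own_cpu, num_cpus):
        self.own_cpu = own_cpu
        # One linked list (here: a Python list) per CPU other than our own.
        self.lists = {cpu: [] for cpu in range(num_cpus) if cpu != own_cpu}

    def mount_bio(self, source_cpu, bio):
        # The migrating CPU appends its bio to the list keyed by its own id.
        self.lists[source_cpu].append(bio)

    def take_list(self, source_cpu):
        # The dispatch thread later detaches the whole list for processing.
        bios, self.lists[source_cpu] = self.lists[source_cpu], []
        return bios
```

Keying the lists by source CPU is what lets the second CPU manage tasks migrated from several different CPUs uniformly, as the paragraph above describes.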
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the other CPUs except the own CPU.
In this implementation manner, the asynchronous dispatch management structure corresponding to the second CPU may store, through the configured storage structure pointer array, the linked lists corresponding to the plurality of other CPUs except the second CPU itself.
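The pointer-array idea can be sketched as an array indexed by CPU id, with the owning CPU's slot left empty. This is illustrative only; in the kernel this would be an array of list-head pointers sized when the block device starts.

```python
def make_pointer_array(own_cpu, num_cpus):
    """Build an array whose slot i holds the linked list for CPU i.

    The owning CPU needs no slot of its own, so its entry stays None,
    mirroring a NULL pointer in a real pointer array.
    """
    return [None if cpu == own_cpu else [] for cpu in range(num_cpus)]
```

Indexing by CPU id makes the lookup "which list belongs to the migrating CPU" a constant-time array access.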
In one possible implementation manner, the asynchronous dispatch management structure and the storage structure pointer array are created when a target block device is started, where the target block device is the block device corresponding to the first data read-write task.
In one possible implementation manner, a second data read-write task issued by a first application running on the second CPU is processed by a first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
In this implementation manner, the first process is called by the second CPU to process the second data read-write task issued by the first application running on the second CPU, and the asynchronous dispatch thread and the first process are two different, independent processes, so that the second CPU processes in parallel its own data read-write tasks and the data read-write tasks migrated from the first CPU to the second CPU, improving the overall performance of the electronic device.
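The parallelism described above can be simulated in userspace with two independent workers, one draining the second CPU's own tasks and one draining migrated tasks. Thread and queue names are illustrative stand-ins for the first process and the asynchronous dispatch thread.

```python
import queue
import threading

def run_workers(own_tasks, migrated_tasks):
    """Drain two task sources with two independent workers in parallel."""
    done_own, done_migrated = [], []
    own_q, mig_q = queue.Queue(), queue.Queue()
    for t in own_tasks:
        own_q.put(t)
    for t in migrated_tasks:
        mig_q.put(t)

    def drain(q, out):
        # Each worker empties only its own queue, so there is no contention.
        while True:
            try:
                out.append(q.get_nowait())
            except queue.Empty:
                return

    first_process = threading.Thread(target=drain, args=(own_q, done_own))
    async_dispatch = threading.Thread(target=drain, args=(mig_q, done_migrated))
    first_process.start()
    async_dispatch.start()
    first_process.join()
    async_dispatch.join()
    return done_own, done_migrated
```

Because each worker owns its queue exclusively, neither blocks the other, which is the point the paragraph above makes about parallel processing.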
In a second aspect, an embodiment of the present application provides a data processing apparatus implemented on a first CPU side, including:
an acquisition module, configured for the first CPU to acquire a first data read-write task;
a determining module, configured to determine, from at least one other CPU, a second CPU whose load is smaller than a second load threshold when the load of the first CPU is greater than or equal to a first load threshold;
a sending module, configured to store the first data read-write task in an asynchronous dispatch management structure of the second CPU, where the asynchronous dispatch management structure is used for storing data read-write tasks of the first CPU;
and a wake-up module, configured to wake up an asynchronous dispatch thread of the second CPU, so that the second CPU processes the first data read-write task.
In one possible implementation manner, the obtaining module is specifically configured to obtain an IO request generated by a target application, and generate a corresponding block device IO operation structure bio based on the IO request;
and the sending module is specifically configured to mount the bio into the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure includes linked lists corresponding to a plurality of other CPUs except the structure's own CPU;
and the sending module is specifically configured to mount the bio to the linked list corresponding to the first CPU in the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the other CPUs except the own CPU.
In one possible implementation manner, the asynchronous dispatch management structure and the storage structure pointer array are created when a target block device is started, where the target block device is the block device corresponding to the first data read-write task.
In one possible implementation manner, a second data read-write task issued by a first application running on the second CPU is processed by a first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
In a third aspect, an embodiment of the present application provides a data processing method implemented on a second CPU side, including:
the second CPU receiving a first data read-write task sent by a first CPU, where the first data read-write task is stored by the first CPU in an asynchronous dispatch management structure of the second CPU, and the asynchronous dispatch management structure is used for storing data read-write tasks of the first CPU;
and if it is determined that an asynchronous dispatch thread is woken up, processing the first data read-write task based on the asynchronous dispatch thread.
In one possible implementation manner, the processing the first data read-write task based on the asynchronous dispatch thread specifically includes:
calling the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure, and issuing the first data read-write task to a target block device.
In one possible implementation manner, the receiving, by the second CPU, of the first data read-write task sent by the first CPU specifically includes:
the second CPU receiving a block device IO operation structure bio corresponding to an IO request sent by the first CPU, where the bio is mounted in the asynchronous dispatch management structure by the first CPU.
In one possible implementation manner, the asynchronous dispatch management structure includes linked lists corresponding to a plurality of other CPUs except the structure's own CPU, and the bio is mounted by the first CPU to the linked list corresponding to the first CPU in the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the other CPUs except the own CPU.
In one possible implementation manner, the calling the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure and issue it to a target block device specifically includes:
calling the created asynchronous dispatch thread to perform the following steps:
acquiring a linked list corresponding to the first CPU from the asynchronous dispatch management structure;
converting the bios contained in the linked list corresponding to the first CPU into requests;
issuing each request to a software staging queue corresponding to the second CPU;
acquiring the request from the software staging queue, and issuing the request to a hardware dispatch queue corresponding to the software staging queue;
and acquiring the request from the hardware dispatch queue, and issuing the request to the target block device, where the asynchronous dispatch thread is used for processing data read-write tasks migrated to the second CPU from other CPUs except the second CPU, a second data read-write task issued by a first application running on the second CPU is processed by a first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
In this implementation, an asynchronous dispatch thread for processing data read-write tasks migrated from other CPUs to the second CPU is established in advance for the second CPU. After the asynchronous dispatch thread is woken up, the second CPU can issue the first data read-write task migrated from the first CPU to the target block device by calling the asynchronous dispatch thread: a linked list corresponding to the first CPU is obtained from the asynchronous dispatch management structure; the bios contained in that linked list are converted into requests; each request is sent to the software staging queue corresponding to the second CPU for temporary storage; the request is then read from the software staging queue and sent to the hardware dispatch queue corresponding to that software staging queue; and finally the request is obtained from the hardware dispatch queue and issued to the target block device. In addition, the second data read-write task issued by the first application running on the second CPU is processed by the first process called by the second CPU, and the asynchronous dispatch thread and the first process are two different, independent processes, so that the second CPU processes in parallel its own data read-write tasks and the data read-write tasks migrated to it from other CPUs, improving the overall performance of the electronic device.
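The drain sequence the asynchronous dispatch thread performs can be sketched end to end as follows. All structures are simplified stand-ins (plain lists instead of kernel queues), and the 1:1 bio-to-request conversion is a simplification, since a real block layer may merge adjacent bios.

```python
def async_dispatch(mgmt_lists, first_cpu, device_log):
    """Simulate one pass of the dispatch thread over migrated bios."""
    # Step 1: detach the linked list that belongs to the first CPU.
    bios, mgmt_lists[first_cpu] = mgmt_lists[first_cpu], []
    # Step 2: convert each bio into a request (1:1 here for simplicity).
    software_queue = [("request", bio) for bio in bios]
    # Step 3: move requests from the software staging queue to the
    # hardware dispatch queue that the staging queue maps to.
    hardware_queue = []
    while software_queue:
        hardware_queue.append(software_queue.pop(0))
    # Step 4: issue each request from the hardware queue to the device.
    while hardware_queue:
        device_log.append(hardware_queue.pop(0))
    return device_log
```

Note that request order is preserved at every hop, matching the FIFO behaviour the two-layer queue mechanism implies.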
In one possible implementation, the asynchronous dispatch management structure, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
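Resource creation at block-device start-up can be sketched as below: the pointer array and the dispatch worker are set up once, and the worker sleeps until a migrating CPU wakes it. The function name and the use of a `threading.Event` as the wake-up signal are illustrative assumptions.

```python
import threading

def on_block_device_start(own_cpu, num_cpus):
    """Create the per-device resources once, when the device starts."""
    # Pointer array of per-CPU lists (the own CPU's slot stays unused).
    pointer_array = [None if c == own_cpu else [] for c in range(num_cpus)]
    wakeup = threading.Event()  # a migrating CPU sets this to wake the thread

    def dispatch_loop():
        wakeup.wait()  # sleep until woken; a real loop would then drain lists

    thread = threading.Thread(target=dispatch_loop, daemon=True)
    thread.start()
    return pointer_array, wakeup, thread
```

Creating everything at device start means the migration fast path never allocates, which is one plausible reason for tying the lifetime to the target block device.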
In a fourth aspect, an embodiment of the present application provides a data processing apparatus implemented on a second CPU side, including:
a receiving module, configured to receive a first data read-write task sent by a first CPU, where the first data read-write task is stored by the first CPU in an asynchronous dispatch management structure of a second CPU, and the asynchronous dispatch management structure is used for storing data read-write tasks of the first CPU;
and a processing module, configured to, if it is determined that an asynchronous dispatch thread is woken up, process the first data read-write task based on the asynchronous dispatch thread.
In one possible implementation manner, the processing module is specifically configured to invoke the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure, and issue the first data read-write task to a target block device.
In one possible implementation manner, the receiving module is specifically configured to receive a block device IO operation structure bio corresponding to an IO request sent by the first CPU, where the bio is mounted to the asynchronous dispatch management structure by the first CPU.
In one possible implementation manner, the asynchronous dispatch management structure includes linked lists corresponding to a plurality of other CPUs except the structure's own CPU, and the bio is mounted by the first CPU to the linked list corresponding to the first CPU in the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the other CPUs except the own CPU.
In one possible implementation manner, the processing module is specifically configured to call the created asynchronous dispatch thread to perform the following steps: acquiring a linked list corresponding to the first CPU from the asynchronous dispatch management structure; converting the bios contained in the linked list corresponding to the first CPU into requests; issuing each request to a software staging queue corresponding to the second CPU; acquiring the request from the software staging queue and issuing it to the hardware dispatch queue corresponding to the software staging queue; and acquiring the request from the hardware dispatch queue and issuing it to the target block device, where the asynchronous dispatch thread is used for processing data read-write tasks migrated to the second CPU from other CPUs except the second CPU, a second data read-write task issued by the first application running on the second CPU is processed by a first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
In one possible implementation, the asynchronous dispatch management structure, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
In a fifth aspect, an embodiment of the present application provides an electronic device, including:
the first CPU, configured to: acquire a first data read-write task; when the load of the first CPU is greater than or equal to a first load threshold, determine, from at least one other CPU, a second CPU whose load is smaller than a second load threshold; store the first data read-write task in an asynchronous dispatch management structure of the second CPU, where the asynchronous dispatch management structure is used for storing data read-write tasks of the first CPU; and wake up an asynchronous dispatch thread of the second CPU;
and the second CPU, configured to receive the first data read-write task sent by the first CPU and process the first data read-write task based on the asynchronous dispatch thread.
In one possible implementation manner, the first CPU is specifically configured to obtain an IO request generated by a target application, and generate a corresponding block device IO operation structure bio based on the IO request;
and the second CPU is specifically configured to invoke the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure, and issue the first data read-write task to a target block device.
In one possible implementation manner, the first CPU is specifically configured to mount the bio to the linked list corresponding to the first CPU in the asynchronous dispatch management structure, where the asynchronous dispatch management structure includes linked lists corresponding to a plurality of other CPUs except the second CPU itself.
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the other CPUs except the own CPU.
In one possible implementation manner, the second CPU is specifically configured to call the created asynchronous dispatch thread to perform the following steps:
acquiring a linked list corresponding to the first CPU from the asynchronous dispatch management structure;
converting the bios contained in the linked list corresponding to the first CPU into requests;
issuing each request to a software staging queue corresponding to the second CPU;
acquiring the request from the software staging queue, and issuing the request to a hardware dispatch queue corresponding to the software staging queue;
and acquiring the request from the hardware dispatch queue, and issuing the request to the target block device, where the asynchronous dispatch thread is used for processing data read-write tasks migrated to the second CPU from other CPUs except the second CPU, a second data read-write task issued by a first application running on the second CPU is processed by a first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
In one possible implementation, the asynchronous dispatch management structure, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method according to the present application.
For the technical effects achieved by any one of the second to sixth aspects described above, reference may be made to the description of any one of the possible designs of the first aspect, and the description is not repeated here.
Detailed Description
The embodiments of the present application provide a data processing method, a data processing apparatus, an electronic device, and a storage medium, which reduce the delay in processing data read-write tasks and improve the overall performance of the electronic device.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or similar expressions refers to any combination of these items, including a single item or any combination of plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
In the embodiments of the present application, descriptions such as "when...", "in the case of...", and "if" all mean that the electronic device performs the corresponding processing under some objective condition; they are not limited in time, do not require that the electronic device perform a judging action when implemented, and do not imply any other limitation.
In addition, it should be understood that in the description of the present application, the words "first," "second," and the like are used merely to distinguish between the described objects and should not be construed as indicating or implying a relative importance of the described objects or order. For example, the "first load threshold" and "second load threshold" are used to distinguish between different load thresholds, rather than describe a particular order or importance of load thresholds.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" should not be construed as being preferred over or more advantageous than other embodiments or designs. The use of the words "exemplary" or "for example" is intended to present the relevant concepts in a concrete manner.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
For ease of understanding, the technical terms involved in the present application are explained below:
The generic block layer (Generic Block Layer), also simply referred to as the block layer, is a kernel component that processes block device IO requests issued by other components of the operating system. The generic block layer contains generic functions and data structures for block device operations, such as the generic disk structure gendisk, the request queue structure request_queue, the request structure request, the block device IO operation structure bio, and the block device operation structure block_device_operations.
Multi-queue: in the present application, multi-queue refers to the multi-queue implementation with which the kernel processes block device IO requests. It is a block device IO scheduling mechanism of the generic block layer that improves the concurrency and responsiveness of the system's disk IO processing by supporting multiple IO scheduling queues. The mechanism was introduced with the advent of faster disk hardware (e.g., NVMe disks and solid-state disks) in order to fully exploit the capabilities of such hardware.
The software staging queues (Software Staging Queues), soft queues for short: the generic block layer allocates one soft queue to each CPU for temporarily storing IO requests submitted by users. A soft queue can perform IO sorting and merging, IO request tag processing, IO scheduling, IO statistics, and the like. Because each CPU has an independent soft queue, these IO operations can be performed on each CPU simultaneously without lock contention.
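One of the soft-queue functions named above, sorting and merging, can be sketched as follows. The `(start_sector, num_sectors)` representation is an illustrative simplification of a real IO request.

```python
def stage_and_merge(requests):
    """Sort requests by sector and merge ones that are adjacent on disk.

    requests: list of (start_sector, num_sectors) tuples.
    """
    merged = []
    for start, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == start:
            # Contiguous with the previous request: merge into one IO.
            prev_start, prev_len = merged[-1]
            merged[-1] = (prev_start, prev_len + length)
        else:
            merged.append((start, length))
    return merged
```

Merging contiguous requests before dispatch reduces the number of IOs the device must service, which is one reason staging queues exist at all.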
The hardware dispatch queues (Hardware Dispatch Queues), hard queues for short: the generic block layer allocates one hardware dispatch queue for each IO receiving buffer of the storage device, used for storing the IO requests dispatched to that buffer by the soft queues. When the storage device is initialized, one or more software staging queues are mapped to each hardware dispatch queue through a fixed mapping relation established during initialization (while ensuring that the number of software staging queues mapped to each hardware dispatch queue is basically consistent), and IO requests on the software staging queues are then dispatched to their corresponding hardware dispatch queues. After an IO enters a hardware dispatch queue, it is dispatched by that queue to the IO receiving buffer on the actual medium firmware.
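The fixed soft-to-hard mapping set up at initialization can be sketched with a simple round-robin rule; this is just one way to obtain the "basically consistent" distribution the text describes, not the kernel's actual mapping algorithm.

```python
def map_soft_to_hard(num_soft, num_hard):
    """Map each software staging queue to a hardware dispatch queue.

    Soft queue i is served by hardware queue (i % num_hard), so the
    number of soft queues per hard queue differs by at most one.
    """
    return {soft: soft % num_hard for soft in range(num_soft)}
```

For example, with 3 soft queues and 2 hard queues (as in fig. 1, where hctx0 serves ctx0 and ctx1 and hctx1 serves ctx2), a fixed mapping assigns each soft queue exactly one hard queue.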
Referring to fig. 1, which is a schematic diagram of an application scenario of a data processing method according to an embodiment of the present application, the electronic device 10 shown in fig. 1 includes a plurality of CPUs, CPU0 to CPUn (n+1 CPUs in total), a generic block layer 101, a block device driver 102, and a block device 103, where the CPUs may be physical CPUs or logical CPUs. The generic block layer 101 is a component in the Linux kernel of the electronic device 10; the multi-queue implementation of the generic block layer includes a two-layer queue mechanism consisting of software staging queues and hardware dispatch queues. The software staging queues are ctx0 to ctxn in fig. 1, and each CPU corresponds to one software staging queue ctx. The hardware dispatch queues are hctx0 to hctxm in fig. 1 (m+1 in total); their number is determined by the underlying hardware and may be 1 or more. The Linux kernel maintains a mapping table between software staging queues and hardware dispatch queues, where one hardware dispatch queue corresponds to one or more software staging queues. For example, in the architecture of fig. 1, hardware dispatch queue hctx0 corresponds to the two software staging queues ctx0 and ctx1, and hardware dispatch queue hctx1 corresponds to software staging queue ctx2; here the number of hardware dispatch queues, m+1, is smaller than the number of software staging queues, n+1.
An application running on a CPU initiates an IO request to the generic block layer 101 in the kernel. After the IO request enters the generic block layer, it is processed by two layers: the bio (Block I/O, block device IO operation structure) layer and the request layer; the positions of the bio layer and the request layer within the block layer are shown in fig. 2. The bio layer converts the IO request into a bio, i.e., the block device IO operation structure; the request layer sorts and/or merges bios to convert them into requests and sends each request to the software staging queue corresponding to the CPU; the software staging queue sends the request to its corresponding hardware dispatch queue; the hardware dispatch queue sends the request to the block device driver; and the block device driver issues the request to the block device (such as the hard disk in fig. 1).
"Filesystems" in fig. 2 refers to file systems; common file systems include ext3, ext4, xfs, and the like. "/dev/sda" is a disk device name common in Linux systems, used to represent a disk. The block layer is a logical concept that describes the original IO request. md (mapped device) is a logical device that the kernel exposes externally, a disk device virtualized by the dm (device mapper) framework. dm is a mechanism and implementation framework for mapping physical devices in Linux systems, realizing the mapping from physical devices to logical devices. RAID (Redundant Array of Independent Disks) is an implementation technique that achieves data protection by providing redundancy; RAID0, RAID1, and RAID5 represent different types of redundant arrays of independent disks. RAID0 requires at least two hard disks; the more disks, the faster the read/write speed, but with no redundancy; in fig. 2 it represents a corresponding block storage device under this technique. RAID1 uses exactly two hard disks whose data mirror each other (slow writes, fast reads), with one disk of redundancy; in fig. 2 it represents a corresponding block storage device under this technique. RAID5 requires at least 3 hard disks, with one disk of redundancy; it is the most popular configuration, a data storage mode with a parity-based data recovery function in which parity blocks are distributed across the hard disks of the array; in fig. 2 it represents a corresponding block storage device under this technique.
dm-crypt (device mapper crypto target) is a logical device mapped by the dm framework; device encryption is implemented through dm-crypt to improve data security, and in fig. 2 it represents a corresponding block storage device under this technique. dm-crypt is an encryption tool used on Linux operating systems that can encrypt a file system to protect data security. dm-snap (device mapper snapshot target) is mapped by the dm framework; device snapshots are implemented through dm-snap to improve data reliability, and in fig. 2 it represents a corresponding block storage device under this technique. dm-thin is a logical storage type implemented by dm, with which multiple virtual logical devices can be pooled together to reduce management cost; in fig. 2 it represents a corresponding block storage device under this technique. drbd (Distributed Replicated Block Device) is a highly available storage scheme that allows data to be mirrored synchronously between two physically separate servers; in fig. 2 it represents a corresponding block storage device under this technique. rbd (Reliable Autonomic Distributed Object Store Block Device) is a storage middle layer built on RADOS (Reliable Autonomic Distributed Object Store) clusters that provides block devices to clients and has characteristics such as snapshots, multiple copies, cloning, and consistency; in fig. 2 it represents a corresponding block storage device under this technique.
umem (user memory) is a memory buffer area that enables data transmission and reception without locking, and in this figure represents a corresponding virtual storage device in the implementation of this technology. bcache (block cache) is a caching technology that uses a solid state disk as a read-write cache, and in this figure represents a storage device corresponding to one block in the implementation of this technology. SCSI (Small Computer System Interface) is a data transfer interface protocol, which in this figure represents a disk supporting this protocol, commonly referred to as a SCSI disk. ATA (Advanced Technology Attachment) is a disk interface protocol, which in this figure represents a disk supporting this protocol, commonly referred to as an ATA disk. Floppy is a floppy disk, and ps3disk is a magnetic disk. NVMe (Non-Volatile Memory Express) is an interface protocol for solid state disks, and here represents a solid state disk supporting this protocol. nbd (Network Block Device) allows a user to access a disk device through the network, and here represents the corresponding disk type.
Block device 103 may be, but is not limited to, a disk, a solid state disk, or a CD-ROM, which is not limited in the embodiments of the application. The electronic device 10 may be a server including a Linux kernel, a terminal, or another device; the server may be an independent physical server, or may be a cloud server that provides basic cloud computing services such as cloud databases and cloud storage, which is not limited in the embodiments of the present application.
Based on the application scenario shown in FIG. 1, exemplary embodiments of the present application will be described in more detail with reference to FIGS. 3 to 12. It should be noted that the application scenario is shown only for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the application may be applied to any applicable scenario.
Fig. 3 is a schematic flow chart of an implementation of a data processing method according to an embodiment of the present application, where the data processing method may be applied to the electronic device 10 shown in fig. 1, and may include the following steps:
S21, the first CPU acquires a first data reading and writing task.
In one possible implementation manner, when a target application running on a first CPU in an electronic device needs to perform a data read/write operation on a target block device in the electronic device, the target application generates a first data read/write task and sends the first data read/write task to a task queue of the first CPU, and the first CPU reads the first data read/write task from the task queue. The first CPU is one of multiple CPUs of the electronic device, the target block device is the block device corresponding to the first data read/write task, and the first data read/write task is used to perform a read/write operation on data stored in the target block device. The target application may be, but is not limited to, a file system.
For example, the first data read-write task may be an IO request; the first CPU obtains the IO request generated by the target application and generates a corresponding bio (block device IO operation structure) based on the IO request.
The target application generates an IO request and sends the IO request to the task queue of the first CPU, and the first CPU reads the IO request from the task queue. The IO request contains sector position information of the data to be read or written in the target block device, where the sector position information includes initial sector position information and the number of contiguous sectors to read or write starting from the initial sector, together with an execution action indicating read or write. When the IO request is a data writing request, the IO request also contains the data to be written. The IO request may be submitted to the generic block layer.
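The fields an IO request carries, as described above, can be sketched as follows. This is a hedged illustration only: the struct and function names (`io_request`, `io_request_valid`) and the field layout are assumptions for exposition, not the kernel's actual `struct bio`.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of the IO request contents named in the text:
 * initial sector, number of contiguous sectors, and the execution action. */
enum io_action { IO_READ, IO_WRITE };

struct io_request {
    uint64_t start_sector;   /* initial sector position in the target block device */
    uint32_t nr_sectors;     /* contiguous sectors to read or write from the start */
    enum io_action action;   /* execution action: read or write */
    const void *write_data;  /* data to be written; present only for write requests */
};

/* A write request must carry the data to be written; a read carries none. */
bool io_request_valid(const struct io_request *req)
{
    if (req->nr_sectors == 0)
        return false;
    if (req->action == IO_WRITE)
        return req->write_data != NULL;
    return req->write_data == NULL;
}
```

A request built from this sketch would then be handed to the generic block layer and turned into a bio, as the text describes.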
In one possible implementation, the first CPU may also set an asynchronous dispatch flag for the bio, the asynchronous dispatch flag identifying that the request queue supports asynchronous dispatch. The bio may be migrated to other CPUs of the plurality of CPUs of the electronic device, except the first CPU, for asynchronous dispatch by the other CPUs.
S22, when the load of the first CPU is larger than or equal to a first load threshold, the first CPU determines a second CPU with the load smaller than a second load threshold from at least one other CPU.
It will be appreciated that the second load threshold here ensures that the second CPU is able to complete the first data read-write task. When the processing capabilities of the different CPUs on the same electronic device are the same, the second load threshold may be less than or equal to the first load threshold.
In one possible implementation, a first CPU obtains a load of at least one other CPU, other than the first CPU, of a plurality of CPUs of an electronic device.
For example, the electronic device may monitor the load of each of its CPUs in real time, for example by monitoring the utilization rate of each CPU in real time. The first CPU determines whether its current utilization rate is greater than the first load threshold; when the utilization rate of the first CPU is greater than the first load threshold, the first CPU obtains the utilization rate of at least one other CPU, other than the first CPU, among the plurality of CPUs of the electronic device. When the electronic device monitors the utilization rate of each CPU, the first load threshold may be set according to actual requirements, for example to 85% or 90%, or any other percentage value.
In one possible implementation manner, the first CPU may select, from at least one other CPU other than the first CPU among the plurality of CPUs of the electronic device, a second CPU whose utilization rate is smaller than the second load threshold, where the second load threshold may be set according to actual requirements, for example to 60% or 65%, or any other percentage value, which is not limited in the embodiment of the present application. If no CPU whose utilization rate is smaller than the second load threshold exists among the other CPUs, a CPU with a relatively low utilization rate may be selected as the second CPU; for example, the CPU with the lowest utilization rate may be selected.
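The selection rule of S22, including the fallback to the least-loaded CPU when no CPU is below the second load threshold, can be sketched as follows. The function name, the percentage representation of utilization, and the sentinel value are illustrative assumptions, not part of the described method.

```c
/* Illustrative sketch of S22: prefer any other CPU whose utilization rate
 * is below the second load threshold; if none qualifies, fall back to the
 * other CPU with the lowest utilization rate. */
#define NO_CPU (-1)

int pick_second_cpu(const int util[], int ncpu, int self, int second_threshold)
{
    int best = NO_CPU;
    for (int i = 0; i < ncpu; i++) {
        if (i == self)
            continue;
        if (util[i] < second_threshold)
            return i;                      /* first CPU found under the threshold */
        if (best == NO_CPU || util[i] < util[best])
            best = i;                      /* remember the least-loaded fallback */
    }
    return best;                           /* NO_CPU only if there is no other CPU */
}
```

With utilizations {90, 50, 70, 95} and a 60% threshold, CPU1 qualifies directly; with {90, 80, 70, 95}, no CPU is under the threshold and the least-loaded CPU2 is chosen.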
S23, the first CPU stores the first data read-write task in an asynchronous dispatch management structure of the second CPU.
When the operating system of the electronic device is started and enters the initialization stage of the target block device, an asynchronous dispatch management structure body and an asynchronous dispatch thread are created for each CPU. The asynchronous dispatch management structure body of each CPU is used for storing the data read-write tasks of the CPUs other than that CPU, and the asynchronous dispatch thread is used for processing the data read-write tasks of the CPUs other than that CPU. Meanwhile, a storage structure pointer array is created for the asynchronous dispatch management structure body of each CPU; the storage structure pointer array of each CPU is used for storing the linked lists corresponding to the other CPUs in the electronic device, and the linked list corresponding to each of the other CPUs is used for storing the bios corresponding to IO requests migrated from that CPU to the current CPU. It will be appreciated that the asynchronous dispatch management structure body, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
Taking the architecture of the electronic device 10 in FIG. 1 as an example, the electronic device 10 includes n+1 CPUs, namely CPU0 to CPUn. In the initialization stage of the block device, an asynchronous dispatch management structure body and an asynchronous dispatch thread are created for each CPU. Taking CPU0 as an example, an asynchronous dispatch management structure body is created for CPU0, and a storage structure pointer array async_push_io is created for the asynchronous dispatch management structure body of CPU0, where each storage structure pointer async_push_io[i] (i=1 to n) corresponds to the linked list of CPU1 to CPUn respectively, and n is the number of linked lists. As shown in FIG. 4, three bios corresponding to IO requests migrated from CPU1 to CPU0 are in the linked list corresponding to CPU1, two bios corresponding to IO requests migrated from CPU2 to CPU0 are in the linked list corresponding to CPU2, four bios corresponding to IO requests migrated from CPU3 to CPU0 are in the linked list corresponding to CPU3, and the bios corresponding to IO requests migrated from CPUn to CPU0 are in the linked list corresponding to CPUn.
In one possible implementation manner, a new asynchronous dispatch management structure body may also be created for each CPU, together with a linked list shared by the plurality of other CPUs in the electronic device, in which the bios corresponding to IO requests migrated from the other CPUs to the current CPU are stored; the linked list and the identification information of the CPU that issued each bio corresponding to an IO request may be stored in the newly created asynchronous dispatch management structure body.
In this step, the first CPU mounts the generated bio to the asynchronous dispatch management structure body of the second CPU. The asynchronous dispatch management structure body of the second CPU is used for storing the data read-write tasks of the CPUs other than the second CPU, including the first CPU; that is, it is used for storing the bios corresponding to the IO requests migrated from the CPUs other than the second CPU.
In one possible implementation, the first CPU mounts the bio to a linked list corresponding to the first CPU in the asynchronous dispatch management structure of the second CPU. The asynchronous dispatch management structure of the second CPU comprises linked lists corresponding to a plurality of other CPUs except the second CPU. The asynchronous dispatch management structure body comprises a storage structure pointer array which is used for storing linked lists corresponding to other CPUs except the second CPU. Therefore, the storage of the bio corresponding to the IO request migrated from the first CPU is realized through the linked list corresponding to the first CPU arranged in the asynchronous dispatch management structure body of the second CPU.
Illustratively, the asynchronous dispatch management structure body may include a linked list for each of the other CPUs besides its own CPU, i.e., one linked list per other CPU.
Assuming that the second CPU is CPU0 in fig. 1 and the first CPU is CPU1, then CPU1 mounts the bio corresponding to the IO request to the linked list corresponding to CPU1 in the asynchronous dispatch management structure of CPU0 as shown in fig. 4.
Optionally, another asynchronous dispatch management structure body may instead include a single linked list shared by the plurality of other CPUs except its own CPU; each time a bio corresponding to an IO request sent by one of the other CPUs is received, the identification information of the CPU that issued the bio is stored together with the bio.
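The per-CPU structure with its storage structure pointer array, one linked list per other CPU, can be sketched in userspace C. All names here (`async_dispatch_mgmt`, `async_node`, `mgmt_mount`, and the reuse of `async_push_io`) are illustrative assumptions rather than kernel definitions, and a `void *` stands in for the bio.

```c
#include <stdlib.h>

/* Node of the per-source-CPU linked list holding migrated bios. */
struct async_node {
    void *bio;                          /* stand-in for the migrated bio */
    struct async_node *next;
};

/* Sketch of the asynchronous dispatch management structure body: an array of
 * list heads indexed by the source CPU the bios migrated from. */
struct async_dispatch_mgmt {
    int ncpu;                           /* total CPUs in the electronic device */
    int self;                           /* CPU that owns this structure */
    struct async_node **async_push_io;  /* per-source-CPU linked list heads */
};

struct async_dispatch_mgmt *mgmt_create(int ncpu, int self)
{
    struct async_dispatch_mgmt *m = malloc(sizeof *m);
    m->ncpu = ncpu;
    m->self = self;
    m->async_push_io = calloc((size_t)ncpu, sizeof *m->async_push_io);
    return m;
}

/* S23: mount a bio on the list slot of the CPU it migrated from. */
void mgmt_mount(struct async_dispatch_mgmt *m, int from_cpu, void *bio)
{
    struct async_node *n = malloc(sizeof *n);
    n->bio = bio;
    n->next = m->async_push_io[from_cpu];
    m->async_push_io[from_cpu] = n;
}

/* Count the bios currently mounted for one source CPU. */
int mgmt_count(const struct async_dispatch_mgmt *m, int from_cpu)
{
    int k = 0;
    for (struct async_node *n = m->async_push_io[from_cpu]; n; n = n->next)
        k++;
    return k;
}
```

In the FIG. 4 example, CPU0's structure would hold three nodes in slot 1, two in slot 2, and four in slot 3.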
S24, the first CPU wakes up the asynchronous dispatch thread of the second CPU.
After the first CPU mounts the bio to a linked list corresponding to the first CPU in the asynchronous dispatch management structure of the second CPU, the first CPU wakes up an asynchronous dispatch thread of the second CPU so that the second CPU processes the first data read-write task.
S25, the second CPU processes the first data read-write task based on the asynchronous dispatch thread.
In one possible implementation, if the second CPU determines that the asynchronous dispatch thread has been awakened, the second CPU invokes the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure body and issues the first data read-write task to the target block device. The asynchronous dispatch thread is used for processing data read-write tasks migrated to the second CPU from the CPUs other than the second CPU; the second CPU calls a first process to process a second data read-write task issued by a first application running on the second CPU, and the asynchronous dispatch thread is different from the first process.
Because the asynchronous dispatch management structure for storing the data read-write tasks of other CPUs except the second CPU in the electronic device is pre-established in the second CPU, the data read-write tasks migrated to the second CPU by the other CPUs are uniformly managed, and an asynchronous dispatch thread for processing the data read-write tasks migrated to the second CPU by the other CPUs is established for the second CPU, thereby realizing asynchronous dispatch of the data read-write tasks migrated by the other CPUs.
The first application running on the second CPU may be a file system. The second CPU calls a first process, which is independent of the asynchronous dispatch thread, to process the second data read-write task initiated by the file system running on the second CPU, while the asynchronous dispatch thread is only used for processing data read-write tasks migrated from other CPUs to the second CPU. Because the asynchronous dispatch thread and the first process are two different independent processes, the second CPU realizes parallel processing of its own data read-write tasks and the data read-write tasks migrated from other CPUs, and the overall performance of the electronic device is improved.
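The wake-up handshake of S24/S25 can be illustrated with POSIX threads: the first CPU's side mounts work and signals, and the dispatch thread sleeps on a condition variable until woken, then drains what was queued. This is a hedged userspace analogy, assuming pthread primitives in place of the kernel's actual wait/wake mechanism; all names are illustrative.

```c
#include <pthread.h>

/* Shared state between the producer (first CPU side) and the dispatch thread. */
struct dispatch_ctx {
    pthread_mutex_t lock;
    pthread_cond_t wake;
    int pending;      /* tasks mounted but not yet processed */
    int processed;    /* tasks the dispatch thread has handled */
    int stop;         /* shutdown flag for the sketch */
};

static void *dispatch_thread(void *arg)
{
    struct dispatch_ctx *c = arg;
    pthread_mutex_lock(&c->lock);
    for (;;) {
        while (c->pending == 0 && !c->stop)
            pthread_cond_wait(&c->wake, &c->lock);   /* sleep until woken (S24) */
        if (c->pending == 0 && c->stop)
            break;
        c->processed += c->pending;                  /* process queued tasks (S25) */
        c->pending = 0;
    }
    pthread_mutex_unlock(&c->lock);
    return NULL;
}

/* First CPU side: mount tasks, then wake the second CPU's dispatch thread. */
void mount_and_wake(struct dispatch_ctx *c, int ntasks)
{
    pthread_mutex_lock(&c->lock);
    c->pending += ntasks;
    pthread_cond_signal(&c->wake);
    pthread_mutex_unlock(&c->lock);
}

void dispatch_stop(struct dispatch_ctx *c)
{
    pthread_mutex_lock(&c->lock);
    c->stop = 1;
    pthread_cond_signal(&c->wake);
    pthread_mutex_unlock(&c->lock);
}
```

The pending counter plays the role of the mounted linked lists: the thread checks it under the lock, so a wake-up that arrives before the thread sleeps is not lost.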
Illustratively, the second CPU may invoke the asynchronous dispatch thread to perform the process shown in fig. 5 to issue the first data read-write task to the target block device, which may include the steps of:
s31, the second CPU acquires a linked list corresponding to the first CPU from the asynchronous dispatch management structure body.
In the implementation process, the second CPU obtains a linked list corresponding to the first CPU from an asynchronous dispatch management structure body of the second CPU according to a storage structure pointer corresponding to the first CPU.
In implementation, the second CPU may traverse, through the asynchronous dispatch thread, the linked lists corresponding to all CPUs except the second CPU in the storage structure pointer array async_push_io[n] in the asynchronous dispatch management structure body of the second CPU, including the linked list corresponding to the first CPU.
S32, converting the bio contained in the linked list corresponding to the first CPU into a request.
When the second CPU traverses to the linked list corresponding to the first CPU, the asynchronous dispatch thread converts all the bios in the linked list corresponding to the first CPU into corresponding requests through sorting, merging, and other processing at the request layer of the general block layer.
Optionally, if the first CPU has set an asynchronous dispatch flag for each bio sent to the second CPU, then when the second CPU traverses to the linked list corresponding to the first CPU and extracts the asynchronous dispatch flag from the bios contained in the linked list, the asynchronous dispatch thread converts all the bios carrying the asynchronous dispatch flag in the linked list corresponding to the first CPU into corresponding requests through sorting, merging, and other processing at the request layer of the general block layer.
S33, issuing the request to a software cache queue corresponding to the second CPU.
In implementation, the second CPU issues the request to the software cache queue corresponding to the second CPU through the asynchronous dispatch thread.
S34, obtaining a request from the software cache queue, and issuing the request to a hardware dispatch queue corresponding to the software cache queue.
In implementation, the second CPU queries, through the asynchronous dispatch thread, the mapping table map_queue between software cache queues and hardware dispatch queues, determines the hardware dispatch queue corresponding to the software cache queue, reads the request from the software cache queue, and issues the request to the hardware dispatch queue corresponding to the software cache queue.
S35, acquiring a request from the hardware dispatch queue, and issuing the request to the target block device.
In the implementation process, the second CPU reads the request from the hardware dispatch queue through the asynchronous dispatch thread and sends the request to the block device driver, and the block device driver issues the request to the target block device.
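Steps S31 to S35 can be condensed into the following userspace sketch, with plain ring buffers standing in for blk-mq's software cache queue (ctx) and hardware dispatch queue (hctx). The queue type, the function names, and the identity bio-to-request conversion are all simplifying assumptions; the real path runs through the kernel's request layer.

```c
#include <stddef.h>

#define QCAP 64

/* Minimal ring buffer standing in for a ctx or hctx queue. */
struct queue { void *item[QCAP]; int head, tail; };

static void q_push(struct queue *q, void *v) { q->item[q->tail++ % QCAP] = v; }
static void *q_pop(struct queue *q) { return q->head == q->tail ? NULL : q->item[q->head++ % QCAP]; }

/* S32: convert a bio into a request (identity stand-in for sort/merge). */
static void *bio_to_request(void *bio) { return bio; }

/* Drain one source CPU's bio list: requests flow through the software cache
 * queue, then the mapped hardware dispatch queue, then out to the device.
 * Returns the number of requests issued; issue_to_device may be NULL here. */
int async_dispatch(void *bios[], int nbio,
                   struct queue *soft, struct queue *hard,
                   void (*issue_to_device)(void *req))
{
    for (int i = 0; i < nbio; i++)            /* S31/S32: linked list -> requests */
        q_push(soft, bio_to_request(bios[i]));/* S33: into the software cache queue */
    void *req;
    while ((req = q_pop(soft)) != NULL)       /* S34: soft queue -> mapped hard queue */
        q_push(hard, req);
    int issued = 0;
    while ((req = q_pop(hard)) != NULL) {     /* S35: hard queue -> block device driver */
        if (issue_to_device)
            issue_to_device(req);
        issued++;
    }
    return issued;
}
```

After the call both queues are drained, mirroring the thread finishing one traversal of a source CPU's linked list.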
An example will be given below, taking the electronic device shown in FIG. 1 as an example, in conjunction with the data read-write task migration process shown in FIG. 6. As shown in FIG. 6, the current loads of CPU2 and CPU3 are too high, so their current data read-write tasks need to be migrated to other CPUs, which asynchronously dispatch the tasks to the block device: the current data read-write task of CPU2 is migrated to CPU1, and the current data read-write task of CPU3 is migrated to CPUn. After converting the IO request into the corresponding bio, CPU2 mounts the bio to the linked list corresponding to CPU2 in the storage structure pointer array async_push_io[n] in the asynchronous dispatch management structure body of CPU1 and wakes up the asynchronous dispatch thread of CPU1. CPU1 calls the asynchronous dispatch thread to convert the bio into a corresponding request and issues the request to the soft queue ctx1 corresponding to CPU1; further, CPU1 reads the request from the soft queue ctx1, sends the request to the hard queue hctx0 corresponding to the soft queue ctx1, then fetches the request from the hard queue hctx0, sends the request to the block device driver, and the block device driver issues the request to the block device.
After converting the IO request into the corresponding bio, CPU3 mounts the bio to the linked list corresponding to CPU3 in the storage structure pointer array async_push_io[n] in the asynchronous dispatch management structure body of CPUn and wakes up the asynchronous dispatch thread of CPUn. CPUn calls the asynchronous dispatch thread to convert the bio into a corresponding request and issues the request to the soft queue ctxn corresponding to CPUn; further, CPUn reads the request from the soft queue ctxn, sends the request to the hard queue hctxm corresponding to the soft queue ctxn, then fetches the request from the hard queue hctxm, sends the request to the block device driver, and the block device driver issues the request to the block device.
Taking CPU2 as an example, a specific process in which CPU2 migrates a data read-write task to CPU1 and CPU1 issues the task to the block device will be described with reference to FIG. 7. After acquiring an IO request initiated by an application (such as a file system) running on CPU2, CPU2 submits the IO request to the general block layer, converts the IO request into a corresponding bio, and sets an asynchronous dispatch flag for the bio to identify that the request queue supports asynchronous dispatch. If the current load of CPU2 is greater than or equal to the first load threshold, CPU2 selects, from the other CPUs, a CPU whose load is smaller than the second load threshold; if CPU1 is selected, CPU2 mounts the bio corresponding to the current IO request to the linked list corresponding to CPU2 in the asynchronous dispatch management structure body of CPU1 and wakes up the asynchronous dispatch thread on CPU1. After the asynchronous dispatch thread of CPU1 is awakened, it traverses all the linked lists corresponding to the other CPUs in the storage structure pointer array async_push_io[n] in the asynchronous dispatch management structure body of CPU1 to issue the bios in each linked list, generates the requests corresponding to the bios, and issues the requests to the soft queue ctx1 corresponding to CPU1; further, CPU1 reads the request from the soft queue ctx1, sends the request to the hard queue hctx0 corresponding to the soft queue ctx1, fetches the request from the hard queue hctx0, sends the request to the block device driver, and the block device driver issues the request to the block device.
In the data processing method provided by the embodiment of the application, in the electronic equipment with a plurality of CPUs, when a first CPU acquires a first data read-write task, whether the load of the first CPU is larger than or equal to a first load threshold is firstly determined, if so, a second CPU with the load smaller than a second load threshold is determined from other CPUs, the first data read-write task is stored in an asynchronous dispatch management structure of the second CPU, and an asynchronous dispatch thread of the second CPU is awakened, so that the second CPU processes the first data read-write task based on the asynchronous dispatch thread. Because the asynchronous dispatch management structure for storing the data read-write tasks of other CPUs except the second CPU in the electronic device is pre-established in the second CPU, the data read-write tasks migrated from the other CPUs to the second CPU are uniformly managed, and the asynchronous dispatch thread for processing the data read-write tasks migrated from the other CPUs to the second CPU is pre-established for the second CPU, the asynchronous dispatch of the data read-write tasks migrated from the other CPUs by the second CPU based on the asynchronous dispatch thread is realized. Therefore, the data read-write task on the high-load CPU is migrated to the low-load CPU to be executed, the task amount on the high-load CPU is reduced, and the time delay for processing the data read-write task is reduced. Meanwhile, the high-load CPU can restore the load level as soon as possible due to the sharing of the task of the high-load CPU, so that the performance loss of the electronic equipment caused by overhigh load of a certain CPU or some CPUs is relieved, and the overall performance of the electronic equipment is improved.
Based on the same inventive concept, the embodiment of the present application further provides a data processing method implemented by the first CPU, and since the principle of solving the problem of the data processing method implemented by the first CPU is similar to that of the data processing method, implementation of the data processing method implemented by the first CPU may refer to implementation of the data processing method, and repeated parts are omitted.
Fig. 8 is a schematic flow chart of a data processing method implemented by a first CPU side according to an embodiment of the present application, where the method is applied to an electronic device having a plurality of CPUs, and may include the following steps:
s41, the first CPU acquires a first data reading and writing task.
S42, when the load of the first CPU is larger than or equal to a first load threshold value, determining a second CPU with the load smaller than a second load threshold value from at least one other CPU.
S43, storing the first data read-write task in an asynchronous dispatch management structure of the second CPU, wherein the asynchronous dispatch management structure is used for storing the data read-write task of the first CPU.
S44, waking up an asynchronous dispatch thread of the second CPU so that the second CPU processes the first data read-write task.
In one possible implementation manner, the acquiring the first data read-write task specifically includes:
acquiring an IO request generated by a target application, and generating a corresponding block device IO operation structure bio based on the IO request;
Storing the first data read-write task in an asynchronous dispatch management structure of the second CPU, wherein the method specifically comprises the following steps:
the bio is mounted into the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure body includes linked lists corresponding to a plurality of other CPUs except the own CPU;
The method for mounting the bio into the asynchronous dispatch management structure body specifically comprises the following steps:
and mounting the bio to a linked list corresponding to the first CPU in the asynchronous dispatch management structure body.
In one possible implementation manner, the asynchronous dispatch management structure body includes a storage structure pointer array, where the storage structure pointer array is used to store linked lists corresponding to the other CPUs except the own CPU.
In one possible implementation manner, the asynchronous dispatch management structure body and the storage structure pointer array are created when a target block device is started, where the target block device is a block device corresponding to the first data read-write task.
In one possible implementation manner, the second data read-write task issued by the first application running on the second CPU is processed by the first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
Based on the same inventive concept, the embodiment of the present application further provides a data processing device implemented on the first CPU side, and since the principle of the data processing device implemented on the first CPU side for solving the problem is similar to that of the data processing method, the implementation of the data processing device implemented on the first CPU side can refer to the implementation of the data processing method, and the repetition is omitted.
As shown in fig. 9, a schematic structural diagram of a data processing apparatus implemented on a first CPU side according to an embodiment of the present application, where the apparatus is applied to a first CPU in an electronic device having a plurality of CPUs, includes:
An acquiring module 51, configured to acquire a first data read-write task by using a first CPU;
A determining module 52, configured to determine, when the load of the first CPU is greater than or equal to a first load threshold, a second CPU having a load less than a second load threshold from at least one other CPU;
A sending module 53, configured to store the first data read-write task in an asynchronous dispatch management structure of a second CPU, where the asynchronous dispatch management structure is configured to store the data read-write task of the first CPU;
And the wake-up module 54 is configured to wake up the asynchronous dispatch thread of the second CPU, so that the second CPU processes the first data read-write task.
In one possible implementation manner, the obtaining module 51 is specifically configured to obtain an IO request generated by a target application, and generate a corresponding block device IO operation structure bio based on the IO request;
The sending module 53 is specifically configured to mount the bio into the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure body includes linked lists corresponding to a plurality of other CPUs except the own CPU;
the sending module 53 is specifically configured to mount the bio to a linked list corresponding to the first CPU in the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure body includes a storage structure pointer array, where the storage structure pointer array is used to store linked lists corresponding to the other CPUs except the own CPU.
In one possible implementation manner, the asynchronous dispatch management structure body and the storage structure pointer array are created when a target block device is started, where the target block device is a block device corresponding to the first data read-write task.
In one possible implementation manner, the second data read-write task issued by the first application running on the second CPU is processed by the first process called by the second CPU, and the asynchronous dispatch thread is different from the first process.
Based on the same inventive concept, the embodiment of the present application further provides a data processing method implemented by the second CPU, and since the principle of solving the problem of the data processing method implemented by the second CPU is similar to that of the data processing method, implementation of the data processing method implemented by the second CPU may refer to implementation of the data processing method, and repeated parts are omitted.
Fig. 10 is a schematic flow chart of a data processing method implemented by a second CPU side according to an embodiment of the present application, where the method is applied to an electronic device having a plurality of CPUs, and may include the following steps:
S61, the second CPU receives a first data read-write task sent by the first CPU.
The first data read-write task is stored in an asynchronous dispatch management structure body of a second CPU by the first CPU, and the asynchronous dispatch management structure body is used for storing the data read-write task of the first CPU;
s62, if the asynchronous dispatch thread is determined to be awakened, processing the first data read-write task based on the asynchronous dispatch thread.
In one possible implementation manner, the processing the first data read-write task based on the asynchronous dispatch thread specifically includes:
and calling the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure body, and issuing the first data read-write task to target block equipment.
In one possible implementation manner, the second CPU receives a first data read-write task sent by the first CPU, and specifically includes:
The second CPU receives a block device IO operation structure body bio corresponding to the IO request sent by the first CPU, and the bio is mounted in the asynchronous dispatch management structure body by the first CPU.
In one possible implementation manner, the asynchronous dispatch management structure body comprises linked lists corresponding to a plurality of other CPUs except the CPU of the asynchronous dispatch management structure body, and the bio is mounted to the linked list corresponding to the first CPU in the asynchronous dispatch management structure body by the first CPU.
In one possible implementation manner, the asynchronous dispatch management structure body includes a storage structure pointer array, where the storage structure pointer array is used to store linked lists corresponding to a plurality of CPUs except for the own CPU.
In one possible implementation manner, the method for calling the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure body and issue the first data read-write task to a target block device specifically includes:
calling the created asynchronous dispatch thread to execute the following steps:
acquiring a linked list corresponding to the first CPU from the asynchronous dispatch management structure;
converting the bio contained in the linked list corresponding to the first CPU into a request;
issuing the request to a software cache queue corresponding to the second CPU;
acquiring the request from the software cache queue, and issuing the request to a hardware dispatch queue corresponding to the software cache queue;
and acquiring the request from the hardware dispatch queue, and issuing the request to the target block device, where the asynchronous dispatch thread is used to process data read-write tasks migrated to the second CPU from CPUs other than the second CPU; a second data read-write task issued by a first application running on the second CPU is processed by a first process invoked by the second CPU, and the asynchronous dispatch thread is different from the first process.
In one possible implementation, the asynchronous dispatch management structure, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
Based on the same inventive concept, the embodiment of the present application further provides a data processing device implemented on the second CPU side. Since the principle by which this device solves the problem is similar to that of the data processing method, the implementation of the device may refer to the implementation of the method, and repeated description is omitted.
As shown in fig. 11, which is a schematic structural diagram of a data processing apparatus implemented on a second CPU side according to an embodiment of the present application, the apparatus is applied to a second CPU in an electronic device having a plurality of CPUs, and the apparatus may include:
A receiving module 71, configured to receive a first data read-write task sent by a first CPU, where the first data read-write task is stored by the first CPU in an asynchronous dispatch management structure of the second CPU, the asynchronous dispatch management structure being configured to store the data read-write task of the first CPU;
and the processing module 72 is configured to process the first data read-write task based on the asynchronous dispatch thread if it is determined that the asynchronous dispatch thread is awakened.
In one possible implementation, the processing module 72 is specifically configured to invoke the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure body, and issue the first data read-write task to a target block device.
In one possible implementation manner, the receiving module 71 is specifically configured to receive a block device IO operation structure bio corresponding to an IO request sent by the first CPU, where the bio is mounted to the asynchronous dispatch management structure by the first CPU.
In one possible implementation manner, the asynchronous dispatch management structure includes linked lists corresponding to CPUs other than the CPU to which the structure belongs, and the bio is mounted by the first CPU onto the linked list corresponding to the first CPU in the asynchronous dispatch management structure.
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the CPUs other than the own CPU.
In one possible implementation manner, the processing module 72 is specifically configured to call the created asynchronous dispatch thread to obtain a linked list corresponding to the first CPU from the asynchronous dispatch management structure, convert the bio included in the linked list corresponding to the first CPU into a request, issue the request to a software cache queue corresponding to the second CPU, obtain the request from the software cache queue, issue the request to a hardware dispatch queue corresponding to the software cache queue, obtain the request from the hardware dispatch queue, and issue the request to the target block device, where the asynchronous dispatch thread is used to process data read-write tasks migrated to the second CPU from CPUs other than the second CPU; a second data read-write task issued by a first application running on the second CPU is processed by a first process invoked by the second CPU, and the asynchronous dispatch thread is different from the first process.
In one possible implementation, the asynchronous dispatch management structure, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device may include:
A first CPU 81, configured to acquire a first data read-write task; when the load of the first CPU 81 is greater than or equal to a first load threshold, determine a second CPU 82 with a load smaller than a second load threshold from at least one other CPU, and store the first data read-write task in an asynchronous dispatch management structure of the second CPU 82, where the asynchronous dispatch management structure is used to store the data read-write task of the first CPU 81;
And a second CPU 82, configured to receive the first data read/write task sent by the first CPU 81, and process the first data read/write task based on the asynchronous dispatch thread.
In one possible implementation manner, the first CPU 81 is specifically configured to obtain an IO request generated by a target application, and generate a corresponding block device IO operation structure bio based on the IO request;
The second CPU 82 is specifically configured to invoke the asynchronous dispatch thread to read the first data read-write task from the asynchronous dispatch management structure body, and issue the first data read-write task to a target block device.
In one possible implementation manner, the first CPU 81 is specifically configured to mount the bio onto a linked list corresponding to the first CPU 81 in the asynchronous dispatch management structure, where the asynchronous dispatch management structure includes linked lists corresponding to CPUs other than the CPU to which the structure belongs.
In one possible implementation manner, the asynchronous dispatch management structure includes a storage structure pointer array, where the storage structure pointer array is used to store the linked lists corresponding to the CPUs other than the own CPU.
In one possible implementation, the second CPU 82 is specifically configured to invoke the created asynchronous dispatch thread to perform the following steps:
acquiring a linked list corresponding to the first CPU 81 from the asynchronous dispatch management structure;
converting the bio contained in the linked list corresponding to the first CPU 81 into a request;
Issuing the request to a software cache queue corresponding to the second CPU 82;
acquiring the request from the software cache queue, and issuing the request to a hardware dispatch queue corresponding to the software cache queue;
the request is obtained from the hardware dispatch queue, and the request is issued to the target block device, where the asynchronous dispatch thread is used to process data read-write tasks migrated to the second CPU 82 from CPUs other than the second CPU 82; a second data read-write task issued by a first application running on the second CPU 82 is processed by a first process invoked by the second CPU 82, and the asynchronous dispatch thread is different from the first process.
In one possible implementation, the asynchronous dispatch management structure, the storage structure pointer array, and the program corresponding to the asynchronous dispatch thread are created at the time of starting the target block device.
The embodiment of the application also provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps in the data processing method of the application. The processor may include the first CPU, and may further include the second CPU, in the embodiment of the present application.
In some possible embodiments, aspects of the data processing method provided by the present application may also be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps of the data processing method according to the various exemplary embodiments of the application as described in the present specification, when said program product is run on the electronic device.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.