Detailed Description
As described above, highly reliable and highly secure embedded operating systems currently used in fields such as rail transit and specialized circuitry typically manage the partitions that execute tasks according to a fixed resource allocation ratio. Once the resource quota of a partition is used up, any task of that partition that has not yet been executed stalls unless the partition is reconfigured, which reduces efficiency.
Therefore, the present application aims to provide a technical solution that resolves partition task stagnation without changing the partition framework of the operating system.
In order to make the technical solution in the present specification better understood by those skilled in the art, the technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present specification, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
In one aspect, an embodiment of the present application provides a partition task scheduling method of an operating system, which is applied to any target core in a processor. The processor runs the operating system, which periodically sets partition time slice quota requirements and divides a plurality of partitions according to the quota requirements of the current period. Each partition is configured with a time quota of the processor, the time quota referring to the number of time slices allocated to that partition according to the partition time slice quota requirements. Correspondingly, each task of the operating system is configured, upon creation, with the number of time slices its execution consumes; when a task is allocated to a partition for execution, that partition consumes the corresponding number of time slices from its time quota to execute the task. In addition, the operating system periodically refills each partition with time slices based on the pre-planned partition time slice quota in every period.
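For illustration only, the relationships among partitions, time quotas and tasks described above may be captured by control structures along the following lines; the Python form, the field names, and the 10 ms slice length are assumptions made for this sketch and are not part of the embodiment. Later sketches in this description reuse these illustrative classes.

# Illustrative sketch only; field names and the 10 ms slice length are assumptions.
from dataclasses import dataclass, field
from typing import Optional

SLICE_MS = 10  # assumed length of one time slice


@dataclass
class Partition:
    name: str
    quota_ratio_pct: int            # pre-planned share of the period in percent, e.g. 30
    slices_left: int = 0            # remaining time slices in the current period


@dataclass
class Task:
    name: str
    priority: int                   # a larger value is treated as higher priority here
    slices_needed: int              # time slices consumed by one execution
    home_partition: Partition       # partition the task belongs to
    alloc_partition: Optional[Partition] = None  # partition whose quota is actually used
    slices_held: int = 0            # time slices currently allocated to the task
    state: str = "ready"


@dataclass
class Core:
    name: str
    ready_queue: list = field(default_factory=list)    # tasks waiting while holding quota
    limited_queue: list = field(default_factory=list)  # tasks waiting without holding quota
    current: Optional[Task] = None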
Fig. 1 is a flow chart of a partition task scheduling method according to an embodiment of the present application, which specifically includes the following steps:
S102, in the case that the operating system creates a first task for the target core, the first task is allocated to one partition and added to a local task queue.
In this embodiment, the first task refers to a newly created task in the operating system.
It should be appreciated that a core of the processor is the hardware that runs tasks, while a partition is the virtual environment in which tasks run. Each partition is configured with its own set of tasks, and the operating system selects an appropriate core in the processor on which to create each task according to the core affinity mask of the task type. After the task is created, the core to which the task belongs allocates a partition to it.
Note that this embodiment does not particularly limit the manner in which tasks are allocated to partitions. As an exemplary introduction, the operating system may pre-configure a corresponding partition for each task type, so that after the operating system creates the first task for the target core, the target core allocates a partition according to the task type of the first task. Alternatively, the operating system may pre-configure a corresponding partition for each security level, so that after the operating system creates the first task for the target core, the target core allocates a partition according to the security level of the first task.
Specifically, this embodiment configures a task queue for each core in the processor, for storing the tasks to be executed that the operating system creates for that core. The task queue of each core is further divided into a ready task queue and a limited task queue. The execution priority of tasks in the ready task queue is higher than that of tasks in the limited task queue; that is, a core of the processor preferentially executes tasks in the ready task queue, and executes tasks in the limited task queue only when the ready task queue has no task.
For the target core, after a partition is allocated to the first task, it is judged whether the remaining time slices of the time quota of the partition to which the first task belongs support the execution of the first task. If so (the number of remaining time slices is greater than or equal to the number of time slices consumed by executing the first task), the first task is added to the local ready task queue; otherwise, the first task is added to the local limited task queue. In this embodiment, tasks in the ready task queue occupy time slices of the partition time quota while waiting for execution, whereas tasks in the limited task queue do not occupy time slices of the partition time quota while waiting for execution.
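A minimal sketch of this queue-selection rule, assuming the illustrative Partition, Task and Core classes sketched above and plain Python lists as the per-core queues:

# Sketch of the enqueue rule of S102; queues are plain lists here.
def add_to_local_queue(task: "Task", core: "Core") -> None:
    """Place a newly created task in the ready or limited queue of its core."""
    if task.home_partition.slices_left >= task.slices_needed:
        core.ready_queue.append(task)    # waits while holding partition quota
    else:
        core.limited_queue.append(task)  # waits without occupying any quota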
S104, a task to be executed is extracted from the local task queue, and in the case that a second task to be executed is extracted, the partition to which the second task belongs is determined.
In this step, the target core preferentially attempts to extract the second task from the local ready task queue, and attempts to extract the second task from the local limited task queue only if the ready task queue has no task. The ready task queue and the limited task queue may use existing scheduling algorithms, such as first-in first-out (First In First Out, FIFO), round-robin (RR), and completely fair scheduling (Completely Fair Scheduler, CFS), which are not specifically limited in this specification.
It should be understood that the second task of this embodiment was created according to the step shown in S102 (the second task may be the first task), so the extracted task, whether taken from the ready task queue or the limited task queue, has already been allocated a partition.
Specifically, in the case where the processor has a plurality of cores, a core in an idle state (whose task queues contain no tasks) can process tasks from the task queues of other cores, thereby improving overall efficiency. When the local ready task queue and limited task queue have no task to be executed, the target core extracts a third task to be executed from the ready task queues and limited task queues of other cores according to the execution priority of the tasks. Similarly, when the remaining time slices of the time quota of the partition to which the third task belongs support the execution of the third task, the target core consumes the time quota of that partition to execute the third task; otherwise, the target core borrows remaining time slices of the time quota of another partition and consumes the time quota of the borrowed partition to execute the third task.
On this basis, it may also be configured that, among all cores of the processor, tasks in all ready task queues are executed preferentially, and tasks in the limited task queues are processed only after the tasks in all ready task queues have been handled. That is, the target core first attempts to extract a task to be executed from the local ready task queue. If no task can be extracted from the local ready task queue, it attempts to extract a task from the ready task queues of other cores. If no task is extracted from the ready task queues of other cores, it attempts to extract a task from the local limited task queue. Finally, if extraction from the local limited task queue also fails, it attempts to extract a task from the limited task queues of other cores.
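The extraction order just described may be sketched as follows, again assuming the illustrative Core class above; taking the head of each list stands in for whatever FIFO, RR or CFS policy the queue actually uses:

# Sketch of the extraction order: local ready, other cores' ready,
# local limited, other cores' limited. Taking the list head stands in
# for the FIFO/RR/CFS policy of the real queues.
def pick_next_task(local: "Core", others: list) -> "Task | None":
    if local.ready_queue:
        return local.ready_queue.pop(0)
    for core in others:
        if core.ready_queue:
            return core.ready_queue.pop(0)
    if local.limited_queue:
        return local.limited_queue.pop(0)
    for core in others:
        if core.limited_queue:
            return core.limited_queue.pop(0)
    return None  # nothing to run anywhere; the core stays idle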
S106, in the case that the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task, the time quota of the partition to which the second task belongs is consumed to execute the second task; otherwise, remaining time slices of the time quota of another partition are borrowed, and the time quota of the borrowed partition is consumed to execute the second task.
In this step, if the time quota of the partition to which the second task belongs has remaining time slices but they do not support the execution of the second task, the target core may borrow the time slices required to execute the second task from other partitions. For example, if the time quota of the partition to which the second task belongs has 2 remaining time slices while executing the second task requires 3 time slices, the target core first runs the second task for 2 time slices; when those expire and the partition to which the second task belongs has no remaining time slices, it borrows 1 remaining time slice from the time quota of another partition. Correspondingly, the second task consumes 2 time slices of the time quota of the partition to which it belongs and 1 remaining time slice of the borrowed partition's time quota.
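The borrowing in this example can be worked through numerically. The following standalone sketch uses hypothetical helper and parameter names and simply splits the required time slices between the home partition and one lending partition:

# Standalone worked example of the borrowing rule of S106.
def run_with_borrowing(needed, home_left, lender_left):
    """Return (from_home, borrowed, home_left_after, lender_left_after)."""
    from_home = min(needed, home_left)   # run on the home partition's quota first
    borrowed = needed - from_home        # the shortfall is borrowed elsewhere
    if borrowed > lender_left:
        raise RuntimeError("no partition can lend enough time slices")
    return from_home, borrowed, home_left - from_home, lender_left - borrowed


# The second task needs 3 slices; its partition has 2 left, another has 14.
print(run_with_borrowing(needed=3, home_left=2, lender_left=14))
# -> (2, 1, 0, 13): 2 slices from the home partition plus 1 borrowed slice.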
In summary, the method of the present embodiment divides the operating system into a plurality of partitions and configures a time quota of the processor for each partition. After the operating system creates a new task for any target core in the processor, the target core allocates the new task to one partition and adds it to the task queue local to the target core. Meanwhile, the target core extracts a task to be executed from the local task queue. When a task to be executed is extracted, the target core first determines the partition to which that task belongs, and then judges, according to the remaining time slices of the time quota of that partition, whether the partition can support the execution of the task. If the target core judges that the partition to which the task belongs cannot support it, the target core promptly borrows remaining time slices of the time quota of another partition and consumes the borrowed partition's time quota to execute the task, so that stagnation of the task to be executed is avoided. It can be seen that the method of this embodiment can effectively improve the task execution efficiency of the operating system and the resource utilization of the processor.
Further, to further improve the task execution efficiency of the operating system, in the case that the second task is blocked during execution (its execution is suspended in a blocked state), for example when it enters a sleep state or waits for I/O to become ready, the target core may release the second task and return the unconsumed time slices occupied for executing the second task to the original partition. If the target core previously borrowed time slices to execute the second task, the borrowed portion of the time slices is returned to the other partition.
Furthermore, for such a blocked task, if the local ready task queue and limited task queue have no other task to be executed, the target core may directly re-add the second task to the local task queue so that the second task is re-executed immediately. If the local ready task queue and limited task queue do have other tasks to be executed, the target core continues to extract a fourth task from the local ready task queue and limited task queue in order and attempts to run it, and meanwhile re-adds the second task to the local task queue so that the second task waits in the queue.
Furthermore, if the second task is awakened after blocking and has not yet been released while the target core is currently executing a fifth task (the fifth task may be the fourth task), the target core may release the fifth task in order to re-execute the second task. Correspondingly, the target core also needs to re-add the released fifth task to the local task queue.
In the process of re-adding the second task (the same applies to the fifth task) to the local task queue, the target core determines whether the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task. If it determines that the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task, it re-adds the second task to the local ready task queue; if it determines that they do not, it re-adds the second task to the local limited task queue.
In addition, the operating system of the present embodiment may also configure tasks that preempt execution (hereinafter simply referred to as preemption tasks). After a preemption task is created, the corresponding core needs to set aside the task it is currently executing and execute the preemption task preferentially. For the present embodiment, if the second task is preempted by another task during its execution, the target core releases the second task and returns the unconsumed time slices occupied for executing the second task to the original partition. The target core then re-adds the second task to the local task queue so that the second task waits in the queue again.
Based on the above, in the case that the second task is blocked, is awakened after blocking, or is preempted during running, the target core may return the unconsumed time slices occupied for executing the second task to the original partition. Further, the target core may also determine, according to the remaining time slices of the returned original partition's time quota, whether to transfer tasks belonging to the original partition from the limited task queues to the ready task queues. For example, after returning the unconsumed time slices occupied by the execution of the second task to the original partition, if the remaining time slices of the time quota of the original partition are greater than zero, the target core transfers the tasks belonging to the original partition in the limited task queues of the local core and of other cores to the ready task queue of the core to which each task belongs.
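A sketch of this release-and-return behaviour, assuming the illustrative classes from the earlier sketch; the helper name, the new_state parameter and the treatment of all held slices as unconsumed are simplifying assumptions:

# Sketch of releasing the running task and returning its unconsumed slices.
def release_current_task(core: "Core", all_cores: list, new_state: str) -> None:
    task = core.current
    alloc = task.alloc_partition or task.home_partition

    was_empty = alloc.slices_left == 0
    alloc.slices_left += task.slices_held      # return the unconsumed time slices
    task.slices_held = 0

    # If the partition's quota rose from zero to above zero, move its tasks
    # in every core's limited queue back to that core's ready queue.
    if was_empty and alloc.slices_left > 0:
        for c in all_cores:
            promoted = [t for t in c.limited_queue if t.home_partition is alloc]
            for t in promoted:
                c.limited_queue.remove(t)
                c.ready_queue.append(t)

    task.state = new_state                     # e.g. "blocked" or "ready"
    core.current = None                        # the core will reschedule next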
The method of this embodiment will be described in detail.
This embodiment designs the handling of partition tasks within the scheduling process of the operating system kernel, and specifically involves the following flows:
1. Partition creation and configuration flow
At the design level, the number of partitions of the operating system and the proportional relationship of the time quota of each partition are pre-planned according to the service characteristics. When the operating system starts, the relevant partitions are automatically created and the time quota proportion of each partition is set according to the pre-planning. The time slice quota actually obtained by a partition within a fixed time window is then calculated based on its time quota proportion.
For example, in a period window with a total time quota of 200 ms, the operating system runs partitions A and B, which occupy 30% and 70% of the total time quota respectively; partition A is therefore allocated a 60 ms time slice quota in the period window, and partition B is allocated a 140 ms time slice quota in the period window.
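The quota derivation of this example can be reproduced with the following standalone sketch, in which the 10 ms slice length and the integer-percent representation of the proportions are assumptions:

# Standalone sketch of deriving per-period slice quotas from the proportions.
SLICE_MS = 10                     # assumed slice length
PERIOD_MS = 200                   # total time quota of one period window

ratio_pct = {"A": 30, "B": 70}    # pre-planned quota proportions in percent

quota_ms = {name: PERIOD_MS * pct // 100 for name, pct in ratio_pct.items()}
quota_slices = {name: ms // SLICE_MS for name, ms in quota_ms.items()}

print(quota_ms)      # {'A': 60, 'B': 140}
print(quota_slices)  # {'A': 6, 'B': 14}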
2. Task creation and configuration flow
A task in the operating system needs to have its assigned partition and its original scheduling policy determined. Tasks are not only created when the operating system is started, but can also be dynamically created and destroyed while the operating system is running. The specific flow is as follows:
S2.1, if the current core is in an idle state and no task needs to be executed, then:
1) Set information such as the partition to which the created task belongs and the current core.
2) Add the created task to the corresponding queue of the current core according to the situation of the partition to which it belongs, including:
if the partition to which the created task belongs has a time quota, the created task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition to which the created task belongs has no time quota, the created task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated.
3) Partition task scheduling is performed.
S2.2, if the current core is in a non-idle state and a current task is executing, then:
1) The remaining time slices of the current task are returned to the partition that previously allocated time slices to the task.
2) Further, the relevant task queues are maintained according to the situation of the partition to which the current task belongs, including:
if the partition to which it belongs has no remaining time slices, the current task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition to which it belongs still has remaining time slices, the current task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
after the time slices are returned to the allocation partition this time, if the remaining time quota of the allocation partition reaches a predetermined threshold, all tasks of the allocation partition are moved from the limited task queue to the ready task queue, and the queue priority bitmap is checked and updated.
3) A new task is created and associated with the partition to which it belongs.
4) The created task is added to the corresponding queue of the current core according to the situation of the partition to which it belongs, including:
if the partition to which the created task belongs has a time quota, the created task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition to which the created task belongs has no time quota, the created task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated.
5) Partition task scheduling is performed.
3. Task blocking flow
1) The remaining time slices of the current task are returned to the partition that previously allocated time slices to the task.
2) The task queues related to the allocation partition are processed:
if the remaining time quota of the allocation partition increases from zero to greater than zero, all tasks belonging to the allocation partition are moved from the limited task queues of the processor into the ready task queues, and the related queue priority bitmaps are checked and updated;
if the remaining time quota of the allocation partition is still zero, the tasks belonging to the allocation partition are kept in the limited task queue;
if the remaining time quota of the allocation partition has been greater than zero throughout, the tasks belonging to the allocation partition simply remain in the ready task queue.
3) The state of the current task is modified to the blocked state.
4) Partition task scheduling is performed.
4. Task wake-up flow
S4.1, if the current core is in an idle state and no task needs to run, then:
1) The state of the wake-up task is modified to a ready state.
2) The wake-up task is added to the corresponding queue of the current core according to the situation of the partition to which it belongs, including:
if the partition to which the wake-up task belongs has a time quota, the wake-up task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition to which the wake-up task belongs has no time quota, the wake-up task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated.
3) Partition task scheduling is performed.
S4.2, if the current core is in a non-idle state and a current task is running, then:
1) The remaining time slices of the current task are returned to the partition that previously allocated time slices to the task.
2) Further, the relevant task queues are maintained according to the situation of the partition to which the current task belongs, including:
if the partition to which it belongs has no remaining time slices, the current task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition to which it belongs still has remaining time slices, the current task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
after the time slices are returned to the allocation partition this time, if the quota of the allocation partition increases to a predetermined threshold, all tasks of the allocation partition are moved from the limited task queue to the ready task queue, and the related queue priority bitmap is checked and updated.
3) The wake-up task is added to the corresponding queue of the current core according to the situation of the partition to which it belongs, including:
if the partition to which the wake-up task belongs has a time quota, the wake-up task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition to which the wake-up task belongs has no time quota, the wake-up task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated.
4) Partition task scheduling is performed.
5. Partition refilling and task preemption flow:
To improve the real-time response capability of the operating system, a fixed-cycle scheduling timer is set. In the periodic timer, whether partition-scheduling-related processing needs to be executed is determined according to the states of all ready tasks of the current processor. If execution is required, the method operates according to the following steps (a sketch of the refill step is given after this flow):
S5.1, if the current partition time window has expired, the partitions need to be refilled and the run queues restored, namely:
1) Based on the remaining time slices of each partition, the quota of each partition is recalculated and restored to its initial value according to the quota proportion.
2) For each partition whose quota is greater than zero, the tasks belonging to that partition are restored from the limited task queue to the ready task queue, and the related queue priority bitmap is checked and updated.
S5.2, whether the current task needs to be preempted is judged according to whether the remaining time slices of the current task are exhausted and whether there is another task that needs to preempt it; if so, the remaining time slices are returned and the run queues are maintained, namely:
1) The remaining time slices of the current task are returned to the partition that previously allocated time slices to the task.
2) The relevant task queues are maintained according to the situation of the partition to which the current task belongs, including:
if the partition has no remaining time slices, the current task is added to the limited task queue of the current core, and the queue priority bitmap is checked and updated;
if the partition still has remaining time slices, the current task is added to the ready task queue of the current core, and the queue priority bitmap is checked and updated;
after the time slices are returned to the allocation partition this time, if the quota of the allocation partition increases to a predetermined threshold, all tasks of the allocation partition are moved from the limited task queue to the ready task queue, and the queue priority bitmap is checked and updated.
3) Partition task scheduling is performed.
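The refill step S5.1 referred to above may be sketched as follows, assuming the illustrative Partition and Core classes from the earlier sketch and a quota proportion stored as an integer percentage; the function name and parameters are hypothetical:

# Sketch of the refill step S5.1, run from the fixed-cycle timer when the
# partition time window expires.
def refill_partitions(partitions: list, cores: list, period_slices: int) -> None:
    for p in partitions:
        # 1) restore each partition's quota to its pre-planned share
        p.slices_left = period_slices * p.quota_ratio_pct // 100

    for p in partitions:
        if p.slices_left <= 0:
            continue
        # 2) partitions with quota recover their tasks from the limited
        #    queues of all cores back into the ready queues
        for c in cores:
            recovered = [t for t in c.limited_queue if t.home_partition is p]
            for t in recovered:
                c.limited_queue.remove(t)
                c.ready_queue.append(t)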
6. Partition task scheduling flow:
During operation of the operating system, events such as task creation, task blocking, task wake-up, and task preemption (including partition refilling and task preemption) may trigger task dispatch and switching. The flow is performed as follows (a sketch of the selection and allocation steps is given after this flow):
S6.1, whether a scheduling switch is needed is judged according to the current task and the run queue conditions, including:
1) If the current task is an idle task, it is judged that a scheduling switch is needed when both of the following conditions are met:
there is at least one ready task in the ready task queue or the limited task queue of the current core;
at least one partition in the operating system has a quota.
2) If the current task is not an idle task and the current task belongs to a ready task queue, it is judged that a scheduling switch is needed when any one of the following conditions is met:
the remaining run-time quota of the current task has been consumed;
the remaining run time of the current task has not been consumed, but the queue priority bitmap confirms that a higher-priority ready task exists in the system.
3) If the current task is not an idle task and the current task belongs to a limited task queue, it is judged that a scheduling switch is needed when any one of the following conditions is met:
the ready task queue of the current core is not empty;
the remaining run-time quota of the current task has been consumed;
the remaining run time of the current task has not been consumed, but the queue priority bitmap confirms that a higher-priority task exists in the operating system.
S6.2, if it is judged that task scheduling is not needed, the current flow is exited. Otherwise, the next task to be run is selected as follows:
1) If any ready task queue in the operating system is not empty, the task with the highest priority is selected from a system-wide view according to the ready task queue bitmap of each core. Otherwise, processing continues as follows.
2) If any limited task queue is not empty and at least one partition has a time quota, the task with the highest priority is selected from the limited task queues. Otherwise, processing continues as follows.
3) Otherwise, an idle task is selected as the next task to be run.
S6.3, if the next task to be run is not an idle task, the allocation partition of the task is determined as follows:
1) If the remaining quota of the partition to which the task to be run belongs is greater than zero, that partition is used as the allocation partition of the task to be run.
2) If the remaining quota of the partition to which the task to be run belongs is not greater than zero, one partition is selected from the other partitions of the operating system that still have remaining quota as the allocation partition of the task to be run.
S6.4, if the next task to be run is not an idle task, the time quota allocated to the task is calculated as follows:
1) If the remaining quota of the allocation partition of the task to be run is greater than a predetermined threshold, a time slice quota of the threshold size is allocated from the allocation partition.
2) If the remaining quota of the allocation partition of the task to be run is not greater than the predetermined threshold, all of the remaining time quota of the allocation partition is allocated to the task to be run.
S6.5, if the currently running task is not an idle task, the time quota it holds as the task being switched out needs to be cleared or returned.
S6.6, if the next task to be run is not an idle task, the relevant state of the task to be run needs to be maintained, including:
1) Time slices: time slices are allocated to the task to be run according to the calculation result of step S6.4, and the corresponding time slices are subtracted from the allocation partition.
2) Allocation partition: the allocation partition of the task to be run is recorded according to the determination in step S6.3.
3) The task state of the task to be run is changed to the running state, and the task is taken as the current task of the current core.
S6.7, if the task to be run is not an idle task, the relevant tasks of the allocation partition also need to be maintained, including:
1) If the remaining time quota of the allocation partition is equal to or less than zero, all relevant tasks of the allocation partition are moved from the ready task queue to the limited task queue.
2) If the remaining time quota of the allocation partition is still greater than zero, the relevant tasks of the allocation partition need not be moved.
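The selection and allocation steps S6.2 to S6.4 may be sketched as follows, assuming the illustrative classes from the earlier sketch; treating a larger priority value as higher priority, flattening the per-core queues into lists, and omitting the dequeuing and state changes of S6.6 are simplifications of this sketch:

# Sketch of selection steps S6.2 to S6.4; a larger priority value is
# treated as higher priority, and dequeuing/state changes (S6.6) are omitted.
def select_next(cores: list, partitions: list, threshold: int):
    ready = [t for c in cores for t in c.ready_queue]
    limited = [t for c in cores for t in c.limited_queue]

    # S6.2: prefer the globally highest-priority ready task; fall back to
    # the limited queues only if some partition still has quota left.
    if ready:
        task = max(ready, key=lambda t: t.priority)
    elif limited and any(p.slices_left > 0 for p in partitions):
        task = max(limited, key=lambda t: t.priority)
    else:
        return None, 0   # only the idle task remains

    # S6.3: the allocation partition is the home partition if it still has
    # quota, otherwise another partition with remaining quota.
    if task.home_partition.slices_left > 0:
        alloc = task.home_partition
    else:
        alloc = next((p for p in partitions if p.slices_left > 0),
                     task.home_partition)
    task.alloc_partition = alloc

    # S6.4: grant at most `threshold` slices from the allocation partition.
    grant = min(threshold, alloc.slices_left)
    alloc.slices_left -= grant
    task.slices_held = grant
    return task, grant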
It should be appreciated that, through the above management of the ready task queues and the limited task queues together with the queue priority bitmaps, the time complexity of partition task scheduling is O(1).
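The O(1) behaviour rests on the queue priority bitmaps described below for the core control structure; a standalone sketch of such a bitmap lookup, assuming that a larger priority value means higher priority, is:

# Standalone sketch of an O(1) highest-priority lookup via a bitmap.
class PriorityBitmap:
    """Marks which priority levels currently hold at least one task."""

    def __init__(self) -> None:
        self.bits = 0                    # bit i set <=> priority level i is non-empty

    def mark(self, prio: int) -> None:   # a task of this priority was queued
        self.bits |= 1 << prio

    def clear(self, prio: int) -> None:  # the last task of this priority left the queue
        self.bits &= ~(1 << prio)

    def highest(self) -> int:
        """Highest non-empty priority level, or -1 if every level is empty."""
        return self.bits.bit_length() - 1


bm = PriorityBitmap()
bm.mark(6)
bm.mark(9)
print(bm.highest())  # 9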
The partition task scheduling method of the present embodiment is described below in conjunction with an actual application scenario.
Referring to fig. 2, the present application scenario is set as follows:
The partition control structure comprises, for each partition, the partition name, the partition quota proportion, the partition quota, the remaining time slices of the partition, and the set of tasks in the partition.
The core control structure comprises, for each processor core, a partition ready task queue, a partition limited task queue and the current task. The ready task queue and the limited task queue may be further split into a plurality of sub-queue sets according to the scheduling policy, such as FIFO/RR queues for real-time scheduling, CFS task queues for non-real-time scheduling, and other types of task queues. Each ready task queue or limited task queue is provided with a unified task queue priority bitmap, which marks the priorities of the tasks present in the current queue and improves the efficiency of finding the task with the highest priority.
The task control structure comprises information such as the task name, the scheduling policy, the priority, the core affinity mask (indicating affinity), the remaining time slices, the partition to which the task belongs, and the allocation partition. The partition to which a task belongs is the partition associated with the task itself, while the allocation partition is the partition that actually provides (lends) the time slices for the task to run. If the time quota of a certain partition is not used up while its own tasks are not ready, limited tasks of other partitions can temporarily borrow the unused time slices, thereby improving the overall operating efficiency.
Based on the structure shown in fig. 2, with further reference to fig. 3 and 4, the execution flow is as follows:
1) Creating partitions and setting partition quotas
Assuming that the operating system creates A, B partitions, the time quota ratio for partition A, B is 30%:70%, and the total time quota is 200ms. The time slice quota of 60ms can be allocated to the partition A in the time window through scaling, and the time slice quota of 140ms can be allocated to the partition B in the time window.
2) Creating tasks and associating related partitions
Four tasks are created for the partition A, real-time scheduling strategies are adopted for the four tasks, the real-time priorities are 6, 8, 9 and 11 in sequence, and the task names are A6, A8, A9 and A11 in sequence. Suppose that only task A9 is in the ready state and the other tasks are in the blocked state.
Four tasks are created for the partition B, real-time scheduling strategies are adopted for the four tasks, real-time priorities are sequentially 4, 6, 7 and 10, and task names are sequentially B4, B6, B7 and B10. It is assumed that only two tasks of B6, B10 are in ready state and the other tasks are in blocking state.
3) Ready task queue maintenance on multi-core processor
As described in the flows of task creation, task blocking, task wake-up, and so on above, the system settles the task time quotas and adjusts the ready task queues and the limited task queues. Assume a two-core processor environment in which the cores are named core 1 and core 2. On this two-core processor, partition A and partition B are created, together with the related tasks, as follows:
Task A9 is created on core 1, and one scheduling time slice (10 ms) is allocated to task A9 from the current remaining 60 ms of partition A. The remaining time slices of partition A are then 50 ms, and A9 is in the ready task queue of core 1. Core 1 selects task A9 from the ready task queue to run.
Tasks A6, A8 and A11 are created on core 1, the creation process being as described above. They are set to the blocked state immediately after creation, at which point the scheduling time slices allocated to them are returned to partition A, so the time slice resources of partition A remain at 50 ms. There is therefore no need to adjust the original tasks of partition A.
In the same way, tasks B6 and B10 are created on core 2, and one scheduling time slice (10 ms) is allocated from partition B to each of tasks B6 and B10. The remaining time slices of partition B are then 120 ms, and both B6 and B10 are in the ready task queue of core 2. Core 2 selects task B10 from the ready task queue to run.
As described above, when tasks B4 and B7 are created on core 2 and set to the blocked state, the remaining time slices of partition B are still 120 ms, and there is no need to adjust the run queues of the original tasks.
The relationship of the components of the partition, the task, the processor and the like in the system is shown in fig. 3.
4) Executing partition tasks with sufficient partition time slices
On core 1, when the 10 ms time slice expires, the partition task scheduling flow is performed: it detects that the remaining time slices of the current task (A9) are exhausted and schedules the selection of the next task. At this point, a ready task conforming to a policy such as FIFO or RR is selected from the ready task queue. If A9 is still in the ready state after running for 10 ms, task A9 is again selected from the ready task queue of core 1, a 10 ms time slice is allocated from partition A to which the task belongs, and the task is scheduled for execution. The remaining time slices of partition A are then 40 ms.
On core 2, the expiration of the 10 ms time slice is detected first, and the remaining time slices of the current task (B10) are found to be exhausted, so scheduling is performed to select the next ready task. If task B10 is still in the ready state, the ready task queue of core 2 contains the two tasks B10 and B6. According to real-time task scheduling rules such as FIFO/RR, the task at the head of the queue is the higher-priority task. The selected ready task is therefore still B10; a time slice is allocated from the partition to which the task belongs (partition B), and task B10 continues to be scheduled and run. The remaining time slices of partition B are then 110 ms.
In the case where both partition a and partition B have time quotas, the relationship of the components of the partition, task, processor, etc. is still as shown in fig. 3.
5) Executing partition tasks in the event of insufficient partition A time quota
Proceeding in the above manner, after core 1 has run for 60 ms, the remaining time slices of the current task A9 and of its partition A are both 0. Although task A9 is still in the ready state at this time, because the remaining time slices of its partition are 0, the A9 task can only be added to the limited task queue of the corresponding priority. According to the present application, the ready task queues of each priority level are checked preferentially in the partition scheduling process, and the ready tasks in the limited task queues are checked in turn only if the ready task queues contain no ready task. Therefore, core 1 now prefers task B6 as the next task to run during partition task scheduling. Core 1 selects task B6, allocates a time slice from the partition to which task B6 belongs (partition B), and dispatches and runs task B6. The remaining time slices of partition B are then 70 ms.
In the same way, the time slice of the current task B10 is exhausted after core 2 has run for 60 ms. Assuming this task is still in the ready state, it remains in the ready task queue of the current priority. According to the partition task scheduling flow, task B10 is selected as the next task to be run, time slices are allocated from the partition to which B10 belongs (partition B), and task B10 is scheduled to run. The remaining time slices of partition B are then 60 ms.
Thereafter, core 1 and core 2 respectively select tasks B6 and B10 to run according to the partition task scheduling flow, allocating time quota from partition B until, after 100 ms, the time quota of partition B is exhausted. During this process, although task A9 is in the ready state, it remains in the limited task queue because the quota of its partition is zero; in the subsequent part of this period, core 1 may execute task A9 using time slices of partition B, in which case the allocation partition of task A9 changes from partition A to partition B. The relationship among the partitions, tasks, processor cores and other components of the system is shown in fig. 4.
6) Refilling time slices when the time quotas of partition A and partition B are exhausted
After both core 1 and core 2 have run for 100 ms, the remaining time slices of partition A and partition B are reduced to zero. The partition refilling and task preemption flow is triggered by the timer, and partition time slices are refilled for partition A and partition B according to that flow. After refilling, the remaining time slices of partition A and partition B are restored to 60 ms and 140 ms.
All tasks associated with partition A and partition B are restored from the limited task queues to the ready task queues, so task A9 is restored from the limited task queue to the ready task queue of the corresponding priority level. The system is thus back in its initial state, and subsequent periods are scheduled according to the foregoing flows.
This application scenario can also ensure the temporal isolation characteristics between different partitions: when partition A has a high-priority ready task but the partition's time quota is insufficient, that task (A9) of the partition cannot be run. The time of different partitions is thus isolated according to the preset quota proportions, and mutual interference between partitions is avoided.
In another aspect, corresponding to the method shown in FIG. 1, another embodiment of the present application provides a partition task scheduling device of an operating system, which is applied to a target core of a processor. Fig. 5 is a schematic structural diagram of the partition task scheduling device 500, which includes:
The task partition module 510 is configured to, in the case that the operating system creates a first task for the target core, allocate the first task to one partition and add the first task to a local task queue.
The task extraction module 520 is configured to extract a task to be executed from the local task queue, and to determine the partition to which a second task to be executed belongs in the case that it is extracted.
The task execution module 530 is configured to consume the time quota of the partition to which the second task belongs to execute the second task if the remaining time slices of the time quota of that partition support the execution of the second task, and otherwise to borrow remaining time slices of the time quota of another partition and consume the borrowed partition's time quota to execute the second task.
The apparatus of this embodiment divides the operating system into a plurality of partitions and configures a time quota of the processor for each partition. After the operating system creates a new task for any target core in the processor, the target core allocates the new task to one partition and adds it to the task queue local to the target core. Meanwhile, the target core extracts a task to be executed from the local task queue. When a task to be executed is extracted, the target core first determines the partition to which that task belongs, and then judges, according to the remaining time slices of the time quota of that partition, whether the partition can support the execution of the task. If the target core judges that the partition to which the task belongs cannot support it, the target core promptly borrows remaining time slices of the time quota of another partition and consumes the borrowed partition's time quota to execute the task, so that stagnation of the task to be executed is avoided. Based on the apparatus of this embodiment, the task execution efficiency of the operating system and the resource utilization of the processor can be effectively improved.
Optionally, the local task queue includes a ready task queue and a limited task queue, the execution priority of tasks in the ready task queue being higher than that of tasks in the limited task queue; tasks in the ready task queue occupy time slices of the partition time quota while waiting for execution, whereas tasks in the limited task queue do not. The task partition module 510 adds the first task to the local task queue by adding the first task to the local ready task queue if the remaining time slices of the time quota of the partition to which the first task belongs support the execution of the first task, and otherwise adding the first task to the local limited task queue. The task extraction module 520 extracts the task to be executed from the local task queue by extracting the task to be executed from the local ready task queue and limited task queue according to the execution priority of the tasks.
Optionally, the processor further includes other cores in addition to the target core, and the task extraction module 520 is further configured to:
Extract a third task to be executed from the ready task queues and limited task queues of other cores according to the execution priority of the tasks when the local ready task queue and limited task queue have no task to be executed; consume the time quota of the partition to which the third task belongs to execute the third task when the remaining time slices of that partition's time quota support the execution of the third task; and otherwise borrow remaining time slices of the time quota of another partition and consume the borrowed partition's time quota to execute the third task.
Optionally, the task extraction module 520 extracts the task to be executed from the local ready task queue and limited task queue according to the execution priority of the tasks by: attempting to extract the task to be executed from the local ready task queue; attempting to extract it from the ready task queues of other cores if it is not extracted from the local ready task queue; attempting to extract it from the local limited task queue if it is not extracted from the ready task queues of other cores; and attempting to extract it from the limited task queues of other cores if it is not extracted from the local limited task queue.
Optionally, the task execution module 530 is further configured to return the unconsumed time slices occupied by the execution of the second task to the original partition in the case that blocking occurs during the execution of the second task.
Optionally, the task execution module 530 is further configured to: in the case that the second task is awakened after blocking occurs during its execution and the local ready task queue and limited task queue have no other task to be executed, determine whether the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task; re-add the second task to the local ready task queue if it is determined that they support the execution of the second task; and re-add the second task to the local limited task queue if it is determined that they do not.
Optionally, the task execution module 530 is further configured to, in the case that the second task is awakened after blocking occurs during its execution and the local ready task queue and limited task queue have other tasks to be executed, extract a fourth task from the local ready task queue and limited task queue for execution according to the execution priority of the tasks.
Optionally, the task execution module 530 is further configured to: in the case that the second task is awakened after blocking occurs during its execution and the target core is currently executing a fifth task, determine whether the remaining time slices of the time quota of the partition to which the fifth task belongs support the execution of the fifth task; re-add the fifth task to the local ready task queue if it is determined that its execution is supported; and re-add the fifth task to the local limited task queue if it is determined that its execution is not supported.
Optionally, the task execution module 530 is further configured to: in the case that the second task is preempted by another task during its execution, return the unconsumed time slices occupied by the execution of the second task to the original partition; determine whether the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task; re-add the second task to the local ready task queue if it is determined that its execution is supported; and re-add the second task to the local limited task queue if it is determined that its execution is not supported.
Optionally, the task execution module 530 is further configured to, after the unconsumed time slices occupied by the execution of the second task are returned to the original partition, transfer the tasks belonging to the original partition in the limited task queues of the local core and of other cores to the ready task queue of the core to which each task belongs, if the remaining time slices of the time quota of the original partition are greater than zero.
Optionally, the plurality of partitions are periodically divided by the operating system according to the partition time slice quota requirements, and the operating system refills the current partitions with time slices based on the pre-planned partition time slice quota in each period.
It is obvious that the apparatus of this embodiment may be used as an execution body of the method shown in fig. 1, and thus may implement the steps and corresponding functions of the method shown in fig. 1. Because the principle is the same, the description is not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present specification. Referring to fig. 6, at the hardware level, the electronic device includes a processor, and optionally an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be interconnected by an internal bus, which may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The buses may be classified into address buses, data buses, control buses, and so on. For ease of illustration, only one bi-directional arrow is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code including computer-operating instructions. The memory may include memory and non-volatile storage and provide instructions and data to the processor.
Alternatively, the processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program, thereby forming the partition task scheduling device shown in fig. 5 on a logic level. Correspondingly, the processor executes the program stored in the memory and is specifically configured to perform the following operations:
And in the case that the operating system creates a first task for the target core, distributing the first task to one partition, and adding the first task to a local task queue.
And extracting the task to be executed from the local task queue, and determining the partition to which the second task to be executed belongs under the condition that the second task to be executed is extracted.
Consuming the time quota of the partition to which the second task belongs to execute the second task under the condition that the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task; otherwise, borrowing the remaining time slices of the time quota of the other partition to consume the time quota of the borrowed other partition to execute the second task.
The partition task scheduling method disclosed in the embodiments shown in the present specification can be applied to a processor and implemented by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), and the like, or may be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or as being executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
Of course, in addition to the software implementation, the electronic device in this specification does not exclude other implementations, such as a logic device or a combination of software and hardware, that is, the execution subject of the following process is not limited to each logic unit, but may also be hardware or a logic device.
Furthermore, an embodiment of the present application also proposes a computer-readable storage medium storing one or more programs, the one or more programs including instructions. The above instructions, when executed by a portable electronic device comprising a plurality of applications, enable the portable electronic device to perform the steps of the method shown in fig. 1, comprising:
And in the case that the operating system creates a first task for the target core, distributing the first task to one partition, and adding the first task to a local task queue.
And extracting the task to be executed from the local task queue, and determining the partition to which the second task to be executed belongs under the condition that the second task to be executed is extracted.
Consuming the time quota of the partition to which the second task belongs to execute the second task under the condition that the remaining time slices of the time quota of the partition to which the second task belongs support the execution of the second task; otherwise, borrowing the remaining time slices of the time quota of the other partition to consume the time quota of the borrowed other partition to execute the second task.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely an example of the present specification and is not intended to limit the present specification. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present description. Moreover, all other embodiments obtained by those skilled in the art without making any inventive effort shall fall within the scope of protection of this document.