WO2024164369A1 - Resource-aware task allocation method for mixed-criticality partitioned real-time operating system - Google Patents
- Publication number
- WO2024164369A1 (PCT/CN2023/078467)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- task
- criticality
- resource
- tasks
- low
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the invention relates to the field of computer technology, and in particular to a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system.
- Mixed criticality systems are embedded real-time systems that perform multiple functions on the same computing platform and meet space, power and cost constraints. Such systems are widely used in automotive, aerospace and other industries. In such an integrated system, applications with different certification requirements and different importance (criticality) can coexist and share system resources. Mixed criticality systems must ensure that high-criticality tasks can be executed correctly and meet time constraints, while low-criticality tasks can be abandoned in certain circumstances to ensure the operation of high-criticality tasks.
- Mixed-criticality systems usually have two system operation modes: low criticality and high criticality. The system starts running in low-criticality mode; in this mode, low-criticality tasks run together with high-criticality tasks, each using its low-criticality-mode execution time. When a task in the system overruns its budget, the system switches to high-criticality mode and suspends the low-criticality tasks, while the high-criticality tasks run with a longer execution time.
- low-criticality tasks and high-criticality tasks share various resources on the platform, such as code segments, data, memory, IO devices, etc.
- tasks need to access these shared resources in a mutually exclusive manner with locks.
- the spin locks specified in the AUTOSAR standard are widely used to protect the computational correctness of shared resources.
- Real-time systems use a resource sharing protocol based on spin locks to manage access to shared resources and provide an upper bound on the time for tasks to wait for and execute shared resources.
- due to the use of spin locks, when tasks request the same resource on different processors, they busy-wait on their respective processors until they successfully obtain the requested resource. This phenomenon imposes long blocking times on tasks and seriously undermines the schedulability of the system.
- response time analysis is a mainstream schedulability analysis method. This method first predicts the worst-case response time of each task and then compares this value with the task deadline.
- three types of blocking time: direct spin blocking, indirect spin blocking, and arrival blocking
- the task assignment rules mainly consider task utilization and deadlines, and usually consider each task only at its highest criticality level (i.e., its maximum utilization), which leads to pessimistic estimates of system utilization and thus reduces system schedulability.
- an embodiment of the present invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system to improve system schedulability.
- An aspect of an embodiment of the present invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, comprising:
- high-criticality tasks are assigned while ensuring that the assignment result is schedulable in the high-criticality system mode
- the low-criticality tasks are allocated while ensuring that the allocation results are schedulable in the low-criticality system mode;
- the task migration method is used to schedule the system mode switching and complete the task allocation.
- grouping the tasks according to resource access conditions includes:
- each resource task group and independent task group obtained by division is further divided.
- grouping tasks according to the resource sorting result includes:
- starting from the first resource, if a task needs to request this resource, it is added to the task group corresponding to this resource; if a task has already been added to a group, it is not added to any other task group; all tasks that do not request any resource are treated as one independent group;
- each resource task group and independent task group obtained by the division is further divided, including:
- Each resource task group and independent task group is further divided to obtain a high-criticality task group and a low-criticality task group corresponding to each resource, as well as an independent high-criticality task group and an independent low-criticality task group.
- allocating high-criticality tasks according to the result of task grouping while ensuring that the allocation result is schedulable in the high-criticality system mode includes:
- a high-criticality task group corresponding to each resource and an independent high-criticality group are obtained, and the tasks in the task group corresponding to each resource are sorted in ascending order according to the utilization rate under the high criticality;
- Allocate tasks in the current task group in sequence and assign tasks to the first schedulable processor in processor order until all tasks of the high-criticality task groups corresponding to all resources are assigned;
- the tasks in the independent task group are sorted in descending order according to the utilization at the high criticality level.
- the processors are sorted from small to large according to the utilization at the high criticality level and assigned to the first schedulable processor in processor order.
- the allocating low-criticality tasks according to the result of task grouping while ensuring that the allocation result is schedulable in the low-criticality system mode includes:
- a low-criticality task group corresponding to each resource and an independent low-criticality task group are obtained, and the tasks in the task group corresponding to each resource are sorted in ascending order according to the utilization rate under the low criticality;
- Sort the processors: processors hosting high-criticality tasks that access the current resource are placed first, sorted in ascending order of their utilization at the low criticality level; the remaining processors follow, also in ascending order of their utilization at the low criticality level;
- Allocate tasks in the current task group in sequence and assign tasks to the first schedulable processor in processor order until all low-criticality task groups corresponding to all resources are assigned tasks;
- the tasks in the independent task group are sorted in descending order according to the utilization at the low criticality level.
- the processors are sorted from small to large according to the utilization at the low criticality level, and the processors are traversed and assigned to the first schedulable processor.
- the scheduling of system mode switching using a task migration method according to the allocated high-criticality tasks and low-criticality tasks to complete task allocation includes:
- checking the timeout reasons of the task in sequence until the task can be scheduled includes:
- if a high-criticality task on the processor to which the current task belongs accesses the resource: attempt to migrate all low-criticality tasks that access the resource on the processors causing arrival blocking to the task to the processor to which the current task belongs;
- if the resource-access blocking time is greater than 0, traverse the resources by number and check each processor in turn, by processor number, for low-criticality tasks whose resource accesses cause spin blocking to the task, and migrate the low-criticality tasks on that processor that access the resource to the core where the current task is located; the migration is performed only if it does not cause a new task timeout;
- Another aspect of the embodiments of the present invention further provides a resource-aware task allocation device for a mixed-criticality partitioned real-time operating system, comprising:
- the first module is used to group tasks according to resource access
- the second module is used to allocate high-criticality tasks according to the result of task grouping while ensuring that the allocation result is schedulable in the high-criticality system mode;
- the third module is used to allocate low-criticality tasks according to the result of task grouping while ensuring that the allocation result is schedulable in the low-criticality system mode;
- the fourth module is used to schedule the system mode switching according to the assigned high-criticality tasks and low-criticality tasks using the task migration method to complete the task allocation.
- Another aspect of an embodiment of the present invention further provides an electronic device, including a processor and a memory;
- the memory is used to store programs
- the processor executes the program to implement the method described above.
- Another aspect of the embodiments of the present invention further provides a computer-readable storage medium, wherein the storage medium stores a program, and the program is executed by a processor to implement the method described above.
- the embodiment of the present invention also discloses a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium.
- a processor of a computer device can read the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the above method.
- the embodiment of the present invention groups tasks according to their resource access; according to the result of the task grouping, high-criticality tasks are allocated while ensuring that the allocation result is schedulable in the high-criticality system mode; according to the result of the task grouping, low-criticality tasks are allocated while ensuring that the allocation result is schedulable in the low-criticality system mode; according to the allocated high-criticality and low-criticality tasks, a task migration method is used to handle scheduling at system mode switches and complete the task allocation.
- the present invention can improve the overall schedulability of the system.
- FIG. 1 is a flowchart of overall steps provided by an embodiment of the present invention.
- Fixed-priority scheduling (FPS): a scheduling method in real-time systems in which task priorities are assigned statically before the system runs and remain fixed throughout each task's lifetime; during operation, higher-priority tasks are scheduled first.
- Semi-partitioned scheduling: a real-time scheduling scheme for multiprocessors. The system statically assigns tasks to processors before running and schedules them independently on each processor. However, unlike full partitioning, semi-partitioned scheduling allows some tasks to be migrated to pre-specified processors under certain conditions.
- Sporadic task model: a computing task with explicit time constraints. Its jobs do not arrive with a fixed period, but there is a minimum inter-arrival interval; in the worst case it can be regarded as a periodic task.
- Shared resources: hardware or software resources that multiple tasks may request simultaneously but must access exclusively. To ensure data integrity, each resource is protected by a designated lock. A task is allowed to access a resource only after it obtains the corresponding lock; if the resource is occupied, the requesting task waits until the resource becomes available again.
- Schedulability refers to the fact that all tasks in the system can be completed within the deadline.
- Schedulability test refers to a set of mathematical tools that can compute the worst-case response time (i.e., from release to completion) of a task executed on a target system.
- the worst-case response time usually includes the worst-case execution time of the task itself (WCET), the interference from local higher-priority tasks (local high-priority interference), and the blocking time caused by the task's accesses to shared resources (explained below).
- Direct spin delay refers to the situation where a task requesting a resource is directly interfered with by a remote task.
- Indirect spin delay refers to the situation where a low-priority task is transitively blocked by a high-priority task interfered with by a remote task.
- Arrival blocking: in systems using non-preemptive or priority-boosting techniques, a high-priority task is blocked on arrival by a local low-priority task that it cannot preempt.
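For reference, the blocking terms defined above typically enter a response-time recurrence of the following general form. This is a hedged sketch of the standard spin-lock-aware analysis, not the exact equations used by the invention; the symbols C_i (WCET), B_i (arrival blocking), S_i (direct and indirect spin blocking), T_j (period), and hp(i) (the set of local higher-priority tasks) are conventional notation introduced here only for illustration.

```latex
% Illustrative spin-lock-aware response-time recurrence (not the patent's exact analysis):
% the task's own WCET, its arrival blocking, its spin blocking, plus interference from
% local higher-priority tasks, each inflated by its own spin time.
R_i = C_i + B_i + S_i + \sum_{\tau_j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil \left( C_j + S_j \right)
```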
- the present invention proposes a task allocation method oriented to shared resources, which greatly reduces the blocking time of tasks accessing shared resources and improves the schedulability of the system.
- the present invention is based on a mixed-criticality system (Mixed-Criticality System) with two system execution modes (System Execution Mode), L ∈ {HI, LO}: high criticality and low criticality.
- the system includes a set of identical processors (a symmetric multiprocessor platform) and a set of sporadic tasks.
- the system adopts a semi-partitioned fixed-priority scheme.
- each task (denoted τ_i) is defined by its period, deadline, worst-case execution time estimates, priority, criticality level, allocation, and migration target. Unlike ordinary periodic tasks, the worst-case execution time estimate of a mixed-criticality task is a vector: a different estimate exists for each system execution mode.
- the worst-case response time of a task in mode L is denoted R_i(L), and the utilization of a task in mode L is denoted u_i(L).
- the utilization of a processor in mode L is expressed as the sum of the utilizations, in mode L, of the tasks assigned to it.
- the system also contains a set of shared resources protected by spin locks; the resource set is denoted R.
- c_k(L) denotes the worst-case execution time of executing resource r_k at criticality level L, and it is also assumed that c_k(HI) > c_k(LO). The number of times task τ_i accesses resource r_k during one job is also specified; the function F(·) gives the set of resources accessed by a given task, and the function G(·) gives the set of tasks that access a given resource. A task group is associated with each resource r_k, and a further group holds the tasks that do not request any resource.
- the mixed criticality model assumes that the system starts from a low criticality level and all tasks are scheduled with the budget under the low criticality level. In actual operation, when a task runs for longer than the budget under this mode, the system will switch to a high criticality level and suspend all low criticality tasks.
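To make the model concrete, the following minimal Python sketch captures the entities described above. It is an illustrative reconstruction under the stated assumptions; the class and field names are not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict

HI, LO = "HI", "LO"  # the two system execution modes

@dataclass
class Task:
    """A sporadic mixed-criticality task (tau_i in the text)."""
    period: float                 # minimum inter-arrival time
    deadline: float
    wcet: Dict[str, float]        # worst-case execution time per mode (vector-valued)
    priority: int
    criticality: str              # HI or LO
    accesses: Dict[int, int] = field(default_factory=dict)  # resource id -> accesses per job
    processor: int = -1           # allocation result (set by the allocation steps)
    migration_target: int = -1    # processor used if the task migrates at a mode switch

    def utilization(self, mode: str) -> float:
        return self.wcet[mode] / self.period

@dataclass
class Resource:
    """A shared resource r_k protected by a spin lock."""
    rid: int
    cs_len: Dict[str, float]      # worst-case critical-section length per mode, c_k(L)
```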
- the present invention includes four core steps: first, tasks are grouped based on their resource access; then, high-criticality tasks are assigned according to the results of the task grouping while ensuring that the assignment is schedulable in the high-criticality system mode; next, low-criticality tasks are assigned while ensuring that the assignment is schedulable in the low-criticality system mode; finally, a task migration method is used to ensure schedulability when the system mode switches.
- Step 1 Group tasks:
- Step 1.1 Sort the shared resources in descending order of their total number of accesses N_k;
- Step 1.2 Group the tasks according to the resource order of Step 1.1. Starting from the first resource, if a task needs to request resource r_k, add it to the task group corresponding to r_k; if a task has already been added to a group, it is not added to any other task group; tasks that do not request any resource are treated as an independent group;
- Step 1.3 Further divide each resource task group and the independent task group according to criticality level, obtaining a high-criticality task group and a low-criticality task group corresponding to each resource r_k, as well as an independent high-criticality task group and an independent low-criticality task group.
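A compact Python sketch of this grouping step, reusing the Task and Resource classes introduced earlier, might look as follows; the function and helper names are assumptions made for illustration, not identifiers from the patent.

```python
from collections import defaultdict
from typing import List, Tuple

def group_tasks(tasks: List[Task], resources: List[Resource]):
    """Step 1 sketch: group tasks by resource access, then split each group by criticality."""
    # Step 1.1: sort resources by their total number of accesses N_k, descending.
    def total_accesses(r: Resource) -> int:
        return sum(t.accesses.get(r.rid, 0) for t in tasks)
    ordered = sorted(resources, key=total_accesses, reverse=True)

    # Step 1.2: each task joins the group of the first (most contended) resource it requests.
    groups = defaultdict(list)              # resource id -> tasks in that group
    assigned = set()
    for r in ordered:
        for t in tasks:
            if id(t) not in assigned and t.accesses.get(r.rid, 0) > 0:
                groups[r.rid].append(t)
                assigned.add(id(t))
    independent = [t for t in tasks if id(t) not in assigned]  # tasks requesting no resource

    # Step 1.3: split every resource group and the independent group by criticality level.
    def split(ts: List[Task]) -> Tuple[List[Task], List[Task]]:
        return ([t for t in ts if t.criticality == HI],
                [t for t in ts if t.criticality == LO])
    per_resource = {rid: split(ts) for rid, ts in groups.items()}
    return ordered, per_resource, split(independent)
```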
- Step 2 Allocation of high-criticality tasks:
- Step 2.1 Based on the task grouping (Algorithm 1), obtain the high-criticality task group corresponding to each resource and the independent high-criticality group, and sort the tasks in the task group corresponding to each resource in ascending order of their utilization u_i(HI) at the high criticality level.
- Step 2.2 Start traversing from the task group corresponding to the resource with the largest total number of accesses N_k;
- Step 2.3 Sort the processors in ascending order of their utilization at the high criticality level (processors with less load can be assigned more tasks from the same task group);
- Step 2.4 Allocate tasks in the current task group in sequence, and assign tasks to the first schedulable processor in processor order until the task allocation of the high-criticality task group corresponding to all resources is completed.
- the schedulability test uses the schedulability analysis under the high-criticality mode.
- Step 2.5 The tasks in the independent task group are sorted in descending order of their utilization u_i(HI) at the high criticality level. Before each independent task is assigned, the processors are sorted in ascending order of their utilization at the high criticality level, and the task is assigned to the first schedulable processor in processor order. The schedulability test uses the schedulability analysis for the high-criticality mode.
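Steps 2.1-2.5 can be read as a first-fit allocation over processors ordered by HI-mode utilization. The sketch below is illustrative only; `schedulable(processor_tasks, mode)` stands in for the high-criticality schedulability analysis, which the patent does not reproduce at this point, and the helper names are assumptions.

```python
def proc_utilization(assignment, p: int, mode: str) -> float:
    """Utilization of processor p in the given mode over the tasks already assigned to it."""
    return sum(t.utilization(mode) for t in assignment[p])

def place_first_fit(task, procs, assignment, schedulable, mode) -> bool:
    """Try processors in the given order; keep the first assignment that passes the test."""
    for p in procs:
        assignment[p].append(task)
        if schedulable(assignment[p], mode):
            task.processor = p
            return True
        assignment[p].pop()
    return False  # allocation failed for this task

def allocate_high(ordered, per_resource, indep_hi, num_procs, assignment, schedulable):
    """Step 2 sketch: allocate HI tasks so the result is schedulable in HI mode."""
    for r in ordered:                                   # Step 2.2: most contended resource first
        hi_group, _ = per_resource.get(r.rid, ([], []))
        hi_group.sort(key=lambda t: t.utilization(HI))  # Step 2.1: ascending HI utilization
        procs = sorted(range(num_procs),                # Step 2.3: least loaded processors first
                       key=lambda p: proc_utilization(assignment, p, HI))
        for t in hi_group:                              # Step 2.4: first schedulable processor
            place_first_fit(t, procs, assignment, schedulable, HI)
    # Step 2.5: independent HI tasks, descending utilization, processors re-sorted each time.
    for t in sorted(indep_hi, key=lambda x: x.utilization(HI), reverse=True):
        procs = sorted(range(num_procs),
                       key=lambda p: proc_utilization(assignment, p, HI))
        place_first_fit(t, procs, assignment, schedulable, HI)
```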
- Step 3 Allocation of low-criticality tasks:
- Step 3.1 Based on the task grouping (Algorithm 1), obtain the low-criticality task group corresponding to each resource and the independent low-criticality task group, and sort the tasks in the task group corresponding to each resource in ascending order of their utilization u_i(LO) at the low criticality level.
- Step 3.2 Start traversing from the task group corresponding to the resource with the largest total number of accesses N_k;
- Step 3.3 Sort the processors: processors hosting high-criticality tasks that access the current resource are placed first, sorted in ascending order of their utilization at the low criticality level; the remaining processors follow, also in ascending order of their utilization at the low criticality level.
- Step 3.4 Allocate the tasks in the current task group in sequence, and assign the tasks to the first schedulable processor in processor order.
- the schedulability test uses the schedulability analysis in low criticality mode.
- Step 3.5 The tasks in the independent task group are sorted in descending order of their utilization u_i(LO) at the low criticality level. Before each independent task is assigned, the processors are sorted in ascending order of their utilization at the low criticality level; the processors are traversed and the task is assigned to the first schedulable processor.
- the schedulability test uses the schedulability analysis under the low criticality level mode.
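Step 3 mirrors Step 2 except for the processor ordering, which prefers processors that already host high-criticality tasks accessing the same resource, keeping contended resources local. Another hedged sketch, reusing the helpers from the previous listing; the names remain illustrative assumptions.

```python
def allocate_low(ordered, per_resource, indep_lo, num_procs, assignment, schedulable):
    """Step 3 sketch: allocate LO tasks so the result is schedulable in LO mode."""
    for r in ordered:                                    # Step 3.2: most contended resource first
        hi_group, lo_group = per_resource.get(r.rid, ([], []))
        lo_group.sort(key=lambda t: t.utilization(LO))   # Step 3.1: ascending LO utilization
        hosts_hi = {t.processor for t in hi_group if t.processor >= 0}
        # Step 3.3: processors hosting HI tasks that access r come first; both partitions
        # are ordered by ascending LO-mode utilization.
        procs = sorted(range(num_procs),
                       key=lambda p: (p not in hosts_hi,
                                      proc_utilization(assignment, p, LO)))
        for t in lo_group:                               # Step 3.4: first schedulable processor
            place_first_fit(t, procs, assignment, schedulable, LO)
    # Step 3.5: independent LO tasks, descending utilization, processors re-sorted each time.
    for t in sorted(indep_lo, key=lambda x: x.utilization(LO), reverse=True):
        procs = sorted(range(num_procs),
                       key=lambda p: proc_utilization(assignment, p, LO))
        place_first_fit(t, procs, assignment, schedulable, LO)
```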
- Step 4 Task migration method when switching modes:
- When a mixed-criticality system is running, if a task's running time exceeds its execution time estimate for the current mode, the system mode is upgraded (i.e., a mode switch occurs). After the upgrade, low-criticality tasks are suspended and high-criticality tasks run with the more pessimistic worst-case execution time. Because of shared resources, to preserve the integrity of shared resources at a mode switch, a low-criticality task that currently holds a shared resource is suspended only after it finishes executing the current resource access; a task that holds no shared resource is suspended immediately.
- In the high-criticality task allocation step, high-criticality tasks are allocated while ensuring their schedulability in high-criticality mode, that is, R_i(HI) ≤ D_i.
- During a mode switch, however, low-criticality tasks accessing resources can block the operation of high-criticality tasks, causing some high-criticality tasks to become unschedulable.
- The method therefore migrates low-criticality tasks to remove their impact on high-criticality tasks during mode switching, thereby ensuring the schedulability of high-criticality tasks across the switch.
- I_i is the spin blocking time imposed on τ_i by low-criticality tasks on remote processors;
- B_i is the increased arrival blocking time suffered by τ_i.
- the migration algorithm steps are as follows:
- Step 4.1 Calculate the response time of each high-criticality task at the mode switch and check whether it exceeds the task's deadline;
- Step 4.2 If any task τ_i times out, check the reasons for its timeout one by one;
- Step 4.2.1 If B_i > 0, obtain the resource r_k that causes the maximum arrival blocking B_i and handle it in two cases: if no high-criticality task on the processor of the current task accesses r_k and only low-criticality tasks access it, sort the processors so that those able to cause spin blocking to these low-criticality tasks come first and the remaining processors follow in descending order of slack time, then, in processor order, migrate all low-criticality tasks on the processor that access r_k to the first processor satisfying the conditions; if a high-criticality task on the processor of the current task accesses r_k, attempt to migrate all low-criticality tasks that access r_k on the processors causing arrival blocking to the processor of the current task.
- Step 4.2.2 If I_i > 0, traverse the resources by resource number k and, for each resource, check the processors in turn by processor number m for low-criticality tasks whose accesses to resource r_k cause spin blocking for task τ_i; if such tasks exist on a processor, migrate the low-criticality tasks that access r_k on that processor to the core where task τ_i is located, performing the migration only if it does not cause any new task timeout. Stop when task τ_i is schedulable (R_i ≤ D_i) or all resources have been traversed.
- Step 4.3 If Step 4.2 completes and the task is schedulable, repeat Step 4.2 for the remaining timed-out tasks until all high-criticality tasks are schedulable (R_i ≤ D_i).
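The spin-blocking branch of the migration procedure (Step 4.2.2) can be sketched as follows. For simplicity the sketch moves tasks directly within the static assignment, whereas the patent records migration targets that take effect only at a mode switch; `switch_response_time(task, assignment)` and `causes_spin_blocking(lo_task, resource, task)` are placeholder hooks for the underlying analysis and are assumptions of this sketch.

```python
def resolve_spin_blocking(task, ordered, assignment, switch_response_time, causes_spin_blocking):
    """Step 4.2.2 sketch: migrate LO tasks whose accesses to r_k spin-block `task`."""
    for r in ordered:                                   # traverse by resource number k
        for p, local_tasks in enumerate(assignment):    # then by processor number m
            if p == task.processor:
                continue
            movers = [t for t in local_tasks
                      if t.criticality == LO and causes_spin_blocking(t, r, task)]
            if not movers:
                continue
            # Tentatively migrate the blocking LO tasks to the core of `task`.
            for t in movers:
                local_tasks.remove(t)
                assignment[task.processor].append(t)
            if any(switch_response_time(x, assignment) > x.deadline
                   for core in assignment for x in core):
                # Roll back: this migration would introduce a new timeout.
                for t in movers:
                    assignment[task.processor].remove(t)
                    local_tasks.append(t)
            elif switch_response_time(task, assignment) <= task.deadline:
                return True                             # task tau_i is now schedulable
    return switch_response_time(task, assignment) <= task.deadline
```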
- the present invention has the following characteristics:
- the solution takes into account the impact of shared resources and localizes heavily contended resources as much as possible, thereby reducing inter-core resource access conflicts, reducing task blocking time, and improving the schedulability of the system.
- this scheme takes into account the worst-case execution times of tasks at their different criticality levels in a mixed-criticality system (that is, a task has a different utilization at each criticality level, and the higher the level, the higher the utilization); it also proposes a task allocation and migration method for mode switches to ensure the schedulability of the system in all operating scenarios.
- the present invention localizes heavily contended shared resources through task allocation, reduces resource blocking time, and improves system schedulability.
- the present invention takes into account the criticality characteristics of the mixed criticality system, ensures the schedulability of each criticality mode during the allocation process, uses task migration to ensure the schedulability when switching modes, reasonably utilizes system capacity, and improves the overall schedulability of the system.
- the functions/operations noted in the block diagrams may occur out of the order noted in the figures.
- two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order.
- the embodiments presented and described in the flowcharts of the present invention are provided by way of example, for the purpose of providing a more comprehensive understanding of the technology. The disclosed methods are not limited to the operations and logic flow presented herein. Alternative embodiments are contemplated in which the order of the various operations is changed and in which sub-operations described as part of a larger operation are performed independently.
- if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- the computer software product is stored in a storage medium, including several instructions for a computer device (which can be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in each embodiment of the present invention.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or optical disk, and other media that can store program code.
- An ordered list of executable instructions for a logical function may be embodied in any computer-readable medium for use by an instruction execution system, apparatus or device (e.g., a computer-based system, a system including a processor, or other system that can fetch instructions from an instruction execution system, apparatus or device and execute the instructions), or in conjunction with such instruction execution system, apparatus or device.
- a "computer-readable medium” may be any device that can contain, store, communicate, propagate or transmit a program for use by an instruction execution system, apparatus or device, or in conjunction with such instruction execution system, apparatus or device.
- computer-readable media include the following: an electrical connection with one or more wires (electronic device), a portable computer disk case (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable and programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disk read-only memory (CDROM).
- the computer-readable medium may even be a paper or other suitable medium on which the program is printed, since the program may be obtained electronically, for example, by optically scanning the paper or other medium, followed by editing, deciphering or, if necessary, processing in another suitable manner, and then stored in a computer memory.
- a plurality of steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
- a discrete logic circuit having a logic gate circuit for implementing a logic function for a data signal
- a dedicated integrated circuit having a suitable combination of logic gate circuits
- programmable gate array (PGA)
- field programmable gate array (FPGA)
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Multi Processors (AREA)
Abstract
Description
The invention relates to the field of computer technology, and in particular to a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system.
Mixed-criticality systems are embedded real-time systems that perform multiple functions on the same computing platform while meeting space, power, and cost constraints. Such systems are widely used in the automotive, aerospace, and other industries. In such an integrated system, applications with different certification requirements and different importance (criticality) can coexist and share system resources. A mixed-criticality system must ensure that high-criticality tasks are executed correctly and meet their time constraints, while low-criticality tasks may be abandoned in certain circumstances to guarantee the operation of high-criticality tasks. Mixed-criticality systems usually have two system operation modes: low criticality and high criticality. The system starts running in low-criticality mode; in this mode, low-criticality tasks run together with high-criticality tasks, each using its low-criticality-mode execution time. When a task in the system overruns, the system switches to high-criticality mode and suspends the low-criticality tasks; at the same time, the high-criticality tasks run with a longer execution time.
In a multi-core mixed-criticality system, low-criticality tasks and high-criticality tasks share various resources on the platform, such as code segments, data, memory, and I/O devices. To ensure the integrity and consistency of the data, tasks must access these shared resources in a mutually exclusive manner using locks. The spin locks specified in the AUTOSAR standard are widely used to protect the computational correctness of shared resources, and real-time systems use spin-lock-based resource sharing protocols to manage access to shared resources and to provide an upper bound on the time a task waits for and executes a shared resource. However, because of spin locks, when tasks request the same resource on different processors, they busy-wait on their respective processors until they successfully obtain the requested resource. This phenomenon imposes long blocking times on tasks and seriously undermines the schedulability of the system.
In real-time systems, response-time analysis is a mainstream schedulability analysis method: it first predicts the worst-case response time of each task and then compares this value with the task's deadline. At present, in the schedulability analysis of spin-lock-based shared-resource protocols, by analyzing resource-access blocking within a given time window, three types of blocking time (direct spin blocking, indirect spin blocking, and arrival blocking) can be tightly bounded, providing an upper bound on the time a task spends waiting for and executing shared resources. Incorporating this term into the response-time analysis equation yields more accurate analysis results.
In recent years, resource-aware task allocation methods have received widespread attention as a way to reduce resource contention between processors in multiprocessor systems. The idea of such task allocation is to reduce contention by localizing shared resources, that is, to allocate tasks that access the same shared resource to the same core as much as possible. However, in mixed-criticality systems, few research works design task allocation algorithms with resource awareness as the main direction. At the same time, the frequent shared-resource accesses that are widespread in real systems lead to problems such as long task blocking times and poor system schedulability.
At present, task allocation algorithms for mixed-criticality systems mainly have the following problems:
Existing research generally uses heuristic algorithms to assign tasks. The assignment rules mainly consider task utilization and deadlines, and usually consider each task only at its highest criticality level (i.e., its maximum utilization), which leads to pessimistic estimates of system utilization and thus reduces system schedulability.
Some studies have considered the utilization differences across criticality levels, which can improve the overall schedulability of the system. However, none of the above solutions consider the impact of shared-resource access, so the system can become unschedulable due to excessive blocking delays during actual operation.
Summary of the invention
In view of this, an embodiment of the present invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system to improve system schedulability.
An aspect of an embodiment of the present invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, comprising:
grouping tasks according to their resource access;
according to the result of the task grouping, allocating high-criticality tasks while ensuring that the allocation result is schedulable in the high-criticality system mode;
according to the result of the task grouping, allocating low-criticality tasks while ensuring that the allocation result is schedulable in the low-criticality system mode;
according to the allocated high-criticality and low-criticality tasks, using a task migration method to handle scheduling at system mode switches and complete the task allocation.
Optionally, the grouping of tasks according to their resource access includes:
sorting the shared resources in descending order of their total number of accesses to obtain a resource ordering;
grouping the tasks according to the resource ordering;
further dividing, according to task criticality level, each resource task group and the independent task group obtained by the division.
Optionally, the grouping of tasks according to the resource ordering includes:
starting from the first resource, if a task needs to request the resource, adding it to the task group corresponding to that resource; if a task has already been added to a group, it is not added to any other task group; all tasks that do not request any resource are treated as one independent group;
the further dividing, according to task criticality level, of each resource task group and the independent task group includes:
further dividing each resource task group and the independent task group to obtain a high-criticality task group and a low-criticality task group corresponding to each resource, as well as an independent high-criticality task group and an independent low-criticality task group.
Optionally, the allocating of high-criticality tasks according to the result of the task grouping while ensuring that the allocation result is schedulable in the high-criticality system mode includes:
according to the result of the task grouping, obtaining the high-criticality task group corresponding to each resource and an independent high-criticality group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the high criticality level;
starting the traversal from the task group corresponding to the resource with the largest total number of accesses;
sorting the processors in ascending order of their utilization at the high criticality level;
allocating the tasks in the current task group in sequence, assigning each task to the first schedulable processor in processor order, until the tasks of the high-criticality task groups corresponding to all resources have been assigned;
sorting the tasks in the independent task group in descending order of their utilization at the high criticality level; before each independent task is assigned, sorting the processors in ascending order of their utilization at the high criticality level and assigning the task to the first schedulable processor in processor order.
Optionally, the allocating of low-criticality tasks according to the result of the task grouping while ensuring that the allocation result is schedulable in the low-criticality system mode includes:
according to the result of the task grouping, obtaining the low-criticality task group corresponding to each resource and an independent low-criticality task group, and sorting the tasks in the task group corresponding to each resource in ascending order of their utilization at the low criticality level;
starting the traversal from the task group corresponding to the resource with the largest total number of accesses;
sorting the processors so that processors hosting high-criticality tasks that access the current resource come first, in ascending order of their utilization at the low criticality level, with the remaining processors following, also in ascending order of their utilization at the low criticality level;
allocating the tasks in the current task group in sequence, assigning each task to the first schedulable processor in processor order, until the tasks of the low-criticality task groups corresponding to all resources have been assigned;
sorting the tasks in the independent task group in descending order of their utilization at the low criticality level; before each independent task is assigned, sorting the processors in ascending order of their utilization at the low criticality level, traversing the processors, and assigning the task to the first schedulable processor.
Optionally, the scheduling of system mode switching using a task migration method according to the allocated high-criticality and low-criticality tasks to complete the task allocation includes:
calculating the response time of each high-criticality task at a mode switch, and checking whether the response time of the task at the mode switch exceeds its deadline;
when the response time of a task exceeds its deadline, checking the causes of the timeout in turn until the task can be scheduled;
completing the scheduling of all high-criticality tasks.
Optionally, the checking, when the response time of a task exceeds its deadline, of the causes of the timeout in turn until the task can be scheduled includes:
when the arrival blocking time suffered by a high-criticality task at a mode switch is greater than the arrival blocking time it suffers at the high criticality level, obtaining the resource that causes the maximum arrival blocking and performing the following steps:
if no high-criticality task on the processor to which the current task belongs accesses the resource and only low-criticality tasks access it: sorting the processors so that processors that can cause spin blocking to those low-criticality tasks come first, with the remaining processors following in descending order of slack time; in processor order, migrating all low-criticality tasks on the processor that access the resource to the first processor that satisfies the conditions;
if a high-criticality task on the processor to which the current task belongs accesses the resource: attempting to migrate all low-criticality tasks that access the resource on the processors causing arrival blocking to the task to the processor to which the current task belongs;
if the current task is still not schedulable after the above steps have been executed, continuing with the following steps:
if the resource-access blocking time is greater than 0, traversing the resources by resource number and checking each processor in turn, by processor number, for low-criticality tasks whose resource accesses cause spin blocking to the task, and migrating the low-criticality tasks on that processor that access the resource to the core where the current task is located; the migration is performed only if it does not cause a new task timeout;
stopping when the current task is schedulable or all resources have been traversed.
Another aspect of the embodiments of the present invention further provides a resource-aware task allocation device for a mixed-criticality partitioned real-time operating system, comprising:
a first module for grouping tasks according to their resource access;
a second module for allocating, according to the result of the task grouping, high-criticality tasks while ensuring that the allocation result is schedulable in the high-criticality system mode;
a third module for allocating, according to the result of the task grouping, low-criticality tasks while ensuring that the allocation result is schedulable in the low-criticality system mode;
a fourth module for using, according to the allocated high-criticality and low-criticality tasks, a task migration method to handle scheduling at system mode switches and complete the task allocation.
Another aspect of the embodiments of the present invention further provides an electronic device, including a processor and a memory;
the memory is used to store a program;
the processor executes the program to implement the method described above.
Another aspect of the embodiments of the present invention further provides a computer-readable storage medium, wherein the storage medium stores a program, and the program is executed by a processor to implement the method described above.
The embodiments of the present invention also disclose a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device can read the computer instructions from the computer-readable storage medium and execute them, so that the computer device performs the above method.
The embodiments of the present invention group tasks according to their resource access; according to the result of the task grouping, high-criticality tasks are allocated while ensuring that the allocation result is schedulable in the high-criticality system mode; according to the result of the task grouping, low-criticality tasks are allocated while ensuring that the allocation result is schedulable in the low-criticality system mode; according to the allocated high-criticality and low-criticality tasks, a task migration method is used to handle scheduling at system mode switches and complete the task allocation. The present invention can improve the overall schedulability of the system.
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of the overall steps provided by an embodiment of the present invention.
In order to make the purpose, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present application and are not intended to limit the present application.
The technical terms that may appear in the present invention are first explained below:
固定优先级调度(fixed-priority scheduling,FPS):一种实时系统中的调度方法。任务的优先级在系统运行前静态分配,并在任务的整个生命周期固定使用。在系统运行期间,优先级更大的任务会优先被调度运行。Fixed-priority scheduling (FPS): A scheduling method in real-time systems. The priority of a task is statically assigned before the system runs and is fixed throughout the life cycle of the task. During system operation, tasks with higher priorities are scheduled to run first.
半分区调度(semi-partitioned Scheme):一种多处理器下的实时调度方案。系统在运行前将任务静态分配给各处理器,并在每个处理器上独立进行调度。然而,与全分区不同的是,半分区调度允许一些任务在特定条件下迁移到预先指定的处理器上运行。Semi-partitioned Scheme: A real-time scheduling scheme for multiple processors. The system statically assigns tasks to processors before running and schedules them independently on each processor. However, unlike full partitioning, semi-partitioned scheduling allows some tasks to be migrated to pre-specified processors under certain conditions.
偶发性周期任务(sporadic task model):具有明确时间约束的计算任务,其作业的到达没有固定的周期(period),但是有最小到达时间间隔的限制(minimal arrival interval),在最坏情况下可以视作周期性任务。 Sporadic task model: A computing task with clear time constraints. The arrival of its jobs does not have a fixed period, but there is a minimum arrival interval. In the worst case, it can be regarded as a periodic task.
共享资源(shared resources):多个任务可以同时请求但必须独占访问的硬件或软件资源。为保证数据的完整性,每个资源都由指定的锁来保护。任务只有获得资源对应的锁才被允许访问资源;如果资源被占有,则请求该资源的任务将等待,直到资源再次可用。Shared resources: Hardware or software resources that multiple tasks can request simultaneously but must have exclusive access to. To ensure data integrity, each resource is protected by a specified lock. A task is allowed to access a resource only if it obtains the lock corresponding to the resource; if the resource is occupied, the task requesting the resource will wait until the resource is available again.
可调度性(schedulability):指系统中所有任务可以在截止期限内完成运行。Schedulability: refers to the fact that all tasks in the system can be completed within the deadline.
可调度性分析(schedulability test):指一种能够计算出任务在目标系统执行的最差响应时间(即从释放到完成)的一套数学工具。最差响应时间通常包括任务本身执行需要的最坏时间(WCET)、本地高优先级任务的干扰时间(local high priority interference)、任务访问共享资源造成的阻塞时间(具体解释如下)等几个主要部分。Schedulability test: refers to a set of mathematical tools that can calculate the worst response time (i.e., from release to completion) of a task executed on a target system. The worst response time usually includes the worst time required for the task itself to execute (WCET), the interference time of local high priority tasks (local high priority interference), and the blocking time caused by the task accessing shared resources (specific explanation below).
Spin delay: in a multiprocessor system that uses a spin-lock-based resource-sharing protocol, the blocking incurred when a task on a remote processor holds a resource and a task on the local processor that requests the same resource must spin and wait. When the requesting task is delayed by the remote task itself, this is called direct spin delay; when a lower-priority task is transitively delayed by a higher-priority local task that is itself delayed by a remote task, this is called indirect spin delay.
Arrival blocking: in systems that use non-preemption or priority boosting, the blocking a high-priority task suffers upon arrival because it cannot preempt a local low-priority task.
To address the problems of the prior art, the present invention proposes a shared-resource-oriented task allocation method that greatly reduces the blocking time tasks incur when accessing shared resources and improves the schedulability of the system.
Specifically, one aspect of the embodiments of the present invention provides a resource-aware task allocation method for a mixed-criticality partitioned real-time operating system, comprising:
grouping the tasks according to their resource accesses;
according to the result of the task grouping, allocating the high-criticality tasks while ensuring that the allocation is schedulable in the high-criticality system mode;
according to the result of the task grouping, allocating the low-criticality tasks while ensuring that the allocation is schedulable in the low-criticality system mode;
according to the allocated high-criticality and low-criticality tasks, scheduling the system mode switch with a task migration method to complete the task allocation.
Optionally, grouping the tasks according to their resource accesses comprises:
sorting the shared resources in descending order of their total number of accesses to obtain a resource ordering;
grouping the tasks according to the resource ordering;
further dividing each resource task group and the independent task group according to the criticality of the tasks.
Optionally, grouping the tasks according to the resource ordering comprises:
starting from the first resource, if a task requests that resource, adding it to the task group corresponding to that resource; if the task has already been added to a group, not adding it to any other group; and placing all tasks that request no resource in an independent group;
further dividing each resource task group and the independent task group according to the criticality of the tasks comprises:
further dividing each resource task group and the independent task group to obtain, for each resource, a high-criticality task group and a low-criticality task group, as well as one independent high-criticality task group and one independent low-criticality task group.
Optionally, allocating the high-criticality tasks according to the result of the task grouping while ensuring that the allocation is schedulable in the high-criticality system mode comprises:
obtaining, from the result of the task grouping, the high-criticality task group corresponding to each resource and one independent high-criticality group, and sorting the tasks within each resource's task group in ascending order of their utilization at the high criticality level;
traversing the task groups starting from the resource with the largest total number of accesses;
sorting the processors in ascending order of their utilization at the high criticality level;
allocating the tasks of the current task group in sequence, assigning each task to the first schedulable processor in processor order, until the tasks of the high-criticality task groups of all resources have been allocated;
sorting the tasks of the independent task group in descending order of their utilization at the high criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the high criticality level and assigning the task to the first schedulable processor in processor order.
Optionally, allocating the low-criticality tasks according to the result of the task grouping while ensuring that the allocation is schedulable in the low-criticality system mode comprises:
obtaining, from the result of the task grouping, the low-criticality task group corresponding to each resource and one independent low-criticality task group, and sorting the tasks within each resource's task group in ascending order of their utilization at the low criticality level;
traversing the task groups starting from the resource with the largest total number of accesses;
sorting the processors such that processors hosting high-criticality tasks that access the current resource are placed first, ordered among themselves by ascending utilization at the low criticality level, with the remaining processors following in ascending order of their utilization at the low criticality level;
allocating the tasks of the current task group in sequence, assigning each task to the first schedulable processor in processor order, until the tasks of the low-criticality task groups of all resources have been allocated;
sorting the tasks of the independent task group in descending order of their utilization at the low criticality level and, before each independent task is allocated, sorting the processors in ascending order of their utilization at the low criticality level, traversing the processors and assigning the task to the first schedulable processor.
Optionally, scheduling the system mode switch with a task migration method according to the allocated high-criticality and low-criticality tasks to complete the task allocation comprises:
computing the response time of each high-criticality task at the mode switch and checking whether this response time exceeds the task's deadline;
when the response time of a task exceeds its deadline, examining the causes of the overrun in turn until the task becomes schedulable;
completing the scheduling of all high-criticality tasks.
Optionally, examining the causes of the overrun in turn when the response time of a task exceeds its deadline, until the task becomes schedulable, comprises:
when the arrival blocking suffered by the high-criticality task at the mode switch is greater than the arrival blocking it suffers at the high criticality level, obtaining the resource that causes the largest arrival blocking and performing the following steps:
if no high-criticality task on the processor of the current task accesses that resource and only low-criticality tasks access it: sorting the processors so that the processors able to impose spin blocking on those low-criticality tasks are placed first and the remaining processors follow in descending order of slack time; and, in processor order, migrating all low-criticality tasks on the current task's processor that access the resource to the first processor that satisfies the condition;
if a high-criticality task on the processor of the current task accesses that resource: attempting to migrate all low-criticality tasks accessing the resource on the processor that causes the arrival blocking to the processor of the current task;
if the current task is still unschedulable after the above steps, continuing with the following steps:
if the resource-access blocking time is greater than zero, traversing the resources by resource index and checking, processor by processor, whether low-criticality tasks accessing the resource on a processor impose spin blocking on the task, and migrating the low-criticality tasks accessing that resource on that processor to the core of the current task, performing the migration only if it does not cause any additional task to miss its deadline;
stopping when the current task becomes schedulable or all resources have been traversed.
Another aspect of the embodiments of the present invention further provides a resource-aware task allocation apparatus for a mixed-criticality partitioned real-time operating system, comprising:
a first module for grouping the tasks according to their resource accesses;
a second module for allocating the high-criticality tasks according to the result of the task grouping while ensuring that the allocation is schedulable in the high-criticality system mode;
a third module for allocating the low-criticality tasks according to the result of the task grouping while ensuring that the allocation is schedulable in the low-criticality system mode;
a fourth module for scheduling the system mode switch with a task migration method according to the allocated high-criticality and low-criticality tasks to complete the task allocation.
Another aspect of the embodiments of the present invention further provides an electronic device, comprising a processor and a memory;
the memory is configured to store a program;
the processor executes the program to implement the method described above.
Another aspect of the embodiments of the present invention further provides a computer-readable storage medium, the storage medium storing a program which, when executed by a processor, implements the method described above.
An embodiment of the present invention further discloses a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device can read the computer instructions from the computer-readable storage medium and execute them, causing the computer device to perform the method described above.
The specific implementation of the present invention is described in detail below with reference to the accompanying drawings.
The present invention is based on a mixed-criticality system with two system execution modes, high criticality and low criticality (L ∈ {HI, LO}). The system comprises a set of identical processors Φ (a symmetric multiprocessor) and a set of sporadic tasks Γ, and adopts a semi-partitioned fixed-priority scheduling scheme.
In the system, each task τi is characterized by its period Ti, deadline Di, worst-case execution time estimates Ci(L), priority, criticality level Li, an allocation and a migration scheme. Unlike an ordinary periodic task, a mixed-criticality task has a vector of worst-case execution time estimates, i.e. a different estimate in each system execution mode, and a higher system mode uses a more conservative estimate of the task's execution time (Ci(HI) > Ci(LO)). The worst-case response time of a task in mode L is denoted Ri(L), the utilization of a task in mode L is ui(L) = Ci(L)/Ti, and the utilization of a processor Pm in mode L, denoted Um(L), is the sum of ui(L) over the tasks assigned to Pm.
The system also contains a set of shared resources protected by spin locks (the resource set is denoted R). For a resource rk in the system, ck(L) denotes the worst-case execution time of rk at criticality level L, where it is likewise assumed that ck(HI) > ck(LO). Ni,k denotes the number of times task τi accesses resource rk during one release; the function F(·) gives the set of resources accessed by a given task, and the function G(·) gives the set of tasks that access a given resource. Each resource rk has a corresponding task group, and the tasks that request no resource form an independent group. These model parameters are illustrated in the sketch below.
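As a concrete illustration of this task and resource model (a minimal sketch only; the class layout and field names are assumptions, not part of the original disclosure):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    """Sporadic mixed-criticality task tau_i (illustrative representation)."""
    name: str
    period: float                      # minimum inter-arrival time T_i
    deadline: float                    # relative deadline D_i
    wcet: Dict[str, float]             # C_i(L): {"LO": ..., "HI": ...}, with C_i(HI) >= C_i(LO)
    priority: int                      # fixed priority
    criticality: str                   # "HI" or "LO"
    accesses: Dict[str, int] = field(default_factory=dict)  # N_{i,k}: resource name -> requests per job

    def utilization(self, mode: str) -> float:
        """u_i(L) = C_i(L) / T_i."""
        return self.wcet[mode] / self.period

@dataclass
class Resource:
    """Shared resource r_k protected by a spin lock."""
    name: str
    cs_len: Dict[str, float]           # c_k(L): worst-case critical-section length per mode

def total_accesses(resource: Resource, tasks: List[Task]) -> int:
    """N_k: total number of requests for r_k over all tasks (used to rank resources)."""
    return sum(t.accesses.get(resource.name, 0) for t in tasks)
```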
The mixed-criticality model assumes that the system starts in the low-criticality mode, where all tasks are scheduled with their low-criticality budgets. At run time, when a task executes longer than its budget in this mode, the system switches to the high-criticality mode and suspends all low-criticality tasks.
As shown in FIG. 1, the present invention comprises four core steps: first, the tasks are grouped according to their resource accesses; then the high-criticality tasks are allocated first according to the grouping result, ensuring that the allocation is schedulable in the high-criticality system mode; next, the low-criticality tasks are allocated, ensuring that the allocation is schedulable in the low-criticality system mode; finally, a task migration method is used to guarantee schedulability at the system mode switch. An overall sketch of this flow is given below.
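The four steps can be combined into a single allocation routine as in the following skeleton (the four callables are placeholders for the concrete procedures detailed in Steps 1-4 below; this is an illustrative outline, not the exact implementation):

```python
def allocate_task_set(tasks, resources, processors,
                      group_tasks, place_hi, place_lo, fix_mode_switch):
    """Resource-aware allocation pipeline for a mixed-criticality partitioned system (sketch)."""
    groups = group_tasks(tasks, resources)   # Step 1: resource-based grouping
    place_hi(groups, processors)             # Step 2: HI tasks, HI-mode schedulability
    place_lo(groups, processors)             # Step 3: LO tasks, LO-mode schedulability
    fix_mode_switch(tasks, processors)       # Step 4: migrations for the mode switch
    return processors
```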
The specific steps of the present invention are as follows:
Step 1: task grouping:
Step 1.1: sort the shared resources in descending order of their total number of accesses, i.e. in non-increasing order of Nk, where Nk is the sum of Ni,k over all tasks τi.
Step 1.2: group the tasks according to the resource ordering of Step 1.1. Starting from the first resource, if a task requests resource rk, add it to the task group corresponding to rk; if the task has already been added to a group, do not add it to any other group. All tasks that request no resource form an independent group.
Step 1.3: further divide each resource task group and the independent task group according to criticality, obtaining for each resource rk a high-criticality task group and a low-criticality task group, as well as one independent high-criticality task group and one independent low-criticality task group. A sketch of this grouping procedure is given below.
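A minimal sketch of Step 1, reusing the illustrative Task and Resource classes introduced above (the dictionary layout of the result is an assumption, not a prescribed data structure):

```python
from typing import Dict, List

def group_tasks_by_resource(tasks: List[Task], resources: List[Resource]
                            ) -> Dict[str, Dict[str, List[Task]]]:
    """Step 1: group tasks by the most contended resource they request, then split by criticality."""
    # Step 1.1: resources in descending order of total access count N_k.
    ordered = sorted(resources, key=lambda r: total_accesses(r, tasks), reverse=True)

    # Insertion order of the dict preserves the descending-N_k traversal order used later.
    groups: Dict[str, Dict[str, List[Task]]] = {r.name: {"HI": [], "LO": []} for r in ordered}
    groups["independent"] = {"HI": [], "LO": []}

    # Steps 1.2 and 1.3: each task joins the group of the first (most accessed) resource it
    # requests, split by criticality; tasks that request nothing go to the independent group.
    for task in tasks:
        placed = False
        for r in ordered:
            if task.accesses.get(r.name, 0) > 0:
                groups[r.name][task.criticality].append(task)
                placed = True
                break
        if not placed:
            groups["independent"][task.criticality].append(task)
    return groups
```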
Step 2: allocation of the high-criticality tasks:
Step 2.1: from the high-criticality task group of each resource and the independent high-criticality group obtained by the task grouping (Algorithm 1), sort the tasks within each resource's task group in ascending order of their utilization ui(HI) at the high criticality level.
Step 2.2: traverse the task groups starting from the resource with the largest total number of accesses Nk.
Step 2.3: sort the processors in ascending order of their utilization at the high criticality level (a lightly loaded processor can receive more tasks from the same task group).
Step 2.4: allocate the tasks of the current task group in sequence, assigning each task to the first schedulable processor in processor order, until the tasks of the high-criticality task groups of all resources have been allocated; the schedulability test uses the schedulability analysis for the high-criticality mode.
Repeat Steps 2.3 and 2.4 until the tasks of the high-criticality task groups of all resources have been allocated.
Step 2.5: sort the tasks of the independent task group in descending order of their utilization ui(HI) at the high criticality level; before each independent task is allocated, sort the processors in ascending order of their utilization at the high criticality level and assign the task to the first schedulable processor in processor order; the schedulability test uses the schedulability analysis for the high-criticality mode. A sketch of this allocation loop is given below.
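The allocation loop of Steps 2.2-2.5 can be sketched as follows, again reusing the illustrative structures above. The `is_schedulable` and `order_processors` callables are placeholders: the former stands for the mode-specific schedulability analysis, which is not reproduced here, and the latter for the processor ordering of Step 2.3 (or Step 3.3, since Step 3 uses the same loop with mode "LO"):

```python
from typing import Callable, Dict, List, Optional

def allocate_groups(groups: Dict[str, Dict[str, List[Task]]],
                    processors: List[List[Task]],
                    mode: str,
                    order_processors: Callable[[List[List[Task]], Optional[str]], List[List[Task]]],
                    is_schedulable: Callable[[List[Task], str], bool]) -> None:
    """Steps 2.2-2.5 / 3.2-3.5: place one criticality level, group by group (illustrative)."""
    resource_names = [n for n in groups if n != "independent"]   # already in descending-N_k order

    # Resource groups: tasks in ascending utilization, processors re-ordered once per group.
    for name in resource_names:
        ordered = order_processors(processors, name)             # Step 2.3 / 3.3
        for task in sorted(groups[name][mode], key=lambda t: t.utilization(mode)):
            place_on_first_fit(task, ordered, mode, is_schedulable)

    # Independent group: tasks in descending utilization, processors re-ordered before each task.
    for task in sorted(groups["independent"][mode], key=lambda t: t.utilization(mode), reverse=True):
        ordered = order_processors(processors, None)
        place_on_first_fit(task, ordered, mode, is_schedulable)

def place_on_first_fit(task: Task, ordered: List[List[Task]], mode: str,
                       is_schedulable: Callable[[List[Task], str], bool]) -> bool:
    """Assign the task to the first processor on which the whole partition stays schedulable."""
    for proc in ordered:
        proc.append(task)
        if is_schedulable(proc, mode):
            return True
        proc.pop()                      # undo the tentative assignment
    return False                        # allocation failed for this task
```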
Step 3: allocation of the low-criticality tasks:
Step 3.1: from the low-criticality task group of each resource and the independent low-criticality task group obtained by the task grouping (Algorithm 1), sort the tasks within each resource's task group in ascending order of their utilization ui(LO) at the low criticality level.
Step 3.2: traverse the task groups starting from the resource with the largest total number of accesses Nk.
Step 3.3: sort the processors. Processors that host high-criticality tasks accessing the resource are placed first, ordered among themselves by ascending utilization at the low criticality level; the remaining processors follow in ascending order of their utilization at the low criticality level.
Step 3.4: allocate the tasks of the current task group in sequence, assigning each task to the first schedulable processor in processor order; the schedulability test uses the schedulability analysis for the low-criticality mode.
Repeat Steps 3.3 and 3.4 until the tasks of the low-criticality task groups of all resources have been allocated.
Step 3.5: sort the tasks of the independent task group in descending order of their utilization ui(LO) at the low criticality level; before each independent task is allocated, sort the processors in ascending order of their utilization at the low criticality level, traverse the processors and assign the task to the first schedulable processor; the schedulability test uses the schedulability analysis for the low-criticality mode. The processor ordering of Step 3.3 is sketched below.
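The processor ordering of Step 3.3, which keeps low-criticality users of a resource close to its high-criticality users, might look as follows (an illustrative sketch that can be passed as `order_processors` to the `allocate_groups` sketch above; when called with no resource it falls back to plain ascending utilization, as used for the independent group):

```python
from typing import List, Optional

def order_for_low_criticality(processors: List[List[Task]],
                              resource_name: Optional[str]) -> List[List[Task]]:
    """Step 3.3: processors already hosting HI tasks that use the resource come first;
    both partitions of the list are sorted by ascending LO-mode utilization."""
    def lo_utilization(proc: List[Task]) -> float:
        return sum(t.utilization("LO") for t in proc)

    if resource_name is None:
        return sorted(processors, key=lo_utilization)

    def hosts_hi_user(proc: List[Task]) -> bool:
        return any(t.criticality == "HI" and t.accesses.get(resource_name, 0) > 0 for t in proc)

    preferred = sorted([p for p in processors if hosts_hi_user(p)], key=lo_utilization)
    others = sorted([p for p in processors if not hosts_hi_user(p)], key=lo_utilization)
    return preferred + others
```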
Step 4: task migration at the mode switch:
While a mixed-criticality system is running, if a task executes longer than its worst-case time estimate, the system mode is upgraded (i.e., the mode switches). After the upgrade, low-criticality tasks are suspended, while high-criticality tasks run with the more pessimistic worst-case time. Because of the shared resources, and in order to preserve their integrity, a low-criticality task that holds a shared resource at the moment of the mode switch is suspended only after it finishes its current resource access; a task that holds no shared resource is suspended immediately.
When analyzing the worst-case response time Ri of a task, we use a more pessimistic assumption in order to obtain a safe worst-case response time bound. Specifically, at the mode switch a low-criticality task is assumed to run at the lowest priority and to access each of the resources it needs once before being suspended; that is, for a low-criticality task τj and every rk ∈ F(τj), its access count during the switch is taken as Nj,k = 1.
According to the high-criticality task allocation step, the high-criticality tasks are allocated while guaranteeing their schedulability in the high-criticality mode, i.e. Ri(HI) < Di. At the mode switch, however, resource accesses by low-criticality tasks may block the execution of high-criticality tasks, so that some high-criticality tasks become unschedulable. We therefore resolve the impact of low-criticality tasks on high-criticality tasks at the mode switch by migrating low-criticality tasks, thereby guaranteeing the schedulability of the high-criticality tasks at the mode switch.
First, it is determined whether a high-criticality task, after being interfered with by low-criticality tasks at the mode switch, will miss its deadline. Its response time at the mode switch is
Ri = Ri(HI) + Ii + Bi,
where Ii is the spin blocking imposed on τi by low-criticality tasks on remote processors, and Bi is the additional arrival blocking suffered by τi. The resource-access blocking terms Ii and Bi can be computed with existing blocking-time analyses.
In this computation, the blocking imposed on task τi by the low-criticality tasks on a processor Pm that access resource rk is considered for every resource rk in FA(τi), the set of resources that may cause arrival blocking for τi, and Bi(HI) denotes the arrival blocking suffered by τi in the high-criticality mode.
The migration algorithm proceeds as follows:
Step 4.1: use Ri = Ri(HI) + Ii + Bi to compute the response time Ri of each high-criticality task at the mode switch and check whether it exceeds the deadline Di. Based on the updated Ri, the slack time of a core is defined as the minimum of Di − Ri over the tasks assigned to it.
Step 4.2: if a task τi misses its deadline, examine the causes of the overrun in turn;
Step 4.2.1: if Bi > 0, obtain the resource rk that causes the largest arrival blocking Bi and handle the following two cases:
1) if, on the processor of task τi, no high-criticality task accesses the resource and only low-criticality tasks access it: sort the processors so that the processors able to impose spin blocking on these low-criticality tasks come first and the remaining processors follow in descending order of slack time; in processor order, migrate all low-criticality tasks on the processor of task τi that access resource rk to the first processor that satisfies the condition. The condition for a migration is that no additional task misses its deadline.
2) if, on the processor of task τi, a high-criticality task accesses the resource: attempt to migrate all low-criticality tasks that access the resource on the processor causing the arrival blocking to the processor of task τi. The condition for a migration is that no additional task misses its deadline.
Repeat Step 4.2.1 until Bi = 0 or Ri < Di. If the above migrations are unsuccessful, proceed to Step 4.2.2.
Step 4.2.2: if Ii > 0, traverse the resources by resource index k and check, processor by processor (by processor index m), whether low-criticality tasks accessing resource rk on processor Pm impose spin blocking on task τi; if so, migrate the low-criticality tasks that access rk on Pm to the core of task τi. The migration is performed only if it does not cause any additional task to miss its deadline. Stop when task τi becomes schedulable (Ri < Di) or all resources have been traversed.
Step 4.3: if, after Step 4.2, the task is schedulable, continue with Step 4.2 for the remaining tasks until all high-criticality tasks are schedulable (Ri < Di). A simplified sketch of this repair loop is given below.
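A simplified sketch of Step 4's outer repair loop. The blocking-analysis functions `response_hi`, `spin_blocking`, `extra_arrival_blocking` and the migration helpers `try_fix_arrival` and `try_fix_spin` are placeholders for analyses the disclosure only references, so this is an illustrative skeleton rather than the exact procedure:

```python
from typing import Callable, List

def repair_mode_switch(hi_tasks: List[Task],
                       response_hi: Callable[[Task], float],
                       spin_blocking: Callable[[Task], float],
                       extra_arrival_blocking: Callable[[Task], float],
                       try_fix_arrival: Callable[[Task], None],
                       try_fix_spin: Callable[[Task], None]) -> bool:
    """Step 4: make every high-criticality task schedulable at the mode switch (sketch)."""
    def response_at_switch(task: Task) -> float:
        # R_i = R_i(HI) + I_i + B_i
        return response_hi(task) + spin_blocking(task) + extra_arrival_blocking(task)

    for task in hi_tasks:
        # Step 4.2.1: while the extra arrival blocking dominates, migrate the offending
        # low-criticality tasks (the two cases of Step 4.2.1 are handled inside the helper).
        while response_at_switch(task) > task.deadline and extra_arrival_blocking(task) > 0:
            before = extra_arrival_blocking(task)
            try_fix_arrival(task)
            if extra_arrival_blocking(task) >= before:
                break                       # no further improvement possible
        # Step 4.2.2: then attack the remaining spin blocking, resource by resource.
        if response_at_switch(task) > task.deadline and spin_blocking(task) > 0:
            try_fix_spin(task)
        if response_at_switch(task) > task.deadline:
            return False                    # some high-criticality task remains unschedulable
    return True
```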
In summary, compared with the prior art, the present invention has the following features:
1. The scheme takes the influence of shared resources into account and localizes heavily contended resources as far as possible, thereby reducing inter-core resource-access conflicts, shortening task blocking times, and improving the schedulability of the system.
2. During allocation, the scheme simultaneously considers the worst-case execution times of tasks at the different criticality levels of a mixed-criticality system (i.e., a task has different utilizations at different criticality levels, and the higher the criticality level, the higher the utilization); it proposes task allocation and migration methods for each mode and for the mode switch, guaranteeing schedulability over all operating scenarios of the system.
The present invention localizes heavily contended shared resources through task allocation, reduces resource blocking time and improves system schedulability. It takes the criticality characteristics of a mixed-criticality system into account and guarantees schedulability in each criticality mode during allocation; it uses task migration to guarantee schedulability at the mode switch, makes reasonable use of system capacity, and improves the overall schedulability of the system.
In some alternative implementations, the functions/operations noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/operations involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more comprehensive understanding of the technology. The disclosed methods are not limited to the operations and logical flow presented herein. Alternative embodiments are contemplated in which the order of the various operations is altered and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, given the attributes, functions and internal relationships of the various functional modules of the apparatus disclosed herein, the actual implementation of the modules will be within the routine skill of an engineer. Accordingly, a person skilled in the art can practice the invention set forth in the claims using ordinary skill without undue experimentation. It will also be appreciated that the specific concepts disclosed are merely illustrative and are not intended to limit the scope of the invention, which is determined by the full scope of the appended claims and their equivalents.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or any other system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transport the program for use by, or in connection with, the instruction execution system, apparatus or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then storing it in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following technologies known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principles and spirit of the present invention, the scope of which is defined by the claims and their equivalents.
The foregoing is a detailed description of the preferred implementations of the present invention, but the present invention is not limited to the described embodiments. Those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions are all included within the scope defined by the claims of the present application.
Claims (10)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310095960.XA CN116244073A (en) | 2023-02-06 | 2023-02-06 | Resource-aware task allocation method for hybrid key partition real-time operating system |
| CN202310095960.X | 2023-02-06 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024164369A1 true WO2024164369A1 (en) | 2024-08-15 |
Family
ID=86632493
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2023/078467 Pending WO2024164369A1 (en) | 2023-02-06 | 2023-02-27 | Resource-aware task allocation method for mixed-criticality partitioned real-time operating system |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN116244073A (en) |
| WO (1) | WO2024164369A1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116430738B (en) * | 2023-06-14 | 2023-08-15 | 北京理工大学 | An Adaptive Dynamic Scheduling Method for Hybrid Critical Systems |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109298920A (en) * | 2018-08-28 | 2019-02-01 | 西安工业大学 | A Hybrid Critical Task Scheduling Method Based on Quasi-Partition Idea |
| CN110196762A (en) * | 2019-04-18 | 2019-09-03 | 中山大学 | Mix key tolerant system dynamic resource management agreement and the dispatching method of the agreement |
| CN110928657A (en) * | 2019-11-18 | 2020-03-27 | 西北工业大学 | Embedded system certainty analysis method |
| CN115619002A (en) * | 2022-09-21 | 2023-01-17 | 浙江大学 | Flexible dynamic mixed key system scheduling method |
-
2023
- 2023-02-06 CN CN202310095960.XA patent/CN116244073A/en active Pending
- 2023-02-27 WO PCT/CN2023/078467 patent/WO2024164369A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN116244073A (en) | 2023-06-09 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23920576; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |