CN118467124A - Task scheduling method, device, computer equipment, program product and storage medium - Google Patents
- Publication number
- CN118467124A (application CN202410550459.2A)
- Authority
- CN
- China
- Prior art keywords
- task
- current
- subtasks
- thread
- subtask
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses a task scheduling method, a task scheduling device, computer equipment, a program product, and a storage medium. In a preset thread pool, the method allocates a corresponding target thread to the current subtask of each task so as to execute each current subtask, where a current subtask is any one of the subtasks corresponding to its task, and a waiting time exists between at least some subtasks and the subtasks they depend on. The current subtask of a first task is allocated for execution to the target thread corresponding to the current subtask of a second task. The states of tasks include a ready state and a waiting state: the first task is a task in the ready state and the second task is a task in the waiting state; a task whose current subtask has not been allocated a corresponding target thread is in the ready state, and a task whose target thread has executed up to the waiting time is in the waiting state. Threads and tasks are thus converted to an m:n execution mode, the waiting time is used reasonably, resource waste is avoided, and task execution efficiency is improved.
Description
Technical Field
The present application relates to the field of cloud technology and computer technology, and more particularly, to a task scheduling method, a task scheduling apparatus, a computer device, a computer program product, and a non-transitory computer-readable storage medium.
Background
In the process of upgrading a device over the air (OTA), a corresponding thread is generally started for each OTA upgrade task to implement the OTA upgrade of the device.
However, when there are many devices to be upgraded, the number of threads required increases greatly, placing a high performance requirement on the server that executes the upgrade tasks.
Disclosure of Invention
Embodiments of the present application provide a task scheduling method, a task scheduling apparatus, a computer device, a computer program product, and a non-transitory computer-readable storage medium. Idle threads can be allocated to the tasks in the ready state to execute each task's current subtask, so that the number of threads required is reduced and the performance requirement on the server executing the upgrade tasks is lowered.
The task scheduling method of the present application comprises the following steps: respectively allocating, in a preset thread pool, a corresponding target thread to the current subtask of each task so as to execute each current subtask, wherein the current subtask is any one of the subtasks corresponding to the task, and a waiting time exists between at least part of the subtasks and the subtasks they depend on; and allocating the current subtask of a first task to the target thread corresponding to the current subtask of a second task for execution. The states of the tasks comprise a ready state and a waiting state, the first task is a task in the ready state, and the second task is a task in the waiting state; a task whose current subtask has not been allocated a corresponding target thread is in the ready state, and a task whose target thread has executed up to the waiting time is in the waiting state.
In some embodiments, the task scheduling method further comprises: and determining that the first task is in the ready state when the duration of the first task in the waiting state reaches the waiting time.
In some embodiments, the state of the task further includes a completion state, where the task is in the completion state when each of the subtasks corresponding to the task is performed and completed; the task scheduling method further comprises the following steps: and distributing the current subtask corresponding to the first task to the target thread corresponding to a third task for execution, wherein the third task is in the completion state.
In some embodiments, the allocating, in a preset thread pool, a corresponding target thread for a current subtask of each task includes: and distributing corresponding target threads for the current subtasks of each task in the preset thread pool based on the preset thread number of the thread pool and the request time of each task request.
In some embodiments, the allocating, in the preset thread pool, a corresponding target thread for the current subtask of each task based on the preset thread number of the thread pool and the request time of each task request includes: sorting the current subtasks based on the request time of each task request; and respectively allocating, in the preset thread pool, corresponding target threads to the current subtasks whose ordering falls within a first preset ordering range, wherein the first preset ordering range is determined based on the preset number of threads.
In some embodiments, the assigning the current subtask corresponding to the first task to the target thread corresponding to the second task includes: sorting the current subtasks corresponding to the first tasks based on the queuing time of the current subtasks corresponding to the first tasks; and respectively distributing each current subtask in a second preset sequencing range to each target thread corresponding to each second task for execution, wherein the second preset sequencing range is determined based on the number of the second tasks, and the queuing time is determined based on at least one of the duration of the current subtask waiting to distribute the corresponding target thread and the request time of the task request corresponding to the current subtask.
In some embodiments, the task scheduling method further comprises: ending the thread Chi Zhongkong which is idle and reaches the preset duration.
In some implementations, the task includes an OTA task.
The task scheduling device of the present application comprises an allocation module and an execution module. The allocation module is configured to allocate, in a preset thread pool, a corresponding target thread to the current subtask of each task so as to execute each current subtask, wherein the current subtask is any one of the subtasks corresponding to the task, and a waiting time exists between at least part of the subtasks and the subtasks they depend on. The execution module is configured to allocate the current subtask of the first task to the target thread corresponding to the current subtask of the second task for execution; the states of the tasks comprise a ready state and a waiting state, the first task is a task in the ready state, and the second task is a task in the waiting state; a task whose current subtask has not been allocated a corresponding target thread is in the ready state, and a task whose target thread has executed up to the waiting time is in the waiting state.
The computer device of the present application comprises a processor and a memory connected to the processor; the memory stores a computer program, and the processor executes the computer program to implement the task scheduling method of any one of the above embodiments.
The computer program product of the present application comprises a computer program, and the computer program comprises instructions for performing the task scheduling method of any of the embodiments described above.
The non-transitory computer readable storage medium according to an embodiment of the present application includes a computer program that, when executed by a processor, causes the processor to execute the task scheduling method according to any one of the above embodiments.
According to the task scheduling method, the task scheduling device, the computer equipment, the computer program product, and the non-transitory computer-readable storage medium of the present application, a corresponding target thread is allocated in the preset thread pool to the current subtask of each task so as to execute each current subtask, where the current subtask is any one of the subtasks corresponding to the task and a waiting time exists between at least some subtasks and the subtasks they depend on; this avoids frequent creation and destruction of threads, improves thread utilization, reduces thread-switching overhead, and improves the running efficiency of the system. The current subtask of the first task is allocated for execution to the target thread corresponding to the current subtask of the second task, where the states of tasks include a ready state and a waiting state, the first task is a task in the ready state, and the second task is a task in the waiting state; a task whose current subtask has not been allocated a corresponding target thread is in the ready state, and a task whose target thread has executed up to the waiting time is in the waiting state. The waiting time of the second task and the idle thread corresponding to it are used to execute the current subtask of the first task in the ready state, reducing the number of threads required to execute the tasks and lowering the performance requirement on the server executing the upgrade tasks. In addition, converting threads and tasks to an m:n execution mode makes reasonable use of the waiting time, reduces the time tasks wait for execution, avoids resource waste, and improves task execution efficiency.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic illustration of an application scenario of a task scheduling method of some embodiments of the present application;
FIG. 2 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 3 is a schematic diagram of a scenario of a prior art task execution and scheduling process;
FIG. 4 is a schematic illustration of a scenario of a task scheduling method of some embodiments of the application;
FIG. 5 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 6 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 7 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 8 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 9 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 10 is a flow chart of a task scheduling method of some embodiments of the application;
FIG. 11 is a block diagram of a task scheduler according to some embodiments of the present application;
FIG. 12 is a schematic structural diagram of a computer device in accordance with certain embodiments of the present application;
FIG. 13 is a schematic diagram of the connection state of a non-transitory computer readable storage medium and a processor of some embodiments of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
The terms appearing in the present application are explained below:
State machine: a state machine consists of a state register and combinational logic. It performs state transitions among preset states according to control signals and acts as a control center that coordinates related signal actions to complete specific operations. In the present application, a state machine is set to respond to the state of each task.
Over-the-air (OTA) technology is a wireless communication technology; it generally refers to a device completing a software update by receiving and installing the update over a wireless connection, without being physically connected to a computer or another device. In the field of the Internet of Things (IoT), OTA technology can be used to implement remote software upgrades and repairs.
In the process of upgrading a capability-limited terminal or device (such as a terminal or device without the capability to connect to an Ethernet network, or with poor storage capability) based on OTA technology, a Hypertext Transfer Protocol (HTTP) distribution scheme, that is, a scheme in which the terminal or device downloads the corresponding upgrade software package or update package over HTTP, cannot be used, because the storage capability or network access capability of the capability-limited terminal or device is insufficient.
In this case, the file of the OTA task is generally sliced to obtain a plurality of slice files, and the slice files are then transmitted to the terminal or device in batches to complete the upgrade. For example, when performing an OTA upgrade of a capability-limited terminal with an OTA file of 129 kibibytes (KiB), the 129 KiB OTA file may be sliced into 601 slice files (600 slice files of 220 bytes each and one slice file of 96 bytes), and the 601 slice files are then transferred to the terminal in sequence so that the terminal completes the OTA upgrade.
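The slice arithmetic above can be checked with a short sketch (the helper name is illustrative, not from the patent): 129 KiB is 132096 bytes, which splits into 600 full 220-byte slices plus one 96-byte remainder slice.

```python
# Hypothetical helper illustrating the slicing arithmetic in the example:
# how many full slices of a given size a file yields, plus the remainder.
def slice_counts(total_bytes: int, slice_size: int):
    """Return (full_slices, remainder_bytes) for the given file size."""
    full, remainder = divmod(total_bytes, slice_size)
    return full, remainder

total = 129 * 1024                       # 129 KiB = 132096 bytes
full, rem = slice_counts(total, 220)     # 220-byte slices
total_slices = full + (1 if rem else 0)  # 600 full slices + 1 of 96 bytes
```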
When OTA tasks are executed, the OTA system process starts a large number of threads to execute the tasks; these threads complete the transmission of the slice files and thereby complete the OTA upgrade.
However, when there are many devices to be upgraded, the number of threads required increases greatly, and the performance requirement on the server executing the upgrade tasks is high, because while any one thread executes, the other started threads sit in a state waiting for execution. In the OTA process, threads and tasks are generally bound 1:1; that is, after a thread is allocated to a task, the thread can execute another pending task only after the current task has finished. Moreover, a task comprises a plurality of steps, and after the thread executes one step it must wait a preset time before executing the next step. As a result, a large number of the threads' run-time windows (the time periods in which a thread is allocated resources such as central processing unit (CPU) time) are wasted during the waiting time, causing resources of the OTA system to be wasted.
For example, referring to fig. 3, consider a task executed in a thread and comprising step 1, step 2, and step 3. After the thread executes step 1, it must wait 10 seconds before executing step 2; after it executes step 2, it must wait another 10 seconds before executing step 3; and when step 3 has been executed, the task is complete.
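With assumed per-step execution times (the two 10-second waits come from the fig. 3 example; the step durations are made up for illustration), the fraction of the bound thread's run-time window wasted under 1:1 binding can be estimated:

```python
# Back-of-the-envelope estimate of wasted thread time under 1:1 binding.
step_seconds = [1.0, 1.0, 1.0]   # assumed execution times of steps 1-3
wait_seconds = [10.0, 10.0]      # the waits after step 1 and after step 2

busy = sum(step_seconds)
idle = sum(wait_seconds)
wall = busy + idle               # time the thread stays bound to the task
idle_fraction = idle / wall      # share of the window spent doing nothing
```

Under these assumptions the thread is idle for roughly 87% of the time it holds its slot, which is the wasted run-time window the method targets.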
Starting a large number of threads simultaneously greatly increases the context-switching cost of the OTA system. Therefore, if the wasted run-time windows were avoided simply by starting a large number of threads at the same time, the growth rate of the upper limit of the OTA system's effective capacity would be significantly smaller than the growth rate of the system load, affecting the normal use of OTA.
In order to solve the technical problems, an embodiment of the present application provides a task scheduling method.
The following describes an application scenario of the technical scheme of the present application, as shown in fig. 1, the task scheduling method provided by the present application may be applied to the application scenario shown in fig. 1. The task scheduling method is applied to the task scheduling system 1000. The task scheduling system 1000 may schedule OTA tasks.
In one embodiment, the task scheduling system 1000 may include a server 100 or a terminal device 200, etc., and the task scheduling method may be performed by the terminal device or the server, etc.
In the embodiments of the present application, the terminal device 200 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a desktop computer, an e-book reader, an intelligent voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, and the like; the terminal device may be provided with a blockchain-related service. The server 100 is the background server corresponding to software, a web page, an applet, or the like, or a server dedicated to task scheduling, which is not particularly limited in the present application.
The server 100 may be an independent physical server, a cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The embodiments of the present application are not limited in this regard.
It should be explained that cloud computing, as described above, is a computing mode that distributes computing tasks over a resource pool formed by a large number of computers, enabling various application systems to acquire computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's perspective, the resources in the cloud can be expanded without limit, acquired at any time, used on demand, and scaled at any time. By establishing a cloud computing resource pool (cloud platform for short, generally called an IaaS (Infrastructure as a Service) platform), multiple types of virtual resources are deployed in the resource pool for external clients to select and use.
It should be noted that, the task scheduling method of the present application may also be performed by an electronic device, which may be the server 100 or the terminal device 200, that is, the method may be performed by the server 100 or the terminal device 200 alone, or may be performed by the server 100 and the terminal device 200 together. For example, when executed by the server 100, the server 100 assigns a corresponding target thread to each task's current subtask.
In one embodiment, the server 100 and the terminal device 200 may communicate via a communication network.
In one embodiment, the communication network is a wired network or a wireless network.
It should be noted that the numbers of servers 100 and terminal devices 200 shown in fig. 1 are merely illustrative; the numbers are not limited in practice, and the embodiments of the present application place no particular limitation on them.
The task scheduling method provided by the exemplary embodiments of the present application will be described below with reference to the accompanying drawings in conjunction with the above-described application scenario, and it should be noted that the above-described application scenario is only shown for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited in any way in this respect.
Referring to fig. 1 and 2, an embodiment of the present application provides a task scheduling method, where the task scheduling method includes:
Step 011: and respectively distributing corresponding target threads for the current subtasks of each task in a preset thread pool so as to execute each current subtask.
The current subtask is any one of the subtasks corresponding to the task, and a waiting time exists between at least some subtasks and the subtasks they depend on.
Specifically, a task may be a specific operation that needs to be performed, for example a data computing task (e.g., data analysis), a data processing task (e.g., data cleansing or data conversion), a storage task (e.g., file backup), or a system maintenance task (e.g., a security patch update). A task may include a plurality of subtasks, each subtask being a specific step toward completing the task. For example, taking a security patch task as an example, the task may include a vulnerability assessment subtask (to determine the security risks existing in a system), a patch collection subtask (to collect the applicable security patch files), a patch test subtask (to verify, for example in a simulated environment, whether the corresponding security patch files can execute correctly), a patch deployment subtask (to execute the security patch files), and the like; the security patch task is completed by executing these subtasks.
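As a minimal sketch (class and field names are illustrative, not from the patent), a task with ordered subtasks such as the security patch example might be modeled as:

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    wait_after: float = 0.0   # waiting time before the dependent subtask runs

@dataclass
class Task:
    name: str
    subtasks: list
    current: int = 0          # index of the current subtask

# The security patch task from the example, as four ordered subtasks.
patch_task = Task("security_patch", [
    Subtask("vulnerability_assessment"),
    Subtask("patch_collection"),
    Subtask("patch_test"),
    Subtask("patch_deployment"),
])
```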
There are dependencies between subtasks. Because some subtasks depend on the completion or output of other subtasks, such a subtask can be executed only after the subtasks it depends on have finished executing. For example, continuing the example above, before the patch collection subtask is executed, the specific security risk must be confirmed so that the security patch files required for that risk can be found; that is, the patch collection subtask must be executed after the vulnerability assessment subtask it depends on has confirmed the specific security risk. By setting dependency relationships between subtasks, it can be ensured that each subtask is executed in the correct order, avoiding problems during execution.
Optionally, the task comprises an OTA task.
In particular, the tasks may include an OTA task, which may be a task based on OTA technology for updating software, firmware, configuration information, or the like. For example, the OTA task may be a software update task, a configuration update task, or a bug fix task, etc. The plurality of subtasks included in the OTA task may be, for example, a version management subtask, a device identification and management subtask, a corresponding version of firmware or software distribution subtask, and the like.
A thread is the smallest unit that can be run and scheduled; threads are used to execute subtasks, and a thread pool is used to manage and reuse threads. A plurality of threads may be obtained through creation by the task scheduling system or through user-defined creation, and a thread pool is generated from the plurality of threads; a subtask to be executed is then executed by allocating a thread in the thread pool to it.
It can be understood that after a certain subtask is executed, a large amount of intermediate results or state may be generated, causing resources such as system memory or network bandwidth to be consumed too quickly. By setting a waiting time between at least some subtasks and the subtasks that depend on them, the thread executes the next subtask depending on a finished subtask only after the preset waiting time has elapsed, smoothing resource consumption and avoiding overload of the system.
That is, by building a thread pool that includes the threads required to execute each task, frequent creation and destruction of threads is avoided. A thread-pool scheduler (for example, a timing scheduler) is then set to allocate target threads in the thread pool to the current subtask of each task, and the target threads execute the current subtasks allocated to them, improving thread utilization, reducing thread-switching overhead, and improving the running efficiency of the system.
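A hedged sketch of this step using Python's standard thread pool (the pool here stands in for the patent's preset thread pool, and the subtask body is a placeholder): a fixed pool of m threads executes the current subtasks of n tasks without creating a thread per task.

```python
from concurrent.futures import ThreadPoolExecutor

def run_current_subtask(task_name: str) -> str:
    # Placeholder for executing the task's current subtask.
    return f"{task_name}: current subtask done"

tasks = ["task_A", "task_B", "task_C"]           # n tasks
with ThreadPoolExecutor(max_workers=2) as pool:  # preset pool of m threads
    results = list(pool.map(run_current_subtask, tasks))
```

`pool.map` preserves input order, so the results line up with the task list even though only two threads serve three tasks.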
Step 012: and distributing the current subtask of the first task to a target thread corresponding to the current subtask of the second task for execution.
The states of tasks comprise a ready state and a waiting state, wherein the first task is a task in the ready state and the second task is a task in the waiting state; a task whose current subtask has not been allocated a corresponding target thread is in the ready state, and a task whose target thread has executed up to the waiting time is in the waiting state.
Specifically, after a thread executes the current subtask of a task, the task is determined to be in the waiting state while it is within the preset waiting time, i.e., the task is a second task. The task is determined to be in the ready state, i.e., the task is a first task, when its current subtask has not been allocated a corresponding target thread, or when its current subtask does not need to pass a waiting time and can be executed immediately.
It can be understood that the current subtask corresponding to the second task in the waiting state cannot be executed at this time, so its thread is idle. The state of each task is obtained, and a state machine is set to respond to those states, so that the current subtask corresponding to the first task in the ready state is allocated for execution to the target thread corresponding to the second task in the waiting state. That is, the waiting time of the second task and the idle thread corresponding to it are used to execute the current subtask of the first task in the ready state, making reasonable use of the waiting time, reducing the time tasks wait for execution, and improving task execution efficiency.
For example, referring to fig. 4, task A includes subtask A1 and subtask A2, where A2 depends on A1; that is, A2 is executed after A1 has finished executing and a preset waiting time (for example, 15 seconds) has elapsed. Task B includes subtask B1, and task C includes subtask C1. Assume that task B and task C are both in the ready state, i.e., both are first tasks. The current subtask A1 of task A is allocated to target thread 1 through the preset thread pool. After target thread 1 finishes executing A1, task A enters its 15-second waiting time, i.e., task A is in the waiting state; when task A is determined to be a second task, target thread 1 is allocated to subtask B1 of task B to execute B1. In other words, the current subtask (B1) of a first task in the ready state (task B) is allocated for execution to the thread (target thread 1) corresponding to the current subtask (A1) of a second task in the waiting state (task A). When B1 has finished executing, if task A is still in the waiting state, target thread 1 is allocated to subtask C1 of task C to execute C1. After C1 has finished executing, task A has turned to the ready state, i.e., task A is now a first task, and target thread 1 is allocated to the current subtask A2 of task A to execute A2.
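The fig. 4 walk-through can be replayed on a simulated clock (the subtask durations are assumed; the 15-second wait comes from the example): while task A waits between A1 and A2, target thread 1 executes B1 and C1.

```python
DURATION = 6.0   # assumed execution time of every subtask
WAIT_A = 15.0    # waiting time between subtask A1 and subtask A2

clock, order = 0.0, []
order.append("A1"); clock += DURATION   # target thread 1 executes A1
a_ready_at = clock + WAIT_A             # task A enters the waiting state

for sub in ["B1", "C1"]:                # ready-state tasks reuse thread 1
    order.append(sub); clock += DURATION

clock = max(clock, a_ready_at)          # wait out any remaining wait time
order.append("A2"); clock += DURATION   # task A is ready again: execute A2
```

With these numbers the single thread finishes all four subtasks in 27 seconds, versus the 39 seconds that strictly serial 1:1 execution (A1, the 15-second wait, A2, then B1 and C1) would take.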
In this way, a corresponding target thread is allocated in the preset thread pool to the current subtask of each task so as to execute each current subtask, where the current subtask is any one of the subtasks corresponding to the task and a waiting time exists between at least some subtasks and the subtasks they depend on; this avoids frequent creation and destruction of threads, improves thread utilization, reduces thread-switching overhead, and improves the running efficiency of the system. The current subtask of the first task is allocated for execution to the target thread corresponding to the current subtask of the second task, where the states of tasks include a ready state and a waiting state, the first task is in the ready state, and the second task is in the waiting state; a task whose current subtask has not been allocated a corresponding target thread is in the ready state, and a task whose target thread has executed up to the waiting time is in the waiting state. Executing the current subtask of the ready first task during the waiting time of the second task, on the second task's idle thread, reduces the number of threads required and lowers the performance requirement on the server executing the upgrade tasks. In addition, converting threads and tasks to an m:n execution mode makes reasonable use of the waiting time, reduces the time tasks wait for execution, avoids resource waste, and improves task execution efficiency.
Compared with the current 1:1 thread-to-task binding execution mode, idle threads are allocated to the current subtask of each task according to the state of each task, converting threads and tasks to an m:n execution mode, which avoids resource waste and improves task execution efficiency.
Referring to fig. 5, in some embodiments, the task scheduling method further includes:
step 013: and determining that the first task is in the ready state when the duration of the first task in the waiting state reaches the waiting time.
Specifically, when the duration for which the first task has been in the waiting state reaches the waiting time, the state machine determines that the first task is in the ready state, meaning that the first task can be immediately executed by a thread in the thread pool; when an idle thread exists, the current subtask of the first task in the ready state is allocated to the idle thread for execution.
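The waiting-to-ready transition of step 013 can be expressed as a small state machine. This is a hedged sketch; the class and field names (`TaskState`, `wait_started`, and so on) are assumptions, not the patent's implementation:

```python
READY, WAITING = "ready", "waiting"

class TaskState:
    """Tracks one task's state; WAITING flips back to READY once the
    waiting time has elapsed, as in step 013."""
    def __init__(self, wait_time):
        self.state = READY
        self.wait_time = wait_time       # e.g. 15 seconds between A1 and A2
        self.wait_started = None

    def enter_waiting(self, now):
        self.state = WAITING
        self.wait_started = now

    def refresh(self, now):
        # Step 013: the duration in the waiting state has reached the
        # waiting time, so the task is determined to be ready again.
        if self.state == WAITING and now - self.wait_started >= self.wait_time:
            self.state = READY
        return self.state

task_a = TaskState(wait_time=15)
task_a.enter_waiting(now=0)
# task_a.refresh(now=10) -> "waiting"; task_a.refresh(now=15) -> "ready"
```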
Referring to fig. 6, in some embodiments, the state of the task further includes a completion state, where the task is in the completion state when all the subtasks corresponding to the task are executed to complete; the task scheduling method further comprises the following steps:
step 014: and distributing the current subtask corresponding to the first task to a target thread corresponding to a third task for execution, wherein the third task is in a completion state.
The state of the task also comprises a completion state, and the task is determined to be in the completion state under the condition that all the subtasks corresponding to the task are executed and completed.
Specifically, when the third task is in the completion state, all of the subtasks of the third task have been executed, and the target thread corresponding to the third task is therefore idle; this target thread can be allocated to the current subtask of the first task, which is in the ready state and can be executed immediately, so as to execute the current subtask of the first task.
For example, continuing the above example, assume that the tasks further include a task D, where the task D is a first task and includes a subtask D1, and the task A includes only the subtask A1 and the subtask A2. When the subtask A1 and the subtask A2 have both been executed, it is determined that the task A is in the completion state, that is, the task A is a third task, and the current subtask D1 corresponding to the task D is allocated to the target thread 1 corresponding to the task A for execution.
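Step 014 amounts to moving the thread freed by a completed (third) task over to a ready (first) task. A minimal sketch with assumed names (`reassign`, `thread_of`):

```python
# When a third task completes, its target thread is idle and can be
# handed the current subtask of a first (ready) task.
def reassign(thread_of, completed_task, ready_task):
    """thread_of: {task_name: thread_id}. Moves the now-idle thread of
    the completed task to the ready task and returns the thread id."""
    thread = thread_of.pop(completed_task)   # task A's subtasks are all done
    thread_of[ready_task] = thread           # target thread 1 now serves task D
    return thread

assignment = {"A": "thread-1"}
reassign(assignment, completed_task="A", ready_task="D")
# assignment == {"D": "thread-1"}
```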
Referring to fig. 7, in some embodiments, step 011: in a preset thread pool, distributing corresponding target threads for current subtasks of each task, wherein the method comprises the following steps:
Step 0111: and distributing corresponding target threads for the current subtasks of each task in the preset thread pool based on the preset thread number of the thread pool and the request time of each task request.
The preset thread number may be determined according to the task characteristics, the number of tasks, the number of subtasks, the completion time of each task, and the like; the preset thread number may be, for example, 10, 15, 20, 50, and so on. The request time of each task request may be the point in time (timestamp) at which the current subtask of the task requests execution; or the time at which the task first requests execution; or the time after the task transitions from the waiting state to the ready state; and so on.
Specifically, according to the preset thread number of the thread pool and the request time of each task request, the next task that can be executed immediately can be determined, and an idle target thread in the preset thread pool is allocated to the current subtask corresponding to that task.
Referring to fig. 8, optionally, step 0111: based on the preset thread number of the thread pool and the request time of each task request, in the preset thread pool, corresponding target threads are allocated for the current subtasks of each task, and the method comprises the following steps:
step 01111: sequencing each current subtask based on the request time of each task request;
step 01112: and respectively distributing corresponding target threads for sequencing each current subtask positioned in a first preset sequencing range in a preset thread pool, wherein the first preset sequencing range is determined based on the number of the preset threads.
The first preset sequencing range is determined based on the preset thread number. The first preset sequencing range may be the same as the preset thread number; or it may be a preset percentage of the preset thread number, for example 80%, 90%, or 95% of the preset thread number; or it may be set according to the data range in which the preset thread number falls, for example, the first preset sequencing range is set to 20 when the preset thread number is in (50, 100), set to 50 when the preset thread number is in (100, 200), and so on.
Specifically, the current subtasks corresponding to the tasks are ordered according to the request times of the task requests: the earlier a task's request time, the earlier the position of its current subtask in the ordering. The first preset sequencing range is then determined according to the preset thread number, and the threads in the thread pool are allocated to the current subtasks within the first preset sequencing range so as to execute them.
For example, taking the case where the first preset sequencing range equals the preset thread number of the thread pool, assume the preset thread number is 20, so the first preset sequencing range covers the first 20 positions in the queue of current subtasks. The current subtasks corresponding to the tasks are ordered according to the request times of the task requests, and the 20 threads in the thread pool are respectively allocated to the current subtasks in the first 20 positions of the ordering so as to execute them.
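Steps 01111 and 01112 amount to a sort-then-truncate: order the pending current subtasks by their tasks' request times and keep only the first preset sequencing range. A hedged Python sketch (the function and parameter names are assumptions):

```python
def select_for_pool(pending, first_range):
    """pending: list of (subtask_name, request_timestamp) pairs;
    first_range: size of the first preset sequencing range (here taken
    equal to the preset thread number). Returns the subtasks that are
    allocated target threads now."""
    # Earlier request time -> earlier position in the ordering (step 01111).
    ordered = sorted(pending, key=lambda pair: pair[1])
    # Only subtasks inside the first preset sequencing range are allocated
    # target threads (step 01112); the rest keep queuing.
    return [name for name, _ in ordered[:first_range]]

chosen = select_for_pool([("T3", 30), ("T1", 10), ("T2", 20)], first_range=2)
# chosen == ["T1", "T2"]: the two earliest requests get threads first.
```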
Referring to fig. 9, in certain embodiments, step 012: the method for distributing the current subtask corresponding to the first task to the target thread corresponding to the second task for execution comprises the following steps:
step 0121: sequencing the current subtasks corresponding to the first tasks based on the queuing time of the current subtasks corresponding to the first tasks;
Step 0122: and respectively distributing each current subtask in a second preset sequencing range to target threads corresponding to each second task for execution, wherein the second preset sequencing range is determined based on the number of the second tasks, and the queuing time is determined based on at least one of the duration of the current subtask waiting to distribute the corresponding target thread and the request time of the task request corresponding to the current subtask.
The queuing time may be determined based on the duration for which the current subtask has been waiting to be allocated a corresponding target thread, that is, the time difference between the time at which the current subtask requested execution and the current time; or the queuing time may be determined based on the request time of the task request corresponding to the current subtask, that is, the time difference between the time at which the task corresponding to the current subtask first requested execution and the current time; or the queuing time may be determined based on both, for example by giving different weights to the waiting duration of the current subtask and to the request time of the corresponding task request, and determining the queuing time of the current subtask of each first task from these weights. The second preset sequencing range is determined according to the number of second tasks; for example, the second preset sequencing range may be the same as the number of second tasks, or it may be a preset percentage of the number of second tasks, such as 80%, 90%, or 95%. For example, if the number of second tasks is 10 and the second preset sequencing range is 90% of that number, the second preset sequencing range includes the first 9 positions.
Specifically, based on the queuing times of the current subtasks corresponding to the first tasks, the current subtasks corresponding to the first tasks are ordered from the largest queuing time to the smallest, and the current subtasks within the second preset sequencing range of the ordering are respectively allocated to the target threads corresponding to the second tasks for execution. In this way the queuing and waiting times of the subtasks are balanced, avoiding situations in which some tasks wait too long, affecting system stability and harming the user experience.
For example, consider tasks including a task D, a task E, and a task F, where the task D and the task E are both first tasks; the first task D includes a subtask D1, and the first task E includes a subtask E1 and a subtask E2. The second preset sequencing range accounts for 50% of the number of second tasks, that is, it includes the first position of the ordering. Assume that at 0:00, the task E corresponding to the current subtask E1 first requests execution; at 0:01, the target thread 2 is executing the subtask F1 of the task F, and the current subtask D1 of the first task D requests execution (the task D first requests execution at 0:01); at 0:02, the target thread 2 is still executing the subtask F1, the subtask E1 of the first task E has been executed, and the current subtask E2 is queued and requests execution; at 0:03, the subtask F1 of the task F completes execution and the task F enters the waiting state, that is, the task F is a second task.
In the case where the queuing time is determined based on the duration for which the current subtask has been waiting to be allocated a corresponding target thread: at 0:03, the queuing time of the current subtask D1 of the first task D is 2 minutes (min) and the queuing time of the current subtask E2 of the first task E is 1 min. Ordering the current subtask D1 and the current subtask E2 gives the result: current subtask D1 > current subtask E2. The second preset sequencing range is the first position, so when the target thread 2 is idle at 0:03, the current subtask D1 of the first task D is allocated to the target thread 2 for execution.
In the case where the queuing time is determined based on the request time of the task request corresponding to the current subtask: since the task E corresponding to the current subtask E2 requested execution at 0:00 and the task D corresponding to the current subtask D1 requested execution at 0:01, ordering the current subtask D1 and the current subtask E2 gives the result: current subtask E2 > current subtask D1. The second preset sequencing range is the first position, so when the target thread 2 is idle at 0:03, the current subtask E2 of the first task E is allocated to the target thread 2 for execution.
In the case where the queuing time is determined based on both the duration for which the current subtask has been waiting to be allocated a corresponding target thread and the request time of the task request corresponding to the current subtask: the waiting duration is given a weight of 0.4 and the time since the corresponding task first requested execution is given a weight of 0.6, and the queuing time is calculated from these weights, the time difference between the current subtask's request time and the current time, and the time difference between the task's first request time and the current time:
The queuing time corresponding to the current subtask D1 is: 0.4 × 2 min + 0.6 × 2 min = 2 min; the queuing time corresponding to the current subtask E2 is: 0.4 × 1 min + 0.6 × 3 min = 2.2 min.
Ordering from the largest queuing time to the smallest gives: current subtask E2 > current subtask D1, and the second preset sequencing range is the first position, so when the target thread 2 is idle at 0:03, the current subtask E2 of the first task E is allocated to the target thread 2 for execution.
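The weighted combination in the example above can be written out directly. The 0.4/0.6 weights are the ones assumed in the text; the function name is an assumption:

```python
def queuing_time(waited_min, since_first_request_min, w_wait=0.4, w_request=0.6):
    """Weighted queuing time of a current subtask: 0.4 x the time it has
    waited for a thread + 0.6 x the time since its task first requested
    execution (both in minutes)."""
    return w_wait * waited_min + w_request * since_first_request_min

d1 = queuing_time(2, 2)   # subtask D1: 0.4*2 + 0.6*2 = 2.0 min
e2 = queuing_time(1, 3)   # subtask E2: 0.4*1 + 0.6*3 = 2.2 min
```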
Referring to fig. 10, in some embodiments, the task scheduling method further includes:
step 015: ending thread Chi Zhongkong is a thread that is idle for a preset duration.
The preset duration may be determined according to the load, the task execution requirements, and the like; it may be, for example, 10 min, 15 min, 20 min, 22 min, or 25 min.
Specifically, by recording the start time and end time of each thread in the thread pool, the time difference between the current time and the time at which a thread last completed execution can be calculated after the thread finishes executing a subtask, that is, the idle duration of the thread can be determined. When the idle duration of a thread reaches the preset duration, the thread is not needed to execute tasks in the current scenario; to avoid wasting resources, the resources occupied by such a thread (such as memory resources and network resources) can be released by ending the thread in the thread pool whose idle duration has reached the preset duration.
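Step 015's keep-alive behaviour can be sketched with a worker that blocks on the task queue for at most the preset idle duration and exits on timeout. This is an illustrative sketch: the names are assumptions, and the real preset duration would be minutes rather than the fraction of a second used here for the example:

```python
import queue
import threading

def worker(tasks, idle_timeout):
    """Runs subtasks taken from `tasks`; ends itself once it has been
    idle for `idle_timeout` seconds (step 015), releasing its resources."""
    while True:
        try:
            job = tasks.get(timeout=idle_timeout)  # block for at most idle_timeout
        except queue.Empty:
            return                                 # idle too long: end the thread
        job()                                      # execute the subtask
        tasks.task_done()

tasks = queue.Queue()
results = []
tasks.put(lambda: results.append("S1"))
t = threading.Thread(target=worker, args=(tasks, 0.1))
t.start()
t.join()       # the worker exits about 0.1 s after the queue is drained
# results == ["S1"] and the thread is no longer alive
```

`queue.Queue.get(timeout=...)` raising `queue.Empty` on timeout is what lets the thread notice its own idle duration without a separate bookkeeping structure.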
Referring to fig. 11, in order to facilitate better implementation of the task scheduling method according to the embodiment of the present application, the embodiment of the present application further provides a task scheduling device 10. The task scheduler 10 may include an allocation module 11 and an execution module 12. The allocation module 11 is configured to allocate corresponding target threads to current subtasks of each task in a preset thread pool, so as to execute each current subtask, where the current subtask is any one of the subtasks corresponding to the task, and waiting time exists between at least part of the subtasks and dependent subtasks; the execution module 12 is configured to allocate a current subtask of the first task to a target thread corresponding to a current subtask of the second task for execution; the task state comprises a ready state and a waiting state, wherein the first task is a task in the ready state, and the second task is a task in the waiting state; the task corresponding to the current subtask of the unassigned corresponding target thread is in a ready state, and the task corresponding to the target thread during the execution to the waiting time is in a waiting state.
In one embodiment, the task scheduling device 10 further includes a determining module 13, where the determining module 13 is configured to determine that the first task is in the ready state if the duration of the first task in the waiting state reaches the waiting time.
In one embodiment, the state of the task further includes a completion state, where the task is in the completion state when each sub-task corresponding to the task is executed and completed, and the allocation module 11 is specifically further configured to allocate the current sub-task corresponding to the first task to the target thread corresponding to the third task for execution, where the third task is in the completion state.
In one embodiment, the allocation module 11 is further specifically configured to allocate, in the preset thread pool, a corresponding target thread for a current subtask of each task based on a preset thread number of the thread pool and a request time of each task request.
In one embodiment, the allocation module 11 is specifically further configured to order each current subtask based on a request time of each task request; and respectively distributing corresponding target threads for sequencing each current subtask positioned in a first preset sequencing range in a preset thread pool, wherein the first preset sequencing range is determined based on the number of the preset threads.
In one embodiment, the execution module 12 is specifically further configured to sort the current subtasks corresponding to the first tasks based on the queuing time of the current subtasks corresponding to the first tasks; and respectively distributing each current subtask in a second preset sequencing range to target threads corresponding to each second task for execution, wherein the second preset sequencing range is determined based on the number of the second tasks, and the queuing time is determined based on at least one of the duration of the current subtask waiting to distribute the corresponding target thread and the request time of the task request corresponding to the current subtask.
In one embodiment, the task scheduler 10 further includes an ending module 14, where the ending module 14 is configured to end a thread in the thread pool that has been idle for a preset duration.
The task scheduling device 10 is described above in terms of functional modules in combination with the accompanying drawings, where the functional modules may be implemented in hardware, or may be implemented by instructions in software, or may be implemented by a combination of hardware and software modules. Specifically, each step of the method embodiment in the embodiment of the present application may be implemented by an integrated logic circuit of hardware in a processor and/or an instruction in software form, and the steps of the method disclosed in connection with the embodiment of the present application may be directly implemented as a hardware encoding processor or implemented by a combination of hardware and software modules in the encoding processor. Alternatively, the software modules may be located in a well-established storage medium in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads information in the memory, and in combination with hardware, performs the steps in the above method embodiments.
Referring to fig. 12, a computer device according to an embodiment of the present application includes a processor 402, a memory 403, and a computer program, wherein the computer program is stored in the memory 403 and executed by the processor 402, and the computer program includes instructions for executing the task scheduling method according to any of the above embodiments.
In one embodiment, the computer device may be a terminal 400 or a server. The internal structure thereof can be shown in fig. 12. The computer device comprises a processor 402, a memory 403, a network interface 404, a display 401 and an input means 405, which are connected by a system bus.
Wherein the processor 402 of the computer device is used to provide computing and control capabilities. The memory 403 of the computer device includes a nonvolatile storage medium, internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface 404 of the computer device is used to communicate with external devices via a network connection. The computer program, when executed by a processor, implements the task scheduling method of any of the above embodiments. The display 401 of the computer device may be a liquid crystal display or an electronic ink display, and the input device 405 of the computer device may be a touch layer covered on the display 401, or may be a key, a track ball or a touch pad arranged on a casing of the computer device, or may be an external keyboard, a touch pad or a mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 12 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program comprises instructions of the task scheduling method in any one of the above embodiments, and the details are not repeated herein for brevity.
Referring to fig. 13, an embodiment of the present application further provides a computer readable storage medium 600, on which a computer program 610 is stored, where the computer program 610, when executed by the processor 620, implements the steps of the task scheduling method of any of the foregoing embodiments, which is not described herein for brevity.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.
Claims (11)
1. A method for task scheduling, comprising:
Respectively distributing corresponding target threads for current subtasks of each task in a preset thread pool to execute each current subtask, wherein the current subtask is any one of the subtasks corresponding to the task, and waiting time exists between at least part of the subtasks and the dependent subtasks;
Distributing the current subtask of a first task to the target thread corresponding to the current subtask of a second task for execution;
The states of the tasks comprise a ready state and a waiting state, the first task is a task in the ready state, and the second task is a task in the waiting state; the task corresponding to the current subtask of the target thread to which no corresponding assignment is made is in the ready state, and the task corresponding to the target thread during execution until the waiting time is in the waiting state.
2. The task scheduling method according to claim 1, characterized in that the task scheduling method further comprises:
And determining that the first task is in the ready state when the duration of the first task in the waiting state reaches the waiting time.
3. The task scheduling method according to claim 1, wherein the state of the task further includes a completion state, and the task is in the completion state in the case where each of the subtasks corresponding to the task is completed; the task scheduling method further comprises the following steps:
And distributing the current subtask corresponding to the first task to the target thread corresponding to a third task for execution, wherein the third task is in the completion state.
4. The task scheduling method according to claim 1, wherein the allocating, in a preset thread pool, a corresponding target thread for a current subtask of each task includes:
And distributing corresponding target threads for the current subtasks of each task in the preset thread pool based on the preset thread number of the thread pool and the request time of each task request.
5. The task scheduling method according to claim 4, wherein the allocating, in the preset thread pool, a corresponding target thread for a current sub-task of each task based on a preset thread number of the thread pool and a request time of each task request includes:
sequencing each current subtask based on the request time of each task request;
And respectively distributing corresponding target threads for sequencing the current subtasks positioned in a first preset sequencing range in a preset thread pool, wherein the first preset sequencing range is determined based on the number of the preset threads.
6. The task scheduling method according to claim 1, wherein the assigning the current sub-task corresponding to a first task to the target thread corresponding to a second task for execution includes:
Sorting the current subtasks corresponding to the first tasks based on the queuing time of the current subtasks corresponding to the first tasks;
And respectively distributing each current subtask in a second preset sequencing range to each target thread corresponding to each second task for execution, wherein the second preset sequencing range is determined based on the number of the second tasks, and the queuing time is determined based on at least one of the duration of the current subtask waiting to distribute the corresponding target thread and the request time of the task request corresponding to the current subtask.
7. The task scheduling method according to claim 1, characterized in that the task scheduling method further comprises:
Ending a thread in the thread pool whose idle duration reaches a preset duration.
8. A task scheduling device, comprising:
The distribution module is used for distributing corresponding target threads for current subtasks of all tasks in a preset thread pool respectively to execute all the current subtasks, wherein the current subtasks are any one of the subtasks corresponding to the tasks, and waiting time exists between at least part of the subtasks and the dependent subtasks;
The execution module is used for distributing the current subtask of the first task to the target thread corresponding to the current subtask of the second task for execution; the states of the tasks comprise a ready state and a waiting state, the first task is a task in the ready state, and the second task is a task in the waiting state; the task corresponding to the current subtask of the target thread to which no corresponding assignment is made is in the ready state, and the task corresponding to the target thread during execution until the waiting time is in the waiting state.
9. A computer device, comprising:
The processor is connected with the memory; the memory stores a computer program, and the processor executes the computer program to implement instructions of the task scheduling method according to any one of claims 1 to 7.
10. A computer program product comprising a computer program comprising instructions for performing the task scheduling method of any one of claims 1 to 7.
11. A non-transitory computer readable storage medium comprising a computer program which, when executed by a processor, causes the processor to perform the task scheduling method of any one of claims 1-7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410550459.2A CN118467124A (en) | 2024-05-06 | 2024-05-06 | Task scheduling method, device, computer equipment, program product and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118467124A true CN118467124A (en) | 2024-08-09 |
Family
ID=92148828
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120353611A (en) * | 2025-06-24 | 2025-07-22 | 苏州元脑智能科技有限公司 | Service upgrading system, electronic equipment, method, storage medium and product |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||