CN119578850B - Data flow-based running context parameter transmission and task instance dynamic scheduling method - Google Patents
Data flow-based running context parameter transmission and task instance dynamic scheduling method
- Publication number
- CN119578850B (application CN202510140290.8A)
- Authority
- CN
- China
- Prior art keywords
- task
- parameter
- information
- data flow
- instance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24568—Data stream processing; Continuous queries
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Theoretical Computer Science (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Development Economics (AREA)
- Databases & Information Systems (AREA)
- Educational Administration (AREA)
- Data Mining & Analysis (AREA)
- Game Theory and Decision Science (AREA)
- Computational Linguistics (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention provides a data flow-based method for runtime context parameter transfer and dynamic task instance scheduling, relating to the technical field of data processing. The method comprises: receiving a task scheduling request and constructing a data flow dependency graph containing task nodes and their data flow relationships; and acquiring, for each task node, task instance runtime context information comprising a task instance identifier, state information, and input and output parameter information. By analyzing the data flow dependency graph and the runtime context information, a mapping relationship between the input and output parameters of task instances is established and parameter transfer rules are generated. The method monitors task instance state information; when the state of an upstream task changes, its output parameters are transferred to the input parameters of the downstream task according to the parameter transfer rules, and when the downstream input parameters are complete, dynamic scheduling and execution of the downstream task instance is triggered. Task scheduling is thereby driven by the data flow, improving scheduling efficiency and flexibility.
Description
Technical Field
The invention relates to data processing technology, and in particular to a data flow-based method for runtime context parameter transfer and dynamic task instance scheduling.
Background
Currently, with the increasing size and complexity of data, data processing tasks are often broken down into multiple interdependent sub-tasks, forming a complex data stream. How to efficiently manage and schedule these subtasks, ensure the correct transfer of data between the different tasks, and achieve efficient execution of the entire data stream becomes an important challenge.
Existing task scheduling methods focus mainly on static scheduling, that is, the execution order and dependency relationships of tasks are predefined before the tasks are executed. Such an approach struggles to accommodate dynamic changes in the data stream, such as changes in the output data volume or data format of upstream tasks and fluctuations in task execution time.
The prior art has the following shortcomings:
1. Difficulty handling dynamic data flows: static scheduling methods cannot easily adapt to dynamic changes in the data flow, which results in low task execution efficiency and, in some cases, task execution failures.
2. Low parameter transfer efficiency: existing parameter transfer mechanisms generally rely on middleware such as shared storage or message queues, which increases the overhead of data transmission and reduces task execution efficiency.
3. Lack of flexible scheduling strategies: existing scheduling methods generally cannot dynamically adjust the execution order and resource allocation of tasks according to their actual execution conditions, resulting in low resource utilization.
Disclosure of Invention
The embodiment of the invention provides a data flow-based running context parameter transmission and task instance dynamic scheduling method, which can solve the problems in the prior art.
In a first aspect of an embodiment of the present invention,
there is provided a data flow-based method for runtime context parameter transfer and dynamic task instance scheduling, comprising the following steps:
Receiving a task scheduling request, wherein the task scheduling request comprises task identification information and data flow configuration information, constructing a data flow dependency graph based on the data flow configuration information, wherein the data flow dependency graph comprises a plurality of task nodes and data flow relationships among the task nodes, and acquiring corresponding task instance running context information for each task node in the data flow dependency graph, wherein the task instance running context information comprises task instance identification, task instance state information, task instance input parameter information and task instance output parameter information;
Based on the data flow direction relation between task nodes in the data flow dependency graph, analyzing the context information during the operation of the task instance, establishing a mapping relation between the input parameter information of the task instance and the output parameter information of the task instance, and generating a parameter transfer rule according to the mapping relation, wherein the parameter transfer rule defines a transfer mode of the output parameter information of an upstream task instance to the input parameter information of a downstream task instance;
and monitoring task instance state information of each task node in the data flow dependency graph, when detecting that the state information of an upstream task instance changes, transmitting output parameter information of the upstream task instance to input parameter information of a downstream task instance according to the parameter transmission rule, judging whether the downstream task instance meets the execution condition or not based on the integrity of the input parameter information of the downstream task instance, and when the execution condition is met, sending a task instance scheduling instruction to a task scheduling system to trigger dynamic scheduling execution of the downstream task instance.
Constructing a data flow dependency graph based on the data flow configuration information, wherein the data flow dependency graph comprises a plurality of task nodes and data flow relationships among the task nodes, and acquiring the corresponding task instance running context information for each task node in the data flow dependency graph comprises:
The data flow configuration information comprises task node information and data flow direction information, a node dependency relation matrix is generated based on the data flow configuration information, matrix elements in the node dependency relation matrix are used for representing data transfer relations among task nodes, the task nodes are subjected to topological ordering according to the node dependency relation matrix to obtain a task node sequence, and a data flow dependency graph is constructed based on the task node sequence;
Establishing a context information model of a task instance aiming at a task node in the data flow dependency graph, wherein the context information model of the task instance comprises task instance identification information, task instance running state information, task instance input parameter information and task instance output parameter information, generating a context acquisition function based on the context information model of the task instance, wherein the context acquisition function is used for acquiring context information of the task node under a specified time stamp, and calculating a minimum update time interval of the context information according to the context acquisition function;
And constructing a version vector of each task node in the data flow dependency graph, wherein the version vector comprises version information of related task nodes with data transfer relation with the task nodes, performing context information consistency check based on the version vector, determining that the context information consistency check passes when the version information of the related task nodes meets a preset time deviation requirement, updating a context information cache of the task nodes according to the minimum updating time interval, and rolling back the context information of the task nodes to a state corresponding to a latest valid check point when the task nodes are detected to be invalid.
Establishing a context information model of a task instance aiming at a task node in the data flow dependency graph, wherein the context information model of the task instance comprises task instance identification information, task instance running state information, task instance input parameter information and task instance output parameter information, generating a context acquisition function based on the context information model of the task instance, wherein the context acquisition function is used for acquiring context information of the task node under a specified timestamp, and calculating a minimum update time interval of the context information according to the context acquisition function comprises the following steps:
constructing a context information model of a task instance, wherein the context information model comprises instance identification information, task state information, input parameter information and output parameter information, constructing a state transition matrix aiming at the context information model, wherein the state transition matrix is used for representing the transition probability of the task state information, calculating the distribution parameter of state duration time based on the state transition matrix, and evaluating the stability index of the task state according to the distribution parameter;
constructing a context acquisition function based on the context information model, wherein the context acquisition function is used for acquiring instance identification information, task state information, input parameter information and output parameter information at a specified time point, analyzing the dependency relationship between the input parameter information and the output parameter information, establishing a parameter dependency relationship table, and calculating an integrity measurement value of the context information, wherein the integrity measurement value is the ratio of the number of effective input parameters to the total number of required input parameters;
Acquiring the parameter change rate of input parameter information and output parameter information, establishing an updating threshold function based on the parameter change rate, calculating a minimum updating time interval by the updating threshold function, and constructing a parameter time sequence correlation matrix, wherein the parameter time sequence correlation matrix is used for representing the time sequence correlation degree among parameters, establishing a parameter prediction model based on the parameter time sequence correlation matrix, and calculating a prediction precision evaluation value according to the parameter prediction model.
Based on the data flow direction relation between task nodes in the data flow dependency graph, analyzing the context information during the task instance running, establishing a mapping relation between task instance input parameter information and task instance output parameter information, and generating a parameter transfer rule according to the mapping relation comprises:
Constructing a data flow dependency graph, wherein the data flow dependency graph comprises a task node set, a data flow side set and a side weight matrix, the side weight matrix is used for representing the importance degree of the data flow side, and the data flow metric value between any two task nodes is calculated based on the side weight matrix and is related to the corresponding weight value in the side weight matrix and the data flow between the nodes;
Establishing a context incidence matrix between task nodes, wherein the context incidence matrix is used for describing the incidence degree between an output parameter set of a source task node and an input parameter set of a target task node, determining the correlation coefficient of any two task nodes based on the context incidence matrix, calculating the correlation coefficient based on the parameter covariance of the task nodes and the parameter standard deviation of the task nodes, and determining a parameter mapping function according to the correlation coefficient, wherein the parameter mapping function comprises a mapping weight coefficient and a parameter conversion function;
And constructing a parameter dependency relationship graph based on the parameter mapping function, calculating the dependency distance between task nodes in the parameter dependency relationship graph, calculating the dependency strength between parameters according to the dependency distance, wherein the dependency strength decays along with the increase of the dependency distance, analyzing the transfer characteristics between parameter sequences with the dependency relationship, generating a parameter transfer rule template according to the transfer characteristics, wherein the parameter transfer rule template comprises a source parameter set, a target parameter set, a triggering condition and a conversion action, and distributing priority to each parameter transfer rule based on the importance degree and the execution frequency of the rule.
Constructing a parameter dependency relationship graph based on the parameter mapping function, calculating the dependency distance between task nodes in the parameter dependency relationship graph, calculating the dependency strength between parameters according to the dependency distance, wherein the dependency strength decays along with the increase of the dependency distance, analyzing the transfer characteristic between parameter sequences with the dependency relationship, and generating a parameter transfer rule template according to the transfer characteristic comprises the following steps:
calculating the shortest path distance between task nodes based on the parameter dependency graph, wherein the shortest path distance is the minimum value of the sum of edge weights in paths connecting two task nodes, acquiring the type information of the task nodes and calculating the semantic distance between the task nodes, wherein the semantic distance is calculated based on a similarity function of the parameter types, and the shortest path distance and the semantic distance are combined through weight coefficients to obtain a comprehensive distance metric;
The method comprises the steps of calculating the dependence strength of parameters according to the comprehensive distance measurement value, wherein the dependence strength of the parameters adopts a product form of a basic strength function and a time sequence influence factor, the basic strength function decays exponentially along with the increase of the comprehensive distance measurement value, the time sequence influence factor is related to a time interval, a parameter transmission chain is built based on the dependence strength of the parameters, an integrity index and a reliability index of the parameter transmission chain are calculated, the integrity index is the minimum value of the dependence strength of adjacent parameters in the parameter transmission chain, the reliability index is calculated based on the error rate in the transmission process, and a parameter transmission rule template is generated based on the parameter transmission chain and comprises a source mode, a target mode, a conversion rule and constraint conditions.
Transferring the output parameter information of the upstream task instance to the input parameter information of the downstream task instance according to the parameter transfer rule, judging whether the downstream task instance meets the execution condition based on the integrity of its input parameter information, and, when the execution condition is met, sending a task instance scheduling instruction to a task scheduling system to trigger dynamic scheduling execution of the downstream task instance, comprises the following steps:
Acquiring an output parameter set of an upstream task instance, calculating accuracy and timeliness indexes of each parameter in the output parameter set, determining a parameter quality evaluation value based on the accuracy and the timeliness indexes, and converting the parameter quality evaluation value into a parameter availability index according to a preset weight coefficient;
Counting an input parameter set required by a downstream task instance, calculating a parameter integrity measurement value of an output parameter set of the upstream task instance and an input parameter set required by the downstream task instance, wherein the parameter integrity measurement value is a ratio of the number of actually received parameters to the number of parameters required by the task, and calculating a condition satisfaction degree based on the parameter integrity measurement value and the parameter availability index;
acquiring resource consumption information and task emergency degree information of a downstream task instance, calculating task basic priority based on the resource consumption information and the task emergency degree information, acquiring a load balancing factor of a current time point, wherein the load balancing factor is reduced along with the increase of the current load of a system, and multiplying the task basic priority by the load balancing factor to obtain task dynamic priority;
And carrying out matching degree evaluation on the available computing resources based on the task dynamic priority, wherein the matching degree evaluation comprises matching degree calculation of resource feature dimensions, generating a scheduling instruction comprising task identification, resource identification, priority information, execution constraint and parameter requirements, and carrying out resource availability and parameter completeness verification on the scheduling instruction.
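As a rough sketch of how the quantities described in this step could be combined, the snippet below computes an availability index, an integrity metric, a condition satisfaction value, and a load-adjusted dynamic priority; the specific weighting formulas and thresholds are assumptions for illustration, not the claimed method.

```python
# Sketch of the scheduling-condition step; the weighting formulas and the
# threshold are assumptions illustrating how the described quantities combine.
def availability_index(accuracy: float, timeliness: float,
                       w_acc: float = 0.6, w_time: float = 0.4) -> float:
    """Parameter quality folded into one availability index via preset weights."""
    return w_acc * accuracy + w_time * timeliness

def integrity_metric(received: int, required: int) -> float:
    """Ratio of actually received parameters to parameters the task requires."""
    return received / required if required else 1.0

def dynamic_priority(base_priority: float, system_load: float) -> float:
    """Load-balancing factor shrinks as system load grows (assumed 1/(1+load))."""
    return base_priority * (1.0 / (1.0 + system_load))

satisfaction = integrity_metric(3, 3) * availability_index(0.95, 0.9)
if satisfaction >= 0.8:                      # assumed execution threshold
    print(dynamic_priority(base_priority=10.0, system_load=0.25))  # 8.0
```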
In a second aspect of an embodiment of the present invention, there is provided a data flow-based system for runtime context parameter transfer and dynamic task instance scheduling, including:
A first unit, configured to receive a task scheduling request, where the task scheduling request includes task identification information and data flow configuration information, and construct a data flow dependency graph based on the data flow configuration information, where the data flow dependency graph includes a plurality of task nodes and a data flow relationship between task nodes, and obtain, for each task node in the data flow dependency graph, corresponding task instance runtime context information, where the task instance runtime context information includes task instance identifier, task instance state information, task instance input parameter information, and task instance output parameter information;
The second unit is used for analyzing the context information during the operation of the task instance based on the data flow relation between task nodes in the data flow dependency graph, establishing a mapping relation between the input parameter information of the task instance and the output parameter information of the task instance, and generating a parameter transfer rule according to the mapping relation, wherein the parameter transfer rule defines a transfer mode of the output parameter information of an upstream task instance to the input parameter information of a downstream task instance;
And the third unit is used for monitoring task instance state information of each task node in the data flow dependency graph, transmitting output parameter information of an upstream task instance to input parameter information of a downstream task instance according to the parameter transmission rule when detecting that the state information of the upstream task instance changes, judging whether the downstream task instance meets the execution condition or not based on the input parameter information integrity of the downstream task instance, and transmitting a task instance scheduling instruction to a task scheduling system when the execution condition is met to trigger dynamic scheduling execution of the downstream task instance.
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
The beneficial effects of the application are as follows:
1. Dynamic scheduling of task instances: by monitoring state changes and dependency relationships of task instances, execution of downstream tasks is triggered automatically without manual intervention, improving the efficiency and degree of automation of task scheduling.
2. Flexible parameter transfer: parameter mapping relationships and transfer rules are established based on the data flow dependency graph and the runtime context information, so that output parameters of upstream tasks are transferred automatically to input parameters of downstream tasks, avoiding the tedious work of configuring parameters manually and reducing human error.
3. Improved data flow processing efficiency: the parameter transfer rules and the dynamic scheduling mechanism ensure that downstream tasks are executed only when their input parameters are complete, avoiding invalid scheduling and waiting and improving the overall efficiency of data flow processing.
Drawings
FIG. 1 is a flow chart of a data flow-based method for runtime context parameter transfer and dynamic task instance scheduling according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of task context parameter definition based on a business process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of dynamic generation of sub-instances based on parameter values passed in through the runtime context according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of instance scheduling, resource allocation and execution by the distributed scheduler according to an embodiment of the present invention;
FIG. 5 is a flow chart of dynamically scheduling task instances based on data flow runtime context transfer according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a data flow-based system for runtime context parameter transfer and dynamic task instance scheduling according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Referring to FIGS. 1 to 5, a data flow-based method for runtime context parameter transfer and dynamic task instance scheduling according to an embodiment of the present invention includes:
S101, receiving a task scheduling request, wherein the task scheduling request comprises task identification information and data flow configuration information, constructing a data flow dependency graph based on the data flow configuration information, wherein the data flow dependency graph comprises a plurality of task nodes and data flow relationships among the task nodes, and acquiring corresponding task instance running context information for each task node in the data flow dependency graph, wherein the task instance running context information comprises task instance identification, task instance state information, task instance input parameter information and task instance output parameter information;
S102, analyzing the context information during the operation of the task instance based on the data flow relation between task nodes in the data flow dependency graph, establishing a mapping relation between the input parameter information of the task instance and the output parameter information of the task instance, and generating a parameter transfer rule according to the mapping relation, wherein the parameter transfer rule defines a transfer mode of the output parameter information of an upstream task instance to the input parameter information of a downstream task instance;
S103, task instance state information of each task node in the data flow dependency graph is monitored, when the state information of an upstream task instance is detected to change, output parameter information of the upstream task instance is transmitted to input parameter information of a downstream task instance according to the parameter transmission rule, whether the downstream task instance meets execution conditions is judged based on the integrity of the input parameter information of the downstream task instance, and when the execution conditions are met, a task instance scheduling instruction is sent to a task scheduling system to trigger dynamic scheduling execution of the downstream task instance.
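The three steps S101-S103 amount to a monitor-transfer-trigger loop. A minimal Python sketch of that loop is shown below; the class and function names, the in-memory data structures, and the completeness check are illustrative assumptions rather than the claimed implementation.

```python
# Minimal sketch of the S101-S103 loop (illustrative; names and data
# structures are assumptions, not the claimed implementation).
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    instance_id: str
    state: str = "pending"              # pending / running / success / failed
    inputs: dict = field(default_factory=dict)
    outputs: dict = field(default_factory=dict)
    required_inputs: tuple = ()

def transfer_parameters(upstream: TaskContext, downstream: TaskContext, rules: dict) -> None:
    """Copy upstream outputs to downstream inputs according to the transfer rules."""
    for src_param, dst_param in rules.items():
        if src_param in upstream.outputs:
            downstream.inputs[dst_param] = upstream.outputs[src_param]

def inputs_complete(ctx: TaskContext) -> bool:
    """Execution condition: every required input parameter has been received."""
    return all(name in ctx.inputs for name in ctx.required_inputs)

def on_state_change(upstream: TaskContext, downstream: TaskContext, rules: dict, scheduler) -> None:
    """S103: when an upstream instance succeeds, push parameters and, if the
    downstream inputs are complete, trigger dynamic scheduling."""
    if upstream.state == "success":
        transfer_parameters(upstream, downstream, rules)
        if inputs_complete(downstream):
            scheduler.submit(downstream.instance_id)
```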
In an optional implementation manner, a data flow dependency graph is constructed based on the data flow configuration information, the data flow dependency graph includes a plurality of task nodes and a data flow relation between the task nodes, and for each task node in the data flow dependency graph, acquiring the context information of the corresponding task instance during operation includes:
The data flow configuration information comprises task node information and data flow direction information, a node dependency relation matrix is generated based on the data flow configuration information, matrix elements in the node dependency relation matrix are used for representing data transfer relations among task nodes, the task nodes are subjected to topological ordering according to the node dependency relation matrix to obtain a task node sequence, and a data flow dependency graph is constructed based on the task node sequence;
Establishing a context information model of a task instance aiming at a task node in the data flow dependency graph, wherein the context information model of the task instance comprises task instance identification information, task instance running state information, task instance input parameter information and task instance output parameter information, generating a context acquisition function based on the context information model of the task instance, wherein the context acquisition function is used for acquiring context information of the task node under a specified time stamp, and calculating a minimum update time interval of the context information according to the context acquisition function;
And constructing a version vector of each task node in the data flow dependency graph, wherein the version vector comprises version information of related task nodes with data transfer relation with the task nodes, performing context information consistency check based on the version vector, determining that the context information consistency check passes when the version information of the related task nodes meets a preset time deviation requirement, updating a context information cache of the task nodes according to the minimum updating time interval, and rolling back the context information of the task nodes to a state corresponding to a latest valid check point when the task nodes are detected to be invalid.
The data flow dependency graph construction and context information management method is used for constructing a dependency graph of a data flow task and managing the runtime context information of a task instance so as to ensure data consistency and task reliability.
First, the data stream information is configured. The data stream configuration information is stored in JSON format and contains task node information and data stream information. The task node information includes a task ID, a task name, a task type, and the like. The data flow information describes the data transfer relationship between task nodes, e.g., the output of task a is the input of task B.
Then, a node dependency matrix is generated. And generating a node dependency matrix according to the data flow information in the data flow configuration information. The rows and columns of the matrix represent task nodes and the matrix elements represent data transfer relationships between the task nodes. If the output of task A is the input of task B, the element value of the corresponding position in the matrix is 1, otherwise it is 0.
Next, topological ordering is performed. The task nodes are topologically ordered according to the node dependency matrix, yielding a task node sequence. Topological ordering ensures that the order of task execution conforms to the data dependencies. For example, if the output of task A feeds task B and the output of task B feeds task C, the task node sequence is task_a -> task_b -> task_c.
Subsequently, a dataflow dependency graph is constructed. And constructing a data flow dependency graph based on the task node sequence after topological sequencing. The data flow dependency graph shows the dependency relationship among task nodes in the form of a directed acyclic graph, wherein the nodes in the graph represent tasks, and the edges represent data flow directions.
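As a sketch of this construction step, the following Python snippet parses a small configuration, builds the node dependency matrix, and derives the task node sequence with Kahn's topological sort; the configuration keys and task names are assumptions for illustration.

```python
# Sketch: dependency matrix and topological ordering from a data-flow config.
# The config keys ("nodes", "edges") and task names are assumptions.
config = {
    "nodes": ["task_a", "task_b", "task_c"],
    "edges": [["task_a", "task_b"], ["task_b", "task_c"]],  # A feeds B, B feeds C
}

nodes = config["nodes"]
index = {name: i for i, name in enumerate(nodes)}

# Node dependency matrix: matrix[i][j] == 1 means node i's output is node j's input.
matrix = [[0] * len(nodes) for _ in nodes]
for src, dst in config["edges"]:
    matrix[index[src]][index[dst]] = 1

# Kahn's algorithm: repeatedly emit nodes whose remaining in-degree is zero.
in_degree = [sum(matrix[i][j] for i in range(len(nodes))) for j in range(len(nodes))]
ready = [j for j, d in enumerate(in_degree) if d == 0]
order = []
while ready:
    j = ready.pop()
    order.append(nodes[j])
    for k in range(len(nodes)):
        if matrix[j][k]:
            in_degree[k] -= 1
            if in_degree[k] == 0:
                ready.append(k)

print(order)  # ['task_a', 'task_b', 'task_c'] -- the task node sequence
```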
And establishing a task instance context information model. And establishing a context information model of the task instance for each task node. The model contains task instance identification information (e.g., task instance ID), task instance running status information (e.g., running, success, failure), task instance input parameter information, and task instance output parameter information.
A context acquisition function is then generated based on the context information model of the task instance; it is used to acquire the context information of a task node at a specified timestamp. A minimum update time interval for the context information is calculated from the context acquisition function. This interval controls how frequently the context information is refreshed, avoiding the performance cost of overly frequent updates. For example, the minimum update time interval may be determined according to the execution frequency of the task or the frequency of data changes.
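A minimal sketch of the snapshot store, the context acquisition function, and the minimum update interval is given below, assuming an in-memory history of context snapshots and an average-change-interval heuristic; all names are illustrative.

```python
# Sketch: context acquisition at a given timestamp and a minimum update interval.
# The snapshot store and the interval heuristic are assumptions for illustration.
import bisect

class ContextStore:
    def __init__(self):
        self._timestamps = []   # sorted snapshot times (seconds)
        self._snapshots = []    # context dicts aligned with _timestamps

    def record(self, ts: float, context: dict) -> None:
        pos = bisect.bisect(self._timestamps, ts)
        self._timestamps.insert(pos, ts)
        self._snapshots.insert(pos, context)

    def get_context(self, ts: float) -> dict | None:
        """Context acquisition function: latest snapshot at or before the timestamp."""
        pos = bisect.bisect_right(self._timestamps, ts)
        return self._snapshots[pos - 1] if pos else None

def min_update_interval(change_timestamps: list[float], base_interval: float = 60.0) -> float:
    """Heuristic: never refresh faster than the observed average change interval,
    and never slower than the base interval."""
    if len(change_timestamps) < 2:
        return base_interval
    gaps = [b - a for a, b in zip(change_timestamps, change_timestamps[1:])]
    return min(base_interval, sum(gaps) / len(gaps))
```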
And constructing a version vector of the task node. A version vector is constructed for each task node. The version vector contains version information of the associated task node in data transfer relationship with the task node. The version information may be a time stamp or a version number.
And performing a context information consistency check. Context information consistency checking is performed based on the version vector. And when the version information of the related task node meets the preset time deviation requirement, determining that the context information consistency check passes. For example, if task B depends on task a and the difference in version timestamps of task a and task B does not exceed 1 minute, then the context information consistency check is considered to pass.
And updating the context information cache of the task node. And according to the minimum updating time interval, the context information cache of the task node is updated periodically.
Finally, task node failures are handled. When a task node failure is detected, the context information of the task node is rolled back to the state corresponding to the latest valid checkpoint, ensuring data consistency.
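The version-vector consistency check and the checkpoint rollback described above might look roughly as follows, assuming timestamp-based version information and the 1-minute deviation bound from the example; the data layout is an assumption.

```python
# Sketch: version-vector consistency check and checkpoint rollback.
# Timestamp-based versions and the 60 s deviation bound are assumptions.
def consistency_check(version_vector: dict, own_version: float, max_skew: float = 60.0) -> bool:
    """Pass when every upstream node's version timestamp is within max_skew
    seconds of this node's version."""
    return all(abs(own_version - v) <= max_skew for v in version_vector.values())

def rollback(checkpoints: list[dict]) -> dict | None:
    """On node failure, restore the context recorded at the latest valid checkpoint."""
    valid = [cp for cp in checkpoints if cp.get("valid")]
    return max(valid, key=lambda cp: cp["timestamp"])["context"] if valid else None

# Example: task_B depends on task_A; their versions differ by 30 s, so the check passes.
print(consistency_check({"task_A": 1_700_000_030.0}, own_version=1_700_000_000.0))  # True
```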
This embodiment achieves the following:
Improved data consistency: the version vectors and the context information consistency check keep the context information of task nodes consistent and avoid data conflicts and errors. Enhanced task reliability: the task node failure handling mechanism rolls the context information of a failed node back to the latest valid checkpoint, ensuring reliable task execution. Optimized resource utilization: the minimum update time interval controls the update frequency of the context information, avoiding the resource waste caused by frequent updates and improving system efficiency.
In an alternative embodiment, for a task node in the data flow dependency graph, a context information model of a task instance is established, where the context information model of the task instance includes task instance identification information, task instance running state information, task instance input parameter information, and task instance output parameter information, and a context acquisition function is generated based on the context information model of the task instance, where the context acquisition function is used to acquire context information of the task node under a specified timestamp, and calculating a minimum update time interval of the context information according to the context acquisition function includes:
constructing a context information model of a task instance, wherein the context information model comprises instance identification information, task state information, input parameter information and output parameter information, constructing a state transition matrix aiming at the context information model, wherein the state transition matrix is used for representing the transition probability of the task state information, calculating the distribution parameter of state duration time based on the state transition matrix, and evaluating the stability index of the task state according to the distribution parameter;
constructing a context acquisition function based on the context information model, wherein the context acquisition function is used for acquiring instance identification information, task state information, input parameter information and output parameter information at a specified time point, analyzing the dependency relationship between the input parameter information and the output parameter information, establishing a parameter dependency relationship table, and calculating an integrity measurement value of the context information, wherein the integrity measurement value is the ratio of the number of effective input parameters to the total number of required input parameters;
Acquiring the parameter change rate of input parameter information and output parameter information, establishing an updating threshold function based on the parameter change rate, calculating a minimum updating time interval by the updating threshold function, and constructing a parameter time sequence correlation matrix, wherein the parameter time sequence correlation matrix is used for representing the time sequence correlation degree among parameters, establishing a parameter prediction model based on the parameter time sequence correlation matrix, and calculating a prediction precision evaluation value according to the parameter prediction model.
For the task nodes in the data flow dependency graph, this embodiment provides a method for constructing a task instance context information model and calculating a minimum update time interval, so as to improve task monitoring and management efficiency.
First, a context information model of the task instance is constructed. The model comprises four key information, namely instance identification information, task running state information, input parameter information and output parameter information. For example, for a task instance named "data cleaning", the identification information may be UUID, for example, "a1b2c3d4-e5f6-7890-1234-567890abcdef", the running status information may be "running", "success", "failure", etc., the input parameter information may be the data file name to be cleaned and the cleaning rule, and the output parameter information may be the cleaned data file name and the cleaning report.
Next, a state transition matrix is constructed for the context information model. The matrix is used to characterize the transition probabilities of the task state information. For example, the "in-flight" state may transition to a "success" or "failure" state, while the "success" state typically does not transition to other states. By counting a large amount of task instance state transition data, the state transition probability can be obtained, for example, the probability from "running" to "success" is 90%, and the probability from "failure" is 10%. The state transition probabilities for the "data cleansing" task instance are assumed to be 90% from "running" to "successful" and 10% to "failed", with the "successful" and "failed" states remaining unchanged.
Then, a distribution parameter of the state duration is calculated based on the state transition matrix. For example, the average duration, the longest duration, the shortest duration, etc. of each state may be calculated. These parameters may be used to evaluate the stability of the task state. For example, if the average duration of the "on-the-fly" state is long, it may indicate that the task is being performed inefficiently. Assuming that the average duration of the "on-the-fly" state for the "data cleansing" task instance is 10 minutes, the duration of the "success" and "failure" states can be considered to be infinitely long.
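A small sketch of how the state transition matrix and the duration distribution parameters could be estimated from historical records is shown below; the sample history is invented for illustration.

```python
# Sketch: estimate a state transition matrix and mean state durations from history.
# The sample records are invented for illustration.
from collections import Counter, defaultdict

# (state, duration_minutes, next_state) observations for one task type.
history = [
    ("running", 10, "success"), ("running", 12, "success"),
    ("running", 9, "success"),  ("running", 15, "failed"),
]

transitions = Counter((s, n) for s, _, n in history)
totals = Counter(s for s, _, _ in history)
transition_prob = {(s, n): c / totals[s] for (s, n), c in transitions.items()}

durations = defaultdict(list)
for s, d, _ in history:
    durations[s].append(d)
mean_duration = {s: sum(ds) / len(ds) for s, ds in durations.items()}

print(transition_prob)   # {('running', 'success'): 0.75, ('running', 'failed'): 0.25}
print(mean_duration)     # {'running': 11.5}
```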
Then, a context acquisition function is constructed based on the context information model. The function is used for acquiring instance identification information, task state information, input parameter information and output parameter information at a specified time point. This information may be obtained, for example, by querying a database or calling an API interface.
Then, the dependency relationship between the input parameter information and the output parameter information is analyzed, and a parameter dependency relationship table is established. For example, the data file name after purging depends on the data file name to be purged and the purging rule. An integrity metric value of the context information is then calculated, the integrity metric value being defined as the ratio of the number of valid input parameters to the total number of required input parameters. For example, if the "data cleansing" task requires two input parameters, a data file name and cleansing rules, and only the data file name is actually provided, the integrity metric value is 50%.
And acquiring the parameter change rates of the input parameter information and the output parameter information. For example, the number of changes or the magnitude of the changes in each parameter over a period of time may be calculated.
An update threshold function is established based on the rate of change of the parameter. The function contains the adjustment coefficients and the base update time interval. For example, the adjustment coefficient may be set to 2 and the base update time interval to 1 minute. The update time interval will be shortened to 0.5 minutes when the rate of change of the parameter exceeds a certain threshold. Assuming that the input parameter "data file name" of the "data cleansing" task changes once per minute, the update time interval is 0.5 minutes.
Finally, a minimum update time interval is calculated. The time interval is used to determine when the context information needs to be updated. For example, if the minimum update time interval is 1 minute, it is necessary to call a context acquisition function to acquire the latest context information every 1 minute. And constructing a parameter time sequence correlation matrix, wherein the matrix is used for representing the time sequence correlation degree among the parameters. For example, a correlation coefficient of two parameters over time may be calculated. Then, a parameter prediction model is established based on the parameter timing correlation matrix. For example, a time series analysis method may be used to predict future values of the parameters. And finally, calculating a prediction accuracy evaluation value according to the parameter prediction model. For example, the prediction accuracy may be estimated using a mean square error. Assuming that the output parameter "amount of data after cleaning" of the "data cleaning" task is highly correlated with the input parameter "amount of data to be cleaned", a simple linear regression model can be built to predict the "amount of data after cleaning".
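The update-interval rule and the simple prediction model described above can be sketched as follows; the halving rule and the least-squares fit mirror the examples in this passage, while the change-rate threshold is an illustrative assumption.

```python
# Sketch: change-rate driven update interval and a one-variable linear predictor.
# The threshold value (0.5 changes/minute) is an assumption; the halving rule and
# the least-squares fit mirror the examples above.
def update_interval(change_rate: float, base_interval: float = 1.0,
                    threshold: float = 0.5, coefficient: float = 2.0) -> float:
    """Shorten the base interval by the adjustment coefficient when the
    parameter changes faster than the threshold (changes per minute)."""
    return base_interval / coefficient if change_rate > threshold else base_interval

def fit_linear(x: list[float], y: list[float]) -> tuple[float, float]:
    """Least-squares fit y = a*x + b, e.g. cleaned volume vs. volume to clean."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def mse(y_true: list[float], y_pred: list[float]) -> float:
    """Prediction accuracy evaluated as mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(update_interval(change_rate=1.0))  # 0.5 -- one change per minute exceeds the threshold
a, b = fit_linear([100, 200, 300], [90, 180, 270])
print(round(a, 2), round(b, 2))          # 0.9 0.0 -- cleaned volume ~= 0.9 x volume to clean
```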
This embodiment achieves the following:
Improved task monitoring efficiency: by acquiring real-time context information of task instances, task anomalies can be detected and handled promptly. Optimized resource scheduling: by analyzing the distribution parameters of task state durations, the stability and resource demand of tasks can be evaluated, so that the resource scheduling strategy can be optimized. Improved prediction accuracy: by establishing a parameter prediction model, future task behavior, such as completion time or resource consumption, can be predicted more accurately.
In an alternative embodiment, based on a data flow direction relationship between task nodes in the data flow dependency graph, analyzing the context information during task instance operation, establishing a mapping relationship between task instance input parameter information and task instance output parameter information, and generating a parameter transfer rule according to the mapping relationship includes:
Constructing a data flow dependency graph, wherein the data flow dependency graph comprises a task node set, a data flow side set and a side weight matrix, the side weight matrix is used for representing the importance degree of the data flow side, and the data flow metric value between any two task nodes is calculated based on the side weight matrix and is related to the corresponding weight value in the side weight matrix and the data flow between the nodes;
Establishing a context incidence matrix between task nodes, wherein the context incidence matrix is used for describing the incidence degree between an output parameter set of a source task node and an input parameter set of a target task node, determining the correlation coefficient of any two task nodes based on the context incidence matrix, calculating the correlation coefficient based on the parameter covariance of the task nodes and the parameter standard deviation of the task nodes, and determining a parameter mapping function according to the correlation coefficient, wherein the parameter mapping function comprises a mapping weight coefficient and a parameter conversion function;
And constructing a parameter dependency relationship graph based on the parameter mapping function, calculating the dependency distance between task nodes in the parameter dependency relationship graph, calculating the dependency strength between parameters according to the dependency distance, wherein the dependency strength decays along with the increase of the dependency distance, analyzing the transfer characteristics between parameter sequences with the dependency relationship, generating a parameter transfer rule template according to the transfer characteristics, wherein the parameter transfer rule template comprises a source parameter set, a target parameter set, a triggering condition and a conversion action, and distributing priority to each parameter transfer rule based on the importance degree and the execution frequency of the rule.
In order to effectively manage and transfer parameter information in a complex workflow, the present embodiment provides a parameter transfer rule generation method based on data flow dependence and context information.
First, a data flow dependency graph is constructed. The graph contains a set of nodes representing individual tasks and a set of directed edges representing the flow of data between tasks. Each edge is given a weight indicating the importance of that data flow. For example, in a video processing flow, the weight of the edge from the "video decoding" task to the "video encoding" task may be set to 0.9, indicating that the decoding result is critical to the encoding process, while the weight of the edge from the "subtitle" task to the "video encoding" task may be set to 0.5, indicating that the subtitle information has relatively little influence on encoding. By analyzing the edge weights and the data flow between nodes, a data flow metric value between any two task nodes can be calculated; for example, if the "video decoding" task outputs 10 MB of data to the "video encoding" task, the data flow metric value between the two tasks is 0.9 × 10 = 9.
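A minimal sketch of the weighted data flow graph and the data flow metric (edge weight multiplied by transferred data volume) follows, reproducing the numbers from the example above; the dictionary layout is an assumption.

```python
# Sketch: weighted data-flow graph and the flow metric (edge weight x data volume).
# The dictionary layout is an assumption; the numbers match the example above.
edge_weight = {("decode", "encode"): 0.9, ("subtitle", "encode"): 0.5}
data_volume_mb = {("decode", "encode"): 10, ("subtitle", "encode"): 2}

def flow_metric(src: str, dst: str) -> float:
    """Data-flow metric between two task nodes."""
    return edge_weight.get((src, dst), 0.0) * data_volume_mb.get((src, dst), 0.0)

print(flow_metric("decode", "encode"))  # 9.0
```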
Next, a contextual relevance matrix between the task nodes is established. The matrix describes the degree of association between the output parameter set of the source task node and the input parameter set of the target task node. For example, the parameters output by the "video decoding" task include resolution, frame rate, and encoding format, while the input parameters for the "video encoding" task include resolution, frame rate, and code rate. By analyzing the relationship between these parameters, the correlation coefficient of the two task nodes can be determined. For example, resolution and frame rate are highly correlated, their correlation coefficients approach 1, while the correlation of coding format and code rate is lower, the coefficients approach 0. And determining a parameter mapping function according to the correlation coefficient. The function comprises a mapping weight coefficient and a parameter conversion function. For example, if the source task output resolution is 1920x1080 and the target task requires 720x480, the parameter transfer function may perform a corresponding scaling operation, and the mapping weight coefficient may be set according to the importance of the resolution.
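The correlation coefficient based on parameter covariance and standard deviation, together with a simple resolution conversion function, could be sketched as follows; the sample series and the scaling behavior are assumptions for illustration.

```python
# Sketch: Pearson-style correlation between parameter series and a conversion
# function for the resolution example; the sample series are invented.
import statistics

def correlation(xs: list[float], ys: list[float]) -> float:
    """Covariance divided by the product of standard deviations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

def scale_resolution(width: int, height: int, target=(720, 480)) -> tuple[int, int]:
    """Parameter conversion function: map the decoded resolution to the encoder's."""
    return target  # illustrative; a real mapping might preserve the aspect ratio

upstream_fps = [24.0, 25.0, 30.0, 60.0]
downstream_fps = [24.0, 25.0, 30.0, 60.0]
print(round(correlation(upstream_fps, downstream_fps), 3))  # 1.0 -- highly correlated
print(scale_resolution(1920, 1080))                         # (720, 480)
```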
Then, a parameter dependency graph is constructed based on the parameter mapping function. The dependency distance between task nodes in the graph is calculated, for example, if there are no other tasks between the "video decoding" task and the "video encoding" task, their dependency distance is 1, and if there is one "watermarking" task in the middle, the dependency distance is 2. And calculating the dependence intensity among the parameters according to the dependence distance, wherein the dependence intensity decays with the increase of the dependence distance. For example, the dependence intensity is 1 when the distance is 1, and 0.5 when the distance is 2. Analysis of transfer characteristics between parameter sequences having a dependency relationship, for example, resolution parameters need to be consistent throughout the video processing flow. And generating a parameter transfer rule template according to the transfer characteristic. The template contains a set of source parameters, a set of target parameters, trigger conditions, and a conversion action. For example, one rule template may be to pass the decoded resolution to the "video encoding" task and perform a corresponding scaling operation when the "video decoding" task is completed.
Finally, each parameter delivery rule is assigned a priority based on the importance level and execution frequency of the rule. For example, rules that guarantee consistency of video resolution are prioritized over subtitle-added rules. In the video processing flow, three parameters of resolution, frame rate and code rate are required to be transmitted, and priorities 1, 2 and 3 are respectively assigned according to the importance degree and the use frequency.
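A sketch of the parameter transfer rule template and the priority assignment based on importance and execution frequency is given below; the record fields and the importance-times-frequency scoring are assumptions.

```python
# Sketch: parameter transfer rule template and priority assignment.
# The scoring formula (importance x frequency) is an assumption.
from dataclasses import dataclass

@dataclass
class TransferRule:
    source_params: tuple        # e.g. ("resolution",)
    target_params: tuple        # e.g. ("resolution",)
    trigger: str                # e.g. "video_decoding.state == success"
    action: str                 # e.g. "scale to target resolution"
    importance: float           # 0..1, domain-assigned
    frequency: float            # executions per workflow run

rules = [
    TransferRule(("resolution",), ("resolution",), "decode done", "scale", 0.9, 1.0),
    TransferRule(("frame_rate",), ("frame_rate",), "decode done", "copy", 0.7, 1.0),
    TransferRule(("bitrate",), ("bitrate",), "profile chosen", "copy", 0.5, 0.8),
]

# Higher importance x frequency -> smaller priority number (executed first).
ranked = sorted(rules, key=lambda r: r.importance * r.frequency, reverse=True)
for priority, rule in enumerate(ranked, start=1):
    print(priority, rule.source_params[0])  # 1 resolution, 2 frame_rate, 3 bitrate
```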
This embodiment achieves the following:
Improved parameter transfer efficiency: by analyzing the dependency relationships between tasks and the degree of association between parameters, the parameter transfer rules are generated automatically, avoiding tedious manual parameter configuration. Fewer parameter transfer errors: the parameter dependency graph and the context association matrix make it possible to identify and avoid errors in the parameter transfer process, such as mismatched parameter types or missing parameter values, improving the accuracy of parameter transfer. Enhanced workflow flexibility: the parameter transfer rule templates allow the transfer rules to be adjusted dynamically according to different task requirements, enhancing the flexibility and adaptability of the workflow; for example, different encoding parameters can be selected for different video formats.
In an alternative embodiment, a parameter dependency graph is constructed based on the parameter mapping function, a dependency distance between task nodes in the parameter dependency graph is calculated, a dependency strength between parameters is calculated according to the dependency distance, the dependency strength decays along with the increase of the dependency distance, a transfer characteristic between parameter sequences with a dependency relationship is analyzed, and a parameter transfer rule template is generated according to the transfer characteristic, which comprises:
calculating the shortest path distance between task nodes based on the parameter dependency graph, wherein the shortest path distance is the minimum value of the sum of edge weights in paths connecting two task nodes, acquiring the type information of the task nodes and calculating the semantic distance between the task nodes, wherein the semantic distance is calculated based on a similarity function of the parameter types, and the shortest path distance and the semantic distance are combined through weight coefficients to obtain a comprehensive distance metric;
The method comprises the steps of calculating the dependence strength of parameters according to the comprehensive distance measurement value, wherein the dependence strength of the parameters adopts a product form of a basic strength function and a time sequence influence factor, the basic strength function decays exponentially along with the increase of the comprehensive distance measurement value, the time sequence influence factor is related to a time interval, a parameter transmission chain is built based on the dependence strength of the parameters, an integrity index and a reliability index of the parameter transmission chain are calculated, the integrity index is the minimum value of the dependence strength of adjacent parameters in the parameter transmission chain, the reliability index is calculated based on the error rate in the transmission process, and a parameter transmission rule template is generated based on the parameter transmission chain and comprises a source mode, a target mode, a conversion rule and constraint conditions.
The parameter dependence relation diagram construction and transmission rule template generation method is used for generating a parameter transmission rule template by analyzing dependence relations among parameters so as to improve the efficiency and accuracy of parameter transmission.
First, a parameter dependency graph is constructed. Each parameter is considered as a node in the graph, and if there is a dependency between two parameters, an edge is established between the two nodes. The weight of an edge represents the strength of the dependency relationship and may be determined by domain expert scoring or data driven methods. For example, if the parameters a and B together affect the result of a task, the edge weights may be set according to the extent to which they affect the result. Assuming that the degree of influence of the parameter a is 0.8 and the degree of influence of the parameter B is 0.6, the weight of the edge between them can be set to 0.8×0.6=0.48.
Next, the dependency distance between the task nodes is calculated. The dependency distance measures the strength of the dependency relationship between parameters. First, the shortest path distance between task nodes is calculated, i.e. the minimum of the sum of edge weights over paths connecting the two nodes. For example, if the shortest path from parameter A to parameter C passes through parameter B, with an A-B edge weight of 0.48 and a B-C edge weight of 0.7, the shortest path distance of A-C is 0.48 + 0.7 = 1.18. At the same time, the type information of the task nodes, such as the data type and value range of the parameters, is acquired and the semantic distance between the task nodes is calculated. The semantic distance is based on a similarity function over parameter types; for example, if the data types of two parameters are the same, their semantic distance is smaller. Assuming that the data type similarity of parameter A and parameter C is 0.9, their semantic distance may be set to 1 - 0.9 = 0.1. The shortest path distance and the semantic distance are then combined via weight coefficients to obtain the integrated distance metric. For example, with a weight of 0.6 on the shortest path distance and 0.4 on the semantic distance, the integrated distance metric of A-C is 1.18 × 0.6 + 0.1 × 0.4 = 0.748.
Then, the dependency strength of each parameter is calculated from the comprehensive distance metric value. The dependency strength takes the form of the product of a basic strength function and a time sequence influence factor. The basic strength function decays exponentially as the comprehensive distance metric value increases; for example, exp(−d) may be used as the basic strength function, where d is the comprehensive distance metric value. The time sequence influence factor is related to the time interval; for example, if the time interval between two parameters is long, the dependency strength between them decreases. Assuming the time interval between parameter A and parameter C is 1 day, the time sequence influence factor may be set to 0.9. With a comprehensive distance metric value of 0.748 for A-C, the dependency strength of A-C is exp(−0.748) × 0.9 ≈ 0.426.
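A corresponding sketch of the dependency strength calculation, with the exponential basic strength function and a directly supplied time sequence influence factor (0.9 for the assumed one-day interval):

```python
import math


def dependency_strength(combined_distance_value, timing_factor):
    # Basic strength decays exponentially with the comprehensive distance
    # metric value; the time sequence influence factor scales it.
    return math.exp(-combined_distance_value) * timing_factor


print(round(dependency_strength(0.748, 0.9), 3))  # ~0.426
```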
Based on the dependency strength of the parameters, a parameter transfer chain is established. The parameter transfer chain indicates the order and dependencies of parameter transfer. For example, if parameter A depends on parameter B, and parameter B depends on parameter C, a parameter transfer chain C → B → A may be established. The integrity index and reliability index of the parameter transfer chain are then calculated. The integrity index is the minimum value of the dependency strength of adjacent parameters in the transfer chain; the reliability index is calculated from the error rate during transfer, which may for example be counted from historical data. Assuming the dependency strength of C-B is 0.6 and that of B-A is 0.426, the integrity index of the transfer chain is 0.426. Assuming the error rate during transfer is 0.1, the reliability index of the transfer chain is 1 − 0.1 = 0.9.
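The integrity and reliability indexes of the transfer chain can then be computed as sketched below; the values are taken from the example, and the list-based chain representation is an assumption.

```python
def chain_integrity(adjacent_strengths):
    # Integrity index: the weakest adjacent dependency in the chain.
    return min(adjacent_strengths)


def chain_reliability(error_rate):
    # Reliability index derived from the transfer error rate.
    return 1.0 - error_rate


strengths_c_b_a = [0.6, 0.426]            # C->B and B->A dependency strengths
print(chain_integrity(strengths_c_b_a))   # 0.426
print(chain_reliability(0.1))             # 0.9
```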
Finally, a parameter transfer rule template is generated based on the parameter transfer chain. The template contains a source mode, a target mode, a conversion rule, and constraint conditions. The source mode represents the initial state of parameter transfer, the target mode represents the target state of parameter transfer, the conversion rule describes how the parameter is converted from the source mode to the target mode, and the constraint conditions limit the conditions under which parameters may be transferred. For example, if the value range of parameter C is [0, 1] and that of parameter B is [1, 2], a conversion rule of B = C + 1 and a constraint of C ≥ 0 can be set.
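One possible data structure for the parameter transfer rule template is sketched below, using the C-to-B example above. The dataclass fields and callables are illustrative assumptions about how source mode, target mode, conversion rule, and constraint conditions might be represented.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class ParameterTransferRuleTemplate:
    source_mode: Dict[str, Tuple[float, float]]   # e.g. {"C": (0.0, 1.0)}
    target_mode: Dict[str, Tuple[float, float]]   # e.g. {"B": (1.0, 2.0)}
    conversion_rule: Callable[[float], float]     # e.g. B = C + 1
    constraint: Callable[[float], bool]           # e.g. C >= 0


rule = ParameterTransferRuleTemplate(
    source_mode={"C": (0.0, 1.0)},
    target_mode={"B": (1.0, 2.0)},
    conversion_rule=lambda c: c + 1,
    constraint=lambda c: c >= 0,
)
if rule.constraint(0.5):
    print(rule.conversion_rule(0.5))  # 1.5, within B's range [1, 2]
```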
The application can achieve the following:
Improved parameter transfer efficiency: by analyzing the dependency relationships among parameters, unnecessary parameter transfers can be avoided, thereby improving transfer efficiency. Improved parameter transfer accuracy: by taking the type information and time intervals of parameters into account, the dependency strength of parameters can be calculated more accurately, thereby improving transfer accuracy. Enhanced interpretability of the parameter transfer rules: the parameter transfer rule template comprises a source mode, a target mode, a conversion rule and constraint conditions, and can clearly describe the logic of parameter transfer, thereby enhancing the interpretability of the rules.
In an alternative embodiment, transferring the output parameter information of the upstream task instance to the input parameter information of the downstream task instance according to the parameter transfer rule, judging whether the downstream task instance meets the execution condition based on the integrity of the input parameter information of the downstream task instance, and, when the execution condition is met, sending a task instance scheduling instruction to a task scheduling system to trigger dynamic scheduling execution of the downstream task instance includes:
Acquiring an output parameter set of an upstream task instance, calculating accuracy and timeliness indexes of each parameter in the output parameter set, determining a parameter quality evaluation value based on the accuracy and the timeliness indexes, and converting the parameter quality evaluation value into a parameter availability index according to a preset weight coefficient;
Counting an input parameter set required by a downstream task instance, calculating a parameter integrity measurement value of an output parameter set of the upstream task instance and an input parameter set required by the downstream task instance, wherein the parameter integrity measurement value is a ratio of the number of actually received parameters to the number of parameters required by the task, and calculating a condition satisfaction degree based on the parameter integrity measurement value and the parameter availability index;
acquiring resource consumption information and task emergency degree information of a downstream task instance, calculating task basic priority based on the resource consumption information and the task emergency degree information, acquiring a load balancing factor of a current time point, wherein the load balancing factor is reduced along with the increase of the current load of a system, and multiplying the task basic priority by the load balancing factor to obtain task dynamic priority;
And carrying out matching degree evaluation on the available computing resources based on the task dynamic priority, wherein the matching degree evaluation comprises matching degree calculation of resource feature dimensions, generating a scheduling instruction comprising task identification, resource identification, priority information, execution constraint and parameter requirements, and carrying out resource availability and parameter completeness verification on the scheduling instruction.
A task dynamic scheduling method based on parameter transfer and condition judgment, used for executing tasks efficiently in a distributed computing environment. The core idea is to dynamically judge whether a downstream task meets its execution condition according to the output parameter information of the upstream task, and to perform priority scheduling according to the system load, thereby improving resource utilization and task execution efficiency.
First, the output parameter set of the upstream task instance is acquired. For example, the output parameters of an image recognition task may include the identified object type, the confidence, and the image feature vector. Then, the accuracy and timeliness indexes of each output parameter are calculated separately. For example, the confidence may be used as the accuracy index and the image acquisition time as the timeliness index. Assume the identified object type is "cat", the confidence is 95%, and the image was acquired 1 minute ago.
Next, a parameter quality evaluation value is determined based on the accuracy and timeliness indexes. For example, the confidence and the timeliness index may be weighted and summed according to a preset rule to obtain the parameter quality evaluation value. Assuming a confidence weight of 0.8 and a timeliness weight of 0.2, and taking the timeliness index as 100% − 1/60 ≈ 98.3% for an image acquired 1 minute ago, the parameter quality evaluation value is 0.8 × 95% + 0.2 × 98.3% ≈ 95.7%. The parameter quality evaluation value is then converted into a parameter availability index according to a preset weight coefficient. Assuming a weight coefficient of 0.9, the parameter availability index is 0.9 × 95.7% ≈ 86.1%.
Thereafter, the set of input parameters required by the downstream task instance is counted. For example, a target tracking task requires the object type, the image feature vector, and the initial target position as inputs. A parameter integrity metric value is then calculated over the output parameter set of the upstream task instance and the input parameter set required by the downstream task instance. For example, if the upstream task provides the object type and the image feature vector while the initial target position is provided by another task, the parameter integrity metric value is 2/3 ≈ 66.7%. A condition satisfaction degree is then calculated from the parameter integrity metric value and the parameter availability index, for example by a weighted sum of the two. Assuming a weight of 0.7 for the parameter integrity metric value and 0.3 for the parameter availability index, the condition satisfaction degree is 0.7 × 66.7% + 0.3 × 86.1% ≈ 72.5%.
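The quality, availability, integrity, and condition satisfaction calculations in this example can be sketched as follows. The weights and the 1 − elapsed/60 timeliness form follow the worked example; the function names are illustrative assumptions.

```python
def parameter_quality(accuracy, timeliness, w_acc=0.8, w_time=0.2):
    return w_acc * accuracy + w_time * timeliness


def availability_index(quality, coefficient=0.9):
    return coefficient * quality


def integrity_metric(received, required):
    return len(set(required) & set(received)) / len(required)


def condition_satisfaction(integrity, availability, w_int=0.7, w_avail=0.3):
    return w_int * integrity + w_avail * availability


quality = parameter_quality(0.95, 1 - 1 / 60)        # ~0.957
avail = availability_index(quality)                  # ~0.861
integ = integrity_metric(
    {"object_type", "feature_vector"},
    ["object_type", "feature_vector", "initial_position"],
)                                                    # ~0.667
print(round(condition_satisfaction(integ, avail), 3))  # ~0.725
```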
If the condition satisfaction degree reaches a preset threshold, for example 70%, the next step is performed. The resource consumption information and task urgency information of the downstream task instance are acquired. For example, the resource consumption information may include the number of CPU cores required, the memory size, and the GPU type, while the task urgency information may be evaluated from the deadline and importance of the task. Assume that the target tracking task requires 2 CPU cores, 4 GB of memory, and one NVIDIA Tesla T GPU, with a task urgency of "high".
The task base priority is calculated from the resource consumption information and the task urgency information. For example, the resource consumption and the urgency may be weighted and summed according to a preset rule. Assuming a resource consumption weight of 0.6 and an urgency weight of 0.4, and total CPU, memory, and GPU resources of 8 cores, 16 GB, and 1 respectively, the task base priority is 0.6 × (2/8 + 4/16 + 1/1) + 0.4 × 1 = 1.3.
The load balancing factor at the current point in time is then acquired. The load balancing factor decreases as the current system load increases. Assuming the current system load is high, the load balancing factor is 0.5. The task base priority is multiplied by the load balancing factor to obtain the task dynamic priority: 1.3 × 0.5 = 0.65.
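A sketch of the base priority, load balancing factor, and dynamic priority calculation, using the resource fractions from the example (2/8 CPU cores, 4/16 GB of memory, 1/1 GPU) and an urgency of 1.0 for "high". The additive weighting follows the example; the function names are assumptions.

```python
def base_priority(resource_fractions, urgency, w_res=0.6, w_urg=0.4):
    # resource_fractions: each requirement divided by the corresponding total.
    return w_res * sum(resource_fractions) + w_urg * urgency


def dynamic_priority(base, load_balancing_factor):
    return base * load_balancing_factor


base = base_priority([2 / 8, 4 / 16, 1 / 1], urgency=1.0)  # 0.6 * 1.5 + 0.4 = 1.3
print(round(dynamic_priority(base, 0.5), 2))               # 0.65
```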
The matching degree of available computing resources is evaluated based on the task dynamic priority. The matching degree evaluation includes matching calculations over resource feature dimensions; for example, the GPU type required by the task is matched against the types of available GPUs. A scheduling instruction containing the task identification, resource identification, priority information, execution constraints, and parameter requirements is generated. For example, the scheduling instruction may contain the ID of the target tracking task, the ID of the assigned GPU, the priority 0.65, an execution time limit of 1 hour, and the required input parameter information. The scheduling instruction is verified for resource availability and parameter completeness. Once the allocated resources are available and the parameters are complete, a task instance scheduling instruction is sent to the task scheduling system, triggering the dynamic scheduling execution of the downstream task instance.
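The sketch below shows how such a scheduling instruction might be assembled and verified before being handed to the task scheduling system. The field names, IDs, and the set-based completeness check are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SchedulingInstruction:
    task_id: str
    resource_id: str
    priority: float
    execution_time_limit_s: int
    required_parameters: List[str]


def verify(instruction, available_resources, received_parameters):
    # Resource availability and parameter completeness check.
    resource_ok = instruction.resource_id in available_resources
    params_ok = set(instruction.required_parameters) <= set(received_parameters)
    return resource_ok and params_ok


instruction = SchedulingInstruction(
    task_id="target-tracking-001",
    resource_id="gpu-0",
    priority=0.65,
    execution_time_limit_s=3600,
    required_parameters=["object_type", "feature_vector", "initial_position"],
)
print(verify(instruction, {"gpu-0", "gpu-1"},
             {"object_type", "feature_vector", "initial_position"}))  # True
```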
The application can achieve the following:
Improved resource utilization: by scheduling dynamically according to task demands and system load, computing resources can be used effectively, avoiding resource waste and bottlenecks. Improved task execution efficiency: through the priority scheduling and parameter pre-judgment mechanisms, urgent and important tasks can be executed first and task waiting time is reduced, thereby improving overall task execution efficiency. Enhanced system stability: the load balancing mechanism prevents system overload and ensures stable operation of the system.
Fig. 6 is a schematic structural diagram of a data flow-based running context parameter transfer and task instance dynamic scheduling system according to an embodiment of the present invention. As shown in Fig. 6, the system includes:
A first unit, configured to receive a task scheduling request, where the task scheduling request includes task identification information and data flow configuration information, and construct a data flow dependency graph based on the data flow configuration information, where the data flow dependency graph includes a plurality of task nodes and a data flow relationship between task nodes, and obtain, for each task node in the data flow dependency graph, corresponding task instance runtime context information, where the task instance runtime context information includes task instance identifier, task instance state information, task instance input parameter information, and task instance output parameter information;
The second unit is used for analyzing the context information during the operation of the task instance based on the data flow relation between task nodes in the data flow dependency graph, establishing a mapping relation between the input parameter information of the task instance and the output parameter information of the task instance, and generating a parameter transfer rule according to the mapping relation, wherein the parameter transfer rule defines a transfer mode of the output parameter information of an upstream task instance to the input parameter information of a downstream task instance;
And the third unit is used for monitoring task instance state information of each task node in the data flow dependency graph, transmitting output parameter information of an upstream task instance to input parameter information of a downstream task instance according to the parameter transmission rule when detecting that the state information of the upstream task instance changes, judging whether the downstream task instance meets the execution condition or not based on the input parameter information integrity of the downstream task instance, and transmitting a task instance scheduling instruction to a task scheduling system when the execution condition is met to trigger dynamic scheduling execution of the downstream task instance.
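A highly simplified Python sketch of how the three units above might compose is shown below. The class, method, and task names are illustrative assumptions; the sketch covers only the parameter propagation and trigger logic, not resource matching or priority calculation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class TaskNode:
    task_id: str
    state: str = "pending"
    inputs: Dict[str, object] = field(default_factory=dict)
    outputs: Dict[str, object] = field(default_factory=dict)
    required_inputs: List[str] = field(default_factory=list)


class DataFlowScheduler:
    def __init__(self):
        self.nodes: Dict[str, TaskNode] = {}
        # Transfer rules: (upstream_id, output_key, downstream_id, input_key).
        self.rules: List[Tuple[str, str, str, str]] = []

    # First unit: register task nodes of the data flow dependency graph.
    def add_task(self, task_id, required_inputs=()):
        self.nodes[task_id] = TaskNode(task_id, required_inputs=list(required_inputs))

    # Second unit: record the parameter transfer rules between instances.
    def add_transfer_rule(self, upstream, out_key, downstream, in_key):
        self.rules.append((upstream, out_key, downstream, in_key))

    # Third unit: on an upstream state change, propagate parameters and
    # report which downstream tasks now have complete inputs.
    def on_upstream_completed(self, task_id, outputs):
        self.nodes[task_id].state = "completed"
        self.nodes[task_id].outputs = dict(outputs)
        triggered = []
        for up, out_key, down, in_key in self.rules:
            if up == task_id and out_key in outputs:
                node = self.nodes[down]
                node.inputs[in_key] = outputs[out_key]
                if set(node.required_inputs) <= set(node.inputs):
                    triggered.append(down)  # would emit a scheduling instruction
        return triggered


sched = DataFlowScheduler()
sched.add_task("recognize")
sched.add_task("track", required_inputs=["object_type", "feature_vector"])
sched.add_transfer_rule("recognize", "object_type", "track", "object_type")
sched.add_transfer_rule("recognize", "feature_vector", "track", "feature_vector")
print(sched.on_upstream_completed(
    "recognize", {"object_type": "cat", "feature_vector": [0.1, 0.2]}))  # ['track']
```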
In a third aspect of an embodiment of the present invention,
There is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
Wherein the processor is configured to invoke the instructions stored in the memory to perform the method described previously.
In a fourth aspect of an embodiment of the present invention,
There is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method as described above.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present invention.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510140290.8A CN119578850B (en) | 2025-02-08 | 2025-02-08 | Data flow-based running context parameter transmission and task instance dynamic scheduling method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119578850A CN119578850A (en) | 2025-03-07 |
| CN119578850B true CN119578850B (en) | 2025-04-08 |
Family
ID=94813037
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510140290.8A Active CN119578850B (en) | 2025-02-08 | 2025-02-08 | Data flow-based running context parameter transmission and task instance dynamic scheduling method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119578850B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119248458A (en) * | 2024-12-04 | 2025-01-03 | 厦门两万里文化传媒有限公司 | AI workflow automation management method based on creative scenarios |
| CN119356879A (en) * | 2024-12-23 | 2025-01-24 | 廊坊市讯云数据科技有限公司 | A distributed real-time data stream processing system and method |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230123322A1 (en) * | 2021-04-16 | 2023-04-20 | Strong Force Vcn Portfolio 2019, Llc | Predictive Model Data Stream Prioritization |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |