
CN120469785B - Dynamic priority data processing scheduling method in an embedded bare metal environment - Google Patents

Dynamic priority data processing scheduling method in an embedded bare metal environment

Info

Publication number
CN120469785B
CN120469785B (application CN202510968781.1A)
Authority
CN
China
Prior art keywords
priority
data buffer
data
buffer area
alarm
Prior art date
Legal status
Active
Application number
CN202510968781.1A
Other languages
Chinese (zh)
Other versions
CN120469785A (en)
Inventor
尚怡翔
郑鹏飞
Current Assignee
Tianjin Qinen Communication Technology Co ltd
Original Assignee
Tianjin Qinen Communication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Qinen Communication Technology Co ltd filed Critical Tianjin Qinen Communication Technology Co ltd
Priority to CN202510968781.1A priority Critical patent/CN120469785B/en
Publication of CN120469785A publication Critical patent/CN120469785A/en
Application granted granted Critical
Publication of CN120469785B publication Critical patent/CN120469785B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Communication Control (AREA)

Abstract

The invention discloses a dynamic priority data processing scheduling method for an embedded bare metal environment. The method initializes a data buffer for each hardware interface, configures a unique priority for each buffer, and adds the buffers to a priority array; received data are acquired and written to the corresponding data buffer, and the buffer margin of that buffer's space is updated. When the buffer margin triggers a margin alarm, the margin alarm flag is adjusted, the margin alarm count is incremented, and the data buffer is added to a margin queue; the priority of each data buffer is adjusted dynamically according to its margin alarm count. The data buffers in the priority array that have not triggered a margin alarm are polled in priority order, and non-empty buffers are added to a scheduling queue. When the margin queue is not empty, the data buffers in the margin queue are processed in sequence; when the margin queue is empty, the data buffers in the scheduling queue are processed in sequence.

Description

Dynamic priority data processing scheduling method in an embedded bare metal environment
Technical Field
The application relates to the technical field of data processing, in particular to a dynamic priority data processing scheduling method in an embedded bare metal environment.
Background
In embedded system applications, especially on MCUs with relatively low performance, software is often developed on bare metal (without an operating system). Because there is no operating system scheduler, the data processing modules of different interfaces in a bare metal environment easily interfere with one another: an interface that frequently processes data with low real-time requirements can occupy a large share of CPU time and degrade the timeliness of other interfaces with strict real-time requirements. Conversely, strictly prioritized processing may leave low-priority data unprocessed for long periods, causing its buffer to overflow.
Therefore, the prior art has defects, and improvement is needed.
Disclosure of Invention
In view of the above problems, the present invention provides a dynamic priority data processing scheduling method for an embedded bare metal environment. A priority-based data buffer scheduling framework efficiently manages multiple channels of received data in a bare metal environment and provides convenient support for developing data processing applications. A receive buffer is added for each interface's data and assigned a priority, and a dynamic priority scheduling algorithm based on priority and buffer margin balances, as far as possible, the real-time requirements of each interface against the avoidance of buffer overflow, effectively improving the reliability and real-time performance of data processing in a bare metal (OS-less) environment.
The first aspect of the present invention provides a method for processing and scheduling dynamic priority data in an embedded bare metal environment, comprising:
Step one, initializing a data buffer for each hardware interface, configuring a unique priority for each data buffer, and adding all data buffers to a priority array in ascending order of priority value;
Step two, acquiring the received data of each hardware interface;
Step three, writing the received data to the data buffer corresponding to the hardware interface, and updating the buffer margin of that data buffer's space;
Step four, when the buffer margin of a data buffer's space triggers a margin alarm, adjusting the margin alarm flag, incrementing the margin alarm count, and adding the data buffer to a margin queue;
Step five, sequentially polling, in priority order, the data buffers in the priority array that have not triggered a margin alarm, and adding the non-empty data buffers to a scheduling queue;
Step six, when the margin queue is not empty, sequentially processing the data buffers in the margin queue; when the margin queue is empty, sequentially processing the data buffers in the scheduling queue;
Step seven, returning to step five when both the margin queue and the scheduling queue are empty.
In this scheme, the method further includes:
the data buffer is structured as a ring queue;
the data structure of the ring queue comprises a data buffer, receive/transmit pointers, a buffer size, a priority, a margin alarm flag, and a margin alarm count.
In this scheme, initializing the data buffer corresponding to each hardware interface includes:
setting the buffer size of the data buffer and allocating buffer space according to that size;
setting the receive/transmit pointers to 0, the margin alarm flag to 0, and the margin alarm count to 0.
In this scheme, the method further includes:
after a data buffer in the margin queue or the scheduling queue has been processed, updating the buffer margin of that data buffer's space and setting its margin alarm flag to 0.
In this scheme, when the buffer margin of a data buffer's space triggers a margin alarm, adjusting the margin alarm flag and incrementing the margin alarm count includes:
triggering a margin alarm when the margin alarm flag of the data buffer is 0 and the buffer margin of its space falls below 1/2;
after the margin alarm is triggered, setting the margin alarm flag of the corresponding data buffer to 1;
when the margin alarm flag of the data buffer is 1, incrementing the margin alarm count of the data buffer and resetting its margin alarm flag.
In this scheme, dynamically adjusting the priority of a data buffer according to its margin alarm count includes:
when the margin alarm count of any data buffer exceeds a first preset count threshold M_1, determining that data buffer as the first data buffer Q_X, whose priority is X;
determining the data buffer at priority X-1 in the priority array as the second data buffer Q_(X-1), and reading its margin alarm count M_(X-1);
when X-1 < 0, abandoning the adjustment;
when M_(X-1) < M_2, exchanging the priorities of the first data buffer Q_X and the second data buffer Q_(X-1), exchanging their positions in the priority array, and clearing the margin alarm counts of both, where M_2 is a second preset count threshold and M_2 < M_1;
when M_(X-1) ≥ M_2, traversing forward by priority and reading the margin alarm count of the data buffer at each priority in turn, determining the first data buffer whose margin alarm count is below M_2 as the third data buffer M_START, and determining its priority as START;
determining priority X as priority END;
performing priority adjustment on the data buffers in the priority interval from priority START to priority END;
if no data buffer with a margin alarm count below M_2 is found down to the data buffer at priority 0, abandoning the adjustment.
In this scheme, performing priority adjustment on the data buffers in the priority interval from priority START to priority END includes:
adjusting the priority of the third data buffer M_START from START to END, raising the priority of the data buffers at priorities START+1 through END by one level each, updating the positions of the data buffers in the interval from priority START to priority END in the priority array according to their new priorities, and clearing the margin alarm counts of the data buffers in that interval.
The invention discloses a dynamic priority data processing scheduling method for an embedded bare metal environment. The method initializes a data buffer for each hardware interface, configures a unique priority for each buffer, and adds the buffers to a priority array; received data are acquired and written to the corresponding data buffer, and the buffer margin of that buffer's space is updated. When the buffer margin triggers a margin alarm, the margin alarm flag is adjusted, the margin alarm count is incremented, and the data buffer is added to a margin queue; the priority of each data buffer is adjusted dynamically according to its margin alarm count. The data buffers in the priority array that have not triggered a margin alarm are polled in priority order, and non-empty buffers are added to a scheduling queue. When the margin queue is not empty, the data buffers in the margin queue are processed in sequence; when the margin queue is empty, the data buffers in the scheduling queue are processed in sequence.
Drawings
FIG. 1 shows a flow chart of a dynamic priority data processing scheduling method in an embedded bare metal environment provided by the invention;
FIG. 2 is a flow chart showing a processing method when the buffer margin of the buffer space of the data buffer triggers a margin alarm;
FIG. 3 shows a schematic structural diagram of the data buffer scheduling framework provided by the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
FIG. 1 shows a flow chart of a method for processing and scheduling dynamic priority data in an embedded bare metal environment.
As shown in fig. 1, the invention discloses a method for processing and scheduling dynamic priority data in an embedded bare metal environment, which comprises the following steps:
s102, initializing data buffers corresponding to all hardware interfaces, configuring unique priorities, and adding all the data buffers to a priority array according to the order of the priorities from small to large;
S104, step two, obtaining the received data received by the hardware interface;
S106, uploading the received data to a corresponding data buffer area based on a hardware interface, and updating the buffer allowance of the buffer space of the data buffer area;
S108, when the buffer margin of the buffer space of the data buffer area triggers the margin alarm, adjusting the margin alarm mark, increasing the margin alarm count, and adding the data buffer area into the margin queue;
S110, sequentially polling the data buffer areas without triggering the allowance alarm in the priority array according to the priority, and adding the non-empty data buffer areas into a scheduling queue;
S112, when the residual queue is not empty, sequentially processing the data buffer areas in the residual queue, and when the residual queue is empty, sequentially processing the data buffer areas in the scheduling queue;
S114, in step seven, when the allowance queue and the dispatch queue are empty, returning to step five.
According to an embodiment of the present invention, as shown in fig. 3, a general priority-based data buffer scheduling framework is first designed, comprising the data structure of the data buffers, the organization and ordering of the data buffers, a scheduler 5, a data handler 6, and a hardware interface interrupt 7. The priority array 1, each data buffer in the data buffer group 2, the scheduling queue 3, and the margin queue 4 are all ring queues; in the figure, W is the write pointer of each queue and R is its read pointer.
The data buffer is implemented as a ring queue whose size can be set during the initialization stage. The ring queue data structure comprises the data buffer, receive/transmit pointers, the buffer size, the priority, the margin alarm flag, and the margin alarm count. Each hardware interface corresponds to a unique data buffer, and each data buffer corresponds to a unique priority. The system presets priorities 0 to 31, where 0 is the highest priority and 31 the lowest. The margin alarm flag is set to 1 when the buffer margin falls below 1/2 and cleared to 0 when the buffer margin rises above 3/4; a value of 1 indicates a margin alarm, and the margin alarm flags of all data buffers are set to 0 during initialization. For the organization and ordering of the data buffers, the priority array stores all data buffers, indexed by their priority. The margin queue stores, in the form of a ring queue, the data buffers whose margin alarm flag is 1. The scheduler polls the data buffers in the priority array that have no margin alarm, in priority order, and places the non-empty data buffers into the scheduling queue. The data handler processes the margin queue first and, when the margin queue is empty, processes the data of each data buffer in the scheduling queue one by one. After processing is finished, the receive/transmit pointers and the margin alarm flag of the data buffer are updated.
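As a concrete illustration of the framework just described, the following C sketch shows one possible layout of the ring-queue data buffer and of the priority array, margin queue, and scheduling queue. It is a minimal sketch under the assumptions of this embodiment (32 priority levels, queues holding buffer pointers); the names rx_buffer_t, prio_array, margin_q, and sched_q are illustrative and do not come from the patent.

```c
#include <stdint.h>

#define PRIO_LEVELS 32u          /* priorities 0 (highest) .. 31 (lowest) */

/* One data buffer per hardware interface, realised as a ring queue. */
typedef struct {
    uint8_t          *data;       /* backing storage of the ring queue          */
    volatile uint32_t wr;         /* write (receive) pointer, advanced in ISR   */
    uint32_t          rd;         /* read (processing) pointer                  */
    uint32_t          size;       /* buffer size in bytes                       */
    uint8_t           prio;       /* unique priority, 0..31                     */
    volatile uint8_t  alarm_flag; /* margin alarm flag: 1 = margin below 1/2    */
    uint32_t          alarm_cnt;  /* margin alarm count                         */
} rx_buffer_t;

/* Priority array: buffer pointers indexed by priority. */
static rx_buffer_t *prio_array[PRIO_LEVELS];

/* Margin queue and scheduling queue, themselves small ring queues of pointers. */
typedef struct {
    rx_buffer_t *item[PRIO_LEVELS];
    uint32_t     w;               /* write index */
    uint32_t     r;               /* read index  */
} ptr_queue_t;

static ptr_queue_t margin_q;      /* buffers whose margin alarm flag became 1   */
static ptr_queue_t sched_q;       /* non-empty buffers without a margin alarm   */

static inline int  q_empty(const ptr_queue_t *q)          { return q->w == q->r; }
static inline void q_push(ptr_queue_t *q, rx_buffer_t *b) { q->item[q->w % PRIO_LEVELS] = b; q->w++; }
static inline rx_buffer_t *q_pop(ptr_queue_t *q)          { return q->item[q->r++ % PRIO_LEVELS]; }
```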
When the dynamic priority data processing scheduling method is executed, a unique priority is first determined for every hardware interface, and the corresponding data buffer structure is initialized: the buffer size is set, buffer space is allocated, the receive/transmit pointers are set to 0, the margin alarm flag is set to 0, the margin alarm count is set to 0, the priority is configured, and the data buffer is placed into the priority array according to its priority. After the hardware interface interrupt stores newly received data into the corresponding data buffer, the buffer margin of that data buffer's space is updated. When the buffer margin triggers a margin alarm (i.e., the buffer margin is below 1/2 and the margin alarm condition is met), the margin alarm flag of the data buffer is set to 1, the margin alarm count of the data buffer is incremented by 1, and the margin alarm flag is then reset; at the same time the data buffer is placed into the margin queue, and the priority of each data buffer is dynamically adjusted according to its margin alarm count. When the margin queue is not empty, the data buffers in the margin queue are processed in the order in which they entered the queue, and after processing the receive/transmit pointers and the margin alarm flag of each data buffer are updated. When the margin queue is empty, the data of each data buffer in the scheduling queue is processed in priority order, and after processing the receive/transmit pointers and the margin alarm flag are updated. When both the margin queue and the scheduling queue are empty, the data buffers whose margin alarm flag is 0 are queried in the order of the priority array, the pointers of the non-empty data buffers are placed into the scheduling queue, and the above steps are repeated to complete the data processing schedule.
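The scheduling flow described above can be summarised by a bare metal main loop along the following lines. This is a sketch that reuses the illustrative rx_buffer_t type, prio_array, margin_q, sched_q, and queue helpers from the previous sketch; process_buffer() is only a placeholder for the application-specific data handler.

```c
/* Number of bytes currently stored in a ring buffer. */
static uint32_t buf_used(const rx_buffer_t *b)
{
    return (b->wr + b->size - b->rd) % b->size;
}

/* Application-specific handling of one buffer's pending data (placeholder). */
static void process_buffer(rx_buffer_t *b)
{
    b->rd = b->wr;           /* consume everything that has been received     */
    b->alarm_flag = 0;       /* margin restored, clear the margin alarm flag  */
}

void scheduler_run(void)
{
    for (;;) {
        if (!q_empty(&margin_q)) {
            /* Buffers that raised a margin alarm are served first, FIFO.    */
            process_buffer(q_pop(&margin_q));
        } else if (!q_empty(&sched_q)) {
            /* Otherwise serve the scheduling queue in priority order.       */
            process_buffer(q_pop(&sched_q));
        } else {
            /* Both queues empty: poll the priority array (step five) and
             * enqueue every non-empty buffer that has no margin alarm.      */
            for (uint32_t p = 0; p < PRIO_LEVELS; p++) {
                rx_buffer_t *b = prio_array[p];
                if (b != 0 && b->alarm_flag == 0 && buf_used(b) > 0)
                    q_push(&sched_q, b);
            }
        }
    }
}
```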
In addition, a data buffer pointer can be generated from the starting memory address of each data buffer; this pointer, rather than the data buffer itself, is added to the priority array, the margin queue, and the scheduling queue, and when a data buffer is to be processed, it is accessed through the corresponding pointer.
According to an embodiment of the present invention, the method further comprises:
the data buffer is structured as a ring queue;
the data structure of the ring queue comprises a data buffer, receive/transmit pointers, a buffer size, a priority, a margin alarm flag, and a margin alarm count.
It should be noted that the data buffer is implemented as a ring queue whose size can be set during the initialization stage. The ring queue data structure comprises the data buffer, receive/transmit pointers, the buffer size, the priority, the margin alarm flag, and the margin alarm count. Each channel of received data (received by a different hardware interface) corresponds to one data buffer. The system presets 32 priorities, 0 to 31, where 0 is the highest priority and 31 the lowest, and each priority corresponds to exactly one data buffer. The margin alarm flag takes the values 0 and 1.
According to an embodiment of the present invention, initializing the data buffer corresponding to each hardware interface includes:
setting the buffer size of the data buffer and allocating buffer space according to that size;
setting the receive/transmit pointers to 0, the margin alarm flag to 0, and the margin alarm count to 0.
It should be noted that the buffer sizes are set by those skilled in the art according to actual requirements; as shown in fig. 3, the buffer sizes of the data buffers at priority 0, priority 21, and priority 31 are 128, 256, and 256, respectively. The buffer space allocated to a data buffer is equal to its preset buffer size. The receive/transmit pointers, the margin alarm flag, and the margin alarm count are all reset to 0.
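A possible initialisation routine matching this description is sketched below, again using the illustrative rx_buffer_t type and prio_array from the earlier sketch. The interface names and the buffer sizes 128/256/256 are only the example values of fig. 3 and would be chosen per interface in practice.

```c
/* Initialise one data buffer: attach its statically allocated space, zero the
 * read/write pointers, the margin alarm flag and the margin alarm count,
 * assign its unique priority and register it in the priority array.          */
static void buffer_init(rx_buffer_t *b, uint8_t *storage, uint32_t size, uint8_t prio)
{
    b->data       = storage;
    b->size       = size;
    b->wr         = 0;
    b->rd         = 0;
    b->prio       = prio;
    b->alarm_flag = 0;
    b->alarm_cnt  = 0;
    prio_array[prio] = b;        /* the priority doubles as the array index */
}

/* Example: three interfaces with the sizes shown in fig. 3 (names illustrative). */
static uint8_t     uart_mem[128], spi_mem[256], can_mem[256];
static rx_buffer_t uart_buf, spi_buf, can_buf;

void buffers_init_all(void)
{
    buffer_init(&uart_buf, uart_mem, sizeof uart_mem, 0);   /* priority 0  */
    buffer_init(&spi_buf,  spi_mem,  sizeof spi_mem,  21);  /* priority 21 */
    buffer_init(&can_buf,  can_mem,  sizeof can_mem,  31);  /* priority 31 */
}
```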
According to an embodiment of the present invention, the method further comprises:
after a data buffer in the margin queue or the scheduling queue has been processed, updating the buffer margin of that data buffer's space and setting its margin alarm flag to 0.
It should be noted that after the processing of a data buffer in the margin queue or the scheduling queue is finished, the buffer margin of that data buffer's space is updated and its margin alarm flag is set to 0; that is, once the received data in the data buffer have been processed, the buffer is restored to its initial state so that it can receive new data.
FIG. 2 shows a flow chart of the processing method when the buffer margin of the buffer space of a data buffer triggers a margin alarm.
As shown in fig. 2, according to an embodiment of the present invention, when the buffer margin of a data buffer's space triggers a margin alarm, adjusting the margin alarm flag and incrementing the margin alarm count includes:
S202, triggering a margin alarm when the margin alarm flag of the data buffer is 0 and the buffer margin of its space falls below 1/2;
S204, after the margin alarm is triggered, setting the margin alarm flag of the corresponding data buffer to 1;
S206, when the margin alarm flag of the data buffer is 1, incrementing the margin alarm count of the data buffer and resetting its margin alarm flag.
It should be noted that the margin alarm flag of a data buffer takes the values 0 and 1, and the system sets a condition for each: the flag is 1 when the buffer margin of the data buffer's space is below 1/2, and 0 when the buffer margin is above 3/4. Therefore, when the buffer margin of a data buffer's space falls below 1/2 and the margin alarm flag of that data buffer is 0, the flag is set to 1 and the margin alarm count of the data buffer is incremented by 1, after which the margin alarm flag of the data buffer is reset.
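One way to realise this hysteresis on the receive path is sketched below, reusing the illustrative types and helpers from the earlier sketches; buffer_margin() computes the free space of the ring queue, the 1/2 and 3/4 thresholds follow the description, and the exact moment at which the flag is cleared is simplified here.

```c
/* Free space (margin) of the ring queue, in bytes. */
static uint32_t buffer_margin(const rx_buffer_t *b)
{
    return b->size - buf_used(b);
}

/* Called after new received data have been written into buffer b,
 * e.g. at the end of the hardware interface interrupt.            */
static void margin_check(rx_buffer_t *b)
{
    uint32_t margin = buffer_margin(b);

    if (b->alarm_flag == 0 && margin < b->size / 2u) {
        /* Margin fell below 1/2: trigger the margin alarm.            */
        b->alarm_flag = 1;
        b->alarm_cnt++;          /* accumulate for priority adjustment */
        q_push(&margin_q, b);    /* serve this buffer ahead of the rest */
    } else if (b->alarm_flag == 1 && margin > (3u * b->size) / 4u) {
        /* Margin recovered above 3/4: clear the alarm flag.           */
        b->alarm_flag = 0;
    }
}
```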
According to an embodiment of the present invention, dynamically adjusting the priority of a data buffer according to its margin alarm count includes the following steps:
when the margin alarm count of any data buffer exceeds a first preset count threshold M_1, determining that data buffer as the first data buffer Q_X, whose priority is X;
determining the data buffer at priority X-1 in the priority array as the second data buffer Q_(X-1), and reading its margin alarm count M_(X-1);
when X-1 < 0, abandoning the adjustment;
when M_(X-1) < M_2, exchanging the priorities of the first data buffer Q_X and the second data buffer Q_(X-1), exchanging their positions in the priority array, and clearing the margin alarm counts of both, where M_2 is a second preset count threshold and M_2 < M_1;
when M_(X-1) ≥ M_2, traversing forward by priority and reading the margin alarm count of the data buffer at each priority in turn, determining the first data buffer whose margin alarm count is below M_2 as the third data buffer M_START, and determining its priority as START;
determining priority X as priority END;
performing priority adjustment on the data buffers in the priority interval from priority START to priority END;
if no data buffer with a margin alarm count below M_2 is found down to the data buffer at priority 0, abandoning the adjustment.
It should be noted that the first preset count threshold M_1 and the second preset count threshold M_2 are set by those skilled in the art according to actual requirements; the initial value of M_1 is 128 and the initial value of M_2 is 32.
The margin alarm count of a data buffer is accumulated during each polling cycle according to whether its margin alarm was triggered. By monitoring the margin alarm count of each data buffer, when the count of any data buffer exceeds the first preset count threshold M_1, that data buffer is determined as the first data buffer Q_X, and the data buffer one priority level higher than Q_X is determined as the second data buffer Q_(X-1). When X-1 < 0, the first data buffer Q_X already has the highest priority (X = 0) and no priority adjustment is needed. When M_(X-1) < M_2, the first data buffer Q_X is given preferential treatment: its priority is adjusted to X-1 and the priority of the second data buffer Q_(X-1) is adjusted to X, so that the priorities of the two adjusted data buffers remain unique; their positions in the priority array are swapped according to the new priorities, and the margin alarm counts of both data buffers are cleared. When M_(X-1) ≥ M_2, the data buffers that precede the first data buffer Q_X in the priority array are traversed in priority order; the first data buffer whose margin alarm count is below M_2 is determined as the third data buffer M_START, its priority is determined as START (the starting priority of the adjustment), and the priority X of the first data buffer Q_X is determined as priority END, so that the positions of the data buffers in the priority interval from START to END are adjusted in the priority array. If no data buffer with a margin alarm count below M_2 is found down to the data buffer at priority 0, the margin alarm counts of the data buffers at priorities 0 through X are all high, and the position adjustment is abandoned.
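The adjustment logic of this paragraph can be sketched as follows. The names are illustrative and reuse rx_buffer_t and prio_array from the earlier sketches; M1 and M2 carry the initial threshold values given above, rotate_interval() is the interval adjustment described in the next paragraph, and the sketch assumes that the configured priorities occupy consecutive slots of the priority array.

```c
#define M1 128u   /* first preset count threshold (initial value per description) */
#define M2  32u   /* second preset count threshold, M2 < M1                       */

static void rotate_interval(uint32_t start, uint32_t end);   /* see next sketch */

/* Called when buffer qx (priority x) has alarm_cnt > M1. */
static void adjust_priority(rx_buffer_t *qx)
{
    uint32_t x = qx->prio;

    if (x == 0)                      /* X-1 < 0: already highest priority, give up */
        return;

    rx_buffer_t *qx1 = prio_array[x - 1];       /* second data buffer Q_(X-1)      */
    if (qx1 == 0)                               /* sketch assumes contiguous slots */
        return;

    if (qx1->alarm_cnt < M2) {
        /* Simple swap of the two neighbouring priorities and array positions.     */
        prio_array[x - 1] = qx;  qx->prio  = (uint8_t)(x - 1);
        prio_array[x]     = qx1; qx1->prio = (uint8_t)x;
        qx->alarm_cnt  = 0;
        qx1->alarm_cnt = 0;
        return;
    }

    /* Otherwise walk towards priority 0 looking for the first buffer whose
     * alarm count is below M2; that position becomes START, X becomes END.        */
    for (int32_t p = (int32_t)x - 1; p >= 0; p--) {
        if (prio_array[p]->alarm_cnt < M2) {
            rotate_interval((uint32_t)p, x);    /* adjust interval [START, END]    */
            return;
        }
    }
    /* No such buffer down to priority 0: give up the adjustment.                  */
}
```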
According to an embodiment of the present invention, performing priority adjustment on the data buffers in the priority interval from priority START to priority END includes:
adjusting the priority of the third data buffer M_START from START to END, raising the priority of the data buffers at priorities START+1 through END by one level each, updating the positions of the data buffers in the interval from priority START to priority END in the priority array according to their new priorities, and clearing the margin alarm counts of the data buffers in that interval.
It should be noted that when the positions of the data buffers in the priority interval from START to END are adjusted, the interval from START to END is the interval whose priorities need adjustment: the priority of the third data buffer M_START is adjusted to END, and the priorities of the other data buffers in the interval are each raised by one level, which keeps every priority unique after the adjustment; the positions of the data buffers in the interval are then updated in the priority array according to the new priorities. The margin alarm counts of the data buffers that took part in the adjustment are cleared at the same time, which avoids frequent readjustment of this part of the data buffers.
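A matching sketch of the interval adjustment assumed above as rotate_interval(): the buffer at priority START is demoted to priority END, every buffer at START+1 through END is promoted by one level, the positions in the priority array are updated accordingly, and the margin alarm counts of the whole interval are cleared.

```c
/* Rotate the priority interval [start, end] of the priority array:
 * the buffer at 'start' is demoted to 'end', every other buffer in the
 * interval is promoted by one level, and all alarm counts are cleared. */
static void rotate_interval(uint32_t start, uint32_t end)
{
    rx_buffer_t *third = prio_array[start];        /* third data buffer M_START */

    for (uint32_t p = start; p < end; p++) {
        prio_array[p] = prio_array[p + 1];         /* promote by one level      */
        prio_array[p]->prio = (uint8_t)p;
        prio_array[p]->alarm_cnt = 0;
    }
    prio_array[end] = third;                       /* demote M_START to END     */
    third->prio = (uint8_t)end;
    third->alarm_cnt = 0;
}
```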
Information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals (including but not limited to signals transmitted between a user terminal and other devices, etc.) referred to by the present application are all user-authorized or fully authorized by parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, reference in the present disclosure to "received data received by a hardware interface" is acquired with sufficient authorization.
In summary, the invention discloses a dynamic priority data processing scheduling method for an embedded bare metal environment. The method initializes a data buffer for each hardware interface, configures a unique priority for each buffer, and adds the buffers to a priority array; received data are acquired and written to the corresponding data buffer, and the buffer margin of that buffer's space is updated. When the buffer margin triggers a margin alarm, the margin alarm flag is adjusted, the margin alarm count is incremented, and the data buffer is added to a margin queue; the priority of each data buffer is adjusted dynamically according to its margin alarm count. The data buffers in the priority array that have not triggered a margin alarm are polled in priority order, and non-empty buffers are added to a scheduling queue. When the margin queue is not empty, the data buffers in the margin queue are processed in sequence; when the margin queue is empty, the data buffers in the scheduling queue are processed in sequence.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation, such as combining or integrating multiple units or components into another system, or omitting or not executing some features. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units, may be located in one place or distributed on a plurality of network units, and may select some or all of the units according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as a unit, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of hardware plus a form of software functional unit.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of the above method embodiments may be implemented by hardware controlled by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the storage medium includes a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, if the above integrated units of the invention are implemented as software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The storage medium includes a removable storage device, a ROM, a RAM, a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (7)

1. A dynamic priority data processing scheduling method in an embedded bare metal environment, comprising the following steps:
step one, initializing a data buffer for each hardware interface, configuring a unique priority for each data buffer, and adding all data buffers to a priority array in ascending order of priority value;
step two, acquiring the received data of each hardware interface;
step three, writing the received data to the data buffer corresponding to the hardware interface, and updating the buffer margin of that data buffer's space;
step four, when the buffer margin of a data buffer's space triggers a margin alarm, adjusting the margin alarm flag, incrementing the margin alarm count, and adding the data buffer to a margin queue;
step five, sequentially polling, in priority order, the data buffers in the priority array that have not triggered a margin alarm, and adding the non-empty data buffers to a scheduling queue;
step six, when the margin queue is not empty, sequentially processing the data buffers in the margin queue; when the margin queue is empty, sequentially processing the data buffers in the scheduling queue;
and step seven, returning to step five when both the margin queue and the scheduling queue are empty.
2. The dynamic priority data processing scheduling method in an embedded bare metal environment of claim 1, further comprising:
the data buffer is structured as a ring queue;
the data structure of the ring queue comprises a data buffer, receive/transmit pointers, a buffer size, a priority, a margin alarm flag, and a margin alarm count.
3. The dynamic priority data processing scheduling method in an embedded bare metal environment of claim 2, wherein initializing the data buffer corresponding to each hardware interface comprises:
setting the buffer size of the data buffer and allocating buffer space according to that size;
setting the receive/transmit pointers to 0, the margin alarm flag to 0, and the margin alarm count to 0.
4. The dynamic priority data processing scheduling method in an embedded bare metal environment of claim 1, further comprising:
after a data buffer in the margin queue or the scheduling queue has been processed, updating the buffer margin of that data buffer's space and setting its margin alarm flag to 0.
5. The dynamic priority data processing scheduling method in an embedded bare metal environment of claim 1, wherein when the buffer margin of a data buffer's space triggers a margin alarm, adjusting the margin alarm flag and incrementing the margin alarm count comprises:
triggering a margin alarm when the margin alarm flag of the data buffer is 0 and the buffer margin of its space falls below 1/2;
after the margin alarm is triggered, setting the margin alarm flag of the corresponding data buffer to 1;
when the margin alarm flag of the data buffer is 1, incrementing the margin alarm count of the data buffer and resetting its margin alarm flag.
6. The dynamic priority data processing scheduling method in an embedded bare metal environment of claim 1, wherein dynamically adjusting the priority of a data buffer according to its margin alarm count comprises:
when the margin alarm count of any data buffer exceeds a first preset count threshold M_1, determining that data buffer as the first data buffer Q_X, whose priority is X;
determining the data buffer at priority X-1 in the priority array as the second data buffer Q_(X-1), and reading its margin alarm count M_(X-1);
when X-1 < 0, abandoning the adjustment;
when M_(X-1) < M_2, exchanging the priorities of the first data buffer Q_X and the second data buffer Q_(X-1), exchanging their positions in the priority array, and clearing the margin alarm counts of both, where M_2 is a second preset count threshold and M_2 < M_1;
when M_(X-1) ≥ M_2, traversing forward by priority and reading the margin alarm count of the data buffer at each priority in turn, determining the first data buffer whose margin alarm count is below M_2 as the third data buffer M_START, and determining its priority as START;
determining priority X as priority END;
performing priority adjustment on the data buffers in the priority interval from priority START to priority END;
and if no data buffer with a margin alarm count below M_2 is found down to the data buffer at priority 0, abandoning the adjustment.
7. The dynamic priority data processing scheduling method in an embedded bare metal environment of claim 6, wherein performing priority adjustment on the data buffers in the priority interval from priority START to priority END comprises:
adjusting the priority of the third data buffer M_START from START to END, raising the priority of the data buffers at priorities START+1 through END by one level each, updating the positions of the data buffers in the interval from priority START to priority END in the priority array according to their new priorities, and clearing the margin alarm counts of the data buffers in that interval.
CN202510968781.1A 2025-07-15 2025-07-15 Dynamic priority data processing scheduling method in embedded bare metal environment Active CN120469785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510968781.1A CN120469785B (en) 2025-07-15 2025-07-15 Dynamic priority data processing scheduling method in embedded bare metal environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510968781.1A CN120469785B (en) 2025-07-15 2025-07-15 Dynamic priority data processing scheduling method in embedded bare metal environment

Publications (2)

Publication Number Publication Date
CN120469785A CN120469785A (en) 2025-08-12
CN120469785B true CN120469785B (en) 2025-09-09

Family

ID=96640671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510968781.1A Active CN120469785B (en) 2025-07-15 2025-07-15 Dynamic priority data processing scheduling method in embedded bare metal environment

Country Status (1)

Country Link
CN (1) CN120469785B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306232A (en) * 2014-06-18 2016-02-03 中兴通讯股份有限公司 Alarm data processing method and network management device
CN106453141A (en) * 2016-10-12 2017-02-22 中国联合网络通信集团有限公司 Global queue adjustment method, service flow queue adjustment method and network system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3587080B2 (en) * 1999-05-06 2004-11-10 日本電気株式会社 Packet buffer management device and packet buffer management method
US8160085B2 (en) * 2007-12-21 2012-04-17 Juniper Networks, Inc. System and method for dynamically allocating buffers based on priority levels
US11470010B2 (en) * 2020-02-06 2022-10-11 Mellanox Technologies, Ltd. Head-of-queue blocking for multiple lossless queues


Also Published As

Publication number Publication date
CN120469785A (en) 2025-08-12

Similar Documents

Publication Publication Date Title
US12135996B2 (en) Computing resource scheduling method, scheduler, internet of things system, and computer readable medium
CN111488135A (en) Current limiting method and device for high-concurrency system, storage medium and equipment
CN107682417B (en) Task allocation method and device for data nodes
US8463967B2 (en) Method and device for scheduling queues based on chained list
WO2017000657A1 (en) Cache management method and device, and computer storage medium
CN111061556A (en) Optimization method and device for executing priority task, computer equipment and medium
CN111522643A (en) Multi-queue scheduling method and device based on FPGA, computer equipment and storage medium
RU2641250C2 (en) Device and method of queue management
CN110413210B (en) Method, apparatus and computer program product for processing data
CN113312160A (en) Techniques for behavioral pairing in a task distribution system
US20050257012A1 (en) Storage device flow control
CN110750350B (en) Large resource scheduling method, system, device and readable storage medium
CN117389766A (en) Message sending method and device, storage medium and electronic device
CN112650449B (en) Method and system for releasing cache space, electronic device and storage medium
CN104866238B (en) Access request scheduling method and device
CN111190541B (en) Flow control method of storage system and computer readable storage medium
CN120469785B (en) Dynamic priority data processing scheduling method in embedded bare metal environment
US11194619B2 (en) Information processing system and non-transitory computer readable medium storing program for multitenant service
CN116610702A (en) Time slicing method and device and electronic equipment
CN114253686A (en) Task scheduling method and device, electronic equipment and storage medium
US8793423B2 (en) Servicing interrupt requests in a computer system
CN115484143A (en) Alarm processing method, device, electronic equipment and storage medium
CN112099945A (en) Task processing method, task processing device and electronic equipment
CN106776393B (en) uninterrupted serial port data receiving method and device
CN112532531B (en) Message scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant