
CN119806769A - A priority scheduling module and chip - Google Patents

A priority scheduling module and chip

Info

Publication number
CN119806769A
Authority
CN
China
Prior art keywords
data
input data
priority
input
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411827159.0A
Other languages
Chinese (zh)
Inventor
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xinlianxin Intelligent Technology Co ltd
Original Assignee
Shanghai Xinlianxin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xinlianxin Intelligent Technology Co ltd filed Critical Shanghai Xinlianxin Intelligent Technology Co ltd
Priority claimed from CN202411827159.0A
Publication of CN119806769A
Legal status: Pending (current)

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract


A priority scheduling module and chip implement priority scheduling without CPU involvement. The priority scheduling module includes multiple input data buffer units and a priority scheduling unit. The input data buffer units receive multiple paths of input data and have no external read/write address lines; the paths of input data have different priorities when processed by a processing module. The input of the priority scheduling unit is connected to the outputs of the input data buffer units; it sorts the multiple paths of input data according to the priority of each path and feeds the sorted data to the processing module in order, and the processing module processes the multiple paths of input data according to their priorities.

Description

Priority scheduling module and chip
Technical Field
The present application relates to the field of computer technologies, and in particular, to a priority scheduling module and a chip.
Background
A priority is the level assigned to a task by the computer operating system; it determines the precedence of the task when it uses resources. The priority level assigned to a device determines the order in which the processor responds when the device raises an interrupt request. Task scheduling priority refers to the precedence a task receives when it is scheduled to run, and depends mainly on the task's priority and the scheduling algorithm. In real-time systems in particular, task scheduling priority reflects a task's importance and urgency.
Existing priority scheduling is allocated and carried out by a central processing unit (CPU). Scheduling through the CPU consumes a great deal of CPU response and processing time, resulting in high data latency.
Therefore, a scheme is needed that implements priority scheduling without CPU involvement.
Disclosure of Invention
The present application provides a priority scheduling module and a chip for implementing priority scheduling without CPU involvement.
In a first aspect, the present application provides a priority scheduling module, including a plurality of input data buffering units and a priority scheduling unit;
the input data buffer units are used for receiving multiple paths of input data, and do not have external read-write address lines, and the multiple paths of input data have different priorities when processed by the processing module;
The input end of the priority scheduling unit is connected with the output ends of the plurality of input data caching units, and is used for sequencing the plurality of paths of input data according to the priority of each path of input data, and inputting the sequenced input data to the processing module in sequence, and the processing module is used for processing the plurality of paths of input data according to the priority of the plurality of paths of input data.
With the above technical solution, a priority scheduling unit is introduced. The priority scheduling unit sorts the multiple paths of input data according to their priorities and feeds the sorted input data to the processing module in sequence, so the processing module handles the input data in priority order. Priority scheduling of the multiple paths of input data is thus achieved at the data level without CPU involvement, which reduces CPU usage, greatly shortens the response time, and makes priority scheduling of the multiple paths of input data fast and simple.
In one possible design, the priority scheduling unit further includes a plurality of data channels having priorities, each data channel corresponding to a storage address;
the priority scheduling unit is specifically configured to obtain corresponding input data from the input data buffer unit corresponding to each path of input data, and store the input data in a data channel corresponding to the priority of each path of input data;
The priority scheduling unit is specifically configured to input the multiple paths of input data to the processing module according to a sequence from high priority to low priority of the multiple data channels.
In the above technical solution, each data channel has a priority, and the input data in the input data buffer unit connected to a data channel has the same priority as that channel. Based on the priorities of the data channels, the multiple paths of input data are fed to the processing module in order from the highest priority to the lowest. This realizes priority scheduling of the multiple paths of data, reduces the burden on the hardware design, and allows the multiple paths of input data to be processed by priority directly, without CPU involvement.
In one possible design, each data channel of the priority scheduling unit has a respective input flag bit, wherein the input flag bit is used for indicating that the data channel has data input;
The priority scheduling unit is further configured to determine whether the input flag bit of the highest-priority data channel is a first value. If it is, the unit outputs the input data of that channel; otherwise, it checks the input flag bit of the next-priority data channel, and so on, until the input flag bits of all the data channels are a second value. A flag bit equal to the first value indicates that the corresponding data channel has data input, and a flag bit equal to the second value indicates that it has no data input.
With this design, setting a flag bit for each data channel makes it clear whether the channel has data input, so the priority scheduling unit can order the priorities of the multiple paths of input data more accurately.
In one possible design, the priority scheduling module further includes a priority management unit;
The priority management unit is configured to receive a priority adjustment instruction, and the priority adjustment instruction includes a correspondence between an input data buffer unit and a data channel;
The priority management unit is further configured to change a storage address of a data channel corresponding to the input data buffer unit according to the priority adjustment instruction.
With this design, the correspondence between the input data buffer units and the data channels can be adjusted dynamically through the priority management unit, which adjusts the priority of the input data and makes the priority scheduling of the multiple paths of input data more flexible.
In one possible design, the priority scheduling module further includes a data screening unit;
The input of the data screening unit is connected to the plurality of input data buffer units, and its output is connected to the priority scheduling unit. The data screening unit screens out, from the multiple paths of input data, data that meets a set screening condition and feeds that data directly to the processing module.
In this design, the data screening unit can pick out the most urgently needed data from the multiple paths of data before priority scheduling, which increases the flexibility of the priority scheduling of the input data.
In one possible design, the priority scheduling module further includes a data reorganization unit;
The input end of the data reorganization unit is connected with the output end of the priority scheduling unit and is used for adjusting the data format of the ordered input data.
In this design, the data reorganizing unit adjusts the data format of the sorted input data, which increases the diversity of data processing.
In one possible design, the priority scheduling module further includes an output data buffer unit;
The input of the output data buffer unit is connected to the output of the data reorganizing unit, and its output is connected to the processing module; the unit temporarily stores the sorted input data until the processing module receives it.
In this design, data that does not yet need to be output can be held in the output data buffer unit and delivered when the processing module is ready to process it, which improves the flexibility of the priority scheduling module.
In one possible design, the input data buffer unit and the output data buffer unit are first-in first-out memory units.
A first-in first-out (FIFO) memory unit has a simple structure, simple read/write logic, and a high read/write speed. Using FIFO memory units as the input data buffer units and the output data buffer unit keeps the structure of the priority scheduling module simple and the data processing fast.
In a second aspect, embodiments of the present application provide a chip including the priority scheduling module according to any of the possible designs of the first aspect, a multi-channel input module, and a processing module;
the multi-channel input module is respectively connected with a plurality of input data buffer units in the priority scheduling module and is used for outputting multi-channel input data;
the processing module is connected with the output end of the priority scheduling module and is used for acquiring and processing the input data sequenced by the priority scheduling module.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a first schematic structural diagram of a priority scheduling module according to an embodiment of the present application;
Fig. 2 is a second schematic structural diagram of a priority scheduling module according to an embodiment of the present application;
Fig. 3 illustrates the priority ordering logic of a priority scheduling unit according to an embodiment of the present application;
Fig. 4 is a third schematic structural diagram of a priority scheduling module according to an embodiment of the present application;
Fig. 5 is a schematic diagram of the correspondence between input data buffer units and data channels according to an embodiment of the present application;
Fig. 6 is a fourth schematic structural diagram of a priority scheduling module according to an embodiment of the present application;
Fig. 7 is a fifth schematic structural diagram of a priority scheduling module according to an embodiment of the present application;
Fig. 8 is a sixth schematic structural diagram of a priority scheduling module according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In embodiments of the present application, a plurality refers to two or more. The words "first", "second", and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance or order.
Fig. 1 is a schematic diagram of a priority scheduling module according to an embodiment of the present application, and as shown in fig. 1, a priority scheduling module 100 includes a plurality of input data buffering units 110 and a priority scheduling unit 120.
The plurality of input data buffer units 110 are used for receiving multiple paths of input data. The input data buffer units 110 are expandable: the present application does not limit their number, and the capacity of each input data buffer unit 110 can be set according to actual requirements. Illustratively, the priority scheduling module in fig. 1 includes four input data buffer units, 110-1 to 110-4. The input data buffer units 110 do not have external read/write address lines, and they may be first-in first-out (FIFO) memory units, which have a simple structure and high speed. The present application applies to scenarios in which multiple paths of input data are to be fed to the same processing module for processing, and the multiple paths of input data have different priorities when processed by the processing module 200.
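Purely as an illustration, the following C sketch models such a buffer unit as a small FIFO: data are written and read strictly in arrival order through push and pop operations, and the read and write pointers are internal, so no read/write address lines are exposed. The structure name, depth, and data width are assumptions made for the sketch, not details taken from the application.

```c
#include <stdbool.h>
#include <stdint.h>

#define FIFO_DEPTH 16                   /* illustrative depth */

/* One input data buffer unit modeled as a FIFO: no external address
 * lines, only push (write) and pop (read) in strict arrival order. */
typedef struct {
    uint32_t mem[FIFO_DEPTH];           /* internal storage        */
    unsigned rd;                        /* internal read pointer   */
    unsigned wr;                        /* internal write pointer  */
    unsigned count;                     /* number of valid entries */
} fifo_t;

bool fifo_push(fifo_t *f, uint32_t data)
{
    if (f->count == FIFO_DEPTH)
        return false;                   /* full: writer must wait  */
    f->mem[f->wr] = data;
    f->wr = (f->wr + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

bool fifo_pop(fifo_t *f, uint32_t *data)
{
    if (f->count == 0)
        return false;                   /* empty: nothing to read  */
    *data = f->mem[f->rd];
    f->rd = (f->rd + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}
```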
The input end of the priority scheduling unit 120 is connected to the output ends of the plurality of input data buffer units 110, and is configured to sort the multiple paths of input data according to the priority of each path of input data, and sequentially input the sorted input data to the processing module 200. The processing module 200 is configured to process the multiple input data according to the priority of the multiple input data.
With the above technical solution, a priority scheduling unit is introduced. The priority scheduling unit sorts the multiple paths of input data according to their priorities and feeds the sorted input data to the processing module in sequence, so the processing module handles the input data in priority order. Priority scheduling of the multiple paths of input data is thus performed at the data level without CPU involvement, which reduces CPU usage, greatly shortens the reaction time, and makes priority scheduling of the multiple paths of input data fast and simple.
Fig. 2 is a second schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 2, the priority scheduling unit 120 further includes a plurality of data channels with priorities. The number of data channels is generally equal to the number of input data buffer units; referring to fig. 2, for example, data channels 1 to 4 are provided in the priority scheduling unit to correspond to the four input data buffer units 110-1 to 110-4. Each data channel corresponds to a storage address, and each storage address is preset with a priority, so that when an input data buffer unit is connected to a data channel, the input data in that buffer unit has the same priority as the storage address of the channel. That is, the input data buffer unit that receives the highest-priority input data is connected to the highest-priority storage address, the buffer unit that receives the second-priority input data is connected to the second-priority storage address, and so on.
The priority scheduling unit 120 is specifically configured to obtain corresponding input data from the input data buffer unit 110 corresponding to each path of input data, and store the input data in a data channel corresponding to the priority of each path of input data. The priority scheduling unit 120 is specifically configured to input multiple paths of input data to the processing module in order of priority of the multiple data channels from high to low.
Taking fig. 2 as an example, assume that the storage addresses of data channels 1 to 4 are address 1, address 2, address 3, and address 4, respectively, with priority order address 1 > address 2 > address 3 > address 4, and that the priority of the input data in the buffer units is input data buffer unit 110-1 > 110-2 > 110-3 > 110-4. Then input data buffer unit 110-1 is connected to data channel 1, i.e., to address 1; buffer unit 110-2 is connected to data channel 2, i.e., to address 2; buffer unit 110-3 is connected to address 3; and buffer unit 110-4 is connected to address 4. In operation, the priority scheduling unit 120 obtains input data 1 from buffer unit 110-1 and stores it in data channel 1, and likewise obtains input data 2, input data 3, and input data 4 from buffer units 110-2, 110-3, and 110-4 and stores them in data channels 2, 3, and 4, respectively. The priority scheduling unit 120 can then feed the multiple paths of input data to the processing module in the order input data 1, input data 2, input data 3, input data 4, following the channel priorities from high to low, and the processing module processes the data in that order.
In the above technical solution, each data channel has a priority, and the input data in the input data buffer unit connected to a data channel has the same priority as that channel. Based on the priorities of the data channels, the multiple paths of input data are fed to the processing module in order from the highest priority to the lowest. This realizes priority scheduling of the multiple paths of data, reduces the burden on the hardware design, and allows the multiple paths of input data to be processed by priority directly, without CPU involvement.
In one possible design, each data channel of the priority scheduling unit has its own input flag bit, which indicates whether the channel has data input: when data is input into the channel, the input flag bit is set to a first value, and when no data is input, it is set to a second value, where the first value may be 1 and the second value may be 0. The priority scheduling unit is further configured to determine whether the input flag bit of the highest-priority data channel is the first value. If it is, the unit outputs the input data of that channel; otherwise, it checks the input flag bit of the next-priority data channel, and so on, until the input flag bits of all the data channels are the second value.
Taking fig. 2 as an example, assume that the priority order of the data channels is data channel 1 > data channel 2 > data channel 3 > data channel 4, that the input flag bits of data channels 1 to 4 are S1, S2, S3, and S4, respectively, and that a flag value of 1 indicates that the channel has data input. The priority ordering logic of the priority scheduling unit is shown in fig. 3 and includes the following steps (a behavioral sketch of this logic is given after the step list):
Step 301, judging whether the flag bit S1 is 1, if yes, executing step 302, otherwise, executing step 303.
Step 302, outputting data channel 1 data.
After step 302 is performed, step 301 is continued.
Step 303, judging whether the flag bit S2 is 1, if yes, executing step 304, otherwise, executing step 305.
Step 304, outputting data channel 2 data.
After step 304 is performed, step 301 is continued.
Step 305, judging whether the flag bit S3 is 1, if yes, executing step 306, otherwise, executing step 307.
Step 306, outputting data channel 3 data.
After step 306 is performed, step 301 is continued.
Step 307, judging whether the flag bit S4 is 1, if yes, executing step 308, otherwise, executing step 309.
Step 308, outputting data channel 4 data.
After step 308 is performed, step 301 is continued.
Step 309, no data is output, and step 301 is continued.
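The following C sketch mirrors one selection pass of fig. 3 for four channels, scanning the flag bits from the highest-priority channel downward and outputting from the first channel whose flag is set. It is a behavioral illustration only, and names such as chan_t and arbiter_select are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CHANNELS 4

typedef struct {
    bool     flag;    /* input flag bit: 1 (first value) = data present */
    uint32_t data;    /* word held at the channel's storage address     */
} chan_t;

/* One selection pass of the fig. 3 logic. Channels are ordered from
 * highest priority (index 0) to lowest (index NUM_CHANNELS - 1).
 * Returns true and writes *out when some channel had data; returns
 * false when every flag bit is the second value (0), i.e. no output. */
bool arbiter_select(chan_t chan[NUM_CHANNELS], uint32_t *out)
{
    for (int i = 0; i < NUM_CHANNELS; i++) {   /* steps 301/303/305/307 */
        if (chan[i].flag) {
            *out = chan[i].data;               /* steps 302/304/306/308 */
            chan[i].flag = false;              /* word has been consumed */
            return true;
        }
    }
    return false;                              /* step 309: no output    */
}
```

In use, such a pass would be repeated, as in steps 301 to 309: each pass selects the next word to forward, and when every flag is 0 nothing is output and the scan simply restarts.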
In one possible implementation, for an inactive data channel, the data flag bit may be set to a second value until the data channel is active. For example, when the number of external input data paths is small, part of the input data buffer units and the corresponding data channels are in idle states, and the flag bit of the idle data channel can be set to a second value until the data channel is enabled.
Fig. 4 is a third schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 4, the priority scheduling module further includes a priority management unit 130.
The priority management unit 130 is configured to receive a priority adjustment instruction, where the priority adjustment instruction includes a correspondence between the input data buffer units 110 and the data channels. The priority adjustment instruction may be sent by an upper computer or an upper-level control chip.
The priority management unit 130 is further configured to change the storage address of the data channel corresponding to the input data buffer unit 110 according to the priority adjustment instruction.
Referring to fig. 5, fig. 5(a) shows the original correspondence between the input data buffer units and the data channels: buffer units 110-1 to 110-4 are connected to data channels 1 to 4, respectively. The priority order of the storage addresses is storage address 1 > storage address 2 > storage address 3 > storage address 4, the corresponding channel priority is data channel 1 > data channel 2 > data channel 3 > data channel 4, and the priority of the input data obtained from the buffer units is therefore input data buffer unit 110-1 > 110-2 > 110-3 > 110-4. If the priority of the input data is to be changed to input data buffer unit 110-4 > 110-3 > 110-2 > 110-1, the correspondence can be changed so that buffer units 110-1 to 110-4 are connected to data channels 4 to 1, respectively; that is, the storage address of buffer unit 110-1 is changed to storage address 4, and the storage addresses of buffer units 110-2, 110-3, and 110-4 are changed to storage addresses 3, 2, and 1, respectively, as shown in fig. 5(b). The priority of the input data is thereby adjusted.
With this design, the correspondence between the input data buffer units and the data channels can be adjusted dynamically through the priority management unit, which adjusts the priority of the input data and makes the priority scheduling of the multiple paths of input data more flexible.
When the priority scheduling unit is configured, each path of input data may be connected to the input data buffer unit of the corresponding priority according to the priority order. Once configuration is complete, the relation between the input data and the input data buffer units is fixed, and the priority of the input data can then be adjusted dynamically by changing which data channel each input data buffer unit is connected to.
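As a purely illustrative sketch, the remapping of fig. 5 can be modeled as a small table recording which storage address (data channel) each input data buffer unit feeds; applying a priority adjustment instruction simply rewrites the table. The type and function names below are assumptions, not taken from the application.

```c
#include <stdint.h>

#define NUM_UNITS 4

/* map[i] = storage address (data channel index) that input data buffer
 * unit i feeds; address 0 is taken as the highest-priority channel. */
typedef struct {
    uint8_t map[NUM_UNITS];
} prio_map_t;

/* Original correspondence of fig. 5(a): buffer unit i -> address i. */
void prio_map_init(prio_map_t *m)
{
    for (int i = 0; i < NUM_UNITS; i++)
        m->map[i] = (uint8_t)i;
}

/* Apply a priority adjustment instruction: the instruction carries a new
 * buffer-unit-to-channel correspondence, passed here as an array. */
void prio_map_apply(prio_map_t *m, const uint8_t new_map[NUM_UNITS])
{
    for (int i = 0; i < NUM_UNITS; i++)
        m->map[i] = new_map[i];
}
```

For example, the change from fig. 5(a) to fig. 5(b) corresponds to applying the map {3, 2, 1, 0}, after which buffer unit 110-1 feeds the lowest-priority address and buffer unit 110-4 feeds the highest-priority one.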
Fig. 6 is a fourth schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 6, the priority scheduling module further includes a data screening unit 140.
The input of the data screening unit 140 is connected to the plurality of input data buffer units 110, and its output is connected to the priority scheduling unit 120. The data screening unit screens out, from the multiple paths of input data, data that meets a set screening condition and feeds that data directly to the processing module 200, which increases the flexibility of the priority scheduling of the input data.
Specifically, the screening condition may be set by an upper computer or an upper-level control chip; the priority management unit 130 receives a screening-condition setting instruction from the upper computer or upper-level control chip. The screening condition may be, for example, screening out data of a specific type or with a specific header, such as data with a 10374 header. The data screening unit 140 stores set data for screening (data that satisfies the set screening condition) at a fixed address and compares the input data with the set data. If they match, the input data satisfies the set screening condition and is fed directly to the processing module 200; if they do not match, the input data is processed by the priority scheduling unit 120. In another embodiment, input data satisfying the set screening condition may be temporarily stored and input to the processing module 200 at the end.
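A minimal C sketch of this comparison is shown below, assuming the set data is a single header word held at a fixed location and that a word either bypasses the scheduler or is queued for it; the header value, function names, and word width are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* The "set data" used for screening, held at a fixed location in the
 * unit; the value is purely illustrative. */
static const uint32_t g_screen_header = 0x00010374u;

/* Hypothetical sinks standing in for the processing module and the
 * priority scheduling unit. */
static void send_to_processing_module(uint32_t w)  { printf("direct: %08x\n", (unsigned)w); }
static void send_to_priority_scheduler(uint32_t w) { printf("queued: %08x\n", (unsigned)w); }

/* Compare an input word with the set data; on a match, bypass the
 * scheduler and hand the word straight to the processing module. */
void screen_and_route(uint32_t word)
{
    if (word == g_screen_header)
        send_to_processing_module(word);
    else
        send_to_priority_scheduler(word);
}
```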
Fig. 7 is a fifth schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 7, the priority scheduling module further includes a data reorganizing unit 150.
The input of the data reorganizing unit 150 is connected to the output of the priority scheduling unit 120 and adjusts the data format of the sorted input data. For example, the data reorganizing unit 150 may add packet header data, add parity bits, adjust data bits, convert the data length, or adjust flag bits in the data output by the priority scheduling unit, which increases the diversity of data processing.
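As an illustration of such format adjustments, the sketch below prepends a packet header and appends an even-parity bit to an 8-bit payload; the header value and field layout are assumptions made only for this example.

```c
#include <stdint.h>

#define PKT_HEADER 0xA5u    /* illustrative header value */

/* Even parity over the 8 payload bits. */
static uint8_t even_parity(uint8_t byte)
{
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (byte >> i) & 1u;
    return p;
}

/* Reorganize one byte of scheduled output into a word laid out as
 * { parity bit [16], packet header [15:8], payload [7:0] }. */
uint32_t reorganize(uint8_t payload)
{
    uint32_t word = ((uint32_t)PKT_HEADER << 8) | payload;
    word |= (uint32_t)even_parity(payload) << 16;
    return word;
}
```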
Fig. 8 is a sixth schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 8, the priority scheduling module further includes an output data buffer unit 160.
The input of the output data buffer unit 160 is connected to the output of the data reorganizing unit 150, and its output is connected to the processing module 200. The unit temporarily stores the sorted input data until the processing module 200 receives it, which increases the flexibility of the priority scheduling module. The output data buffer unit may be a first-in first-out (FIFO) memory unit.
Based on the same technical concept, fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present application. As shown in fig. 9, the chip 900 includes a priority scheduling module 100, a multi-channel input module 300, and a processing module 200.
The multi-channel input module 300 is connected to the plurality of input data buffer units 110 in the priority scheduling module 100 and outputs the multiple paths of input data.
The processing module 200 is connected to the output end of the priority scheduling module 100, and is configured to obtain and process the input data ordered by the priority scheduling module 100.
In one application scenario, the chip may be an RFID chip. The RFID chip receives a dual-antenna signal, demodulates it internally, and outputs the result. The dual-antenna signals are a 10374 signal and an 18000C signal, which have different priorities. When the two signals enter the RFID chip, the priority scheduling module sorts the input signals by priority and sends the sorted signals to the encoding and decoding module for modulation and demodulation before they are finally output.
The chip or the priority scheduling module can also be applied to a real-time speech translation module, in particular to real-time speech translation of a multi-speaker dialogue. In such a dialogue, after audio input, the audio is separated into multiple paths according to waveband, voiceprint, and so on, each path corresponding to one speaker's speech, and the paths must be sent to the translation module for translation according to priority. Translation speed is an important metric for translation software; if the priorities of the multiple audio paths were handled by the CPU, high demands would be placed on the CPU's timing and processing speed. The chip or priority scheduling module provided by the application can schedule the priorities of the multiple audio paths without CPU involvement, which saves cost, reduces the demands on the CPU and the clock, increases the translation rate, reduces data latency, and ultimately improves the speed of real-time speech translation. In one possible implementation, the priority scheduling module may also schedule the priorities of the multiple audio paths jointly with the CPU.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. The priority scheduling module is characterized by comprising a plurality of input data caching units and a priority scheduling unit;
the input data buffer units are used for receiving multiple paths of input data, and do not have external read-write address lines, and the multiple paths of input data have different priorities when processed by the processing module;
The input end of the priority scheduling unit is connected with the output ends of the plurality of input data caching units, and is used for sequencing the plurality of paths of input data according to the priority of each path of input data, and inputting the sequenced input data to the processing module in sequence, and the processing module is used for processing the plurality of paths of input data according to the priority of the plurality of paths of input data.
2. The priority scheduling module of claim 1 wherein the priority scheduling unit further comprises a plurality of data lanes having priority, each data lane corresponding to a memory address;
the priority scheduling unit is specifically configured to obtain corresponding input data from the input data buffer unit corresponding to each path of input data, and store the input data in a data channel corresponding to the priority of each path of input data;
The priority scheduling unit is specifically configured to input the multiple paths of input data to the processing module according to a sequence from high priority to low priority of the multiple data channels.
3. The priority scheduling module of claim 2 wherein each data lane of the priority scheduling unit has a respective input flag bit for indicating that the data lane has data input;
The priority scheduling unit is further configured to determine whether the input flag bit of the data channel with the highest priority is a first value; if so, output the input data of the data channel with the highest priority; otherwise, determine whether the input flag bit of the data channel with the next priority is valid, until the input flag bits of the data channels are all a second value, wherein a flag bit that is the first value indicates that the corresponding data channel has data input, and a flag bit that is the second value indicates that the corresponding data channel has no data input.
4. The priority scheduling module of claim 2, wherein the priority scheduling module further comprises a priority management unit;
The priority management unit is configured to receive a priority adjustment instruction, and the priority adjustment instruction comprises a correspondence between an input data caching unit and a data channel;
The priority management unit is further configured to change a storage address of a data channel corresponding to the input data buffer unit according to the priority adjustment instruction.
5. The priority scheduling module of any one of claims 1 to 4 wherein the priority scheduling module further comprises a data screening unit;
The input end of the data screening unit is connected with the plurality of input data caching units, the output end of the data screening unit is connected with the priority scheduling unit, and the data screening unit is used for screening out data meeting set screening conditions from the multipath input data and directly inputting the data to the processing module.
6. The priority scheduling module of any one of claims 1 to 4 wherein the priority scheduling module further comprises a data reorganizing unit;
The input end of the data reorganization unit is connected with the output end of the priority scheduling unit and is used for adjusting the data format of the ordered input data.
7. The priority scheduling module of claim 6 further comprising an output data buffer unit;
The input end of the output data caching unit is connected with the output end of the data reorganizing unit, and the output end of the output data caching unit is connected with the processing module and used for temporarily storing the ordered input data to wait for the processing module to receive the input data.
8. The priority scheduling module as recited in claim 7 wherein the input data buffer unit and the output data buffer unit are first-in-first-out memory units.
9. A chip, comprising the priority scheduling module according to any one of claims 1 to 8, a multi-channel input module, and a processing module;
the multi-channel input module is respectively connected with a plurality of input data buffer units in the priority scheduling module and is used for outputting multi-channel input data;
the processing module is connected with the output end of the priority scheduling module and is used for acquiring and processing the input data sequenced by the priority scheduling module.
CN202411827159.0A (filed 2024-12-12, priority date 2024-12-12): A priority scheduling module and chip. Status: pending. Published as CN119806769A (en).

Priority Applications (1)

Application number CN202411827159.0A: A priority scheduling module and chip (published as CN119806769A).


Publications (1)

Publication number CN119806769A, published 2025-04-11.

Family

ID=95270858

Family Applications (1)

CN202411827159.0A (pending), published as CN119806769A (en): A priority scheduling module and chip.

Country Status (1)

CN: CN119806769A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination