Disclosure of Invention
The present application provides a priority scheduling module and a chip, which are used to realize priority scheduling without CPU participation.
In a first aspect, the present application provides a priority scheduling module, including a plurality of input data buffering units and a priority scheduling unit;
the input data buffer units are configured to receive multiple paths of input data and have no external read-write address lines, and the multiple paths of input data have different priorities when processed by a processing module;
the input end of the priority scheduling unit is connected to the output ends of the plurality of input data buffer units, and the priority scheduling unit is configured to sort the multiple paths of input data according to the priority of each path of input data and input the sorted input data to the processing module in sequence; the processing module is configured to process the multiple paths of input data according to their priorities.
In the above technical solution, a priority scheduling unit is introduced that sorts the multiple paths of input data according to priority and inputs the sorted data to the processing module in sequence, so that the processing module can process the input data in priority order. Priority scheduling of the multiple paths of input data is thus realized at the data layer without CPU involvement, which reduces CPU usage while greatly shortening response time, achieving fast and simple priority scheduling of the multiple paths of input data.
In one possible design, the priority scheduling unit further includes a plurality of data channels having priorities, each data channel corresponding to a storage address;
the priority scheduling unit is specifically configured to obtain the corresponding input data from the input data buffer unit corresponding to each path of input data and store it in the data channel corresponding to the priority of that path of input data;
the priority scheduling unit is specifically configured to input the multiple paths of input data to the processing module in descending order of the priorities of the plurality of data channels.
In the above technical solution, each data channel has a priority, and the input data in the input data buffer unit connected to a data channel has the same priority as that channel. Based on the priorities of the data channels, the multiple paths of input data are input to the processing module in descending priority order, so that priority scheduling of the multiple paths of data is realized, the burden on the hardware design can be reduced, and priority processing of the multiple paths of input data can be performed directly without CPU involvement.
In one possible design, each data channel of the priority scheduling unit has a respective input flag bit, where the input flag bit is used to indicate whether the data channel has data input;
the priority scheduling unit is further configured to determine whether the input flag bit of the data channel with the highest priority is a first value; if so, output the input data of that data channel; if not, determine whether the input flag bit of the data channel with the next-highest priority is the first value, and so on, until the input flag bits of all the data channels are a second value. A flag bit equal to the first value indicates that data has been input to the corresponding data channel, and a flag bit equal to the second value indicates that no data has been input to the corresponding data channel.
In the above technical solution, setting a flag bit for each data channel makes it clear whether data has been input to that channel, so that the priority scheduling unit can order the priorities of the multiple paths of input data more accurately.
In one possible design, the priority scheduling module further includes a priority management unit;
the priority management unit is configured to receive a priority adjustment instruction, where the priority adjustment instruction comprises a correspondence between an input data buffer unit and a data channel;
the priority management unit is further configured to change the storage address of the data channel corresponding to the input data buffer unit according to the priority adjustment instruction.
In the above technical solution, the correspondence between the input data buffer units and the data channels can be dynamically adjusted through the priority management unit, so that the priority of the input data is adjusted and priority scheduling of the multiple paths of input data is realized more flexibly.
In one possible design, the priority scheduling module further includes a data screening unit;
the input end of the data screening unit is connected to the plurality of input data buffer units, and the output end of the data screening unit is connected to the priority scheduling unit; the data screening unit is configured to screen out, from the multiple paths of input data, data meeting a set screening condition and input that data directly to the processing module.
In the above technical solution, the data screening unit can screen out the most urgently needed data from the multiple paths of data before priority scheduling, increasing the flexibility of the priority scheduling of the input data.
In one possible design, the priority scheduling module further includes a data reorganization unit;
the input end of the data reorganization unit is connected to the output end of the priority scheduling unit, and the data reorganization unit is configured to adjust the data format of the sorted input data.
In the above technical solution, adjusting the data format of the sorted input data through the data reorganization unit can increase the diversity of data processing.
In one possible design, the priority scheduling module further includes an output data buffer unit;
the input end of the output data buffer unit is connected to the output end of the data reorganization unit, and the output end of the output data buffer unit is connected to the processing module; the output data buffer unit is configured to temporarily store the sorted input data until the processing module receives it.
In the above technical solution, data that does not yet need to be output can be temporarily stored in the output data buffer unit and received when the processing module is ready to process it, improving the flexibility of the priority scheduling module.
In one possible design, the input data buffer unit and the output data buffer unit are first-in first-out memory units.
In the above technical solution, a first-in first-out storage unit has a simple structure, simple data read-write logic, and a high read-write speed; using first-in first-out storage units for the input and output data buffer units therefore makes the structure of the priority scheduling module simpler and the data processing faster.
In a second aspect, embodiments of the present application provide a chip, including the priority scheduling module as in any one of the possible designs of the first aspect, a multi-channel input module, and a processing module;
the multi-channel input module is connected to the plurality of input data buffer units in the priority scheduling module, respectively, and is configured to output the multiple paths of input data;
the processing module is connected to the output end of the priority scheduling module and is configured to obtain and process the input data sorted by the priority scheduling module.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort fall within the scope of the present application.
In embodiments of the present application, "a plurality" refers to two or more. The words "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance or order.
Fig. 1 is a schematic diagram of a priority scheduling module according to an embodiment of the present application, and as shown in fig. 1, a priority scheduling module 100 includes a plurality of input data buffering units 110 and a priority scheduling unit 120.
The plurality of input data buffer units 110 are configured to receive multiple paths of input data. The input data buffer units 110 are expandable; the present application does not limit their number, which can be set according to actual requirements. Illustratively, the priority scheduling module in fig. 1 includes four input data buffer units, 110-1 to 110-4. The input data buffer units 110 have no external read-write address lines and may be first-in first-out storage units, which have the characteristics of simple structure and high speed. The present application applies to scenarios in which multiple paths of input data are to be input to the same processing module for processing, and the multiple paths of input data have different priorities when processed by the processing module 200.
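As an illustration of the first-in first-out behavior of such a buffer unit, the following minimal software sketch models it as a sequential-access queue (the class name `FifoBuffer` and its methods are hypothetical, chosen only for this example; the actual unit is a hardware FIFO):

```python
from collections import deque

class FifoBuffer:
    """Illustrative software model of a first-in first-out input data buffer unit."""

    def __init__(self):
        self._queue = deque()

    def write(self, word):
        # New data is appended at the tail; no external read-write address
        # lines are modeled, matching a FIFO's sequential-only access.
        self._queue.append(word)

    def read(self):
        # Data leaves the buffer in the order it arrived.
        return self._queue.popleft()

    def empty(self):
        return not self._queue
```

For example, after `write(1)` and `write(2)`, the first `read()` returns 1, reflecting the simple read-write logic that motivates using FIFOs here.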
The input end of the priority scheduling unit 120 is connected to the output ends of the plurality of input data buffer units 110, and the priority scheduling unit 120 is configured to sort the multiple paths of input data according to the priority of each path of input data and input the sorted input data to the processing module 200 in sequence. The processing module 200 is configured to process the multiple paths of input data according to their priorities.
In the above technical solution, a priority scheduling unit is introduced that sorts the multiple paths of input data according to priority and inputs the sorted data to the processing module in sequence, so that the processing module can process the input data in priority order. Priority scheduling of the multiple paths of input data is thus performed at the data layer without CPU involvement, which reduces CPU usage while greatly shortening response time, achieving fast and simple priority scheduling of the multiple paths of input data.
Fig. 2 is a second schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 2, the priority scheduling unit 120 further includes a plurality of data channels with priorities; the number of data channels is generally consistent with the number of input data buffer units. Referring to fig. 2, for example, data channels 1 to 4 are set in the priority scheduling unit, corresponding to the four input data buffer units 110-1 to 110-4. Each data channel corresponds to a storage address, and each storage address is preset with a corresponding priority, so that when an input data buffer unit is connected to a data channel, the input data in that buffer unit has the same priority as the storage address of the data channel. That is, the input data buffer unit that receives the input data with the highest priority is connected to the storage address with the highest priority, the input data buffer unit that receives the input data with the second-highest priority is connected to the storage address with the second-highest priority, and so on.
The priority scheduling unit 120 is specifically configured to obtain the corresponding input data from the input data buffer unit 110 corresponding to each path of input data and store it in the data channel corresponding to the priority of that path. The priority scheduling unit 120 is also specifically configured to input the multiple paths of input data to the processing module in descending order of the priorities of the data channels.
Taking fig. 2 as an example, assume that the storage addresses of the data channels 1 to 4 are addresses 1 to 4, respectively, with the priority order address 1 > address 2 > address 3 > address 4, and that the priority of the input data in the input data buffer units is input data buffer unit 110-1 > 110-2 > 110-3 > 110-4. Then the input data buffer unit 110-1 is connected to data channel 1, i.e. to address 1; the input data buffer unit 110-2 is connected to data channel 2, i.e. to address 2; the input data buffer unit 110-3 is connected to address 3; and the input data buffer unit 110-4 is connected to address 4. In operation, the priority scheduling unit 120 obtains the corresponding input data 1 from the input data buffer unit 110-1 and stores it in data channel 1, and likewise obtains input data 2, 3, and 4 from the input data buffer units 110-2, 110-3, and 110-4 and stores them in data channels 2, 3, and 4, respectively. In this way, the priority scheduling unit 120 can input the multiple paths of input data to the processing module in the order input data 1, input data 2, input data 3, input data 4, following the descending priority order of the data channels, and the processing module processes the data in that same order.
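The ordering described in this walkthrough can be sketched in software as follows (an illustrative model only; the function name `schedule_by_priority` and the dictionary representation of data channels are assumptions made for this example):

```python
def schedule_by_priority(channels):
    """Drain data channels in descending priority order.

    `channels` maps a priority rank (1 = highest) to the list of input
    data currently held in the channel of that rank; the return value is
    the order in which the data would reach the processing module.
    """
    ordered = []
    for rank in sorted(channels):    # rank 1 first, i.e. highest priority
        ordered.extend(channels[rank])
    return ordered

# Mirrors the walkthrough above: input data 1..4 stored in channels 1..4.
channels = {1: ["input data 1"], 2: ["input data 2"],
            3: ["input data 3"], 4: ["input data 4"]}
```

With this channel assignment, `schedule_by_priority(channels)` yields the data in the order input data 1, 2, 3, 4, matching the order in which the processing module performs data processing.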
In the above technical solution, each data channel has a priority, and the input data in the input data buffer unit connected to a data channel has the same priority as that channel. Based on the priorities of the data channels, the multiple paths of input data are input to the processing module in descending priority order, so that priority scheduling of the multiple paths of data is realized, the burden on the hardware design can be reduced, and priority processing of the multiple paths of input data can be performed directly without CPU involvement.
In one possible design, each data channel of the priority scheduling unit has a respective input flag bit, where the input flag bit is used to indicate whether the data channel has data input: when there is data input in the data channel, the input flag bit is set to a first value, and when there is no data input, the input flag bit is set to a second value; the first value may be 1 and the second value may be 0. The priority scheduling unit is further configured to determine whether the input flag bit of the data channel with the highest priority is the first value; if so, it outputs the input data of that data channel; if not, it determines whether the input flag bit of the data channel with the next-highest priority is the first value, and so on, until the input flag bits of all the data channels are the second value.
Taking fig. 2 as an example, assume that the priority order of the data channels is data channel 1 > data channel 2 > data channel 3 > data channel 4, that the input flag bits of the data channels 1 to 4 are S1, S2, S3, and S4, respectively, and that a flag bit value of 1 indicates that there is data input in the corresponding data channel. The priority sorting logic of the priority scheduling unit is shown in fig. 3 and specifically includes the following steps:
Step 301: determine whether the flag bit S1 is 1; if yes, execute step 302; otherwise, execute step 303.
Step 302: output the data of data channel 1.
After step 302 is performed, step 301 is executed again.
Step 303: determine whether the flag bit S2 is 1; if yes, execute step 304; otherwise, execute step 305.
Step 304: output the data of data channel 2.
After step 304 is performed, step 301 is executed again.
Step 305: determine whether the flag bit S3 is 1; if yes, execute step 306; otherwise, execute step 307.
Step 306: output the data of data channel 3.
After step 306 is performed, step 301 is executed again.
Step 307: determine whether the flag bit S4 is 1; if yes, execute step 308; otherwise, execute step 309.
Step 308: output the data of data channel 4.
After step 308 is performed, step 301 is executed again.
Step 309: output no data, and execute step 301 again.
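The flag-bit scan of steps 301 to 309 amounts to selecting the highest-priority channel whose flag is 1; a minimal software sketch of one pass of that scan (the name `select_channel` is hypothetical, used only for this illustration) is:

```python
def select_channel(flags):
    """One pass of the flag-bit scan described in steps 301 to 309.

    `flags` lists the input flag bits S1, S2, ... of the data channels in
    priority order (1 = data present, 0 = no data).  Returns the number of
    the highest-priority channel with data, or None when every flag is 0,
    corresponding to step 309, where no data is output.
    """
    for channel, flag in enumerate(flags, start=1):
        if flag == 1:
            return channel
    return None
```

For example, with flags S1..S4 of 0, 1, 1, 0, the scan falls through step 301 to step 303 and outputs the data of data channel 2; the hardware repeats the pass from step 301 after each output.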
In one possible implementation, for an inactive data channel, the input flag bit may be kept at the second value until the data channel is enabled. For example, when the number of external input data paths is small, some input data buffer units and their corresponding data channels are idle, and the flag bit of an idle data channel can be held at the second value until that channel is enabled.
Fig. 4 is a third schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 4, the priority scheduling module further includes a priority management unit 130.
The priority management unit 130 is configured to receive a priority adjustment instruction, where the priority adjustment instruction includes a correspondence between the input data buffer units 110 and the data channels. The priority adjustment instruction may be sent by an upper computer or an upper-level control chip.
The priority management unit 130 is further configured to change the storage address of the data channel corresponding to an input data buffer unit 110 according to the priority adjustment instruction.
Referring to fig. 5, fig. 5 (a) shows the original correspondence between the input data buffer units and the data channels: the input data buffer units 110-1 to 110-4 are connected to the data channels 1 to 4, respectively. That is, the priority order of the storage addresses is storage address 1 > storage address 2 > storage address 3 > storage address 4, the priority of the corresponding data channels is data channel 1 > data channel 2 > data channel 3 > data channel 4, and the priority of the input data obtained from the buffer units is input data buffer unit 110-1 > 110-2 > 110-3 > 110-4. If the priority of the input data is to be changed to input data buffer unit 110-4 > 110-3 > 110-2 > 110-1, the correspondence can be changed so that the input data buffer units 110-1 to 110-4 are connected to the data channels 4 to 1, respectively; that is, the storage address of the input data buffer unit 110-1 is changed to storage address 4, and the storage addresses of the input data buffer units 110-2, 110-3, and 110-4 are changed to storage addresses 3, 2, and 1, respectively, as shown in fig. 5 (b). The priority of the input data is thereby adjusted.
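A software sketch of this correspondence change might look as follows (the function name, the unit labels, and the dictionary representation are assumptions for illustration; in hardware the change is a remapping of storage addresses, not a data structure update):

```python
def apply_priority_adjustment(mapping, instruction):
    """Return a new buffer-unit-to-data-channel correspondence.

    Both arguments map input data buffer unit names to data channel
    numbers (channel 1 having the highest priority).  Only the channel
    assignments change; the buffered data itself is untouched.
    """
    updated = dict(mapping)          # leave the original correspondence intact
    updated.update(instruction)
    return updated

# Original correspondence of fig. 5 (a): units 110-1..110-4 feed channels 1..4.
original = {"110-1": 1, "110-2": 2, "110-3": 3, "110-4": 4}
# Adjustment to fig. 5 (b): the priority order of the units is reversed.
reversed_priority = apply_priority_adjustment(
    original, {"110-1": 4, "110-2": 3, "110-3": 2, "110-4": 1})
```

After the adjustment, input data buffer unit 110-4 feeds the highest-priority channel, so its data is processed first, without any change to the data already buffered.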
According to the technical scheme, the corresponding relation between the input data caching unit and the data channel can be dynamically adjusted through the priority management unit, so that the priority of the input data is adjusted, and the priority scheduling of multiple paths of input data is realized more flexibly.
When the priority scheduling unit is configured, each path of input data may be connected to the input data buffer unit of the corresponding priority according to the priority order. Once configuration is complete, the relation between the input data and the input data buffer units is fixed, and the priority of the input data can then be adjusted dynamically by changing which data channel each input data buffer unit is connected to.
Fig. 6 is a fourth schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 6, the priority scheduling module further includes a data screening unit 140.
The input end of the data screening unit 140 is connected to the plurality of input data buffer units 110, and the output end of the data screening unit 140 is connected to the priority scheduling unit 120; the data screening unit screens out, from the multiple paths of input data, data meeting a set screening condition and inputs that data directly to the processing module 200, increasing the flexibility of the priority scheduling of the input data.
Specifically, the screening condition may be set by an upper computer or an upper-level control chip, and the priority management unit 130 receives the screening condition setting instruction from the upper computer or upper-level control chip. The screening condition may be, for example, filtering out data of a specific data type or with a specific header, such as data with a 10374 header. The data screening unit 140 stores, at a fixed address, a piece of set data for screening (data that satisfies the set screening condition) and compares the input data with the set data. If they are consistent, the input data satisfies the set screening condition and is input directly to the processing module 200; if they are inconsistent, the input data is processed by the priority scheduling unit 120. In another embodiment, input data satisfying the set screening condition may be temporarily stored and finally input to the processing module 200.
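As an illustration, the header-comparison screening described above can be modeled in software as follows (the function name `screen` and the string representation of data packets are assumptions for this sketch; the hardware compares against set data stored at a fixed address):

```python
def screen(packets, set_data):
    """Split input data into screened and remaining packets.

    Packets whose leading characters match the stored comparison data
    bypass priority scheduling and go straight to the processing module;
    the rest continue through the priority scheduling unit.
    """
    matched, rest = [], []
    for packet in packets:
        (matched if packet.startswith(set_data) else rest).append(packet)
    return matched, rest
```

For instance, screening with set data "10374" routes packets with a 10374 header directly to the processing module while the remaining packets are sorted by priority as usual.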
Fig. 7 is a fifth schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 7, the priority scheduling module further includes a data reorganization unit 150.
The input end of the data reorganization unit 150 is connected to the output end of the priority scheduling unit 120, and the data reorganization unit is configured to adjust the data format of the sorted input data. For example, the data reorganization unit 150 may perform processes such as packet header addition, parity bit addition, data bit adjustment, data length conversion, and flag bit adjustment on the data output from the priority scheduling unit, increasing the diversity of data processing.
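As a simple illustration of two of these reorganization steps, packet header addition and parity bit addition, consider the following sketch (the function name and the bit-string representation are assumptions for this example; the hardware unit operates on bit vectors rather than strings):

```python
def reorganize(payload_bits: str, header_bits: str) -> str:
    """Prepend a packet header and append an even-parity bit to a payload.

    The appended bit makes the total number of 1s in the payload plus
    parity bit even, one possible format adjustment applied to the data
    output from the priority scheduling unit.
    """
    parity = str(payload_bits.count("1") % 2)   # even parity over the payload
    return header_bits + payload_bits + parity
```

For example, `reorganize("1011", "01")` yields `"0110111"`: header `"01"`, payload `"1011"`, and parity bit `"1"` because the payload contains an odd number of 1s.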
Fig. 8 is a sixth schematic structural diagram of a priority scheduling module according to an embodiment of the present application. As shown in fig. 8, the priority scheduling module further includes an output data buffer unit 160.
The input end of the output data buffer unit 160 is connected to the output end of the data reorganization unit 150, and the output end of the output data buffer unit 160 is connected to the processing module 200; the output data buffer unit temporarily stores the sorted input data until the processing module 200 receives it, increasing the flexibility of the priority scheduling module. The output data buffer unit may be a first-in first-out storage unit.
Based on the same technical concept, fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present application. As shown in fig. 9, the chip 900 includes a priority scheduling module 100, a multi-channel input module 300, and a processing module 200.
The multi-channel input module 300 is connected to the plurality of input data buffer units 110 in the priority scheduling module 100, respectively, and is configured to output the multiple paths of input data.
The processing module 200 is connected to the output end of the priority scheduling module 100 and is configured to obtain and process the input data sorted by the priority scheduling module 100.
In one application scenario, the chip may be an RFID chip that receives dual-antenna signals and outputs them after internal demodulation. The dual-antenna signals are, for example, a 10374 signal and an 18000C signal, which have different priorities. When the two signals are input into the RFID chip, the priority scheduling module sorts the input signals by priority, sends the sorted input signals to the encoding and decoding module for modulation and demodulation, and finally outputs them.
The chip or the priority scheduling module can also be applied to a real-time speech translation module, particularly in real-time speech translation of a multi-user dialogue. In that process, after audio input, multiple paths of audio data are separated according to the wave band, voiceprint, and other features of the speech; each path of audio data corresponds to the speech of one user, and the multiple paths of audio data must be sent to a translation module for translation according to priority. Translation speed is an important index for translation software, and if the priority of the multiple paths of audio data were handled by the CPU, high demands would be placed on the CPU's timing and processing speed. The chip or priority scheduling module provided by the present application can realize priority scheduling of multiple paths of audio without CPU involvement, saving cost, reducing the demands on the CPU and the clock, improving the translation rate, and reducing data delay, finally increasing the speed of real-time speech translation. In one possible implementation, the priority scheduling module may also schedule the priorities of the multiple paths of audio in conjunction with the CPU.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.