
CN118426967A - Data processing method, device, electronic equipment and storage medium - Google Patents

Data processing method, device, electronic equipment and storage medium

Info

Publication number: CN118426967A
Application number: CN202410665681.7A
Authority: CN (China)
Prior art keywords: description, target, queue, description information, data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 夏天笑, 赵志彪
Current Assignee: Kunlun Core Beijing Technology Co ltd (the listed assignees may be inaccurate)
Original Assignee: Kunlun Core Beijing Technology Co ltd
Application filed by Kunlun Core Beijing Technology Co ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The disclosure provides a data processing method and relates to the field of artificial intelligence, in particular to the technical fields of chips, artificial intelligence accelerator cards, heterogeneous communication, and the like. The specific implementation scheme is as follows: writing description information corresponding to target data into a description queue according to a write pointer of the description queue, wherein the description queue is stored in a description queue cache area, and a storage space of the target data is located in at least one of the description queue cache area and a target storage area; and transmitting an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue. The disclosure also provides a data processing apparatus, an electronic device, and a storage medium.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to the technical fields of chips, artificial intelligence accelerator cards, heterogeneous communications, and the like. More particularly, the present disclosure provides a data processing method, apparatus, electronic device, and storage medium.
Background
With the development of artificial intelligence technology, a Central Processing Unit (CPU) may perform artificial intelligence tasks together with an artificial intelligence computing unit.
Disclosure of Invention
The present disclosure provides a data processing method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a data processing method, the method including: writing description information corresponding to target data into a description queue according to a write pointer of the description queue, wherein the description queue is stored in a description queue cache area, and a storage space of the target data is located in at least one of the description queue cache area and a target storage area; and transmitting an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
According to another aspect of the present disclosure, there is provided a data processing method, the method comprising: reading description information from the description queue according to a read pointer of the description queue, wherein the description queue is stored in a description queue cache area; and processing target data corresponding to the description information based on the target storage area or the target cache area according to the description information, wherein the target cache area comprises a description queue cache area.
According to another aspect of the present disclosure, there is provided a data processing apparatus comprising: a first storage unit configured to store target data to be transmitted; and a first computing unit configured to: write description information corresponding to the target data into a description queue according to a write pointer of the description queue, wherein the description queue is stored in a description queue cache area, and the storage space to which the target data is to be written is located in at least one of the description queue cache area and a target storage area; and transmit an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
According to another aspect of the present disclosure, there is provided a data processing apparatus comprising: a second storage unit comprising a description queue cache area and a target storage area; and a second computing unit configured to: write description information corresponding to the target data into the description queue according to a write pointer of the description queue, wherein the description queue is stored in the description queue cache area, and the storage space to which the target data is to be written is located in at least one of the description queue cache area and the target storage area; and transmit an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
According to another aspect of the present disclosure, there is provided a data processing apparatus comprising: a first storage unit configured to store target data to be transmitted; a first computing unit configured to: reading description information from the description queue according to a read pointer of the description queue, wherein the description queue is stored in a description queue cache area; and processing target data corresponding to the description information based on the target storage area or the target cache area according to the description information, wherein the target cache area comprises a description queue cache area.
According to another aspect of the present disclosure, there is provided a data processing apparatus comprising: the second storage unit comprises a description queue cache area and a target storage area; a second calculation unit configured to: reading description information from the description queue according to a read pointer of the description queue, wherein the description queue is stored in a description queue cache area; and processing target data corresponding to the description information based on the target storage area or the target cache area according to the description information, wherein the target cache area comprises a description queue cache area.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method provided in accordance with the present disclosure.
According to another aspect of the present disclosure, there is provided an electronic device including a first data processing apparatus and a second data processing apparatus, the second data processing apparatus being connected with the first data processing apparatus via an interface.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method provided according to the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method provided according to the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an exemplary system architecture to which a data processing method may be applied, according to one embodiment of the present disclosure;
FIG. 2 is a flow chart of a data processing method according to one embodiment of the present disclosure;
FIG. 3A is a schematic diagram of a logical layout of a description queue according to one embodiment of the present disclosure;
FIG. 3B is a schematic diagram of a description queue cache area according to one embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram of a data processing method according to one embodiment of the present disclosure;
FIG. 5 is a flow chart of a data processing method according to another embodiment of the present disclosure;
FIG. 6 is a schematic flow chart diagram of a data processing method according to one embodiment of the present disclosure;
FIG. 7A is a schematic diagram of a system architecture according to one embodiment of the present disclosure;
FIG. 7B is a schematic diagram of a data processing method according to one embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a software stack according to one embodiment of the present disclosure;
FIG. 9 is a block diagram of a data processing apparatus according to one embodiment of the present disclosure;
FIG. 10 is a block diagram of a data processing apparatus according to another embodiment of the present disclosure;
FIG. 11 is a block diagram of a data processing apparatus according to another embodiment of the present disclosure;
FIG. 12 is a block diagram of a data processing apparatus according to another embodiment of the present disclosure;
FIG. 13 is a schematic block diagram of an electronic device according to one embodiment of the disclosure; and
Fig. 14 is a block diagram of an electronic device to which a data processing method may be applied according to one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The artificial intelligence computing unit may be deployed on an artificial intelligence accelerator board card. The artificial intelligence accelerator card may be used as a device side (device). The board card may be connected to a host side (host) via, for example, a Peripheral Component Interconnect Express (PCIe) interface. The central processing unit on the host side can control the artificial intelligence accelerator board card to execute tasks. If the load on the host-side central processing unit is heavy, operations such as interrupt processing, resource management, and task scheduling for the board card incur long delays, which reduces task execution efficiency. The artificial intelligence computing unit may be any of various computing units, such as a Graphics Processing Unit (GPU), a Neural Network Processing Unit (NPU), or a Kunlun core.
Further, tasks related to an artificial intelligence model may include tasks to be performed by the central processing unit. The host-side central processing unit can acquire input data from, or provide result data to, the artificial intelligence computing unit through the PCIe interface. If the input data or result data of a task is large, the data transfer between the central processing unit and the artificial intelligence computing unit after the task completes takes a long time, which reduces task execution efficiency.
In some embodiments, to improve task execution efficiency, a central processing unit may be disposed on the artificial intelligence accelerator board. The host side can read and write resources (such as data) on the board through the PCIe protocol. However, there is no efficient two-way communication between the host and the CPU on the board. Communication modes that are not based on an operating system, such as interrupts and registers, can be adopted between the host side and the device side. Such modes transmit only small amounts of data, are not flexible, are difficult to automate, and can hardly support applications running on an operating system, which limits the application scenarios of the on-board central processing unit. Such an application may use an upper-layer call protocol similar to Remote Procedure Call (RPC).
In addition, the host side and the device side use two different address spaces, resulting in difficulty in improving the efficiency of bidirectional communication. In order to debug the artificial intelligent accelerator board card, an independent serial port can be led out, but an additional hardware connection line is needed, which is not beneficial to remote debugging or deployment.
Thus, in order to improve the communication efficiency between the host side and the device side, the present disclosure provides a data processing method, which will be described below.
FIG. 1 is a schematic diagram of an exemplary system architecture to which data processing methods and apparatus may be applied, according to one embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a system architecture 10 according to this embodiment may include a host side 110 and a device side 120. Data and signals may be transferred between host side 110 and device side 120 via interface 130.
The host side 110 may include a first computing unit 111. The first computing unit 111 may be a central processing unit. The device side 120 may include a second computing unit 122 and a third computing unit 123. The second computing unit 122 may be a central processing unit. The third computing unit 123 may be an artificial intelligence computing unit. It is appreciated that the second computing unit may be a lightweight central processing unit, such as a central processing unit of a Reduced Instruction Set Computer (RISC) architecture.
In the embodiment of the disclosure, the second computing unit and the third computing unit are connected through an internal bus, and the first computing unit is connected to the second computing unit and to the third computing unit through a high-speed serial computer expansion bus. As shown in fig. 1, the second computing unit and the third computing unit may be connected by an internal bus. Interface 130 may be a peripheral component interconnect express interface. The first computing unit 111 and the second computing unit 122 may be connected via a high-speed serial computer expansion bus through the interface 130, and the first computing unit 111 and the third computing unit 123 may also be connected via a high-speed serial computer expansion bus.
In the embodiment of the present disclosure, the device side 120 may further include a shared storage unit 124. The device side 120 may read and write data in the shared memory unit 124. The host side 110 may also read and write data in the shared memory unit 124.
It will be appreciated that the data processing method provided by the embodiments of the present disclosure may be performed by at least one of the host side 110 and the device side 120. The data processing method of the present disclosure will be described below with reference to fig. 2.
Fig. 2 is a flow chart of a data processing method according to one embodiment of the present disclosure.
As shown in fig. 2, the method 200 may include operations S210 to S220. It is to be appreciated that the method 200 may be performed by a transmitting device.
In the embodiment of the present disclosure, the transmitting device may be the host side 110 or the device side 120. For example, if the host side 110 is the transmitting device, the device side 120 may be the receiving device. If the device side 120 is the transmitting device, the host side 110 may be the receiving device.
In operation S210, description information corresponding to the target data is written into the description queue according to the write pointer of the description queue.
In the embodiments of the present disclosure, the description queue may be stored in a description queue cache area. For example, one or more storage areas may be determined in the shared storage unit 124 as the description queue buffer. The description queue may include multiple locations, and each location may store one piece of description information.
In an embodiment of the present disclosure, the storage space of the target data is located in at least one of the description queue buffer and the target storage area. For example, one or more storage areas may be determined in the above-described shared storage unit 124 as the target storage area. The capacity of the description queue cache may be less than or equal to the capacity of the target storage area. If the data size of the target data is small, the target data and the corresponding description information can be stored into the description queue cache area together. If the data size of the target data is large, the target data can be stored into the target storage area.
In the embodiments of the present disclosure, the description information may indicate a storage space of the target data.
In the embodiment of the present disclosure, the description queue may be provided with a write pointer. The write pointer may be updated after the description information corresponding to the target data is written to the description queue, so that subsequent description information may be written to other positions of the description queue. For example, the description queue may be a ring queue or a sequential queue.
In operation S220, an interrupt signal corresponding to the target data is transmitted.
In the embodiment of the disclosure, the interrupt signal may instruct the receiving apparatus to read description information corresponding to the target data from the description queue. The interrupt signal may be sent by the sending device. For example, the host side 110 and the device side 120 may each issue an interrupt signal. The host side 110 may send an interrupt signal to the device side 120 so that the device side 120 reads the description information from the description queue. The device side 120 may also send an interrupt signal to the host side 110 so that the host side 110 reads the description information from the description queue.
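Putting operations S210 and S220 together, the sender-side flow might be sketched as follows (a hypothetical illustration, not the patent's actual implementation; `send` and `notify_receiver` are invented names, and the interrupt signal is modeled as a plain callback):

```python
def send(queue, descriptor, notify_receiver):
    """Write a descriptor into the description queue, then raise the
    interrupt so the receiver knows there is something to read.

    `queue` is any object exposing write(desc) -> bool (False when full);
    `notify_receiver` stands in for transmitting the interrupt signal.
    """
    while not queue.write(descriptor):
        pass  # queue full: block until the receiver frees a position
    notify_receiver()  # interrupt: description information is ready to read
```

On real hardware the notification would likely be a doorbell register write or a message-signaled interrupt rather than a function call; the busy-wait here merely stands in for whatever blocking policy the sender uses when the queue is full.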
According to the embodiment of the disclosure, the target data with smaller data volume can be stored in the description queue buffer area, so that the data transmission efficiency is improved, and the storage space is saved. The device end and the host end can both send interrupt signals, so that the device end and the host end can mutually transmit data, the flexibility of the heterogeneous electronic device is improved, the automation of data transmission is facilitated, and the application scene of the heterogeneous electronic device is expanded.
It will be appreciated that while the method of the present disclosure is described above, the description queue of the present disclosure will be described below.
In some embodiments, the description queue may be multiple, and the description queue buffer may be multiple. The plurality of description queues may include a first description queue and a second description queue. The plurality of description queue buffers may include a first description queue buffer and a second description queue buffer. The first description queue may be stored in a first description queue cache. The second description queue may be stored in the second description queue cache. The first description queue may store description information generated by the host side 110. The second description queue may store description information generated by the device side 120. According to the embodiment of the disclosure, the first description queue is arranged for the host end, the second description queue is arranged for the equipment end, so that data and description information can be efficiently transmitted between the host end and the equipment end, and the data transmission efficiency is improved. Description queues and description queue buffers of the present disclosure are further described below in conjunction with fig. 3A and 3B.
FIG. 3A is a schematic diagram of a logical layout of a description queue according to one embodiment of the present disclosure.
As shown in fig. 3A, description queue q300 may be a circular queue. Description queue q300 may include a plurality of queue positions. Queue position qe301 and queue position qe302 may be idle positions. Queue location qe306 and queue location qe307 may store descriptive information.
In the disclosed embodiment, the description queue may correspond to a write pointer and a read pointer. As shown in fig. 3A, the description queue q300 may correspond to the write pointer wp300 and the read pointer rp300. The write pointer wp300 may point to the queue position qe301. The read pointer rp300 may point to the queue position qe306.
In the disclosed embodiments, the write pointer may be updated by the transmitting device and the read pointer may be updated by the receiving device. As shown in fig. 3A, the write pointer wp300 can be updated by the transmitting device and the read pointer rp300 can be updated by the receiving device. After the write pointer wp300 is updated, it may point to the queue position qe302. After the read pointer rp300 is updated, it may point to the queue position qe307.
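The circular queue and pointer discipline described above can be sketched as follows (a minimal illustration assuming a single producer and a single consumer; the class name `DescQueue` and the one-slot-empty full/empty convention are our assumptions, not details from the patent):

```python
class DescQueue:
    """Minimal ring of descriptor positions shared by a sender and a receiver.

    The sender advances the write pointer; the receiver advances the read
    pointer. One position is kept empty so that a full queue can be told
    apart from an empty one.
    """

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.wp = 0  # write pointer, updated only by the transmitting device
        self.rp = 0  # read pointer, updated only by the receiving device

    def has_free_slot(self):
        # Full when advancing the write pointer would collide with the read pointer.
        return (self.wp + 1) % len(self.slots) != self.rp

    def write(self, desc):
        """Store description info at the write pointer, then advance it."""
        if not self.has_free_slot():
            return False  # caller blocks until the receiver frees a position
        self.slots[self.wp] = desc
        self.wp = (self.wp + 1) % len(self.slots)
        return True

    def read(self):
        """Fetch description info at the read pointer, then advance it."""
        if self.rp == self.wp:
            return None  # queue empty
        desc = self.slots[self.rp]
        self.slots[self.rp] = None
        self.rp = (self.rp + 1) % len(self.slots)
        return desc
```

Because each pointer has exactly one writer, this layout needs no lock for the single-producer, single-consumer case, which matches the host-to-device and device-to-host directions each having their own description queue.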
Fig. 3B is a schematic diagram of a description queue cache area according to one embodiment of the present disclosure.
In some embodiments, the description queue cache may include memory space in which description information has been stored and free memory space. As shown in fig. 3B, the description queue buffer rb300 may include a storage space qb301 and a storage space qb302. The storage space qb301 may be free storage space. The queue positions corresponding to the storage space qb301 may include the queue position qe301 and the queue position qe302 described above.
The storage space qb302 may be a storage space in which description information has been stored. The queue locations corresponding to the storage space qb302 may include the queue location qe306 and the queue location qe307 described above.
It will be appreciated that the description queue of the present disclosure is described above and the method of the present disclosure will be further described below.
In some embodiments, the target data may include data to be transmitted. The data to be transmitted may be data transmitted from the host 110 to the device 120, or may be data transmitted from the device 120 to the host 110. This will be described below with reference to fig. 4.
Fig. 4 is a schematic flow chart diagram of a data processing method according to one embodiment of the present disclosure.
As shown in fig. 4, the method 400 may write description information d401 to a description queue, as will be described below.
In an embodiment of the present disclosure, the description information may include at least one of the following fields: an address field, a data amount field, and an identification field. The value of the address field may be the starting address of the target data. The value of the data amount field may be the data amount of the target data. The value of the identification field may be an invalid value, a first identification value, or a second identification value. The first identification value may indicate that the target data is to be received. The second identification value may indicate that the target data has been received. It is understood that the memory cell corresponding to the start address may be the shared memory cell 124 described above.
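A descriptor carrying the three fields above might look like this (a sketch only; the numeric encodings of the identification values are assumed, as the patent does not specify them):

```python
from dataclasses import dataclass

# Identification-field values described above (numeric encodings assumed).
ID_INVALID = 0     # invalid value
ID_TO_RECEIVE = 1  # first identification value: target data is to be received
ID_RECEIVED = 2    # second identification value: target data has been received


@dataclass
class DescInfo:
    address: int       # starting address of the target data (or the inlined data itself)
    data_amount: int   # data amount of the target data, in bytes
    identification: int = ID_INVALID
```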
In operation S401, it is determined whether there is a free position in the description queue.
In an embodiment of the present disclosure, in response to determining that the description queue has no free position, writing of the description information into the description queue is blocked. For example, writing of the description information d401 may be blocked.
In the disclosed embodiment, in response to determining that there is a free position in the description queue, operation S402 is performed.
In operation S402, it is determined whether the data amount of the target data is greater than a preset data amount threshold.
In the embodiment of the disclosure, in response to determining that the data amount of the data to be transmitted is less than or equal to the preset data amount threshold, the buffer space in the description queue buffer may be determined as the storage space of the data to be transmitted. The following will explain in conjunction with operation S403.
In operation S403, the value of the address field of the description information is replaced with the target data.
For example, the value of the address field of the description information d401 may be replaced with the corresponding data to be transmitted, obtaining updated description information. In this way, the data to be transmitted can be stored directly in the description queue buffer. The preset data amount threshold may be an empirical value, for example 4 bytes. Next, operation S408 may be performed to determine again whether there is a free position in the description queue; if there is no free position, writing of the updated description information continues to be blocked. If there is a free position, the same or similar operations as operations S210 and S220 above may be performed. According to the embodiment of the disclosure, data with a small data amount can be transmitted efficiently, which improves data transmission efficiency, reduces data transmission cost, and saves the time and bandwidth cost of returning the second description information after the receiving device receives such small data.
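The small-payload branch above, which replaces the value of the address field with the data itself, might be sketched like this (the 4-byte threshold follows the example in the text; the function name and dict layout are illustrative assumptions):

```python
INLINE_THRESHOLD = 4  # bytes; the preset data-amount threshold from the example


def build_descriptor(data: bytes, target_address: int) -> dict:
    """Return a descriptor, inlining the payload when it is small enough.

    Small payloads replace the value of the address field, so the receiver
    can consume them straight from the description queue buffer without a
    second transfer from the target storage area.
    """
    if len(data) <= INLINE_THRESHOLD:
        # Pack the payload itself into the address field.
        packed = int.from_bytes(data.ljust(INLINE_THRESHOLD, b"\x00"), "little")
        return {"address": packed, "data_amount": len(data), "inlined": True}
    # Large payloads stay in the target storage area; the descriptor carries a pointer.
    return {"address": target_address, "data_amount": len(data), "inlined": False}
```

The data-amount field still records the true payload length, so the receiver knows how many of the inlined bytes are meaningful.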
It will be appreciated that the present disclosure has been described above with reference to the example in which the data amount of the target data is less than or equal to the preset data amount threshold. The following will explain taking an example in which the data amount of the target data is larger than the preset data amount threshold.
In the embodiment of the disclosure, in response to determining that the data amount of the target data is greater than the preset data amount threshold, a target storage space may be determined in the target storage area as the storage space of the target data according to the description information. For example, description will be made below in connection with operations S404 to S407.
In operation S404, it is determined whether the target storage space of the target data is located in the internal storage unit.
In the embodiment of the disclosure, whether the target storage space is located in the internal storage unit may be determined according to the value of the address field of the description information. For example, the value of the address field may correspond to the physical address of the shared memory unit 124, and if the sending device is the host, it may be determined that the target memory space is located in the external memory unit. If the transmitting device is a device, it may be determined that the target storage space is located in the internal storage unit.
In the embodiment of the present disclosure, in response to determining that the target storage space of the target data is located in the internal storage unit, operation S408 may be performed. For example, if the transmitting device is the device side 120 and the target data is located in the shared storage unit 124, it may be determined that the target storage space of the target data is located in the internal storage unit. The target data may already be located in the target storage space, or may be written to the target storage space via the internal bus. Next, operation S408 may be performed to determine again whether there is a free position in the description queue; if there is no free position, the updated description information continues to be blocked from being written into the description queue. If there is a free position, operations the same as or similar to operations S210 and S220 described above may be performed.
It will be appreciated that the disclosure is described above by taking an example in which a target storage space of target data is located in an internal storage unit, and the disclosure will be described below by taking an example in which a target storage space of target data is located in an external storage unit.
In the embodiment of the present disclosure, in response to determining that the target storage space of the target data is located at the external storage unit, operation S405 may be performed. For example, if the transmitting device is the host 110 and the target storage space is located in the shared storage unit 124, it may be determined that the target storage space of the data to be transmitted is located in the external storage unit, and operation S405 may be performed.
In operation S405, a target storage space is determined in the target storage area according to the description information.
For example, a target storage space is determined in the target storage area based on the value of the address field and the value of the data amount field of the description information d401. The starting address of the target storage space may correspond to the value of the address field, and the capacity of the target storage space may be consistent with the value of the data amount field. The target storage space may be used as the storage space for the target data.
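The derivation of the target storage space from the address and data-amount fields can be sketched as below. The bounds check against the target storage area is an illustrative addition, not something the text specifies; all names are hypothetical.

```python
def locate_target_space(desc: dict, region_base: int, region_size: int):
    """Derive the target storage space from the descriptor's address and
    data-amount fields: the starting address corresponds to the address
    field, and the capacity to the data-amount field. Additionally verify
    (an assumption of this sketch) that the space falls entirely inside
    the target storage area [region_base, region_base + region_size).
    """
    start, size = desc["address"], desc["data_amount"]
    if start < region_base or start + size > region_base + region_size:
        raise ValueError("descriptor points outside the target storage area")
    return start, size  # starting address and capacity of the target space
```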
In operation S406, target data is written into the target storage space using the data handling unit.
For example, the data handling unit may be a direct memory access (direct memory access, DMA) unit, with which target data may be written to the target storage space.
In operation S407, the value of the identification field of the description information is adjusted to the first identification value.
For example, the value of the identification field may be adjusted to the first identification value, resulting in updated description information. According to the embodiment of the disclosure, by setting the identification field of data with a larger data volume to the first identification value, the device receiving the data is prompted to return corresponding description information so that the target storage space of the data can be released, which improves storage space utilization and reduces storage cost.
It will be appreciated that the storage space for the target data is determined in a number of ways. Next, after the storage space of the target data has been determined, operation S408 may be performed.
In operation S408, it is again determined whether there is a free position in the description queue.
In an embodiment of the present disclosure, in response to determining that there is no free position in the description queue, the description information is blocked from being written into the description queue. For example, multiple threads may perform operations on the description queue. While a first thread performs one or more of operations S401 to S407 described above, a second thread may write other description information into the description queue, leaving no free position in the description queue. In this case, the description information of the first thread may be blocked from being written into the description queue until the description queue has a free position. The embodiment of the disclosure can be applied to a multithreaded scenario and ensures the accuracy of data transmission in such a scenario.
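The blocking behavior in a multithreaded scenario can be sketched with a bounded queue guarded by a condition variable. This is a minimal Python illustration under assumed names, not the disclosure's implementation; note that the writer re-checks for a free position inside the lock, mirroring the "determine again" step of operation S408.

```python
import threading
from collections import deque


class DescriptionQueue:
    """Bounded description queue: a writer blocks while there is no free
    position, and is woken when a reader frees one."""

    def __init__(self, capacity: int):
        self._q = deque()
        self._capacity = capacity
        self._cond = threading.Condition()

    def write(self, desc) -> None:
        with self._cond:
            # Another thread may have filled the queue between the caller's
            # first check and this write, so re-check under the lock and
            # block until a free position exists.
            while len(self._q) >= self._capacity:
                self._cond.wait()
            self._q.append(desc)
            self._cond.notify_all()

    def read(self):
        with self._cond:
            while not self._q:
                self._cond.wait()
            desc = self._q.popleft()
            self._cond.notify_all()  # a position has been freed
            return desc
```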
In the embodiment of the present disclosure, in response to determining that there is a free position in the description queue again, operations S410, S411, and S420 are performed. It is understood that operations S410 and S420 are the same as or similar to operations S210 and S220 described above.
In operation S410, description information corresponding to the target data is written into the description queue according to the write pointer of the description queue.
For example, the description information d401 described above, the updated description information obtained in operation S403, or the updated description information obtained in operation S407 may be written to the queue position qe301 pointed to by the write pointer wp300 described above.
In operation S411, a write pointer describing the queue is updated.
For example, the write pointer wp300 may be updated to point to the queue position qe302.
In operation S420, an interrupt signal corresponding to the target data is transmitted.
For example, an interrupt signal corresponding to the target data may be transmitted to the receiving apparatus.
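Operations S410, S411, and S420 over a ring description queue can be sketched as follows. The class is a hypothetical illustration: `notify` stands in for the interrupt control unit, and the slot/pointer names loosely follow the wp300/qe301 example above.

```python
class RingQueue:
    """Ring description queue: write the descriptor at the position pointed
    to by the write pointer, advance the pointer with wrap-around, then
    signal the receiver."""

    def __init__(self, size: int, notify=lambda: None):
        self.slots = [None] * size
        self.wp = 0          # write pointer
        self.notify = notify  # stands in for the interrupt control unit

    def write(self, desc) -> None:
        self.slots[self.wp] = desc                   # S410: write at wp
        self.wp = (self.wp + 1) % len(self.slots)    # S411: update wp
        self.notify()                                # S420: send interrupt
```

In a real ring, the writer would first confirm a free position by comparing the write pointer against the read pointer, as in operation S408 above.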
It will be appreciated that the above explains, taking target data to be transmitted as an example, the manner in which the updated description information obtained in operation S407 is written into the description queue. It is also understood that the updated description information obtained in operation S407 includes the first identification value, and this updated description information may be used as the first description information. However, the present disclosure is not limited thereto; corresponding description information may also be transmitted after the target data is received, as will be described below.
In some embodiments, writing the description information corresponding to the target data into the description queue may further include: in response to determining that the first description information has been read, writing second description information corresponding to the target data into the description queue. The value of the identification field of the second description information is the second identification value. For example, if the host side 110 writes the updated description information obtained in operation S407 into the first description queue, the device side 120 may read the updated description information and read the target data based on it. After reading the updated description information, the device side 120 may generate corresponding second description information and write it into the second description queue. Further description will follow in connection with the method 400 described above.
Between the above-described operation S401 and operation S402, the following operations may be performed: it is determined whether the value of the identification field of the descriptive information is a second identification value. In response to determining that the value of the identification field of the description information is the second identification value, operation S408 is performed. In response to determining that the value of the identification field of the description information is the first identification value, operation S402 is performed.
It will be appreciated that the disclosure is described above with the description information including one or more of an address field, a data amount field, and an identification field as examples. The present disclosure is not limited thereto and the description information may further include a port number field, which will be described below.
In some embodiments, the description information may also include a port number field. The value of the port number field may be the port number of the receiving device. Multiple applications may read data from a shared memory unit using different port numbers.
In the embodiment of the disclosure, the receiving device may read target description information from the description queue using a port having free bandwidth. The value of the port number field of the target description information corresponds to a port having free bandwidth. For example, bandwidth may be allocated to different port numbers. According to the embodiment of the disclosure, flow control can be realized, including flow control with different priorities, which improves the flexibility of data transmission.
It will be appreciated that some of the ways in which description information is written to the description queue are described above and some of the ways in which description information is read from the description queue are described below.
Fig. 5 is a flow chart of a data processing method according to another embodiment of the present disclosure.
As shown in fig. 5, the method 500 may include operations S510 to S520. It is to be appreciated that the method 500 may be performed by a receiving device.
In the embodiment of the present disclosure, the receiving device may be the host 110 or the device 120. For example, if the host 110 is a receiving device, the device 120 may be a transmitting device. If the device side 120 is a receiving device, the host side 110 may be a transmitting device.
In operation S510, description information is read from the description queue according to the read pointer of the description queue.
In embodiments of the present disclosure, in response to receiving the interrupt signal, descriptive information may be read from the descriptive queue according to a read pointer of the descriptive queue. The interrupt signal may instruct the receiving apparatus to read description information corresponding to the target data from the description queue. The interrupt signal may be sent by the sending device. For example, the host side 110 and the device side 120 may each issue an interrupt signal. The host side 110 may send an interrupt signal to the device side 120 so that the device side 120 reads the description information from the description queue. The device side 120 may also send an interrupt signal to the host side 110 so that the host side 110 reads the description information from the description queue.
In the embodiment of the disclosure, the description queue is stored in the description queue buffer. For example, one or more storage areas may be determined in the shared storage unit 124 as the description queue buffer. The description queue may include multiple locations, and each location may store one piece of description information.
In the embodiments of the present disclosure, the description information may indicate a storage space of the target data.
In embodiments of the present disclosure, the description queue may be provided with a read pointer, which may be updated after description information is read from the description queue so that subsequent description information can be read. For example, the description queue may be a ring queue or a sequential queue.
In operation S520, target data corresponding to the description information is processed based on the target storage area or the target cache area according to the description information.
In an embodiment of the present disclosure, the target buffer includes a description queue buffer.
In the embodiment of the disclosure, the target data may be read from the target storage area or the target cache area, or the target data of the target cache area or the target storage area may be released.
Through the embodiment of the disclosure, the receiving device can read the data from the description queue. The receiving device can be a device end or a host end, so that the flexibility of data transmission is improved.
It will be appreciated that while the method of the present disclosure is described above, the method of the present disclosure will be further described below.
Fig. 6 is a schematic flow chart diagram of a data processing method according to one embodiment of the present disclosure.
As shown in fig. 6, method 600 may read description information from a description queue, as will be described below.
In an embodiment of the present disclosure, the description information may include at least one of the following fields: an address field, a data amount field, and an identification field. The value of the address field may be the starting address of the data to be transmitted, or may be the target data. The value of the data amount field may be the data amount of the data to be transmitted. The value of the identification field may be an invalid value, a first identification value, or a second identification value. The first identification value may indicate that the target data is to be received. The second identification value may indicate that the target data has been received. It is understood that the memory cell corresponding to the start address may be the shared memory cell 124 described above.
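A possible binary layout for one piece of description information is sketched below. The field widths (8-byte address, 4-byte data amount, 2-byte identification, 2-byte port number) and the little-endian ordering are assumptions for illustration only; the text does not specify them.

```python
import struct

# Assumed layout: address, data_amount, identification, port number.
DESC_FORMAT = "<QIHH"  # 8 + 4 + 2 + 2 = 16 bytes per entry
ID_INVALID, ID_FIRST, ID_SECOND = 0, 1, 2


def pack_desc(address: int, data_amount: int, identification: int,
              port: int = 0) -> bytes:
    """Serialize one description-information entry."""
    return struct.pack(DESC_FORMAT, address, data_amount, identification, port)


def unpack_desc(raw: bytes) -> dict:
    """Deserialize one description-information entry back into fields."""
    address, data_amount, identification, port = struct.unpack(DESC_FORMAT, raw)
    return {"address": address, "data_amount": data_amount,
            "identification": identification, "port": port}
```

With a fixed entry size like this, the queue position qeN maps to byte offset `N * 16` within the description queue buffer.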
In an embodiment of the present disclosure, reading the description information from the description queue includes reading the description information into the description information buffer. For example, if description information exists in the description queue, the description information may be read into the description information buffer. The following will explain in conjunction with operations S611 to S615.
In operation S611, it is determined whether there is no free storage space in the description information buffer.
In the embodiment of the present disclosure, in response to determining that the description information buffer has no free storage space, operation S621 may be performed. If the description information buffer has no free storage space, the buffer is full, and the target data may be processed according to the description information so as to release the storage space occupied by that description information.
In the embodiment of the present disclosure, in response to determining that the description information buffer has free storage space, operation S612 may be performed. For example, if there is a free storage space in the description information buffer, indicating that the description information buffer is not full, the description information in the description queue may be transferred to the description information buffer.
In operation S612, it is determined whether description information exists in the description queue.
In the embodiment of the present disclosure, in response to determining that description information exists in the description queue, operation S613 may be performed.
In operation S613, the description information is read to the description information buffer according to the read pointer of the description queue. For example, the description information pointed to by the above-mentioned read pointer rp300 may be read from the description queue buffer to the description information buffer.
In operation S614, the read pointer of the description queue is updated. For example, the read pointer rp300 described above may be updated to point to the queue position qe307. Next, operation S615 may be performed.
It will be appreciated that the description above has been illustrated with the example of the presence of descriptive information in a descriptive queue. However, the present disclosure is not limited thereto, and in the embodiment of the present disclosure, in response to determining that the description information does not exist in the description queue, operation S615 is performed.
In operation S615, it is determined whether there is no description information in the description information buffer.
In an embodiment of the present disclosure, in response to determining that no description information exists in the description information buffer, reading of description information from the description queue is blocked and an interrupt signal is awaited. In response to receiving an interrupt signal, the method may return to operation S612 to read the description information corresponding to the interrupt signal. It will be appreciated that the receiving apparatus may receive a plurality of interrupt signals and respond to one or more of them.
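The draining loop of operations S611 to S614 — moving entries from the description queue into the description information buffer while the buffer has free storage space — can be sketched as follows. The function signature and the deque representation are assumptions of this sketch.

```python
from collections import deque


def drain_queue_to_buffer(queue: deque, buffer: list,
                          buffer_capacity: int) -> int:
    """Move description information from the description queue into the
    description information buffer, stopping when the buffer is full
    (S611) or the queue is empty (S612). Returns the number of entries
    moved; popleft() models reading at the read pointer and then
    advancing it (S613/S614)."""
    moved = 0
    while len(buffer) < buffer_capacity and queue:
        buffer.append(queue.popleft())
        moved += 1
    return moved
```

When the function returns 0 with an empty buffer, the receiver would block and await an interrupt signal, as in operation S615 above.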
In the embodiment of the present disclosure, in response to determining that the description information exists in the description information cache region, the method 600 may process the target data corresponding to the description information based on the target storage region or the target cache region according to the description information, which will be described below in connection with operation S621. It is understood that operations S611 to S615 may be performed in parallel with operation S621. That is, if the description information exists in the description information buffer, operation S621 may be performed.
In operation S621, at least one piece of description information corresponding to at least one target port is determined from the description information buffer, according to the value of the port number field of each of the plurality of pieces of description information in the description information buffer and at least one target port having free bandwidth among the plurality of ports.
For example, different bandwidth utilizations may be configured for different ports. The first port may be allocated 30% of the bandwidth, and the second port may be allocated 70% of the bandwidth. If the first port requests to read description information and the bandwidth used by the first port has reached 30% of the total bandwidth, searching for description information for the first port may be paused until the bandwidth used by the first port is below 30% of the total bandwidth. If the second port requests to read description information and the bandwidth used by the second port is below 70% of the total bandwidth, the second port may be regarded as a target port, and description information whose port number field value matches the port number of the second port may be searched for among the plurality of pieces of description information. It will be appreciated that the number of ports and the bandwidth allocation above are merely exemplary; the number of ports may be much greater than two.
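The port-based flow control described above can be sketched as a filter over the description information buffer. The function and parameter names are invented for illustration; the quota values mirror the 30%/70% example.

```python
def select_for_ports(descriptors: list, used_bw: dict,
                     quota: dict, total_bw: float) -> list:
    """Pick descriptors whose port currently has free bandwidth.

    `used_bw` maps port -> bandwidth already consumed, and `quota` maps
    port -> fraction of `total_bw` allocated to that port. A port whose
    usage has reached its quota is skipped until its usage drops.
    """
    eligible = {p for p, q in quota.items()
                if used_bw.get(p, 0) < q * total_bw}
    return [d for d in descriptors if d["port"] in eligible]
```

Giving a higher quota to a port effectively gives its traffic higher priority, which is one way the differing-priority flow control mentioned above could be realized.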
Next, at least one target data is processed in the target storage area or the target cache area according to at least one description information corresponding to the at least one target port. The following will explain in conjunction with operations S622 to S6210.
In operation S622, it is determined whether the value of the identification field is a second identification value.
In the embodiment of the present disclosure, in response to determining that the value of the identification field is the second identification value, operation S623 may be performed.
In operation S623, the storage space corresponding to the description information is released. For example, based on the value of the address field and the value of the data amount field of the description information, a memory space may be determined, which is freed up to store other data.
In the disclosed embodiment, in response to determining that the value of the identification field is not the second identification value, operation S624 may be performed.
In operation S624, it is determined whether the value of the data amount field is greater than a preset data amount threshold.
In the disclosed embodiment, in response to determining that the value of the data amount field is less than or equal to the preset data amount threshold, operation S625 is performed.
In operation S625, a value of an address field of the description information is taken as target data. For example, as described above, the transmitting apparatus may replace the value of the address field with data whose data amount is smaller than the preset data amount threshold. Thus, in the case where the value of the data amount field is less than or equal to the preset data amount threshold, the value of the address field can be read to acquire the target data.
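The receiver-side branch of operations S624/S625 can be sketched as below. `read_storage` stands in for the direct-memory-access or internal-bus read, and the 4-byte threshold is the example value from the text; both are assumptions of this sketch.

```python
PRESET_THRESHOLD = 4  # bytes; example threshold value from the text


def extract_target_data(desc: dict, read_storage) -> bytes:
    """Recover the target data from one piece of description information.

    If the data-amount field is at most the preset threshold, the sender
    inlined the payload into the address field (S625), so the address
    field's bytes *are* the data. Otherwise the payload lives in the
    target storage area and `read_storage(address, size)` fetches it.
    """
    size = desc["data_amount"]
    if size <= PRESET_THRESHOLD:
        return desc["address"].to_bytes(PRESET_THRESHOLD, "little")[:size]
    return read_storage(desc["address"], size)
```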
In the embodiment of the present disclosure, in response to determining that the value of the data amount field is greater than the preset data amount threshold, operation S626 is performed.
In operation S626, it is determined whether the target storage area is located at the internal storage unit.
In the embodiment of the present disclosure, in response to determining that the target storage area is located in the external storage unit, operation S627 may be performed.
In operation S627, target data is acquired from the target storage area using the data handling unit. For example, if the receiving device is the host 110, the target data may be read from the target storage area using a direct memory access unit. Next, operation S628 may be performed.
In the embodiment of the present disclosure, in response to determining that the target storage area is located at the internal storage unit, operation S628 may be performed. For example, if the receiving apparatus is the device side 120, the target data may be already stored in the shared memory unit when the data is transmitted. The device side 120 may obtain the data based on the internal bus. Next, operation S628 may be performed.
In operation S628, it is determined whether the value of the identification field is the first identification value.
In the disclosed embodiment, in response to determining that the value of the identification field is not the first identification value, operation S6210 may be performed.
In the disclosed embodiment, in response to determining that the value of the identification field is the first identification value and the target data has been read, operation S629 may be performed.
In operation S629, second description information is generated. For example, description information whose value of the identification field is the first identification value may be used as the first description information. The second description information may be generated by replacing the value of the identification field of the first description information with the second identification value.
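Operation S629 amounts to copying the first description information and swapping the identification value, which can be sketched as follows (identifier names and the value encoding are assumptions of this sketch).

```python
ID_FIRST, ID_SECOND = 1, 2  # assumed encodings of the identification values


def make_second_description(first_desc: dict) -> dict:
    """Generate second description information from first description
    information by replacing the value of the identification field with
    the second identification value; the address and data-amount fields
    are carried over so the sender can locate the space to release."""
    assert first_desc["identification"] == ID_FIRST
    second = dict(first_desc)  # copy, leaving the original untouched
    second["identification"] = ID_SECOND
    return second
```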
In operation S6210, data is returned. For example, the target data may be provided to the corresponding port.
It will be appreciated that the respective data processing methods of the receiving device and the transmitting device are described above, and the methods of the present disclosure will be further described below in connection with the transmitting device and the receiving device.
Fig. 7A is a schematic diagram of a system architecture according to one embodiment of the present disclosure.
As shown in fig. 7A, the system architecture 70 according to this embodiment may include a host end 710 and a device end 720. Data and signals may be transferred between host side 710 and device side 720 via interface 730.
The host side 710 may include a first computing unit 711. The device side 720 may include a second computing unit 722 and a third computing unit 723. The device side 720 may also include a shared storage unit 724. It will be appreciated that the above description regarding the host side 110 and the device side 120 applies equally to the host side 710 and the device side 720, and this disclosure will not be repeated here.
As shown in fig. 7A, the shared storage unit 724 may include a first description queue buffer rb710 and a second description queue buffer rb720. The first description queue buffer rb710 may store a first description queue. The write pointer of the first description queue may be updated by the host side 710. The read pointer of the first description queue may be updated by the device side 720. The write pointer of the second description queue may be updated by the device side 720. The read pointer of the second description queue may be updated by the host side 710.
As shown in fig. 7A, the device side 720 may further include a data handling unit 725 and an interrupt control unit 726. The data handling unit 725 may perform data handling. Based on the data handling unit 725, the host side 710 can read and write data in the shared storage unit 724 of the device side 720. The second computing unit 722 may read and write data in the shared storage unit based on the internal bus.
The interrupt control unit 726 may send an interrupt signal to the host side 710. It is understood that the host side 710 may also send an interrupt signal to the device side 720 based on the first computing unit 711.
It will be appreciated that the system architecture of the present disclosure is further described above, and the following description will be made with the host side as the transmitting device and the device side as the receiving device.
Fig. 7B is a schematic diagram of a data processing method according to one embodiment of the present disclosure.
As shown in fig. 7B, the host side 710 may serve as the transmitting device, and the device side 720 may serve as the receiving device. The host side 710 is to send first target data to the device side 720. The first description queue buffer rb710 may store the first description queue q710. Taking as an example the case in which the data amount of the first target data is greater than the preset data amount threshold, the value of the address field of the description information of the first target data may correspond to a storage space in the shared storage unit 724. In the case where there is a free position in the first description queue q710, it may be determined that the first target data is located at the host side 710 and that the storage space of the first target data is in an external storage unit (the shared storage unit 724) of the host side 710, and the first physical storage space may be determined in the target storage area of the shared storage unit based on the address field and the data amount field of the description information of the first target data. The host side 710 may use the data handling unit 725 to transfer the first target data to the first physical storage space. Next, the value of the identification field of the description information of the first target data may be set to the first identification value, resulting in the first description information of the first target data. Next, in the case where it is determined again that there is a free position in the first description queue, the first description information of the first target data may be written into the first description queue q710 based on the write pointer of the first description queue q710, and the write pointer of the first description queue q710 may be updated. Next, the host side 710 may transmit an interrupt signal to the device side 720 using the interrupt control unit 726. Next, the host side 710 may continue writing description information into the first description queue q710.
It is assumed that the first description queue q710 is an empty queue before the first description information of the first target data is written. In response to the interrupt signal transmitted from the host side 710, the device side 720 may read the first description information from the first description queue q710 into the description information buffer according to the read pointer of the first description queue q710, and update the read pointer of the first description queue q710. If the port corresponding to the value of the port number field of the first description information has free bandwidth, data can be provided to that port based on the first description information. For example, as described above, the host side 710 has used the data handling unit 725 to transfer the first target data to the first physical storage space. From the value of the address field of the first description information, it may be determined that the first target data is located in an internal storage unit of the device side 720. In the case where the identification of the first description information is the first identification value, second description information of the first target data may be generated, and the first target data may be provided to the above-mentioned port. Thus, the host side 710 completes the transmission of the first target data to the device side 720. At this time, the first physical storage space is still occupied and cannot be used for other data; a manner of releasing the first physical storage space will be described below.
For the second description information of the first target data, the device side 720 may serve as the transmitting device, and the host side 710 may serve as the receiving device. The second description queue q720 may be stored in the second description queue buffer rb720. After it is determined that there is a free position in the second description queue q720, that the value of the identification field of the second description information is the second identification value, and, again, that there is a free position in the second description queue q720, the second description information may be written into the second description queue q720 according to the write pointer of the second description queue q720, and the write pointer of the second description queue may be updated. Next, the device side 720 may send an interrupt signal to the host side 710 using the interrupt control unit 726.
It is assumed that the second description queue q720 is an empty queue before the second description information of the first target data is written. In response to the interrupt signal transmitted from the device side 720, the host side 710 may read the second description information from the second description queue q720 into the description information buffer according to the read pointer of the second description queue q720. In the case where the identification of the second description information is the second identification value, the first physical storage space described above may be released. Thereby, the release of the storage space is achieved.
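The release step on receipt of second description information can be sketched as below. The allocation-table representation and all names are assumptions of this sketch, not the disclosure's implementation.

```python
ID_SECOND = 2  # assumed encoding of the second identification value


class StorageArea:
    """Tracks occupied physical storage spaces in a target storage area
    and frees a space when second description information arrives."""

    def __init__(self):
        self.allocated = {}  # starting address -> capacity

    def allocate(self, address: int, size: int) -> None:
        """Record a physical storage space as occupied by in-flight data."""
        self.allocated[address] = size

    def on_description(self, desc: dict) -> None:
        """If the identification field carries the second identification
        value, release the space named by the address field so it can be
        used for other data."""
        if desc["identification"] == ID_SECOND:
            self.allocated.pop(desc["address"], None)
```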
It will be appreciated that the disclosure is described above with reference to the host side transmitting data to the device side, and the disclosure will be described below with reference to the device side transmitting data to the host side.
As shown in fig. 7B, the host side 710 may serve as the receiving device, and the device side 720 may serve as the transmitting device. The device side 720 is to send second target data to the host side 710. The second description queue buffer rb720 may store the second description queue q720. Taking as an example the case in which the data amount of the second target data is greater than the preset data amount threshold, the value of the address field of the description information of the second target data may correspond to a storage space in the shared storage unit 724. In the case where there is a free position in the second description queue q720, it may be determined that the second target data is located at the device side 720 and that the storage space of the second target data is in an internal storage unit (the shared storage unit 724) of the device side 720, and the second physical storage space may be determined in the target storage area of the shared storage unit based on the address field and the data amount field of the description information of the second target data. Based on the internal bus, the device side 720 may provide the second target data to the second physical storage space. Next, the value of the identification field of the description information of the second target data may be set to the first identification value, resulting in the first description information of the second target data. Next, in the case where it is determined again that there is a free position in the second description queue, the first description information of the second target data may be written into the second description queue q720 based on the write pointer of the second description queue q720, and the write pointer of the second description queue q720 may be updated. Next, the device side 720 may send an interrupt signal to the host side 710 using the interrupt control unit 726. Next, the device side 720 may continue to write description information into the second description queue q720.
It is assumed that the second description queue q720 is an empty queue before the first description information of the second target data is written. In response to the interrupt signal transmitted from the device side 720, the host side 710 may read the first description information from the second description queue q720 into the description information buffer unit according to the read pointer of the second description queue q720, and update the read pointer of the second description queue q720. If the port corresponding to the value of the port number field of the first description information has free bandwidth, data can be provided for the port based on the first description information. For example, as described above, it may be determined from the value of the address field of the first description information that the second target data is located in a storage unit external to the host side 710. The host side 710 may read the second target data from the second physical storage space using the data handling unit 725. In the case where the identification of the first description information is the first identification value, the second description information of the second target data may be generated and provided to the port. Thus, the device side 720 completes the sending of the second target data to the host side 710. At this time, the second physical storage space is still occupied and cannot be used by other data; a manner of releasing the second physical storage space will be described below.
For the second description information of the second target data, the host side 710 may serve as a transmitting device, and the device side 720 may serve as a receiving device. The first description queue q710 may be stored in the first description queue buffer qb710. After determining that there is a free position in the first description queue q710, determining that the value of the identification field of the second description information is the second identification value, and determining again that there is a free position in the first description queue q710, the second description information may be written into the first description queue q710 according to the write pointer of the first description queue q710, and the write pointer of the first description queue may be updated. Next, the host side 710 may transmit an interrupt signal to the device side 720 using the interrupt control unit 726.
It is assumed that the first description queue q710 is an empty queue before the second description information of the second target data is written. In response to the interrupt signal transmitted from the host side 710, the device side 720 may read the second description information from the first description queue q710 into the description information buffer unit according to the read pointer of the first description queue q710. In the case where the identification of the second description information is the second identification value, the second physical storage space may be released. Thereby, the release of the storage space is achieved.
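The round trip just described — first description information with the first identification value travels one way, the receiver reads the target data and sends back second description information with the second identification value, and the sender then releases the physical storage space — can be sketched as follows. This is a toy model: queues are plain Python lists, and the field names and class names are illustrative assumptions, not the disclosure's implementation.

```python
FIRST_ID = 1   # the target data is to be received
SECOND_ID = 2  # the target data has been received


def make_description(address, length, port, ident):
    return {"address": address, "length": length,
            "port": port, "ident": ident}


class Sender:
    def __init__(self):
        self.allocated = set()  # physical storage spaces still in use

    def send_data(self, queue, address, length, port):
        # The storage space stays occupied until the ack comes back.
        self.allocated.add(address)
        queue.append(make_description(address, length, port, FIRST_ID))

    def on_ack(self, description):
        # A SECOND_ID entry means the receiver has consumed the data,
        # so the physical storage space can be released for reuse.
        if description["ident"] == SECOND_ID:
            self.allocated.discard(description["address"])


class Receiver:
    def receive(self, queue, ack_queue, memory):
        desc = queue.pop(0)
        if desc["ident"] == FIRST_ID:
            data = memory[desc["address"]]           # read the target data
            ack_queue.append(make_description(       # ack with SECOND_ID
                desc["address"], desc["length"], desc["port"], SECOND_ID))
            return data
```

The key design point mirrored here is that release is driven by the returning descriptor, not by a timeout: the sender never frees a space until it sees the matching second identification value.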
It is to be understood that the method of the present disclosure is described above in connection with a hardware structure, and the software architecture of the present disclosure will be described below.
Fig. 8 is a schematic diagram of a software stack according to one embodiment of the present disclosure.
As shown in fig. 8, the software components of the host side 810 include: a host driver layer host811, a description queue driver layer qdrive812, an application protocol layer api813, and a virtual serial application tty814.
The host driver layer host811 may include an interrupt driver and a data handling driver. The interrupt driver may drive the interrupt control unit 726 described above. The data handling driver may drive the data handling unit 725, providing a read-write interface of the shared storage unit for the host side 810.
The description queue driver layer qdrive812 may implement the description queue buffers and the description information buffer. There may be a plurality of description queues, which may include the first description queue and the second description queue described above. The first description queue may store description information written by the host side 810, and the second description queue may store description information written by the device side 820. The description information in a description queue may correspond to data whose data amount is less than or equal to the preset data amount threshold, or to data whose data amount is greater than the preset data amount threshold. Thus, the description queue supports two modes. In the first mode, data whose data amount is less than or equal to the preset data amount threshold may be transmitted directly as the value of the address field. In the second mode, a large block of data with consecutive addresses may be transferred; such a large block may be transferred to the shared storage unit or to the host-side memory via the direct memory access unit.
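The two modes can be illustrated with a small encoder/decoder: below the threshold, the data itself rides in the address field of the descriptor; above it, the data is placed in storage and the descriptor carries only its start address. The threshold value, the `Storage` stand-in, and all names here are hypothetical sketch choices, not values fixed by the disclosure.

```python
DATA_AMOUNT_THRESHOLD = 8  # hypothetical preset data amount threshold, in bytes


class Storage:
    """Toy stand-in for the shared storage unit / host-side memory."""

    def __init__(self):
        self.cells = {}
        self.next_addr = 0x1000

    def alloc(self, data):
        addr = self.next_addr
        self.cells[addr] = data
        self.next_addr += len(data)
        return addr

    def read(self, addr):
        return self.cells[addr]


def encode_description(data, storage):
    if len(data) <= DATA_AMOUNT_THRESHOLD:
        # Mode 1: small data is transmitted as the value of the address field.
        return {"address": data, "length": len(data)}
    # Mode 2: a large block with consecutive addresses is placed in storage,
    # and the descriptor carries only its start address.
    return {"address": storage.alloc(data), "length": len(data)}


def decode_description(desc, storage):
    if desc["length"] <= DATA_AMOUNT_THRESHOLD:
        return desc["address"]  # the address field holds the data itself
    return storage.read(desc["address"])
```

The inline mode saves a round trip through the storage unit for small payloads, at the cost of the data amount field being needed to tell the two interpretations of the address field apart.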
The application protocol layer api813 includes a plurality of application program interfaces, such as create (create), read (read), write (write), register (open), cancel (close), and destroy (destroy). The create interface may create and initialize the two description queue buffers, provide the write pointer of the first description queue to the host side and the write pointer of the second description queue to the device side, and return a global instance (instance) handle. The register interface may register a port number and allocate resources (e.g., bandwidth, etc.) for the port. The read and write interfaces may use the port number as an identification of different applications; within the scope of the application, the port number may be a globally unique identification. The cancel interface may close the port number and release the resources allocated for the port, so that the host side no longer processes data associated with the port. The write interface may support writing description information into the description queue. The description information may be a descriptor (descriptor) that may indicate a start address, a data amount, a port number, and an identification value. The read interface may read the description information of the corresponding port number. The destroy interface may destroy all resources of the description queue buffers.
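A possible shape of this lifecycle, sketched in Python with hypothetical names (the real layer is a driver-level API; this only mirrors the create/open/write/read/close/destroy sequence and the per-port routing of descriptors):

```python
class DescriptionQueueApi:
    """Toy model of the application protocol layer lifecycle."""

    def __init__(self):
        self.ports = {}       # port number -> pending descriptors
        self.next_port = 0

    def open(self):
        """Register a port number; within this instance it is unique."""
        port = self.next_port
        self.next_port += 1
        self.ports[port] = []
        return port

    def write(self, port, start_address, data_amount, ident):
        # A descriptor indicates a start address, a data amount,
        # a port number, and an identification value.
        self.ports[port].append({"address": start_address,
                                 "amount": data_amount,
                                 "port": port,
                                 "ident": ident})

    def read(self, port):
        """Read the description information of the given port number."""
        return self.ports[port].pop(0) if self.ports[port] else None

    def close(self, port):
        """Release the resources allocated for the port."""
        self.ports.pop(port, None)

    def destroy(self):
        self.ports.clear()


def create():
    """Create and initialize an instance; returns a global handle."""
    return DescriptionQueueApi()
```

Usage follows the order the text gives: `create()` once, `open()` per application to obtain a port number, `write`/`read` keyed by that port number, then `close` and finally `destroy`.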
The virtual serial application tty814 may support serial tools, allowing operation on the host side using a serial port and communication with the second computing unit on the device side.
The software components of the device side 820 include: device driver layer dev821, description queue driver layer qdrive822, application protocol layer api823, and virtual serial application tty824.
The device driver layer dev821 may include an interrupt driver. The interrupt driver may drive the interrupt control unit 726 described above.
The description queue driver layer qdrive822 may implement description queue buffering and description information buffering. The description queue driver layer qdrive822 is the same as or similar to the description queue driver layer qdrive812 described above, and the disclosure is not repeated here.
The application protocol layer api823 includes a plurality of application interfaces for creating, reading, writing, registering, logging out, destroying, and the like. The application protocol layer api823 is the same as or similar to the application protocol layer api813 described above, and this disclosure is not repeated here.
The virtual serial application tty824 may support serial tools, allowing operation on the host side using a serial port and communication with the second computing unit on the device side.
It will be appreciated that while the method of the present disclosure is described above, the apparatus of the present disclosure will be described below.
Fig. 9 is a block diagram of a data processing apparatus according to one embodiment of the present disclosure.
As shown in fig. 9, the apparatus 910 may include a first storage unit 912 and a first computing unit 911.
The first storage unit 912 may be configured to store target data to be transmitted.
The first computing unit 911 may be configured to: write description information corresponding to the target data into a description queue according to a write pointer of the description queue, where the description queue is stored in a description queue buffer area and a storage space to be written of the target data is located in at least one of the description queue buffer area and a target storage area; and transmit an interrupt signal corresponding to the target data, where the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
In some embodiments, the first computing unit may be further configured to determine the storage space of the target data according to the data amount of the target data.
In some embodiments, the first computing unit may be further configured to perform the following operation to determine the storage space of the target data according to the data amount of the target data: in response to determining that the data amount of the target data is less than or equal to a preset data amount threshold, determining a cache space in the description queue cache area as the storage space of the target data.
In some embodiments, the descriptive information includes at least one of the following fields: an address field and a data amount field.
In some embodiments, the first computing unit may be further configured to perform the following operation to determine a buffer space in the description queue buffer as the storage space of the target data: replacing the value of the address field of the description information with the target data.
In some embodiments, the first computing unit may be further configured to perform the following operation to determine the storage space of the target data according to the data amount of the target data: in response to determining that the data amount of the target data is greater than the preset data amount threshold, determining a target storage space in the target storage area as the storage space of the target data according to the description information.
In some embodiments, the first computing unit may be further configured to: in response to determining that the target storage space is located at an external storage unit, send a first request to a data handling unit to write the target data into the target storage space.
In some embodiments, the first computing unit may be further configured to: in response to determining that a free position exists in the description queue, determine the storage space of the target data according to the data amount of the target data.
In some embodiments, the first computing unit may be further configured to perform the following operation to write the description information corresponding to the target data into the description queue: in response to determining the storage space of the target data and determining again that a free position exists in the description queue, writing the description information corresponding to the target data into the description queue.
In some embodiments, the description information further includes an identification field having a value of a first identification value or a second identification value, the first identification value being used to indicate that the target data is to be received, the second identification value being used to indicate that the target data has been received.
In some embodiments, the first computing unit may be further configured to perform at least one of the following operations to write the description information corresponding to the target data into the description queue: in response to determining that the target data is to be sent, writing first description information corresponding to the target data into the description queue, where the value of the identification field of the first description information is the first identification value; and in response to determining that the first description information has been read, writing second description information corresponding to the target data into the description queue, where the value of the identification field of the second description information is the second identification value.
In some embodiments, the description information further includes a port number field, so that the receiving device reads target description information from the description queue using a port having free bandwidth, where the value of the port number field of the target description information corresponds to the port having free bandwidth.
Fig. 10 is a block diagram of a data processing apparatus according to one embodiment of the present disclosure.
As shown in fig. 10, the apparatus 1020 may include a second storage unit 1024 and a second computing unit 1022.
The second storage unit 1024 may include a description queue buffer area and a target storage area.
The second computing unit 1022 may be configured to: write the description information corresponding to the target data into the description queue according to the write pointer of the description queue, where the description queue is stored in the description queue buffer area and the storage space to be written of the target data is located in at least one of the description queue buffer area and the target storage area; and transmit an interrupt signal corresponding to the target data, where the interrupt signal is used for instructing the receiving device to read the description information corresponding to the target data from the description queue.
In some embodiments, the second computing unit may be further configured to determine the storage space of the target data according to the data amount of the target data.
In some embodiments, the second computing unit may be further configured to perform the following operation to determine the storage space of the target data according to the data amount of the target data: in response to determining that the data amount of the target data is less than or equal to a preset data amount threshold, determining a cache space in the description queue cache area as the storage space of the target data.
In some embodiments, the descriptive information includes at least one of the following fields: an address field and a data amount field.
In some embodiments, the second computing unit may be further configured to perform the following operation to determine a buffer space in the description queue buffer as the storage space of the target data: replacing the value of the address field of the description information with the target data.
In some embodiments, the second computing unit may be further configured to perform the following operation to determine the storage space of the target data according to the data amount of the target data: in response to determining that the data amount of the target data is greater than the preset data amount threshold, determining a target storage space in the target storage area as the storage space of the target data according to the description information.
In some embodiments, the second computing unit may be further configured to: in response to determining that a free position exists in the description queue, determine the storage space of the target data according to the data amount of the target data.
In some embodiments, the second computing unit may be further configured to perform the following operation to write the description information corresponding to the target data into the description queue: in response to determining the storage space of the target data and determining again that a free position exists in the description queue, writing the description information corresponding to the target data into the description queue.
In some embodiments, the description information further includes an identification field having a value of a first identification value or a second identification value, the first identification value being used to indicate that the target data is to be received, the second identification value being used to indicate that the target data has been received.
In some embodiments, the second computing unit may be further configured to perform at least one of the following operations to write the description information corresponding to the target data into the description queue: in response to determining that the target data is to be sent, writing first description information corresponding to the target data into the description queue, where the value of the identification field of the first description information is the first identification value; and in response to determining that the first description information has been read, writing second description information corresponding to the target data into the description queue, where the value of the identification field of the second description information is the second identification value.
In some embodiments, the description information further includes a port number field, so that the receiving device reads target description information from the description queue using a port having free bandwidth, where the value of the port number field of the target description information corresponds to the port having free bandwidth.
Fig. 11 is a block diagram of a data processing apparatus according to one embodiment of the present disclosure.
As shown in fig. 11, the apparatus 1110 may include a first storage unit 1112 and a first computing unit 1111.
The first computing unit 1111 may be configured to: read description information from a description queue according to a read pointer of the description queue, where the description queue is stored in a description queue cache area; and process target data corresponding to the description information based on a target storage area or a target cache area according to the description information, where the target cache area includes the description queue cache area. The target data may be read into the first storage unit 1112.
In some embodiments, the first computing unit may be further configured to perform the following operation to read the description information from the description queue: reading the description information into a description information buffer area.
In some embodiments, the description information includes a port number field, and the target cache area further includes the description information buffer area.
In some embodiments, the first computing unit may be further configured to perform the following operations to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: determining, from the description information buffer, at least one piece of description information corresponding to at least one target port according to the value of the port number field of each of the plurality of pieces of description information in the description information buffer and the at least one target port having free bandwidth among the plurality of ports; and processing at least one piece of target data in the target storage area or the target cache area according to the at least one piece of description information corresponding to the at least one target port.
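The selection step above — scanning the buffered description information and keeping only entries whose port currently has free bandwidth — can be sketched as a simple filter that charges each selected entry's data amount against the port's remaining budget. The field names and the bandwidth accounting are illustrative assumptions, not the disclosure's exact scheme.

```python
def select_descriptions(buffer_entries, port_bandwidth):
    """Pick, in arrival order, the buffered description entries whose
    port number matches a port that still has free bandwidth, charging
    each selected entry's data amount against that port's budget."""
    selected = []
    remaining = dict(port_bandwidth)  # port number -> free bandwidth
    for desc in buffer_entries:
        free = remaining.get(desc["port"], 0)
        if 0 < desc["amount"] <= free:
            remaining[desc["port"]] = free - desc["amount"]
            selected.append(desc)
    return selected
```

Entries for ports with no registered bandwidth, or whose data amount exceeds what is left, simply stay in the buffer for a later pass, matching the text's "process only the ports that have free bandwidth" behavior.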
In some embodiments, the descriptive information includes at least one of the following fields: an address field and a data amount field.
In some embodiments, the first computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the data amount field is less than or equal to the preset data amount threshold, taking the value of the address field of the description information as the target data.
In some embodiments, the first computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the data amount field is greater than the preset data amount threshold and that the target storage area is located at an internal storage unit, reading the target data from the target storage area.
In some embodiments, the first computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the data amount field is greater than the preset data amount threshold and that the target storage area is located at an external storage unit, sending a second request to a data handling unit to acquire the target data from the target storage area.
In some embodiments, the description information further includes an identification field having a value of a first identification value or a second identification value, the first identification value being used to indicate that the target data is to be received, the second identification value being used to indicate that the target data has been received.
In some embodiments, the description information includes first description information and second description information, where the value of the identification field of the first description information is the first identification value and the value of the identification field of the second description information is the second identification value. The first computing unit may be further configured to: generate the second description information corresponding to the first description information in response to the target data corresponding to the first description information having been read.
In some embodiments, the first computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the identification field is the second identification value, releasing the storage space corresponding to the description information.
Fig. 12 is a block diagram of a data processing apparatus according to one embodiment of the present disclosure.
As shown in fig. 12, the apparatus 1200 may include a second storage unit 1224 and a second computing unit 1222.
The second storage unit 1224 may include a description queue cache area and a target storage area.
The second computing unit 1222 may be configured to: read the description information from the description queue according to the read pointer of the description queue, where the description queue is stored in the description queue cache area; and process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information, where the target cache area includes the description queue cache area.
In some embodiments, the second computing unit may be further configured to perform the following operation to read the description information from the description queue: reading the description information into a description information buffer area.
In some embodiments, the description information includes a port number field, and the target cache area further includes the description information buffer area.
In some embodiments, the second computing unit may be further configured to perform the following operations to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: determining, from the description information buffer, at least one piece of description information corresponding to at least one target port according to the value of the port number field of each of the plurality of pieces of description information in the description information buffer and the at least one target port having free bandwidth among the plurality of ports; and processing at least one piece of target data in the target storage area or the target cache area according to the at least one piece of description information corresponding to the at least one target port.
In some embodiments, the descriptive information includes at least one of the following fields: an address field and a data amount field.
In some embodiments, the second computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the data amount field is less than or equal to the preset data amount threshold, taking the value of the address field of the description information as the target data.
In some embodiments, the second computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the data amount field is greater than the preset data amount threshold and that the target storage area is located at an internal storage unit, reading the target data from the target storage area.
In some embodiments, the description information further includes an identification field having a value of a first identification value or a second identification value, the first identification value being used to indicate that the target data is to be received, the second identification value being used to indicate that the target data has been received.
In some embodiments, the description information includes first description information and second description information, where the value of the identification field of the first description information is the first identification value and the value of the identification field of the second description information is the second identification value. The second computing unit may be further configured to: generate the second description information corresponding to the first description information in response to the target data corresponding to the first description information having been read.
In some embodiments, the second computing unit may be further configured to perform the following operation to process the target data corresponding to the description information based on the target storage area or the target cache area according to the description information: in response to determining that the value of the identification field is the second identification value, releasing the storage space corresponding to the description information.
It will be appreciated that while the apparatus of the present disclosure has been described above, the electronic device of the present disclosure will be described below.
Fig. 13 is a schematic block diagram of an electronic device according to the present disclosure.
As shown in fig. 13, the apparatus 1300 may include a first data processing device 1310, a second data processing device 1320, and an interface 1330.
The first data processing device 1310 may be the apparatus 910 or the apparatus 1110 described above.
The interface 1330 may be the interface 130 described above.
The second data processing device 1320 may be the apparatus 1020 or the apparatus 1200 described above.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 14 shows a schematic block diagram of an example electronic device 1400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (Read-Only Memory, ROM) 1402 or a computer program loaded from a storage unit 1408 into a random access memory (Random Access Memory, RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other through a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 are connected to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, an optical disk, or the like; and a communication unit 1409 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1401 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), various dedicated artificial intelligence (Artificial Intelligence, AI) computing chips, various computing units running machine learning model algorithms, digital signal processors (Digital Signal Processor, DSP), and any suitable processors, controllers, microcontrollers, etc. The computing unit 1401 performs the respective methods and processes described above, for example, the data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1400 via the ROM 1402 and/or the communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured to perform the data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a cathode ray tube (CRT) display or a liquid crystal display (LCD)) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; this is not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (31)

1. A data processing method, comprising:
writing description information corresponding to target data into a description queue according to a write pointer of the description queue, wherein the description queue is stored in a description queue buffer area, and a storage space of the target data is located in at least one of the description queue buffer area and a target storage area; and
sending an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
2. The method of claim 1, wherein the method further comprises:
determining the storage space of the target data according to the data amount of the target data.
3. The method of claim 2, wherein the determining the storage space of the target data based on the data amount of the target data comprises:
determining a buffer space in the description queue buffer area as the storage space of the target data, in response to determining that the data amount of the target data is less than or equal to a preset data amount threshold.
4. The method of claim 3, wherein the description information includes at least one of the following fields: an address field and a data amount field, and
the determining the buffer space in the description queue buffer area as the storage space of the target data includes:
replacing the value of the address field of the description information with the target data.
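Claims 3 and 4 above describe a small-payload optimization: when the target data is below a preset data amount threshold, the value of the address field is replaced with the data itself, so no separate buffer access is needed. A minimal C sketch of that idea follows; the struct layout, the flag bit, and the 8-byte threshold are illustrative assumptions, not details from the patent.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor layout; the field names and the inline
 * threshold are illustrative, not taken from the claims. */
#define INLINE_THRESHOLD 8u          /* bytes that fit in the address field */
#define DESC_FLAG_INLINE 0x1u

typedef struct {
    uint64_t addr;   /* address field: a pointer, or the payload itself */
    uint32_t len;    /* data amount field */
    uint32_t flags;  /* marks whether the payload was inlined */
} descriptor_t;

/* Build a descriptor: small payloads are copied into the address
 * field (claims 3-4); larger ones keep a reference into the
 * target storage area (claim 5). */
static void make_descriptor(descriptor_t *d, const void *data, uint32_t len)
{
    d->len = len;
    if (len <= INLINE_THRESHOLD) {
        d->addr = 0;
        memcpy(&d->addr, data, len);  /* replace the address value with the data */
        d->flags = DESC_FLAG_INLINE;
    } else {
        d->addr = (uint64_t)(uintptr_t)data;  /* points into target storage */
        d->flags = 0;
    }
}
```

The receiver can then recover a small payload with a single queue read, avoiding an extra round trip to the target storage area.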
5. The method of claim 2, wherein the determining the storage space of the target data based on the data amount of the target data comprises:
in response to determining that the data amount of the target data is greater than a preset data amount threshold, determining a target storage space in the target storage area as the storage space of the target data according to the description information.
6. The method of claim 5, wherein the determining the storage space of the target data further comprises:
in response to determining that the target storage space is located in an external storage unit, writing the target data to the target storage space using a data handling unit.
7. The method of claim 2, wherein the determining the storage space of the target data based on the data amount of the target data comprises:
in response to determining that a free position exists in the description queue, determining the storage space of the target data according to the data amount of the target data.
8. The method of claim 2 or 7, wherein the writing of description information corresponding to the target data to a description queue comprises:
writing the description information corresponding to the target data into the description queue, in response to determining the storage space of the target data and determining again that a free position exists in the description queue.
9. The method of claim 1, wherein the description information further comprises an identification field having a value of a first identification value or a second identification value, the first identification value being used to indicate that the target data is to be received, and the second identification value being used to indicate that the target data has been received.
10. The method of claim 9, wherein the writing of description information corresponding to the target data to the description queue comprises at least one of:
in response to determining that the target data is to be sent, writing first description information corresponding to the target data into the description queue, wherein the value of an identification field of the first description information is a first identification value;
in response to determining that the first description information has been read, writing second description information corresponding to the target data into the description queue, wherein a value of an identification field of the second description information is the second identification value.
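Claims 9 and 10 describe a two-phase handshake carried in an identification field: a first descriptor announces data to be received, and a second descriptor, written only after the first has been read, confirms receipt. The C sketch below illustrates the two branches; the concrete identification values and the descriptor layout are invented for illustration.

```c
#include <stdint.h>

/* Illustrative identification values; the patent only specifies that a
 * first and a second identification value exist, not their encoding. */
enum { ID_TO_BE_RECEIVED = 1, ID_RECEIVED = 2 };

typedef struct {
    uint32_t id;       /* identification field */
    uint32_t payload;  /* stand-in for the rest of the description information */
} descriptor_t;

/* Claim 10, first branch: when target data is to be sent, enqueue a
 * descriptor whose identification field carries the first value. */
static descriptor_t make_first_descriptor(uint32_t payload)
{
    descriptor_t d = { ID_TO_BE_RECEIVED, payload };
    return d;
}

/* Claim 10, second branch: once the first descriptor has been read,
 * enqueue a second descriptor carrying the second value, which lets
 * the receiver release the corresponding storage (compare claim 22). */
static descriptor_t make_second_descriptor(uint32_t payload)
{
    descriptor_t d = { ID_RECEIVED, payload };
    return d;
}
```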
11. The method of claim 1, wherein the description information further includes a port number field, so that the receiving device reads target description information from the description queue using a port having free bandwidth, a value of the port number field of the target description information corresponding to the port having free bandwidth.
12. The method of any one of claims 1 to 11, wherein the description queue buffer area and the target storage area are located in an artificial intelligence acceleration device.
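Taken together, claims 1, 7, and 8 describe a ring-style description queue on the sending side: check for a free position, write the descriptor at the write pointer, advance the pointer, and raise an interrupt toward the receiver. A speculative C sketch follows, with an arbitrary queue depth and one-word descriptors standing in for the full description information.

```c
#include <stdint.h>
#include <stdbool.h>

#define QUEUE_DEPTH 16u  /* illustrative depth; must divide 2^32 evenly */

typedef struct {
    uint64_t desc[QUEUE_DEPTH]; /* simplified one-word descriptors */
    uint32_t wr, rd;            /* free-running write / read pointers */
} desc_queue_t;

/* Claim 7: a free position exists while occupancy is below the depth. */
static bool queue_has_free_slot(const desc_queue_t *q)
{
    return (uint32_t)(q->wr - q->rd) < QUEUE_DEPTH;
}

/* Claims 1 and 8: check for a free position, write the descriptor at
 * the write pointer, advance the pointer, then signal the receiver. */
static bool enqueue_descriptor(desc_queue_t *q, uint64_t d,
                               void (*raise_interrupt)(void))
{
    if (!queue_has_free_slot(q))
        return false;               /* no free position in the queue */
    q->desc[q->wr % QUEUE_DEPTH] = d;
    q->wr++;                        /* advance the write pointer */
    if (raise_interrupt)
        raise_interrupt();          /* instruct the receiver to read */
    return true;
}
```

With free-running pointers, a separate "full" flag is unnecessary: occupancy is simply `wr - rd`, which stays correct across wraparound as long as the depth divides the pointer range.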
13. A data processing method, comprising:
reading description information from a description queue according to a read pointer of the description queue, wherein the description queue is stored in a description queue cache area; and
processing target data corresponding to the description information based on a target storage area or a target cache area according to the description information, wherein the target cache area comprises the description queue cache area.
14. The method of claim 13, wherein the reading description information from the description queue comprises:
reading the description information into a description information buffer area.
15. The method of claim 14, wherein the description information includes a port number field, and the target cache area further includes the description information buffer area,
the processing the target data corresponding to the description information based on the target storage area or the target cache area according to the description information includes:
determining, from the description information buffer area, at least one piece of description information corresponding to at least one target port, according to a value of the port number field of each of a plurality of pieces of description information in the description information buffer area and at least one target port having free bandwidth among a plurality of ports; and
processing at least one piece of target data in the target storage area or the target cache area according to the at least one piece of description information corresponding to the at least one target port.
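Claim 15 has the receiver match buffered descriptors against ports that currently have free bandwidth, using the port number field, so that each idle port receives work. The following C sketch shows one way such a selection could look; the port count and descriptor layout are assumptions made for illustration.

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_PORTS 4u  /* illustrative port count */

typedef struct {
    uint32_t port;     /* port number field */
    uint32_t payload;  /* stand-in for the rest of the description information */
} descriptor_t;

/* Claim 15 sketch: from a buffered batch of descriptors, select those
 * whose port number field matches a port that currently has free
 * bandwidth. Returns the number of descriptors selected into `out`. */
static size_t select_for_free_ports(const descriptor_t *descs, size_t n,
                                    const int port_free[NUM_PORTS],
                                    descriptor_t *out)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t p = descs[i].port;
        if (p < NUM_PORTS && port_free[p])
            out[m++] = descs[i];  /* route this descriptor to an idle port */
    }
    return m;
}
```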
16. The method of claim 13, wherein the description information includes at least one of the following fields: an address field and a data amount field.
17. The method of claim 16, wherein the processing the target data corresponding to the description information based on the target storage area or the target cache area according to the description information comprises:
in response to determining that the value of the data amount field is less than or equal to a preset data amount threshold, taking the value of the address field of the description information as the target data.
18. The method of claim 16, wherein the processing the target data corresponding to the description information based on the target storage area or the target cache area according to the description information comprises:
reading the target data from the target storage area, in response to determining that the value of the data amount field is greater than a preset data amount threshold and that the target storage area is located in an internal storage unit.
19. The method of claim 16, wherein the processing the target data corresponding to the description information based on the target storage area or the target cache area according to the description information comprises:
in response to determining that the value of the data amount field is greater than a preset data amount threshold and that the target storage area is located in an external storage unit, retrieving the target data from the target storage area using a data handling unit.
20. The method of claim 13, wherein the description information further comprises an identification field having a value of a first identification value or a second identification value, the first identification value being used to indicate that the target data is to be received, and the second identification value being used to indicate that the target data has been received.
21. The method of claim 20, wherein the description information includes first description information and second description information, a value of the identification field of the first description information is the first identification value, and a value of the identification field of the second description information is the second identification value,
the method further comprising:
generating the second description information corresponding to the first description information, in response to the target data corresponding to the first description information having been read.
22. The method of claim 20, wherein the processing the target data corresponding to the description information based on the target storage area or the target cache area according to the description information comprises:
in response to determining that the value of the identification field is the second identification value, releasing a storage space corresponding to the description information.
23. The method of any of claims 13 to 22, wherein the target cache area and the target storage area are located in an artificial intelligence acceleration device.
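On the receiving side, claims 13 and 17-18 combine into a simple dispatch: read the descriptor at the read pointer, then take the payload from the address field itself when the data amount is small, or fetch it from the target storage area otherwise. A hedged C sketch (the threshold and the descriptor layout are illustrative assumptions):

```c
#include <stdint.h>
#include <string.h>

#define INLINE_THRESHOLD 8u  /* mirrors a "preset data amount threshold" */

typedef struct {
    uint64_t addr;  /* address field, or the inlined payload */
    uint32_t len;   /* data amount field */
} descriptor_t;

/* Claims 13 and 17-18: read the descriptor at the read pointer, copy
 * an inlined small payload straight out of the address field, or
 * fetch a larger payload from the target storage area; then advance
 * the read pointer. Returns the number of payload bytes written. */
static uint32_t consume_one(const descriptor_t *queue, uint32_t depth,
                            uint32_t *rd, uint8_t *out)
{
    const descriptor_t *d = &queue[*rd % depth];
    if (d->len <= INLINE_THRESHOLD) {
        memcpy(out, &d->addr, d->len);  /* payload inlined in the address field */
    } else {
        memcpy(out, (const void *)(uintptr_t)d->addr, d->len); /* target storage */
    }
    (*rd)++;  /* advance the read pointer */
    return d->len;
}
```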
24. A data processing apparatus comprising:
A first storage unit configured to store target data to be transmitted;
A first computing unit configured to:
writing description information corresponding to the target data into a description queue according to a write pointer of the description queue, wherein the description queue is stored in a description queue buffer area, and a storage space into which the target data is to be written is located in at least one of the description queue buffer area and a target storage area; and
sending an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
25. A data processing apparatus comprising:
A second storage unit comprising a description queue buffer area and a target storage area;
A second computing unit configured to:
writing description information corresponding to target data into a description queue according to a write pointer of the description queue, wherein the description queue is stored in the description queue buffer area, and a storage space into which the target data is to be written is located in at least one of the description queue buffer area and the target storage area; and
sending an interrupt signal corresponding to the target data, wherein the interrupt signal is used for instructing a receiving device to read the description information corresponding to the target data from the description queue.
26. A data processing apparatus comprising:
A first storage unit;
A first computing unit configured to:
reading description information from a description queue according to a read pointer of the description queue, wherein the description queue is stored in a description queue cache area; and
processing target data corresponding to the description information based on a target storage area or a target cache area according to the description information, wherein the target cache area comprises the description queue cache area.
27. A data processing apparatus comprising:
A second storage unit comprising a description queue cache area and a target storage area;
A second computing unit configured to:
reading description information from the description queue cache area according to a read pointer of a description queue stored therein; and
processing target data corresponding to the description information based on the target storage area or a target cache area according to the description information, wherein the target cache area comprises the description queue cache area.
28. An electronic device, comprising:
A first data processing device, being the device of claim 24 or 26;
A second data processing device, being a device as claimed in claim 25 or 27, said second data processing device being connected to said first data processing device via an interface.
29. An electronic device, comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 23.
30. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 23.
31. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 23.
CN202410665681.7A 2024-05-27 2024-05-27 Data processing method, device, electronic equipment and storage medium Pending CN118426967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410665681.7A CN118426967A (en) 2024-05-27 2024-05-27 Data processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410665681.7A CN118426967A (en) 2024-05-27 2024-05-27 Data processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118426967A true CN118426967A (en) 2024-08-02

Family

ID=92323128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410665681.7A Pending CN118426967A (en) 2024-05-27 2024-05-27 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118426967A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119201481A (en) * 2024-11-28 2024-12-27 阿里云计算有限公司 Computing system, data processing method, equipment, device, medium and program product
CN119201481B (en) * 2024-11-28 2025-05-30 阿里云计算有限公司 Computing system, data processing method, device, apparatus, medium, and program product
CN119938004A (en) * 2025-01-20 2025-05-06 昆仑芯(北京)科技有限公司 Processing method, device, electronic device and storage medium for loop code

Similar Documents

Publication Publication Date Title
US11550627B2 (en) Hardware accelerated dynamic work creation on a graphics processing unit
JP7313381B2 (en) Embedded scheduling of hardware resources for hardware acceleration
US9710310B2 (en) Dynamically configurable hardware queues for dispatching jobs to a plurality of hardware acceleration engines
US11290392B2 (en) Technologies for pooling accelerator over fabric
CN118426967A (en) Data processing method, device, electronic equipment and storage medium
JP2011526390A (en) Lazy processing of interrupt message end in virtual environment
KR101693662B1 (en) A method and apparatus for supporting programmable software context state execution during hardware context restore flow
US10983833B2 (en) Virtualized and synchronous access to hardware accelerators
CN114513545B (en) Request processing method, device, equipment and medium
EP4134827A1 (en) Hardware interconnect with memory coherence
CN115202827A (en) Method for processing virtualized interrupt, interrupt controller, electronic device and chip
US10545697B1 (en) Reverse order request queueing by para-virtual device drivers
CN100514362C (en) Switching switch system with independent output and method thereof
KR20250067493A (en) Computing system including multi-core processor and operating method thereof
CN113439260A (en) I/O completion polling for low latency storage devices
CN117992125A (en) Reducing index update messages for memory-based communication queues
CN113032154B (en) Scheduling method and device for virtual CPU, electronic equipment and storage medium
CN115544042A (en) Cached information updating method and device, equipment and medium
EP2798455B1 (en) Direct ring 3 submission of processing jobs to adjunct processors
US12111779B2 (en) Node identification allocation in a multi-tile system with multiple derivatives
US20240233066A1 (en) Kernel optimization and delayed execution
EP4542386A1 (en) Data processing method and apparatus
WO2024188112A1 (en) Task processing method and chip
CN119166565A (en) A SPDK architecture multi-control storage expansion method and device based on memory sharing
CN119597687A (en) Direct memory access controller, processing method, computing device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination