CN112948317A - Multi-node system based on Hlink and processing method - Google Patents
- Publication number
- CN112948317A (application number CN202110188777.5A)
- Authority
- CN
- China
- Prior art keywords
- node
- hlink
- controller
- accelerator
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention relates to the field of multi-node processing systems, and in particular to an Hlink-based multi-node system and processing method, aimed at solving the problem of achieving high-bandwidth, low-latency, tightly coupled interconnection among multiple nodes. The system comprises at least two nodes, each comprising a memory controller, an Hlink controller, and at least one accelerator; the nodes are communicatively connected through their Hlink controllers. The Hlink controller comprises a bus interface, an Hlink protocol stack, a control register, a command register, an address mapping table, a routing table, a data check and recovery unit, and an external interface. The invention achieves efficient inter-node communication and dynamic combination of the hardware accelerator resources of multiple nodes without additional interconnect chips, and can meet diverse workload requirements.
Description
Technical Field
The invention relates to multi-node processing systems, and in particular to an Hlink-based multi-node system and processing method.
Background
At present, data processing nodes are usually connected through an Ethernet switch. Limited by the latency and bandwidth of Ethernet, only restricted communication is possible between nodes, and it is difficult to establish tightly coupled communication between the accelerators of two nodes. In other words, existing data processing chips can only work alone and cannot be interconnected into a larger-scale processing system. Yet the task requirements placed on data processing chips are diverse, and a single hardware structure can hardly meet these diverse workload requirements efficiently and at low cost.
Taking node interconnection through an Ethernet switch as an example (see fig. 7), the process by which accelerator B of node 2 reads data from accelerator A of node 1 is as follows: 1. accelerator A of node 1 transfers the result data to the memory controller; 2. accelerator A of node 1 notifies the CPU via an interrupt that the data is ready; 3. the CPU of node 1 instructs the network card to transmit the data to node 2; 4. the network card of node 1 copies the data from the memory controller to its own buffer via DMA; 5. the network card of node 1 sends the data to the switch over Ethernet; 6. the switch forwards the data to the network card of node 2; 7. after receiving the data, the network card of node 2 copies it to the memory controller via DMA; 8. the network card of node 2 notifies the CPU via an interrupt that the data has been received; 9. the CPU parses the Ethernet message and notifies accelerator B; 10. accelerator B of node 2 transfers the data from the memory controller to its own buffer via DMA, completing the read. This approach has the following disadvantages: low transmission bandwidth and high latency; an additional network switch chip is required; the loose coupling makes it difficult to exploit the advantages of streaming processing; and setup and configuration are complex, requiring powerful management software to coordinate several systems.
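The cost of the ten-step Ethernet path can be illustrated with a toy model (an illustration of the copy chain described above, not actual driver code): the payload is copied through five intermediate buffers before accelerator B sees it, on top of two CPU interrupts.

```python
# Toy model of the prior-art Ethernet path: each named stage is another
# full copy of the payload between accelerator A and accelerator B.
def ethernet_path(data):
    hops = []
    for stage in ("node1_memory", "node1_nic_buffer", "switch",
                  "node2_nic_buffer", "node2_memory"):
        data = list(data)   # a full buffer-to-buffer copy at this stage
        hops.append(stage)
    return data, hops

result, hops = ethernet_path([1, 2, 3])
# five intermediate copies plus two interrupt round-trips through the CPUs
# -- the overhead the Hlink design is meant to remove
```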
Disclosure of Invention
In order to solve the above problems in the prior art, namely how to implement high-bandwidth, low-latency, tightly coupled interconnection among multiple nodes, a first aspect of the present invention provides an Hlink-based multi-node system comprising at least two nodes,
the node comprises a memory controller, an Hlink controller and at least one accelerator; the nodes are in communication connection through an Hlink controller;
the Hlink controller comprises a bus interface, an Hlink protocol stack, a control register, a command register, an address mapping table, a routing table, a data check and recovery unit, and an external interface;
the bus interface is used for communicating the Hlink controller with a module in a local node;
the Hlink protocol stack is used for parsing and transmitting messages, each message comprising the node address and accelerator address to be interconnected;
the control register is used for configuring and initializing the Hlink controller;
the command register is used for command operation of the Hlink controller;
the address mapping table is used for configuring the mapping relationship between the local physical address and the remote physical address of a local accelerator;
the routing table is used for specifying the node address for remote data access;
the data check and recovery unit is used for checking and recovering data inside the Hlink controller;
and the external interface is used for being connected with other nodes.
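The components listed above can be modeled as plain data structures. The following is a hypothetical sketch for illustration only; the class and field names are invented, not taken from the patent's hardware design.

```python
# Illustrative model of the Hlink controller's main components.
from dataclasses import dataclass, field

@dataclass
class HlinkController:
    control_reg: int = 0      # configuration / initialization bits
    command_reg: int = 0      # command operations (e.g. start a DMA transfer)
    address_map: dict = field(default_factory=dict)    # remote phys -> local phys
    routing_table: dict = field(default_factory=dict)  # node address -> external port
    buffer: list = field(default_factory=list)         # buffer space for data

    def translate(self, remote_addr):
        """Map a remote physical address to a local one, if configured."""
        return self.address_map.get(remote_addr)

    def route(self, node_addr):
        """Look up the external interface that reaches the target node."""
        return self.routing_table.get(node_addr)
```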
In one embodiment, the Hlink controller has a buffer space for data buffering.
In an embodiment, the accelerator is configured to generate a message and transmit it to the Hlink protocol stack of the local node while transmitting data to the buffer space of the local node.
In an embodiment, the node further comprises a CPU.
In an embodiment, the Hlink protocol stack is configured to, upon receiving a message from a local accelerator, parse the node address of the node to be connected from the message into the routing table; and, upon receiving a message transmitted by another node, parse the message, convert it into an interrupt signal, and send the interrupt signal to the CPU of the local node.
In one embodiment, the CPU is configured to identify the local accelerator to be accessed and send a data read notification to the local accelerator to be accessed after receiving the interrupt signal.
In one embodiment, the Hlink controller is further configured to send the data to the local memory controller when it receives data transmitted by another node.
In one embodiment, the accelerator is further configured to read data from the local memory controller by direct memory access (DMA) upon receiving the read notification.
In a second aspect of the present invention, there is also provided a processing method for an Hlink-based multi-node system, used for interconnecting a first node and a second node, each node comprising a memory controller, an Hlink controller, and at least one accelerator, the method comprising:
an accelerator of the first node transmits data and a generated message to the Hlink controller of its node for buffering, the message comprising the node addresses of the first and second nodes and an accelerator address;
the Hlink controller of the first node parses the node address of the second node from the message;
the Hlink controller of the first node sends the data and the message to the Hlink controller of the second node according to the node address of the second node;
the Hlink controller of the second node stores the received data in the memory controller of its node, parses the accelerator address of the second node from the message, and sends a data read notification to the accelerator corresponding to that address;
and after receiving the data read notification, the accelerator of the second node reads the data from the memory controller of the second node by direct memory access (DMA).
In a third aspect of the present invention, there is provided a processing method for an Hlink-based multi-node system, used for interconnecting a first node and a second node, each node comprising a CPU, a memory controller, an Hlink controller, and at least one accelerator, the method comprising:
an accelerator of the first node transmits data and a generated message to the Hlink controller of its node for buffering, the message comprising the node addresses of the first and second nodes and an accelerator address;
the Hlink controller of the first node parses the node address of the second node from the message;
the Hlink controller of the first node sends the data and the message to the Hlink controller of the second node according to the node address of the second node;
the Hlink controller of the second node stores the received data in the memory controller of its node, converts the message into an interrupt, and sends it to the CPU of the second node;
the CPU of the second node identifies the accelerator address of the second node and sends a data read notification to the corresponding accelerator;
and after receiving the data read notification, the accelerator of the second node reads the data from the memory controller of the second node by direct memory access (DMA).
The invention has the advantages that:
according to the multilink-based multi-node system and the processing method, efficient communication among nodes and dynamic combination of multi-node hardware accelerator resources can be achieved without additional connecting chips, and the requirement for diversity of loads can be met.
Furthermore, the invention supports various topological structures and can flexibly construct multi-node processing systems.
Furthermore, in the invention the CPU does not participate in processing the Hlink controller's protocol, which greatly saves CPU resources and greatly reduces communication latency.
Drawings
FIG. 1 is a schematic structural diagram of a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the main structure of the Hlink controller of the present invention;
fig. 3 is a schematic diagram illustrating an interconnection effect between a node 1 and an FPGA chip through an Hlink controller in the first embodiment;
FIG. 4 is a schematic structural diagram of a second embodiment of the present invention;
FIG. 5 is a schematic diagram of the local spatial effect that node 2 can map multiple accelerators of node 1 to itself in the second embodiment;
FIG. 6 is a schematic diagram of an Hlink-based multi-node system comprising four nodes;
fig. 7 is a schematic diagram of the interconnection of nodes implemented by ethernet switches in the prior art.
Detailed Description
Referring to fig. 1, fig. 1 illustrates the main structure of an Hlink-based multi-node system according to a first embodiment. As shown in fig. 1, the Hlink-based multi-node system provided by this embodiment includes two nodes. Each node includes a memory controller, an Hlink controller, and at least one accelerator. The nodes are communicatively connected through their Hlink controllers.
Referring to fig. 2, fig. 2 illustrates the main structure of the Hlink controller. As shown in fig. 2, the Hlink controller includes a bus interface, an Hlink protocol stack, a control register, a command register, an address mapping table, a routing table, a data check and recovery unit, and an external interface. The Hlink controller has a buffer space for data buffering. The bus interface is used for communication between the Hlink controller and the modules in the local node. The Hlink protocol stack is used for parsing and transmitting messages, each message comprising the node address and accelerator address to be interconnected. The control register is used for configuration and initialization of the Hlink controller. The command register is used for command operations of the Hlink controller. The address mapping table is used for configuring the mapping relationship between the local physical address and the remote physical address of a local accelerator. For example, suppose the local physical address of local accelerator A is 0x10000 and its remote physical address is 0x50000; an entry is then generated in the address mapping table: source 0x50000, destination 0x10000. When the Hlink controller receives a data request containing the address 0x50000 sent by a remote Hlink controller, the address mapping table converts the address into 0x10000, and the Hlink controller forwards the data request to 0x10000 (local accelerator A). The routing table is used for specifying the node address for remote data access. For example, when the Hlink controller of node 1 needs to send data to node 2, after a message is generated, the external interface (connected to node 2) is determined by looking up the node address in the routing table, and the data is sent out. The data check and recovery unit is used for data checking and recovery inside the Hlink controller.
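The address-mapping and routing lookups described above can be worked through concretely. The snippet below is a minimal sketch of the two table lookups using the 0x50000/0x10000 example from the text; the function names are illustrative.

```python
# Address mapping table: remote (source) physical address -> local
# (destination) physical address, per the worked example in the text.
address_map = {0x50000: 0x10000}

# Routing table: target node address -> external interface of the Hlink
# controller that reaches it (port name assumed for illustration).
routing_table = {2: "external_if_0"}

def translate(addr):
    # On a request from a remote Hlink controller, rewrite the address
    # before forwarding it onto the local bus; unmapped addresses pass through.
    return address_map.get(addr, addr)

def route(node_addr):
    # Select the external interface on which to send a generated message.
    return routing_table.get(node_addr)

# A remote request for 0x50000 is forwarded to local accelerator A at 0x10000.
assert translate(0x50000) == 0x10000
```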
The external interface is used for being connected with other nodes.
As shown in fig. 1, in the present embodiment, a node 1 is provided with a CPU, a memory controller, a network card, an Hlink controller, an accelerator a, an accelerator B, and an accelerator C. It should be noted that the node 1 may not be provided with a CPU and a network card, and whether the node 1 is provided with a CPU and a network card is not limited. Although the node 1 is provided with three accelerators, at least one accelerator may be provided in the node 1, and the number thereof is not limited. The node 2 is an FPGA chip, and a memory controller, an accelerator D and an Hlink controller are configured in the FPGA chip.
With reference to fig. 1, in practical applications, when node 1 needs a fourth accelerator, the accelerator D of the FPGA chip can provide the function. Taking the interconnection of accelerator A of node 1 with accelerator D of the FPGA chip as an example, the flow is as follows: 1. accelerator A of node 1 transfers data to the buffer space of the Hlink controller; 2. the Hlink controller of node 1 sends the data to the Hlink controller of the FPGA chip for buffering; 3. the Hlink controller of the FPGA chip copies the data to the memory controller; 4. the Hlink controller of the FPGA chip notifies accelerator D; 5. accelerator D of the FPGA chip copies the data from the memory controller to its own buffer by direct memory access (DMA). In this way, node 1 and the FPGA chip are interconnected through their Hlink controllers; from the view of the application layer, accelerator D acts as a virtual device of node 1, so that node 1 effectively includes accelerator D (as shown in fig. 3).
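The five steps above can be sketched as a small simulation (assumed behavior, not the actual hardware): accelerator A's data passes through two Hlink buffers and the FPGA's memory controller before accelerator D reads it.

```python
# Toy simulation of the five-step Hlink flow from accelerator A of node 1
# to accelerator D on the FPGA chip; each list() models a buffer copy.
def hlink_flow(data):
    hlink1_buffer = list(data)       # 1. A writes into node 1's Hlink buffer
    hlink2_buffer = list(hlink1_buffer)  # 2. Hlink 1 forwards to the FPGA's Hlink buffer
    memory = list(hlink2_buffer)     # 3. FPGA Hlink copies to the memory controller
    d_notified = True                # 4. FPGA Hlink notifies accelerator D
    d_buffer = list(memory) if d_notified else []  # 5. D reads via DMA
    return d_buffer
```

Compared with the ten-step Ethernet path in the background section, no CPU, network card, or switch appears anywhere in this chain.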
Referring to fig. 4, fig. 4 illustrates the main structure of an Hlink-based multi-node system according to a second embodiment. As shown in fig. 4, the system includes two nodes, each comprising a CPU, a network card, a memory controller, an Hlink controller, and at least one accelerator. The nodes are communicatively connected through their Hlink controllers. In this embodiment, the CPU configures and initializes the Hlink controller through the control register, and operates the Hlink controller through the command register, for example to initiate a DMA data transfer.
Further, the Hlink protocol supports transmission of interrupt messages: the Hlink controller of node 1 can send an interrupt notification to the CPU of node 2 via an interrupt message. After receiving the interrupt message, the Hlink controller of node 2 parses it, converts it into an interrupt signal, and raises an interrupt request to the CPU of node 2. When the CPU of node 2 handles the interrupt, the interrupt-handling software reads the relevant information from the Hlink controller and recognizes that the request was issued by an accelerator of node 1.
Specifically, the accelerator is configured to generate a message and transmit it to the Hlink protocol stack of the local node while transmitting data to the buffer space of the local node. The Hlink protocol stack is configured to, upon receiving a message from a local accelerator, parse the node address of the node to be connected from the message into the routing table; and, upon receiving a message transmitted by another node, parse the message, convert it into an interrupt signal, and send it to the CPU of the local node. The CPU is configured to identify the local accelerator to be accessed after receiving the interrupt signal and to send a data read notification to it. The Hlink controller is further configured to send data to the local memory controller when it receives data transmitted by another node. The accelerator is further configured to read data from the local memory controller by direct memory access (DMA) upon receiving the read notification.
Therefore, the Hlink controller can realize the interrupt mapping and the interrupt message transmission of the remote nodes and realize the interrupt notification mechanism among the nodes. The resolution of the protocol stack is realized by using the Hlink controller, and the CPU does not participate in the process, so that the CPU resource is greatly saved, and the communication delay is greatly reduced. The Hlink controller can realize automatic conversion of equipment addresses among different nodes, realize transparent resource access among the nodes and further realize efficient communication and resource sharing among the computing nodes.
In this embodiment, the two nodes are interconnected by Hlink controllers; the bandwidth of a single Hlink lane reaches 25 Gb/s, and the latency is as low as 10 ns. The Hlink protocol stack is processed entirely by the Hlink controller without CPU involvement, greatly reducing communication latency. In this manner, tightly coupled communication can be established between the accelerators of the two nodes.
The following describes the tight coupling communication established between the accelerators of the two nodes, taking the flow of data exchange between accelerator a of node 1 and accelerator B of node 2 as an example.
Referring to fig. 4, the main flow of the tightly coupled communication established between the accelerators of the two nodes is as follows: 1. accelerator A of node 1 transfers the result data to the buffer of its Hlink controller; 2. the Hlink controller of node 1 sends the data through the Hlink protocol stack to the Hlink controller of node 2 for buffering; 3. the Hlink controller of node 2 copies the data to the memory controller; 4. the Hlink controller of node 2 notifies the CPU via an interrupt that the data is ready; 5. the CPU of node 2 notifies accelerator B; 6. accelerator B of node 2 copies the data from the memory controller to its own buffer by direct memory access (DMA). In this way the two nodes are interconnected through their Hlink controllers. Similarly, node 2 can map the accelerators of node 1 into its local address space through the address mapping table in the Hlink controller, so that the CPU of node 2 can directly access and control these virtual accelerators (as shown in fig. 5).
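This interrupt-driven variant can be sketched as follows (names and dictionary fields are assumptions for illustration): unlike the FPGA flow in the first embodiment, the Hlink controller of node 2 raises an interrupt and the CPU dispatches the read notification to accelerator B.

```python
# Sketch of the six-step tightly coupled flow with CPU interrupt handling
# on the receiving node; node2 is a dict standing in for node 2's state.
def tightly_coupled_send(data, node2):
    node2["hlink_buf"] = list(data)             # 1-2: A -> Hlink 1 -> Hlink 2 buffer
    node2["memory"] = list(node2["hlink_buf"])  # 3: copy to the memory controller
    node2["irq"] = True                         # 4: interrupt the CPU: data ready
    if node2["irq"]:                            # 5: CPU notifies accelerator B
        node2["accel_b_buf"] = list(node2["memory"])  # 6: B reads via DMA
    return node2["accel_b_buf"]
```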
It should be noted that although the embodiments above involve only two nodes, the Hlink-based multi-node system provided by the present invention can interconnect more nodes, and the number of nodes is not limited. Fig. 6 shows a four-node Hlink-based multi-node system. After the hardware of the nodes is assembled, configuration is carried out through a software API: the address mapping table and routing table of each node's Hlink controller are configured through an out-of-band configuration network (for example, Ethernet or an I2C interface), so that the accelerator resources of multiple nodes are organically combined, realizing virtualization and on-demand combination of accelerator modules. Once configuration is complete, the physical accelerator resources of multiple nodes are virtualized into one node, which can use all of them exclusively and efficiently. After use, the accelerator resources can be returned to their initial state by reconfiguring the Hlink controllers.
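The out-of-band configure/release cycle might look like the following sketch. The API is entirely invented for illustration; the patent specifies only that address mapping tables and routing tables are configured over Ethernet or I2C and later reset.

```python
# Hypothetical software-API sketch: push per-node address mappings and
# routes over an out-of-band network, then release them after use.
def configure(nodes, mappings, routes):
    for n in nodes:
        n["address_map"] = dict(mappings.get(n["id"], {}))
        n["routing_table"] = dict(routes.get(n["id"], {}))

def release(nodes):
    # return accelerator resources to the initial (unmapped) state
    for n in nodes:
        n["address_map"].clear()
        n["routing_table"].clear()
```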
Based on the foregoing system embodiment, an embodiment of the present invention further provides a processing method for an Hlink-based multi-node system, used for interconnecting a first node and a second node, each node comprising a memory controller, an Hlink controller, and at least one accelerator, the method comprising:
step S11: an accelerator of the first node transmits data and a generated message to the Hlink controller of its node for buffering, the message comprising the node addresses of the first and second nodes and an accelerator address;
step S12: the Hlink controller of the first node parses the node address of the second node from the message;
step S13: the Hlink controller of the first node sends the data and the message to the Hlink controller of the second node according to the node address of the second node;
step S14: the Hlink controller of the second node stores the received data in the memory controller of its node, parses the accelerator address of the second node from the message, and sends a data read notification to the accelerator corresponding to that address;
step S15: after receiving the data read notification, the accelerator of the second node reads the data from the memory controller of the second node by direct memory access (DMA).
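Steps S11 to S15 can be modeled end to end in a few lines. This is a minimal sketch; the message field names and dictionary layout are assumptions, not the patent's actual wire format.

```python
# Minimal model of steps S11-S15: the message carries the node addresses
# of both nodes plus the target accelerator address.
def hlink_transfer(first_id, second, data, accel_addr):
    # S11: the accelerator hands data and message to its Hlink controller
    msg = {"src_node": first_id, "dst_node": second["id"], "accel": accel_addr}
    # S12-S13: parse the destination node address, forward data + message
    assert msg["dst_node"] == second["id"]
    # S14: store to the second node's memory controller, parse the
    # accelerator address, notify the corresponding accelerator
    second["memory"] = list(data)
    accel = second["accelerators"][msg["accel"]]
    # S15: the notified accelerator reads the data via DMA
    accel["buf"] = list(second["memory"])
    return accel["buf"]
```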
Based on the foregoing system embodiment, an embodiment of the present invention further provides a processing method for an Hlink-based multi-node system, used for interconnecting a first node and a second node, each node comprising a CPU, a memory controller, an Hlink controller, and at least one accelerator, the method comprising:
step S21: an accelerator of the first node transmits data and a generated message to the Hlink controller of its node for buffering, the message comprising the node addresses of the first and second nodes and an accelerator address;
step S22: the Hlink controller of the first node parses the node address of the second node from the message;
step S23: the Hlink controller of the first node sends the data and the message to the Hlink controller of the second node according to the node address of the second node;
step S24: the Hlink controller of the second node stores the received data in the memory controller of its node, converts the message into an interrupt, and sends it to the CPU of the second node;
step S25: the CPU of the second node identifies the accelerator address of the second node and sends a data read notification to the corresponding accelerator;
step S26: after receiving the data read notification, the accelerator of the second node reads the data from the memory controller of the second node by direct memory access (DMA).
In conclusion, with the Hlink-based multi-node system and processing method provided by the invention, no additional chips are required for interconnection among nodes; the Hlink protocol is purpose-designed for high-speed communication and resource access among nodes and is simple and efficient; the Hlink protocol stack is processed in hardware, so the control software is simple; and the transport protocol latency of the Hlink controller is as low as the 10 ns level while its single-lane bandwidth exceeds 25 Gbps, giving it low-latency, high-speed communication characteristics. By contrast, the traditional node interconnection approaches of Ethernet switches and PCIe non-transparent bridges have complex structures, high latency, and small communication bandwidth. With a PCIe non-transparent bridge (NTB), for example, interconnecting multiple nodes requires an additional transparent bridge chip cooperating with multiple non-transparent bridge chips, making the hardware structure complex; the PCIe transport protocol latency is at the microsecond level and the PCIe 4.0 single-lane bandwidth is 16 Gbps, so neither latency nor bandwidth meets the requirements; and the PCIe protocol is complex and bloated, requiring a dedicated software library to use a non-transparent bridge, which complicates the software.
Those skilled in the art will appreciate that the terms first, second, etc. are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The above description covers the preferred embodiment of the present invention and the technical principles applied. It will be apparent to those skilled in the art that any changes and modifications based on equivalent changes and simple substitutions of the technical solution of the present invention fall within its protection scope without departing from its spirit and scope.
Claims (10)
1. An Hlink-based multi-node system, characterized in that the system comprises at least two nodes,
the node comprises a memory controller, an Hlink controller and at least one accelerator; the nodes are in communication connection through an Hlink controller;
the Hlink controller comprises a bus interface, an Hlink protocol stack, a control register, a command register, an address mapping table, a routing table, a data check and recovery unit, and an external interface;
the bus interface is used for communicating the Hlink controller with a module in a local node;
the Hlink protocol stack is used for parsing and transmitting messages, each message comprising the node address and accelerator address to be interconnected;
the control register is used for configuring and initializing the Hlink controller;
the command register is used for command operation of the Hlink controller;
the address mapping table is used for configuring the mapping relation between the local physical address and the remote physical address of the local accelerator;
the routing table is used for specifying the node address for remote data access;
the data check and recovery unit is used for checking and recovering data inside the Hlink controller;
and the external interface is used for being connected with other nodes.
2. The Hlink-based multi-node system of claim 1, wherein the Hlink controller has a buffer space for data buffering.
3. The Hlink-based multi-node system of claim 2, wherein the accelerator is configured to generate the message and transmit it to the Hlink protocol stack of the local node while transmitting data to the buffer space of the local node.
4. The Hlink-based multi-node system of claim 3, wherein the node further comprises a CPU.
5. The Hlink-based multi-node system according to claim 4, wherein the Hlink protocol stack is configured to, upon receiving a message from a local accelerator, parse the node address of the node to be connected from the message into the routing table; and
upon receiving a message transmitted by another node, parse the message, convert it into an interrupt signal, and send the interrupt signal to the CPU of the local node.
6. The Hlink-based multi-node system of claim 5, wherein the CPU is configured to identify a local accelerator to be accessed upon receipt of the interrupt signal, and to send a data read notification to the local accelerator to be accessed.
7. The Hlink-based multi-node system of claim 6, wherein the Hlink controller is further configured to send data to the local memory controller when it receives data transmitted by another node.
8. The Hlink-based multi-node system of claim 7, wherein the accelerator is further configured to read data from the local memory controller by direct memory access upon receiving the read notification.
9. A processing method for an Hlink-based multi-node system, for interconnecting a first node and a second node, each node including a memory controller, an Hlink controller, and at least one accelerator, the method comprising:
an accelerator of the first node transmits data and a generated message to the Hlink controller of that node for caching, wherein the message comprises the node addresses of the first node and the second node and an accelerator address;
the Hlink controller of the first node parses the node address of the second node from the message;
the Hlink controller of the first node sends the data and the message to the Hlink controller of the second node according to the node address of the second node;
the Hlink controller of the second node stores the received data in the memory controller of that node, parses the accelerator address of the second node from the message, and sends a data read notification to the accelerator corresponding to that accelerator address; and
after receiving the data read notification, the accelerator of the second node reads the data from the memory controller of the second node by direct memory access (DMA).
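The claim-9 transfer path above can be simulated end to end: accelerator to local Hlink controller, controller-to-controller forwarding by node address, store into the remote memory controller, direct notification of the target accelerator, and a DMA-style read. This is a minimal sketch under stated assumptions; every class and method name here is hypothetical.

```python
class MemoryController:
    def __init__(self):
        self.mem = {}
    def write(self, addr, data):
        self.mem[addr] = data
    def read(self, addr):
        return self.mem[addr]

class Accelerator:
    def __init__(self, addr, node):
        self.addr, self.node, self.received = addr, node, None
    def send(self, dst_node, dst_accel, data):
        # Step 1: pass data and a generated message to the local Hlink
        # controller for caching.
        msg = {"src_node": self.node.node_id, "dst_node": dst_node,
               "accel_addr": dst_accel}
        self.node.hlink.cache(data, msg)
    def on_read_notification(self, buf_addr):
        # Step 5: read the data from the local memory controller (DMA-style).
        self.received = self.node.mem_ctrl.read(buf_addr)

class HlinkController:
    def __init__(self, node):
        self.node, self.fabric = node, {}   # fabric: node addr -> peer controller
    def cache(self, data, msg):
        # Steps 2-3: parse the destination node address and forward.
        self.fabric[msg["dst_node"]].receive(data, msg)
    def receive(self, data, msg):
        # Step 4: store data locally, parse the accelerator address,
        # and send the data read notification directly (claim 9).
        buf_addr = 0x1000
        self.node.mem_ctrl.write(buf_addr, data)
        self.node.accels[msg["accel_addr"]].on_read_notification(buf_addr)

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.mem_ctrl = MemoryController()
        self.hlink = HlinkController(self)
        self.accels = {}

n0, n1 = Node(0), Node(1)
n0.hlink.fabric[1] = n1.hlink
n1.hlink.fabric[0] = n0.hlink
a0 = Accelerator(0, n0); n0.accels[0] = a0
a1 = Accelerator(1, n1); n1.accels[1] = a1

a0.send(dst_node=1, dst_accel=1, data=b"payload")
print(a1.received)  # b'payload'
```

Note the design point that distinguishes claim 9: the receiving Hlink controller notifies the accelerator itself, so no CPU is on the data path.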
10. A processing method for an Hlink-based multi-node system, for interconnecting a first node and a second node, each node including a CPU, a memory controller, an Hlink controller, and at least one accelerator, the method comprising:
an accelerator of the first node transmits data and a generated message to the Hlink controller of that node for caching, wherein the message comprises the node addresses of the first node and the second node and an accelerator address;
the Hlink controller of the first node parses the node address of the second node from the message;
the Hlink controller of the first node sends the data and the message to the Hlink controller of the second node according to the node address of the second node;
the Hlink controller of the second node stores the received data in the memory controller of that node, converts the message into an interrupt, and sends the interrupt to the CPU of the second node;
the CPU of the second node identifies the accelerator address of the second node and sends a data read notification to the accelerator corresponding to that accelerator address; and
after receiving the data read notification, the accelerator of the second node reads the data from the memory controller of the second node by direct memory access (DMA).
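Claim 10 differs from claim 9 only at the receiving end: the Hlink controller raises an interrupt, and the CPU identifies the target accelerator and issues the data read notification. The sketch below isolates that receive path; all names are hypothetical and the 0x2000 buffer address is an arbitrary assumption.

```python
class Accelerator:
    def __init__(self):
        self.received = None
    def on_read_notification(self, mem, addr):
        # Final step: DMA-style read from the memory controller.
        self.received = mem[addr]

class CPU:
    def __init__(self, accels, mem):
        self.accels, self.mem = accels, mem
    def on_interrupt(self, accel_addr, buf_addr):
        # Claim 10: the CPU identifies the target accelerator from the
        # interrupt payload and forwards the data read notification.
        self.accels[accel_addr].on_read_notification(self.mem, buf_addr)

class HlinkController:
    def __init__(self, cpu, mem):
        self.cpu, self.mem = cpu, mem
    def receive(self, data, msg):
        # Store the received data in the memory controller, then convert
        # the message into an interrupt for the CPU (not a direct notify).
        buf_addr = 0x2000
        self.mem[buf_addr] = data
        self.cpu.on_interrupt(msg["accel_addr"], buf_addr)

mem = {}
accel = Accelerator()
cpu = CPU({7: accel}, mem)
ctrl = HlinkController(cpu, mem)
ctrl.receive(b"remote-data", {"accel_addr": 7})
print(accel.received)  # b'remote-data'
```

The trade-off between the two variants: claim 9's direct notification avoids interrupt latency, while claim 10's CPU-mediated path lets software arbitrate which accelerator services the transfer.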
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110188777.5A CN112948317A (en) | 2021-02-19 | 2021-02-19 | Multi-node system based on Hlink and processing method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112948317A true CN112948317A (en) | 2021-06-11 |
Family ID: 76244201
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110188777.5A Withdrawn CN112948317A (en) | 2021-02-19 | 2021-02-19 | Multi-node system based on Hlink and processing method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112948317A (en) |
2021-02-19: application CN202110188777.5A filed in China (published as CN112948317A; status: not active, withdrawn)
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WW01 | Invention patent application withdrawn after publication | Application publication date: 2021-06-11 |