CN116755624A - Communication method and system based on FC equipment multi-partition independent cache - Google Patents
- Publication number
- CN116755624A (application CN202310743999.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- dma
- fpga
- ddr
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention provides a communication method and system based on multi-partition independent caching of an FC device, comprising an FPGA (field programmable gate array) arranged in the FC device and a DDR memory connected to the FPGA. The FC device realizes data communication between an opposite-end device and a destination cache partition through the FPGA. The direction of data flow from the destination cache partition to the opposite end is the sending direction; the reverse is the receiving direction. In the receiving direction, after receiving an IU, the FPGA caches it in small-queue mode by default, segmenting the IU according to the dma_buffer size, pushing the segments into corresponding blocks in the DDR for caching, and reading them out when the conditions are met; when a preset condition is met, it switches to large-queue mode, in which IUs are spliced into the same DDR block used for large-queue caching, an aging timeout triggers reporting, and the data stored in the block are read out. The communication method of the invention can fully improve cache utilization, adaptively and dynamically adjust the number of DDR memory blocks, dynamically resolve multi-channel concurrency, and improve the overall performance of the system.
Description
Technical Field
The invention relates to the technical field of FC (Fibre Channel) networks, and in particular to a communication method and system based on multi-partition independent caching of an FC device.
Background
Onboard data bus technology is an interconnection technology for onboard devices, subsystems, and modules. From a computing perspective, each avionics device is equivalent to a microcomputer, and bus communication technology provides the channels and links connecting these microcomputers so that the avionics devices form a fully functional network. FC (Fibre Channel) is a channel standard proposed by the American National Standards Institute (ANSI) in 1988 to meet the growing demand for high-speed data channels inside aerospace vehicles. Its serial transmission rate ranges from 133 megabaud to 1.0625 gigabaud, and it is the main implementation of current onboard avionics communication networks, meeting their high-speed communication requirements.
FC is a standards-based network architecture that combines the advantages of channels and networks: mainstream channel standards and network protocols can run over the same physical interface. Its large data throughput enables bulk data transfer between different onboard systems, and the same equipment can be arranged in arbitrary topologies to meet different connection requirements, such as point-to-point networks, arbitrated loops, and switched fabrics, realizing high-speed communication between node devices.
In an FC communication topology, a node device is typically configured with one or more Fibre Channel ports (FC ports). The node device can connect to a host through a PCIE interface and exchange messages with it, for example sending service messages to the host and receiving the host's configuration of the node device (FPGA, DDR storage, optical signal processing, and so on); the host accesses the FC topology network through an FC port.
The FPGA chip is the key device for realizing node communication in a node device, and its performance directly determines whether two communicating parties can connect and communicate normally. In the increasingly complex application environment of modern onboard equipment, the onboard communication topology grows more complex and the volume of data transmitted in the network increases geometrically; in particular, transmission is dynamic and concurrent. Improving the overall performance of node devices in the FC network, fulfilling their role in it, and improving data processing efficiency are therefore key problems to be solved by current onboard avionics FC network communication systems.
Disclosure of Invention
The invention provides a communication method based on multi-partition independent cache of FC equipment, which comprises the following steps:
after the FPGA of the FC device is powered on, in a register sub-module of the RX_TOP module in the FPGA, dividing the DDR connected to the FPGA into blocks grouped into buffer areas for the large queue, the small queue, and dynamic scheduling, wherein the size of each block equals the dma_buffer size of the destination cache area, and the base address of each block is stored in a FIFO queue of the FPGA; the sum of the sizes of the large-queue, small-queue, and dynamic-scheduling buffer areas is less than or equal to the DDR capacity;
in the sending direction of data flow, from the destination cache partition to the opposite-end device, data are stored in the dma_buffers of the destination cache partition according to four priorities pri0 to pri3; when a partition sends data, the dma_buffers of the corresponding priority of that partition are pushed to the DMA_TOP module of the FPGA, which parses them into bare data matched with corresponding descriptors and pushes the bare data to the TX_TOP module of the FPGA; the TX_TOP module assembles and packs the received data into data frames conforming to the FC frame protocol, which are finally sent to the opposite-end device through the FC_MAC module of the FPGA;
in the receiving direction of data flow, from the opposite-end device to the destination cache partition, data sent from the opposite-end device flow in through the FC_MAC module of the FPGA, are parsed into bare data matched with corresponding descriptors by the RX_TOP module of the FPGA, and are then stored in the DDR connected to the FPGA in either large-queue or small-queue mode; once the receiving direction satisfies the condition for initiating DMA, the RX_TOP module reads the data from the DDR, sends them to the DMA_TOP module, and finally delivers them by DMA into the dma_buffer of the corresponding priority of the destination cache partition;
in the receiving direction of data flow, after receiving an IU, the FPGA caches it in small-queue mode by default, segmenting the IU according to the dma_buffer size and pushing the segments into corresponding blocks in the DDR for caching, and reading them out when the conditions are met; and
when a preset condition is met, switching to large-queue mode, in which IUs are spliced into the same DDR block used for large-queue caching; if, after a predetermined period, the block's memory is still not full and no new IU has been received, reporting is triggered by an aging timeout and the data stored in the block are read out.
In an alternative embodiment, when the large queue mode is used for caching, the IUs of different channels with the same priority under the same partition are spliced in the same block of the DDR in a splicing mode.
In an alternative embodiment, in unicast mode, if the received IU contains data of multiple frames, the default small-queue mode is used: the received IU is segmented according to the dma_buffer size and pushed into the blocks divided in the DDR for caching. When the eop flag in the IU is recognized, or when no new data has been received in the block after a predetermined period and a timeout mechanism is triggered, the content of the block is read out and pushed to the DMA_TOP module, which delivers the read data by DMA into the dma_buffer of the corresponding priority of the destination cache partition.
In an alternative embodiment, in multicast mode, if the received IU contains data of multiple frames, the default small-queue mode is used: the IU is segmented according to the dma_buffer size and pushed into corresponding blocks in the DDR for caching. After the data are read out of the DDR, a copy of the data is made in the DMA_TOP module for each destination, and the copies are then sent to the different destination cache partitions respectively.
In an alternative embodiment, if the received IU contains only one frame of data, the received IU is buffered by switching from the default small queue mode to the large queue mode.
In an alternative embodiment, when all blocks in the small-queue buffer area are occupied, the FPGA schedules blocks from the dynamic-scheduling buffer area for caching newly arriving IUs.
In an alternative embodiment, when the FPGA divides the DDR into blocks, an occupation threshold is set for the blocks of the small-queue buffer area; if the number of occupied blocks in the small-queue buffer area reaches the occupation threshold, blocks from the dynamic-scheduling buffer area are supplemented for caching subsequently arriving IUs; and
when the number of occupied blocks in the small-queue buffer area falls below a set release threshold, the supplemented blocks are released and recycled to the dynamic-scheduling buffer area.
According to a second aspect of the object of the present invention, there is also provided a communication system based on FC device multi-partition independent cache, comprising:
the FPGA is arranged in the FC equipment, and data communication and transmission of the FC equipment between the opposite terminal equipment and the target cache partition are realized through the FPGA;
DDR connected with the FPGA;
the FPGA is internally configured with a DMA_TOP module, a TX_TOP module, an RX_TOP module, and an FC_MAC module; the DMA_TOP module performs DMA (direct memory access) processing; the TX_TOP module packs the data sent by the DMA_TOP module into frames conforming to the FC protocol and sends them to the opposite-end device through the FC_MAC module; meanwhile, the FC_MAC module receives the data sent by the opposite-end device, parses them, and passes them to the RX_TOP module, which controls caching of the data in the DDR, reads the data out of the DDR when preset conditions are met, sends them to the DMA_TOP module, and delivers them by DMA into the dma_buffer of the corresponding priority of the destination cache partition;
after the FPGA of the FC device is powered on, in a register sub-module of the RX_TOP module in the FPGA, the DDR is divided into blocks grouped into buffer areas for the large queue, the small queue, and dynamic scheduling, wherein the size of each block equals the dma_buffer size of the destination cache area, and the base address of each block is stored in the FIFO queue of the FPGA; the sum of the sizes of the large-queue, small-queue, and dynamic-scheduling buffer areas is less than or equal to the DDR capacity;
in the sending direction of data flow, from the destination cache partition to the opposite-end device, data are stored in the dma_buffers of the destination cache partition according to four priorities pri0 to pri3; when a partition sends data, the dma_buffers of the corresponding priority of that partition are pushed to the DMA_TOP module of the FPGA, which parses them into bare data matched with corresponding descriptors and pushes the bare data to the TX_TOP module of the FPGA; the TX_TOP module assembles and packs the received data into data frames conforming to the FC frame protocol, which are finally sent to the opposite-end device through the FC_MAC module of the FPGA;
in the receiving direction of data flow, from the opposite-end device to the destination cache partition, data sent from the opposite-end device flow in through the FC_MAC module of the FPGA, are parsed into bare data matched with corresponding descriptors by the RX_TOP module of the FPGA, and are then stored in the DDR connected to the FPGA in either large-queue or small-queue mode; once the receiving direction satisfies the condition for initiating DMA, the RX_TOP module reads the data from the DDR, sends them to the DMA_TOP module, and finally delivers them by DMA into the dma_buffer of the corresponding priority of the destination cache partition;
in the receiving direction of data flow, after receiving an IU, the FPGA caches it in small-queue mode by default, segmenting the IU according to the dma_buffer size and pushing the segments into corresponding blocks in the DDR for caching, and reading them out when the conditions are met; and
when a preset condition is met, switching to large-queue mode, in which IUs are spliced into the same DDR block used for large-queue caching; if, after a predetermined period, the block's memory is still not full and no new IU has been received, reporting is triggered by an aging timeout and the data stored in the block are read out.
As an optional embodiment, the rx_top module is configured with a register sub-module, a write control module, and a read control module, where the write control module is divided into two sub-modules, namely a large queue write module and a small queue write module, which are respectively used for write operation in the large queue mode and write operation in the small queue mode;
when IU flows into RX_TOP module from FC_MAC module, RX_TOP module firstly completes analysis of frame, analyzes IU into bare data of frame and descriptor of corresponding frame;
the write control module judges from the descriptor whether the data frame belongs to the large queue or the small queue and enters the large-queue or small-queue write module accordingly, reads an allocated DDR block address from the register sub-module, writes the IU data into that DDR block, and writes the descriptor of the data frame into the register sub-module for storage;
when the read control module recognizes that descriptor information is stored in the register sub-module, it reads the descriptor information, reads the content of the DDR block according to it, and sends the data to the downstream DMA_TOP module; after a block has been read, the read control module releases the address of the DDR block back to the register sub-module for subsequent recycling;
after the FC_MAC module receives an IU, the IU is cached in small-queue mode by default: it is segmented according to the dma_buffer size and pushed into corresponding blocks in the DDR for caching, and read out when the conditions are met; when a preset condition is met, caching switches to large-queue mode, in which IUs are spliced into the same DDR block used for large-queue caching; if, after a predetermined period, the block's memory is still not full and no new IU has been received, reporting is triggered by an aging timeout and the data stored in the block are read out, wherein, when large-queue mode is used, IUs of different channels with the same priority under the same partition are spliced into the same DDR block.
As an alternative embodiment, if the received IU contains only one frame of data, the received IU is buffered by switching from the default small queue mode to the large queue mode.
According to the technical scheme above, the FC-device-based multi-partition large/small-queue communication method provided by the invention applies a large/small-queue data processing mode to data inside the FPGA of the FC device, improving the data processing capacity of the system; external DDR storage saves resources inside the FPGA, suits different transmission scenarios, and can improve the data processing efficiency of the system. Meanwhile, on the basis of multi-partition communication of the FC device, data packets are segmented inside the FPGA and the large- or small-queue mode is selected adaptively, which on one hand fully improves cache utilization, and on the other hand dynamically resolves multi-channel concurrency, improving the overall performance of the system.
It should be understood that all combinations of the foregoing concepts, as well as additional concepts described in more detail below, may be considered a part of the inventive subject matter of the present disclosure as long as such concepts are not mutually inconsistent. In addition, all combinations of claimed subject matter are considered part of the disclosed inventive subject matter.
The foregoing and other aspects, embodiments, and features of the present teachings will be more fully understood from the following description, taken together with the accompanying drawings. Other additional aspects of the invention, such as features and/or advantages of the exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of the embodiments according to the teachings of the invention.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the invention will now be described, by way of example, with reference to the accompanying drawings.
FIG. 1 is a system block diagram of a communication system based on FC device multi-partition independent caching in accordance with an embodiment of the present invention.
FIG. 2 is a diagram of a small queue mode data store according to an embodiment of the invention.
FIG. 3 is a schematic diagram of a large queue mode data store according to an embodiment of the invention.
FIG. 4 is a flow chart of a small queue write data of an embodiment of the invention.
FIG. 5 is a flow chart of a large queue write data of an embodiment of the invention.
FIG. 6 is a flow chart of read data processing according to an embodiment of the present invention.
Detailed Description
For a better understanding of the technical content of the present invention, specific examples are set forth below, along with the accompanying drawings.
Aspects of the invention are described in this disclosure with reference to the drawings, in which are shown a number of illustrative embodiments. The embodiments of the present disclosure are not necessarily intended to include all aspects of the invention. It should be understood that the various concepts and embodiments described above, as well as those described in more detail below, may be implemented in any of a number of ways, as the disclosed concepts and embodiments are not limited to any implementation. Additionally, some aspects of the disclosure may be used alone or in any suitable combination with other aspects of the disclosure.
The communication system based on FC device multi-partition independent caching, in combination with the examples shown in fig. 1, 2, and 3, comprises an FPGA (field programmable gate array) arranged in the FC device and a DDR (double data rate synchronous dynamic random access memory) connected to the FPGA. The FC device realizes data communication and transmission between the opposite-end device and the destination cache partition through the FPGA.
In embodiments of the invention, the FC device is specifically an FC switch.
As shown in fig. 1, the direction of data flow from the destination cache partition to the opposite end is the sending direction, and the reverse is the receiving direction. The relationships, links, and state changes between the modules are described below through the sending and receiving of the data flow.
Referring to fig. 1, the FPGA is configured with a DMA_TOP module, a TX_TOP module, an RX_TOP module, and an FC_MAC module. The DMA_TOP module performs DMA (direct memory access) processing. The TX_TOP module packs the data sent by the DMA_TOP module into frames conforming to the FC protocol and sends them to the opposite-end device through the FC_MAC module. Meanwhile, the FC_MAC module receives the data sent by the opposite-end device, parses them, and passes them to the RX_TOP module, which controls caching of the data in the DDR, reads the data out of the DDR when preset conditions are met, sends them to the DMA_TOP module, and delivers them by DMA into the dma_buffer of the corresponding priority of the destination cache partition.
In the sending direction of data flow, from the destination cache partition to the opposite-end device, data are stored in the dma_buffers of the destination cache partition according to four priorities pri0 to pri3. When a partition sends data, the dma_buffers of the corresponding priority of that partition are pushed to the DMA_TOP module of the FPGA, which parses them into bare data matched with corresponding descriptors and pushes the bare data to the TX_TOP module of the FPGA. The TX_TOP module assembles and packs the received data into data frames conforming to the FC frame protocol, which are finally sent to the opposite-end device through the FC_MAC module of the FPGA.
In the receiving direction of data flow, from the opposite-end device to the destination cache partition, data sent from the opposite-end device flow in through the FC_MAC module of the FPGA, are parsed into bare data matched with corresponding descriptors by the RX_TOP module of the FPGA, and are then stored in the DDR connected to the FPGA in either large-queue or small-queue mode. Once the receiving direction satisfies the condition for initiating DMA, the RX_TOP module reads the data from the DDR, sends them to the DMA_TOP module, and finally delivers them by DMA into the dma_buffer of the corresponding priority of the destination cache partition.
In the receiving direction of data flow, after receiving an IU, the FPGA caches it in small-queue mode by default, segmenting the IU according to the dma_buffer size and pushing the segments into corresponding blocks in the DDR for caching, and reading them out when the conditions are met; and
when a preset condition is met, caching switches to large-queue mode, in which IUs are spliced into the same DDR block used for large-queue caching; if, after a predetermined period, the block's memory is still not full and no new IU has been received, reporting is triggered by an aging timeout and the data stored in the block are read out.
Since the communication mode designed by the invention is multi-partition transceiving, multiple cache partitions receive data in the receiving direction. Each cache partition sets aside dma_buffers of fixed size and pushes the corresponding dma_buffer addresses to the FPGA. Therefore, after IU (Information Unit) data are received in the FPGA, the IU is segmented according to the dma_buffer size, and a complete IU is uploaded in several transfers; the destination cache partition receives the data in multiple dma_buffers and splices them back into a complete IU.
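The segment-and-splice behavior described above can be sketched as follows. This is a minimal Python model, not the FPGA implementation: the dma_buffer size of 2048 bytes is an illustrative assumption, not a value taken from the patent.

```python
# Model of the receive path: an IU larger than one dma_buffer is split into
# dma_buffer-sized segments, uploaded piecewise, and re-spliced into a
# complete IU on the destination-partition side.

DMA_BUFFER_SIZE = 2048  # assumed dma_buffer size, in bytes

def segment_iu(iu: bytes, buf_size: int = DMA_BUFFER_SIZE) -> list:
    """Split one IU into dma_buffer-sized segments (the last may be shorter)."""
    return [iu[i:i + buf_size] for i in range(0, len(iu), buf_size)]

def splice_iu(segments: list) -> bytes:
    """Destination side: splice the received dma_buffers back into one IU."""
    return b"".join(segments)

iu = bytes(range(256)) * 20            # a 5120-byte IU spanning three buffers
segs = segment_iu(iu)
assert len(segs) == 3 and splice_iu(segs) == iu
```

The round-trip assertion at the end checks that segmentation followed by splicing reproduces the original IU exactly, which is the invariant the multi-transfer upload relies on.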
In combination with the example shown in fig. 1, the rx_top module is configured with a register sub-module, a write control module, and a read control module, where the write control module is divided into two sub-modules, i.e., a large queue write module and a small queue write module, which are respectively used for writing operations in the large queue mode and the small queue mode.
When an IU flows from the FC_MAC module into the RX_TOP module, the RX_TOP module first completes parsing of the frame, parsing the IU into the frame's bare data and the corresponding frame's descriptor.
The write control module judges from the descriptor whether the data frame belongs to the large queue or the small queue and enters the large-queue or small-queue write module accordingly, reads an allocated DDR block address from the register sub-module, writes the IU data into that DDR block, and writes the descriptor of the data frame into the register sub-module for storage.
When the read control module recognizes that descriptor information is stored in the register sub-module, it reads the descriptor information, reads the content of the DDR block according to it, and sends the data to the downstream DMA_TOP module; after a block has been read, the read control module releases the address of the DDR block back to the register sub-module for subsequent recycling.
After the FC_MAC module receives an IU, the IU is cached in small-queue mode by default: it is segmented according to the dma_buffer size and pushed into corresponding blocks in the DDR for caching, and read out when the conditions are met. When a preset condition is met, caching switches to large-queue mode, in which IUs are spliced into the same DDR block used for large-queue caching; if, after a predetermined period, the block's memory is still not full and no new IU has been received, reporting is triggered by an aging timeout and the data stored in the block are read out. When large-queue mode is used, IUs of different channels with the same priority under the same partition are spliced into the same DDR block.
In the embodiment of the invention, judgment is carried out according to the data of the IU, and if the received IU only contains the data of one frame, the received IU is switched from a default small queue mode to a large queue mode to be cached.
When the large queue mode is used for caching, the IUs of different channels with the same priority under the same partition are spliced in the same block of the DDR in a splicing mode.
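The large-queue behavior above can be illustrated with a small Python model: IUs are spliced into one shared block, and an aging timer flushes the block if it is still not full after a timeout. The block size (4096 bytes) and timeout (5 ticks) are assumed values chosen for the sketch, not figures from the patent.

```python
# Model of the large-queue path: single-frame IUs of the same partition and
# priority are spliced into one shared DDR block; the block is read out
# either when it fills or when the aging timeout fires.

BLOCK_SIZE = 4096    # assumed DDR block size
AGING_TIMEOUT = 5    # assumed aging timeout, in arbitrary ticks

class LargeQueueBlock:
    def __init__(self):
        self.data = bytearray()
        self.idle_ticks = 0

    def push_iu(self, iu: bytes):
        """Splice an IU into the block; return the block's data if it fills."""
        self.data += iu
        self.idle_ticks = 0
        if len(self.data) >= BLOCK_SIZE:
            return self._flush()
        return None

    def tick(self):
        """Advance the aging timer; flush on timeout if data are pending."""
        self.idle_ticks += 1
        if self.data and self.idle_ticks >= AGING_TIMEOUT:
            return self._flush()   # aging-timeout report
        return None

    def _flush(self) -> bytes:
        out, self.data = bytes(self.data), bytearray()
        self.idle_ticks = 0
        return out

blk = LargeQueueBlock()
assert blk.push_iu(b"x" * 1000) is None       # block not yet full
for _ in range(AGING_TIMEOUT - 1):
    assert blk.tick() is None                 # still waiting for more IUs
assert blk.tick() == b"x" * 1000              # flushed by aging timeout
```

The final assertions walk through exactly the case the text describes: the block never fills, no new IU arrives, and the aging timeout forces the partial block to be reported and read out.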
As an alternative embodiment, in unicast mode, if the received IU contains data of multiple frames, the default small-queue mode is used: the received IU is segmented according to the dma_buffer size and pushed into the blocks divided in the DDR for caching. When the eop flag in the IU is recognized, or when no new data has been received in the block after a predetermined period and a timeout mechanism is triggered, the content of the block is read out and pushed to the DMA_TOP module, which delivers the read data by DMA into the dma_buffer of the corresponding priority of the destination cache partition.
As an alternative embodiment, in multicast mode, if the received IU contains data of multiple frames, the default small-queue mode is used: the IU is segmented according to the dma_buffer size and pushed into corresponding blocks in the DDR for caching. After the data are read out of the DDR, a copy of the data is made in the DMA_TOP module for each destination, and the copies are then sent to the different destination cache partitions respectively.
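The multicast copy step reduces to a simple fan-out: one IU is read out of DDR once and duplicated per destination. The sketch below models only this step; the partition names are hypothetical.

```python
# Model of the multicast step in the DMA stage: the single IU read out of
# DDR is copied once per destination cache partition and delivered to each.

def multicast_dma(iu: bytes, partitions: list) -> dict:
    """Copy the read-out IU to every destination cache partition."""
    return {p: bytes(iu) for p in partitions}

out = multicast_dma(b"payload", ["part0", "part2", "part3"])
assert list(out) == ["part0", "part2", "part3"]
assert all(v == b"payload" for v in out.values())
```

The point of the design, as the text notes, is that DDR is read only once per multicast IU; duplication happens downstream in the DMA stage rather than by re-reading the cache.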
As an alternative embodiment, if the received IU contains only one frame of data, the received IU is buffered by switching from the default small queue mode to the large queue mode.
When all blocks in the small-queue buffer area are occupied, the FPGA schedules blocks from the dynamic-scheduling buffer area for caching newly arriving IUs.
When the FPGA divides the DDR into blocks, an occupancy threshold is set for the blocks in the buffer area for the small queue; if the number of occupied blocks in that area reaches the occupancy threshold, blocks from the dynamic scheduling buffer area are brought in as a supplement for buffering subsequently arriving IUs. When the number of occupied blocks in the buffer area for the small queue falls below a set release threshold, the supplemented blocks are released and recycled back to the dynamic scheduling buffer area.
The invention is further described below with reference to specific examples.
In the embodiment of the invention, after the FPGA of the FC device is powered on, the register sub-module of the RX_TOP module in the FPGA divides the DDR into blocks, partitioning buffer areas for the large queue, the small queue and dynamic scheduling respectively. The size of each block is the same as the dma_buffer size of the target buffer area, and the base address of each block is stored in a FIFO queue of the FPGA; when data enter the DDR for storage, one base address is read out for use, and when the data of a block are read out of the DDR, its base address is released back into the FIFO queue for subsequent cyclic use. The sum of the sizes of the buffer areas for large queue buffering, small queue buffering and dynamic scheduling is less than or equal to the capacity of the DDR.
In combination with the examples shown in fig. 1, fig. 2 and fig. 3, the DDR memory is divided into three parts: the first part is used by the small queue, the second part by the large queue, and the third part by dynamic scheduling.
After the system is powered on, the FPGA divides the DDR into the three parts in the register sub-module of the RX_TOP module, with the size of each block equal to the size of dma_buffer. At initialization, assume the FPGA allocates A blocks (A1-An), B blocks (B1-Bn) and C blocks (C1-Cn) to the three parts; their total capacity is less than or equal to the capacity of the DDR. The base address of each block is stored in a FIFO of the FPGA: when data enter the DDR for storage, one address is read out for use, and when the data of a block are read out of the DDR, the address is released back into the corresponding FIFO for subsequent recycling.
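The block division and base-address FIFOs described above can be modeled in software as three free lists of block addresses. The sketch below is illustrative only: the sizes and counts (`DMA_BUFFER_SIZE`, `NUM_A`, `NUM_B`, `NUM_C`) are hypothetical, since the invention leaves them configurable:

```python
from collections import deque

# Hypothetical sizes; the invention only requires each block to equal dma_buffer.
DMA_BUFFER_SIZE = 2048          # bytes, assumed
NUM_A, NUM_B, NUM_C = 8, 8, 4   # blocks for small queue, large queue, dynamic scheduling

class BlockPool:
    """Free list of DDR block base addresses, mirroring one of the FPGA's FIFO queues."""
    def __init__(self, base, count, block_size):
        self.fifo = deque(base + i * block_size for i in range(count))

    def acquire(self):
        # Read one base address out of the FIFO when data enter the DDR.
        return self.fifo.popleft() if self.fifo else None

    def release(self, addr):
        # Release the address back once the block's data have been read out.
        self.fifo.append(addr)

# Three independent pools: part A (small queue), B (large queue), C (dynamic scheduling).
pool_a = BlockPool(0x0000_0000, NUM_A, DMA_BUFFER_SIZE)
pool_b = BlockPool(NUM_A * DMA_BUFFER_SIZE, NUM_B, DMA_BUFFER_SIZE)
pool_c = BlockPool((NUM_A + NUM_B) * DMA_BUFFER_SIZE, NUM_C, DMA_BUFFER_SIZE)
```

An empty pool returning `None` corresponds to the "no address released back yet" condition that triggers dynamic scheduling in section 1.3 below.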
1. Small queue storage mode (default mode)
1.1 Unicast: in the embodiment of the present invention, the small queue storage mode is provided for the case in which one IU contains a plurality of frames. The storage mode of the small queue in the DDR is shown in fig. 2, and the data writing processing flow is shown in fig. 4. If unicast is initiated, that is, when the data of one channel are sent to only one partition, the IU is split and stored according to the size of a DDR block (that is, the size of dma_buffer).
For example, assume that the IU occupies three blocks A1 to A3. Normally, when the eop flag of the IU is recognized, representing the end of the IU, the read control module reads the data according to the flow of fig. 6.
For example, if part of the IU's data is stored in block A2 but A2 is only half full, and no subsequent data arrive for a long time, the timeout mechanism is triggered: the data of A1 and A2 are read out directly and reported by DMA first, and storage continues into A3 and A4 after further IU data are received.
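The segmentation and eop/timeout read-out behaviour described for the small queue can be sketched as two small helpers; the helper names are hypothetical:

```python
def split_iu(iu_bytes, block_size):
    """Segment an IU into block_size chunks, as the write control pushes them
    into DDR blocks sized to match dma_buffer."""
    return [iu_bytes[i:i + block_size] for i in range(0, len(iu_bytes), block_size)]

def blocks_to_report(chunks, eop_seen, timed_out):
    """Decide which buffered chunks to read out for DMA reporting.
    eop_seen:  the IU's end-of-frame marker was recognised -> read everything.
    timed_out: the timeout fired with a block partly filled -> flush what is stored."""
    if eop_seen or timed_out:
        return chunks   # read out; the blocks' addresses are then released
    return []           # keep buffering
```

With a 2048-byte block, a 5000-byte IU splits into chunks of 2048, 2048 and 904 bytes, matching the A1-A3 example above.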
In the default small queue storage mode, if an IU sent from the peer device contains only one frame, that single frame occupies an entire block of part A for reporting and the storage space utilization is low; therefore the controlled switch can switch to the large queue for buffering.
1.2 Multicast
When multicast is initiated, the data writing processing in the DDR is the same as for unicast, but after the data are read out from the DDR, one copy of the data is duplicated into multiple copies in the DMA_TOP module and then sent respectively to the different destination cache partitions.
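The multicast fan-out performed in the DMA_TOP module after a single DDR read can be sketched as follows; the function name and partition labels are illustrative:

```python
def dma_multicast(data, dest_partitions):
    """After one read from DDR, duplicate the payload once per destination,
    as the DMA_TOP module does before sending to each destination cache partition."""
    return {dest: bytes(data) for dest in dest_partitions}
```

The point of the design is that the DDR is read only once; duplication happens downstream, so multicast costs no extra DDR bandwidth.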
1.3 Multichannel concurrency
Assume each block of A1-An can store 32 frames, but data from n channels are received concurrently and each channel receives only one frame; then every block of A1-An is occupied, because each block is only allowed to store data of a single channel. If an (n+1)-th channel now arrives, the blocks of part A have been used up and no address has yet been released back; at this point, blocks of part C (for dynamic scheduling) are adjusted in to supplement part A for subsequent use.
In an alternative embodiment, the FPGA may supplement and release the blocks of part A by setting thresholds. When dividing the DDR into blocks, an occupancy threshold is set for the blocks in the buffer area for the small queue; if the number of occupied blocks in that area reaches the occupancy threshold, blocks of the dynamic scheduling buffer area are brought in as a supplement for buffering subsequently arriving IUs. When the number of occupied blocks in the buffer area for the small queue falls below a set release threshold, the supplemented blocks are released and recycled back to the dynamic scheduling buffer area.
For example, after the number of blocks used in part A rises above the occupancy threshold, blocks of part C are brought in to supplement part A; when the number of blocks used in part A falls below the release threshold, the supplemented blocks are released back to part C.
It should be appreciated that if part C has not supplemented any block to part A, then when the usage of part A is below the release threshold no block retraction is required.
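The threshold-driven supplement and release of part-C blocks can be sketched as one scheduling step; the threshold values in the usage below are assumptions, since the invention leaves them configurable:

```python
def rebalance(a_used, occupy_thresh, release_thresh, borrowed, c_avail):
    """One scheduling step for part A's dynamic supplement.
    Returns (blocks_to_borrow, blocks_to_return):
      - occupancy at or above occupy_thresh -> borrow one part-C block (if any left);
      - occupancy below release_thresh with borrowed blocks outstanding -> return one;
      - otherwise do nothing (in particular, nothing to retract if nothing was borrowed)."""
    if a_used >= occupy_thresh and c_avail > 0:
        return 1, 0
    if a_used < release_thresh and borrowed > 0:
        return 0, 1
    return 0, 0
```

Note the asymmetric thresholds (borrow high, release low): the hysteresis gap prevents blocks from ping-ponging between parts A and C when occupancy hovers near a single threshold.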
In some embodiments, when part C has no available blocks to supplement part A and a new block is needed to store a newly received data frame, a configuration process may be performed by the user, for example emptying all blocks (or some of the blocks) of the DDR to store the newly received data, or directly discarding all subsequently received data until there is room for storage.
2. Large queue mode (user-enabled)
In an embodiment of the invention, the large queue mode is proposed for the case where one IU contains only one frame.
In the data flow receiving direction, after receiving an IU, the FPGA buffers it by default in the small queue mode, segmenting the IU according to the size of dma_buffer, pushing it into the corresponding blocks in the DDR, and reading it out when the conditions are met. If the received IU contains only the data of one frame, buffering switches from the default small queue mode to the large queue mode: the IU is buffered by splicing into the same block used for large queue buffering in the DDR, and if, after a predetermined period of time, the memory of the block is still not full and no new IU arrives, an aging timeout is used to report and the data stored in the block are read out.
2.1 The write data processing flow in the large queue mode is shown in fig. 5: IUs of the same priority from different channels under the same partition are spliced into one DDR block. When the block remains not full for a long time and no new data are received, an aging timeout is used to report and the content of the block is read out.
For example, after the large queue mode is enabled, since each received IU is a single frame, still using the small queue would store only one frame of data per DDR block, which wastes space and keeps resource usage low. In the embodiment of the present invention, for single-priority data of a single destination partition, the data of one IU (containing only one frame) enter the DDR and are written into the corresponding block buffer of the DDR according to the destination partition and priority. When the block is full of data, DMA is initiated to report the data; when the block is only partly filled but no new data are received within a specified time, the data stored in the DDR block are read out directly once the time is up. The read-out flow is shown in fig. 6.
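The splice-until-full-or-timeout behaviour of one large-queue block can be sketched as follows; the capacity and aging limit are hypothetical values, and the class models a single (partition, priority) block:

```python
class SpliceBlock:
    """One large-queue DDR block: single-frame IUs of the same partition and
    priority are appended until the block fills or the aging timer expires."""
    def __init__(self, capacity, age_limit):
        self.buf = bytearray()
        self.capacity = capacity    # block size (= dma_buffer size), assumed
        self.idle = 0               # timer periods elapsed with no new frame
        self.age_limit = age_limit  # aging timeout, in timer periods

    def push(self, frame):
        """Splice one frame in; returns True when the block is full and DMA
        should be initiated to report the data."""
        self.buf += frame
        self.idle = 0
        return len(self.buf) >= self.capacity

    def tick(self):
        """Called once per timer period with no new frame; returns True when
        the aging timeout fires and the partly filled block must be read out."""
        self.idle += 1
        return self.idle >= self.age_limit and len(self.buf) > 0
```

This is why splicing pays off: many single-frame IUs share one block and one DMA report, instead of one block and one report each.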
2.2 Under the large queue, unicast data are placed directly into the block of the corresponding partition; for multicast, the data are copied and the copies are placed into the blocks corresponding to the two partitions for storage. In addition, if the DDR blocks become insufficient, the processing is the same as for the small queue: blocks are fetched from part C as a supplement, and the supplemented blocks are released when the condition is satisfied.
Therefore, the large/small queue communication method based on FC device multi-partition provided by the invention uses a large/small queue data processing mode in the FPGA, improving the data processing capacity of the system; storing in external DDR saves resources inside the FPGA, adapts to different transmission scenarios, and can improve the data processing efficiency of the system. Meanwhile, on the basis of multi-partition communication of the FC device, data packets are segmented in the FPGA and the storage and writing operations of the large queue mode and the small queue mode are combined; on the one hand, the usage efficiency of the buffer is fully improved, and on the other hand, the number of DDR storage blocks can be adjusted adaptively and dynamically, dynamically resolving multichannel concurrency, solving the buffering of data under concurrent channels, and improving the overall performance of the system.
While the invention has been described with reference to preferred embodiments, it is not intended to be limiting. Those skilled in the art will appreciate that various modifications and adaptations can be made without departing from the spirit and scope of the present invention. Accordingly, the scope of the invention is defined by the appended claims.
Claims (10)
1. The communication method based on the multi-partition independent cache of the FC equipment is characterized by comprising the following steps of:
after the FPGA of the FC equipment is powered on, in a register submodule of an RX_TOP module in the FPGA, dividing the DDR connected with the FPGA into a large queue, a small queue and a buffer area for dynamic scheduling, wherein the size of each block is the same as the dma_buffer size of a target buffer area, and the base address of each block is respectively stored in a FIFO queue of the FPGA; the sum of the sizes of the buffer areas for large queue buffer, small queue buffer and dynamic scheduling is smaller than or equal to the DDR capacity;
in the data flow transmitting direction from the target cache partition to the opposite terminal equipment, data are stored in the target cache partition according to four priorities pri 0-pri 3, the data are stored in dma_buffers of the target cache partition, when a certain partition transmits the data, dma_buffers in the corresponding priority of the corresponding partition are pushed to a DMA_TOP module of an FPGA, the DMA_TOP module analyzes the data into bare data and is matched with a response descriptor, the bare data are pushed to a TX_TOP module of the FPGA, the TX_TOP module assembles and packages the received data into data frames conforming to an FC frame protocol, and finally the data frames are transmitted to the opposite terminal equipment through an FC_MAC module of the FPGA;
in the data flow receiving direction from the opposite terminal equipment to the target cache partition, the data sent from the opposite terminal equipment flows in through an FC_MAC module of the FPGA, is analyzed into bare data by an RX_TOP module of the FPGA and is matched with a corresponding descriptor, and then the data is stored in a DDR connected with the FPGA according to a large queue mode or a small queue mode; after the receiving direction meets the condition of initiating DMA, the RX_TOP module of the FPGA reads data from the DDR, sends the data into the DMA_TOP module of the FPGA, and finally sends the data into dma_buffer of the corresponding priority of the target cache partition in a DMA mode;
in the data flow receiving direction, after receiving IU, the FPGA defaults to buffer according to a small queue mode, segments and pushes the IU into corresponding blocks in the DDR according to the size of dma_buffer to buffer, and reads out when the conditions are met; and
under the condition that the preset condition is met, the control is switched to a large queue mode for caching, the IU is cached in the same block used for large queue caching in the DDR in a splicing mode, after a preset time period, the memory of the block is still not full, and no new IU is received, reporting is carried out by adopting aging timeout, and data stored in the block are read out.
2. The communication method based on FC device multi-partition independent caching according to claim 1, wherein when caching in large queue mode, the same priority IU of different channels under the same partition are spliced in the same block of DDR by splicing.
3. The communication method based on FC device multi-partition independent buffering according to claim 1, wherein in unicast mode, if the received IU contains data of a plurality of frames, a default small queue mode is used for buffering, the received IU is split according to the size of dma_buffer and pushed to the block divided in DDR for buffering, and when eop flag in IU is identified, or no new data in block enters after a predetermined period of time, a timeout mechanism is triggered, content in block is read, pushed to dma_top module for DMA, and the read data is sent to dma_buffer of corresponding priority of destination buffer partition by way of DMA.
4. The communication method based on FC device multi-partition independent caching according to claim 3, wherein in multicast mode, if the received IU contains data of multiple frames, a default small queue mode is used for caching, the IU is split according to the size of dma_buffer, pushed into a corresponding block in DDR for caching, and after reading data from DDR, after copying one piece of data in dma_top module, respectively sent to different destination cache partitions.
5. A method of FC device multi-partition independent cache based communication as recited in any one of claims 1-4 wherein if a received IU contains only one frame of data, switching from a default small queue mode to a large queue mode to cache the received IU.
6. The FC device multi-partition independent cache based communication method of any one of claims 1-4, wherein when all blocks in the cache area for use by the small queue are occupied and a new IU arrives, the FPGA schedules blocks from the dynamically scheduled cache area so that subsequently entered IUs can be cached and stored.
7. The communication method based on FC equipment multi-partition independent caching according to claim 6, wherein the FPGA sets an occupancy threshold of blocks in a cache area for use by the small queue when performing block division for DDR, and if the number of blocks in the cache area for use by the small queue reaches the occupancy threshold, starts the blocks in the dynamic scheduling cache area to be replenished for use by a subsequently entered IU to perform cache storage; and is also provided with
And when the number of blocks in the buffer area for the small queue is lower than a set release threshold, releasing the supplemented blocks and recycling the blocks to the dynamic scheduling buffer area.
8. A communication system based on FC device multi-partition independent caching, comprising:
the FPGA is arranged in the FC equipment, and data communication and transmission of the FC equipment between the opposite terminal equipment and the target cache partition are realized through the FPGA;
DDR connected with the FPGA;
the FPGA is internally configured with a DMA_TOP module, a TX_TOP module, an RX_TOP module and an FC_MAC module; the DMA_TOP module is used for performing DMA direct memory access processing, the TX_TOP module is used for packaging data sent by the DMA_TOP module into frames conforming to an FC protocol and sending the frames to opposite terminal equipment through the FC_MAC module, meanwhile, the FC_MAC module receives the data sent by the opposite terminal equipment, the data are sent to the RX_TOP module after being analyzed, the RX_TOP module controls the DDR to perform data cache storage, the data in the DDR are read out and sent to the DMA_TOP module when preset conditions are met, and the data are sent to dma_buffer of the corresponding priority of a target cache partition in a DMA mode;
after the FPGA of the FC equipment is powered on, in a register sub-module of an RX_TOP module in the FPGA, dividing the DDR into a large queue, a small queue and a buffer area for dynamic scheduling, wherein the size of each block is the same as the dma_buffer size of a target buffer area, and the base address of each block is respectively stored in the FIFO queue of the FPGA; the sum of the sizes of the buffer areas for large queue buffer, small queue buffer and dynamic scheduling is smaller than or equal to the DDR capacity;
in the data flow transmitting direction from the target cache partition to the opposite terminal equipment, data are stored in the target cache partition according to four priorities pri 0-pri 3, the data are stored in dma_buffers of the target cache partition, when a certain partition transmits the data, dma_buffers in the corresponding priority of the corresponding partition are pushed to a DMA_TOP module of an FPGA, the DMA_TOP module analyzes the data into bare data and is matched with a response descriptor, the bare data are pushed to a TX_TOP module of the FPGA, the TX_TOP module assembles and packages the received data into data frames conforming to an FC frame protocol, and finally the data frames are transmitted to the opposite terminal equipment through an FC_MAC module of the FPGA;
in the data flow receiving direction from the opposite terminal equipment to the target cache partition, the data sent from the opposite terminal equipment flows in through an FC_MAC module of the FPGA, is analyzed into bare data by an RX_TOP module of the FPGA and is matched with a corresponding descriptor, and then the data is stored in a DDR connected with the FPGA according to a large queue mode or a small queue mode; after the receiving direction meets the condition of initiating DMA, the RX_TOP module of the FPGA reads data from the DDR, sends the data into the DMA_TOP module of the FPGA, and finally sends the data into dma_buffer of the corresponding priority of the target cache partition in a DMA mode;
in the data flow receiving direction, after receiving IU, the FPGA defaults to buffer according to a small queue mode, segments and pushes the IU into corresponding blocks in the DDR according to the size of dma_buffer to buffer, and reads out when the conditions are met; and
under the condition that the preset condition is met, the control is switched to a large queue mode for caching, the IU is cached in the same block used for large queue caching in the DDR in a splicing mode, after a preset time period, the memory of the block is still not full, and no new IU is received, reporting is carried out by adopting aging timeout, and data stored in the block are read out.
9. The communication system based on FC device multi-partition independent cache according to claim 8, wherein the rx_top module is configured with a register sub-module, a write control module, and a read control module, wherein the write control module is divided into two sub-modules, namely a big queue write module and a small queue write module, which are respectively used for a write operation in a big queue mode and a write operation in a small queue mode;
when IU flows into RX_TOP module from FC_MAC module, RX_TOP module firstly completes analysis of frame, analyzes IU into bare data of frame and descriptor of corresponding frame;
the write control module judges whether the data frame belongs to a large queue or a small queue according to the descriptor, correspondingly enters the large queue write module and the small queue write module, reads the allocated DDR block address from the memory sub-module, writes IU data into the DDR block, and writes the descriptor of the data frame into the register sub-module for storage;
when the read control module recognizes that the register sub-module has the storage information of the descriptor, the descriptor information is read, the content in the DDR block is read according to the descriptor information, the data is sent to the lower DMA_TOP module, and after one block is read, the read control module releases the address of the DDR block back to the register sub-module for subsequent recycling;
after the FC_MAC module receives the IU, the IU is cached by default by using a small queue mode, the IU is segmented and pushed into a corresponding block in the DDR according to the size of the dma_buffer for caching, and the IU is read when the conditions are met; and under the condition that the preset condition is met, controlling to switch to a large queue mode for caching, caching IUs in the DDR in the same block for large queue caching in a splicing mode, after a preset time period, the memory of the block is still not full, and no new IU is received, reporting is performed by adopting aging timeout, and data stored in the block are read out, wherein when the large queue mode is used for caching, the IUs of different channels with the same priority in the same partition are spliced in the same block of the DDR in the splicing mode.
10. The FC device multi-partition independent cache based communication system of claim 9, wherein if a received IU contains only one frame of data, switching from a default small queue mode to a large queue mode to cache the received IU.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310743999.8A CN116755624A (en) | 2023-06-25 | 2023-06-25 | Communication method and system based on FC equipment multi-partition independent cache |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310743999.8A CN116755624A (en) | 2023-06-25 | 2023-06-25 | Communication method and system based on FC equipment multi-partition independent cache |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116755624A true CN116755624A (en) | 2023-09-15 |
Family
ID=87951128
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310743999.8A Pending CN116755624A (en) | 2023-06-25 | 2023-06-25 | Communication method and system based on FC equipment multi-partition independent cache |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116755624A (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118646634A (en) * | 2024-05-30 | 2024-09-13 | 南京全信传输科技股份有限公司 | Health management communication method and system based on FC device dual drive |
| CN118646634B (en) * | 2024-05-30 | 2025-07-18 | 南京全信传输科技股份有限公司 | FC equipment dual-drive-based health management communication method and system |
| CN118631773A (en) * | 2024-06-20 | 2024-09-10 | 南京全信传输科技股份有限公司 | FC device adaptive rate DMA communication method and communication system based on multi-partition |
| CN118626293A (en) * | 2024-08-09 | 2024-09-10 | 成都领目科技有限公司 | A data cache transmission method and system with adaptive data bandwidth |
| CN118626293B (en) * | 2024-08-09 | 2024-11-12 | 成都领目科技有限公司 | A data cache transmission method and system with adaptive data bandwidth |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6754222B1 (en) | Packet switching apparatus and method in data network | |
| US5502719A (en) | Path allocation system and method having double link list queues implemented with a digital signal processor (DSP) for a high performance fiber optic switch | |
| CN116755624A (en) | Communication method and system based on FC equipment multi-partition independent cache | |
| US7227841B2 (en) | Packet input thresholding for resource distribution in a network switch | |
| US7042891B2 (en) | Dynamic selection of lowest latency path in a network switch | |
| US6922408B2 (en) | Packet communication buffering with dynamic flow control | |
| US5528584A (en) | High performance path allocation system and method with fairness insurance mechanism for a fiber optic switch | |
| US7401126B2 (en) | Transaction switch and network interface adapter incorporating same | |
| US5548590A (en) | High performance frame time monitoring system and method for a fiber optic switch for a fiber optic network | |
| US7406041B2 (en) | System and method for late-dropping packets in a network switch | |
| US20020118692A1 (en) | Ensuring proper packet ordering in a cut-through and early-forwarding network switch | |
| US7995472B2 (en) | Flexible network processor scheduler and data flow | |
| US20100057953A1 (en) | Data processing system | |
| CN116192772B (en) | CPU (Central processing Unit) receiving and dispatching packet scheduling device and method based on space cache | |
| WO2006036124A1 (en) | Improved handling of atm data | |
| CN116821042A (en) | FC equipment DMA communication method based on multiple partitions | |
| WO2023202294A1 (en) | Data stream order-preserving method, data exchange device, and network | |
| CN114531488A (en) | High-efficiency cache management system facing Ethernet exchanger | |
| CN100571195C (en) | Multi-port Ethernet switching device and data transmission method | |
| US8174971B2 (en) | Network switch | |
| EP3487132B1 (en) | Packet processing method and router | |
| JP7435054B2 (en) | Communication device, control method for communication device, and integrated circuit | |
| US7353303B2 (en) | Time slot memory management in a switch having back end memories stored equal-size frame portions in stripes | |
| KR100378372B1 (en) | Apparatus and method for packet switching in data network | |
| KR100441883B1 (en) | Apparatus and method for Ingress control of packet switch system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||