
CN102508719B - Method and device for receiving data of multiple connections in edge trigger mode - Google Patents


Info

Publication number
CN102508719B
CN102508719B · CN2011103746890A · CN201110374689A
Authority
CN
China
Prior art keywords
buffer memory
data
primary data
current connection
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2011103746890A
Other languages
Chinese (zh)
Other versions
CN102508719A (en)
Inventor
应鸿浩
何仲君
毛银杰
章乐焱
鲁建凡
柳正龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hundsun Technologies Inc
Original Assignee
Hundsun Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hundsun Technologies Inc filed Critical Hundsun Technologies Inc
Priority to CN2011103746890A priority Critical patent/CN102508719B/en
Publication of CN102508719A publication Critical patent/CN102508719A/en
Application granted granted Critical
Publication of CN102508719B publication Critical patent/CN102508719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System (AREA)

Abstract

The invention discloses a method and a device for receiving data from multiple connections in edge-triggered mode. Each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. The method comprises: receiving the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism; and judging whether the data in the underlying receive buffers of the multiple connections has all been received, and if not, taking any connection whose data has not been received as the current connection and performing the receiving process, until the data in the underlying receive buffers of the multiple connections has all been received. The method and device disclosed in the embodiments of the invention solve the problem that the application's service to some connections becomes unavailable when the application works in edge-triggered mode and receives data from multiple connections, while also saving memory space and increasing the running speed of the application.

Description

Method and device for receiving data from multiple connections in edge-triggered mode
Technical field
The present invention relates to the field of network communication technology, and in particular to a method and device for receiving data from multiple connections in edge-triggered mode.
Background technology
When an application running on an operating system exchanges data with other programs, it sends and receives that data over established connections. The underlying buffer of a connection is divided into an underlying receive buffer and an underlying send buffer, which are mutually independent and do not overlap. The underlying receive buffer is storage space that the operating system provides for each connection to hold data that has been received on the connection and is waiting to be read by the application. The underlying send buffer is storage space that the operating system provides for each connection to hold data that the application has submitted to the operating system and that is waiting to be sent.
An application on the operating system can work in one of two modes: level-triggered mode and edge-triggered mode. For receive operations, level-triggered mode means that as long as the state of a connection's underlying receive buffer is not empty, the operating system notifies the application to read data from that buffer, whereas edge-triggered mode means that the operating system notifies the application to read only when the state of the underlying receive buffer changes from "empty" to "not empty". For send operations, level-triggered mode means that as long as the state of a connection's underlying send buffer is not full, the operating system notifies the application to write data into that buffer, whereas edge-triggered mode means that the operating system notifies the application to write only when the state of the underlying send buffer changes from "full" to "not full". Edge-triggered mode therefore performs better than level-triggered mode: if a connection has neither received any data nor has any data to send, a level-triggered operating system keeps notifying the application that it may write into that connection's underlying send buffer, which drives CPU utilization up; edge-triggered mode does not have this problem.
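On Linux, for example, the two modes are exposed through the epoll interface. The following minimal sketch registers a connected socket for edge-triggered read notifications; the patent describes the trigger modes abstractly and does not name epoll, so the use of EPOLLIN | EPOLLET here is only an illustrative assumption.

```c
/* Minimal sketch: register a connected, non-blocking socket for
 * edge-triggered read notifications with Linux epoll. epoll is used
 * here only as one concrete example of an edge-triggered interface;
 * the patent itself does not name a specific API. */
#include <stdio.h>
#include <sys/epoll.h>

int register_edge_triggered(int epfd, int connfd)
{
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLET;  /* notify only on "empty" -> "not empty" transitions */
    ev.data.fd = connfd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev) == -1) {
        perror("epoll_ctl");
        return -1;
    }
    return 0;
}
```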
When the application works in edge-triggered mode and the operating system notifies it that a connection's underlying receive buffer holds data to be read, the application must read all of the data in that buffer so that the buffer's state becomes empty; if it does not, then even when the operating system later receives new data on that connection, it will no longer notify the application that the connection's buffer has data to be read. Therefore, so that no connection's received data is missed, whenever the operating system notifies the application that a connection's underlying receive buffer holds data, the application must read out everything in that buffer until its state becomes "empty".
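As a concrete illustration of this draining rule, the following sketch keeps reading until the operating system reports that the underlying receive buffer is empty. It assumes a non-blocking socket and a caller-supplied buffer that is consumed after each read; none of these names come from the patent.

```c
/* Sketch of the draining rule: read until the underlying receive buffer
 * is empty, which a non-blocking socket signals with EAGAIN/EWOULDBLOCK. */
#include <errno.h>
#include <unistd.h>

ssize_t drain_connection(int connfd, char *buf, size_t buflen)
{
    ssize_t total = 0;
    for (;;) {
        ssize_t n = read(connfd, buf, buflen);
        if (n > 0) {
            total += n;        /* caller consumes buf before the next pass */
            continue;
        }
        if (n == 0)
            return total;      /* peer closed the connection */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return total;      /* buffer drained: the state is now "empty" */
        return -1;             /* a real error occurred */
    }
}
```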
Further, if a connection's underlying receive buffer has received a large amount of data and the buffer the application uses to copy that data out is not large enough, the application has to receive data from that connection in a loop until everything in the underlying receive buffer has been copied out. For example, when one connection sends a large amount of data to the application, new data may keep arriving in that connection's underlying receive buffer while the application is reading from it; the application then never finishes reading the data in that connection's underlying receive buffer and is trapped in an endless receive loop, so that it ends up serving only that one connection while the other connections cannot obtain its service. In this way, when the application receives data from multiple connections in edge-triggered mode, its service becomes unavailable to some connections.
To solve this service-unavailability problem, two methods exist in the prior art. The first is to allocate a large buffer to each connection, big enough for the application to copy out all of the data in the underlying receive buffer of a given connection (for example connection A) in one pass, at which point that buffer's state becomes "empty"; the application can then go on to read the data in the underlying receive buffer of another connection (for example connection B), and once that data has also been read, the application again asks the operating system whether there is new data to read. Because the application no longer loops on the underlying receive buffer of a single connection, the service of the application does not become unavailable.
The second is for all connections of the application to share one large buffer through which the data in the underlying receive buffers is received, guaranteeing that the application can read all of the data in a connection's underlying receive buffer in one pass. Whenever the application receives data from a connection's underlying receive buffer, it first reads into this shared large buffer and then copies the data actually received into the buffer allocated separately to that connection (if the data exceeds the size of the separately allocated buffer, the application enlarges that buffer). The application is therefore never trapped looping on the data of one connection, and its service does not become unavailable.
However, the first method allocates a large buffer for every connection, which wastes memory; furthermore, since the total amount of memory available to the application is fixed, the number of connections the application can ultimately support is limited. The second method requires an extra memory copy every time data is received on any connection, which reduces the running speed of the application.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for receiving data from multiple connections in edge-triggered mode that, while solving the problem that the application's service to some connections becomes unavailable when the application works in edge-triggered mode and receives data from multiple connections, also saves memory space and increases the running speed of the application.
Another object of the present invention is to apply the above design in a concrete application environment by providing a data receiving device, thereby ensuring the realization and application of the method.
To solve the above technical problem, an embodiment of the present invention provides a method for receiving data from multiple connections in edge-triggered mode, in which each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. The method comprises:
receiving the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism, the receiving process comprising: reading initial data from the underlying receive buffer of the current connection; comparing the size of the initial data with the size of the first buffer of the current connection; if the initial data is larger than the first buffer, storing the initial data, in order, partly in the first buffer of the current connection and partly in the second buffer; and if the initial data is smaller than or equal to the first buffer of the current connection, storing all of the initial data in the first buffer of the current connection; and
judging whether the data in the underlying receive buffers of all of the multiple connections has been received, and if not, taking any connection whose data has not been received as the current connection and performing the receiving process, until the data in the underlying receive buffers of all of the multiple connections has been received.
Optionally, in the case where the initial data is larger than the first buffer of the current connection, after the initial data is stored, in order, in the first buffer of the current connection and the second buffer respectively, the method further comprises:
expanding the first buffer according to the size of the storage space that the initial data occupies in the second buffer.
Optionally, the method further comprises:
copying the part of the initial data stored in the second buffer into the expanded portion of the first buffer.
Optionally, reading the initial data from the underlying receive buffer of the current connection using the scatter-read mechanism specifically comprises:
reading all of the current initial data from the underlying receive buffer of the current connection, and locking the read operation while reading.
Optionally, the method further comprises:
unlocking the read operation after the initial data in the underlying receive buffer of the current connection has been received.
An embodiment of the present invention also provides a device for receiving data from multiple connections in edge-triggered mode, in which each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. The device comprises:
a data receiving unit, configured to receive the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism, the data receiving unit comprising: a reading module, configured to read initial data from the underlying receive buffer of the current connection; a comparison module, configured to compare the size of the initial data with the size of the first buffer of the current connection; and a storage module, configured to store the initial data, in order, partly in the first buffer of the current connection and partly in the second buffer if the initial data is larger than the first buffer, and to store all of the initial data in the first buffer of the current connection if the initial data is smaller than or equal to the first buffer of the current connection;
a judging module, configured to judge whether the data in the underlying receive buffers of all of the multiple connections has been received; and
a triggering module, configured to, when the result of the judging module is negative, take any connection whose data has not been received as the current connection and trigger the data receiving unit, until the data in the underlying receive buffers of all of the multiple connections has been received.
Optionally, the device further comprises:
an expansion module, configured to expand the first buffer according to the size of the storage space that the initial data occupies in the second buffer.
Optionally, the device further comprises:
a copying module, configured to copy the part of the initial data stored in the second buffer into the expanded portion of the first buffer.
Optionally, the reading module specifically comprises:
a reading submodule, configured to read all of the current initial data from the underlying receive buffer of the current connection; and
a locking submodule, configured to lock the read operation while reading.
Optionally, the device further comprises:
an unlocking module, configured to unlock the read operation after the initial data in the underlying receive buffer of the current connection has been received.
As can be seen from the above technical solution, compared with the prior art, the present invention provides a method and device for receiving data from multiple connections in edge-triggered mode, in which each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. When the data in the underlying receive buffers of the multiple connections is read, the application can read the data of each connection in one pass into the first buffer allocated separately to that connection, or into that first buffer together with the second buffer. This avoids the problem that the service of the other connections becomes unavailable when data is received from multiple connections in edge-triggered mode, while also reducing the memory usage of the application and increasing its running speed.
Description of drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of method embodiment 1 of the present invention;
Fig. 2 is a flowchart of step 101 in method embodiment 1 of the present invention;
Fig. 3 is a flowchart of method embodiment 2 of the present invention;
Fig. 4 is a schematic structural diagram of device embodiment 1 of the present invention;
Fig. 5 is a schematic structural diagram of the data receiving unit 40 in device embodiment 1 of the present invention;
Fig. 6 is a schematic structural diagram of device embodiment 2 of the present invention.
Embodiments
To achieve the object of the invention, the present invention provides a method and device for receiving data from multiple connections in edge-triggered mode, in which each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. When the data in the underlying receive buffers of the multiple connections is read, the application can read the data of each connection in one pass into the first buffer allocated separately to that connection, or into that first buffer together with the second buffer, which avoids the problem that the service of the other connections becomes unavailable when data is received from multiple connections in edge-triggered mode.
The above is the core idea of the present invention. To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of those embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, which shows a flowchart of embodiment 1 of the method for receiving data from multiple connections in edge-triggered mode according to the present invention, the method may comprise the following steps:
Step 101: receive the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism.
In this embodiment of the invention, the application receives data in edge-triggered mode. The operating system allocates a fixed-size underlying receive buffer to each of the multiple connections, while the application allocates to each connection a first buffer of variable size, which can be expanded dynamically during data reception according to how much data it has to store. In addition, the application allocates a fixed-size second buffer shared by all connections; this second buffer can be a large-capacity buffer.
The initial size of each connection's first buffer can be set very small. Later, during data reception, because the size of the first buffer is dynamic, each connection's first buffer gradually grows in proportion to the amount of data that connection needs to receive. For example, if user A sends an average amount of data, user B sends a larger amount, and user C sends a smaller amount, the first buffer of the connection exchanging data with user A may gradually be expanded to 10M, the first buffer of the connection with user B to 20M, and the first buffer of the connection with user C may be 3M. Of course, in practical applications the size of the first buffer is dynamic, so these values are only examples and do not limit the size of the first buffer in the embodiments of the invention.
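The buffer layout described above can be pictured with the sketch below. The field and type names, and the concrete representation of the growable first buffer and the shared second buffer, are illustrative assumptions rather than structures defined by the patent.

```c
/* Sketch of the buffer layout: each connection owns a socket (with its
 * fixed-size kernel receive buffer) and a growable first buffer, while
 * one fixed-size second buffer is shared by every connection. */
#include <stddef.h>

struct first_buffer {
    char   *data;       /* per-connection buffer, grows over time */
    size_t  capacity;   /* current capacity, initially small      */
    size_t  used;       /* bytes currently stored                 */
};

struct connection {
    int                 fd;     /* socket with its fixed underlying receive buffer */
    struct first_buffer first;  /* variable-size buffer owned by this connection   */
};

struct receiver {
    struct connection *conns;       /* all connections of the application          */
    size_t             nconns;
    char              *second;      /* fixed-size buffer shared by all connections */
    size_t             second_size;
};
```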
Scatter-read is an API (Application Programming Interface) provided by the operating system: in a single read of the data in an underlying receive buffer, a scatter read can copy all of that data, in order, onto multiple non-contiguous blocks of memory (a sketch of one such scatter read is given after step 204 below). Specifically, referring to Fig. 2, step 101 may comprise the following steps when implemented:
Step 201: read initial data from the underlying receive buffer of the current connection.
Here, the current connection is any one of the multiple connections whose underlying receive buffer holds data that needs to be read. The receiving process shown in Fig. 2 is described for one of the multiple connections; the process of receiving the data in the underlying receive buffers of the other connections is similar.
Step 202: compare the size of the initial data with the size of the first buffer of the current connection; if the initial data is larger than the first buffer, go to step 203; if the initial data is smaller than or equal to the first buffer of the current connection, go to step 204.
The size of the initial data in the underlying receive buffer of the current connection is compared with the size of the first buffer configured for that connection. If the initial data is larger than the first buffer, the initial data cannot be stored entirely in the first buffer, and the storage space of the second buffer must be used to store the part of the initial data that the first buffer cannot hold. If the initial data is smaller than or equal to the first buffer, the initial data can be stored entirely in the first buffer.
Step 203: store the initial data, in order, into the first buffer of the current connection and the second buffer respectively.
When the initial data cannot be stored entirely in the first buffer, it is stored, in order, into the first buffer and the second buffer respectively. For example, if the initial data in a connection's underlying receive buffer consists of the 10 bytes "0123456789", the first buffer configured for the connection has a size of 6, and the second buffer has a size of 1000, then after storage the first buffer is completely used and holds "012345", while the second buffer is partly used and holds "6789".
Step 204: store all of the initial data in the first buffer of the current connection.
When the initial data can be stored entirely in the first buffer, it is stored directly in the first buffer. For example, if the initial data in a connection's underlying receive buffer consists of the 10 bytes "0123456789", the first buffer configured for the connection has a size of 12, and the second buffer has a size of 1000, then after storage the first buffer is partly used and holds "0123456789", while the second buffer is not used.
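A minimal sketch of steps 202 to 204, assuming a POSIX environment in which the scatter-read mechanism is readv() and reusing the connection structure sketched earlier: a single call fills the free space of the first buffer and lets any excess spill, in order, into the shared second buffer (the "6789" part in the example above).

```c
/* Sketch of one scatter read (steps 202-204): iov[0] points at the free
 * space of the connection's first buffer, iov[1] at the shared second
 * buffer, so readv() stores the data in order across both. Reuses the
 * struct connection sketched earlier; names are illustrative. */
#include <sys/types.h>
#include <sys/uio.h>

ssize_t scatter_read_once(struct connection *c, char *second, size_t second_size)
{
    struct iovec iov[2];
    iov[0].iov_base = c->first.data + c->first.used;   /* free space in the first buffer */
    iov[0].iov_len  = c->first.capacity - c->first.used;
    iov[1].iov_base = second;                           /* shared overflow buffer */
    iov[1].iov_len  = second_size;

    ssize_t n = readv(c->fd, iov, 2);
    if (n > 0) {
        size_t into_first = (size_t)n < iov[0].iov_len ? (size_t)n : iov[0].iov_len;
        c->first.used += into_first;
        /* Any remaining n - into_first bytes now sit at the start of the
         * second buffer, like "6789" in the example of step 203. */
    }
    return n;
}
```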
Step 102: judge whether the data in the underlying receive buffers of all of the multiple connections has been received; if not, go to step 103.
It is judged whether the data in the underlying receive buffers corresponding to the current multiple connections has all been received; if it has, the data receiving process ends.
Step 103: take any connection whose data has not been received as the current connection and perform the receiving process, until the data in the underlying receive buffers of all of the multiple connections has been received.
If there are still connections on which data has not been received, any such connection is taken as the current connection and the receiving process, namely steps 201 to 204, is performed, until the data in the underlying receive buffers of all of the multiple connections has been received.
In this embodiment of the invention, besides a fixed-size underlying receive buffer, each connection also has its own dynamically expandable first buffer, and all connections share the large-capacity second buffer, so when the data in each connection's underlying receive buffer is read with the scatter-read mechanism it can be received in a single pass. After receiving the data in the underlying receive buffer of the current connection, the application can therefore go on to receive the data in the underlying receive buffers of the other connections, and the problem of the other connections' service being unavailable does not occur.
Compared with the prior art, this embodiment of the invention not only solves the service-unavailability problem but also saves the memory usage of the application and increases its running speed. The embodiment uses the scatter-read mechanism to read the data in a connection's underlying receive buffer into the first buffer, or into the first buffer and the second buffer; and since the amount of data most connections send to the application is neither very large nor sent very frequently, in most cases the data in the underlying receive buffer can be received completely using only the first buffer (the individual buffer of each connection). Further, if a connection does send a particularly large amount of data, then after the second buffer (the buffer shared by all connections) has been used to receive part of the data from that connection's underlying receive buffer, only the part of the data stored in the second buffer needs to be copied, and the second buffer is not needed every time data is received; both the amount of data copied and the frequency of copying are therefore smaller than in the prior art, which increases the running speed of the application. There is also no need to allocate a large buffer for every connection, which saves the memory usage of the application.
Referring to Fig. 3, which shows a flowchart of embodiment 2 of the method for receiving data from multiple connections in edge-triggered mode according to the present invention. Each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. The method may comprise the following steps:
Step 301: read all of the current initial data from the underlying receive buffer of the current connection, and lock the read operation while reading.
It should be noted that, in practical applications, when the scatter-read mechanism is used to receive data and the initial data in the underlying receive buffer of a connection is read, the read operation is locked while the data in that connection's underlying receive buffer is being read, in order to avoid conflicts on the read at that moment; this ensures that during this data receiving process no new data on this connection is stored into the underlying receive buffer.
Step 302: compare the size of the initial data with the size of the first buffer of the current connection; if the initial data is larger than the first buffer, go to step 303; if the initial data is smaller than or equal to the first buffer of the current connection, go to step 306.
Step 303: store the initial data, in order, into the first buffer of the current connection and the second buffer respectively.
Step 304: expand the first buffer according to the size of the storage space that the initial data occupies in the second buffer.
Step 305: copy the part of the initial data stored in the second buffer into the expanded portion of the first buffer, then go to step 307.
When the initial data has to be stored partly in the first buffer and partly in the second buffer, the size of the first buffer was insufficient to hold the data in this connection's underlying receive buffer. The first buffer can therefore be enlarged according to the amount of the second buffer actually used: the size of the expanded first buffer equals the size of the first buffer before expansion plus the size of the storage space that the initial data occupies in the second buffer. In practice, the data actually stored in the second buffer only needs to be copied to the tail of the first buffer, which enlarges the first buffer. After this processing, the probability of needing the second buffer the next time data is read from this connection's underlying receive buffer is reduced, which also increases the running speed of the application.
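A sketch of steps 304 and 305 under the same assumptions as the earlier sketches: the first buffer grows by exactly the amount of the second buffer that was used, and that overflow is appended at its tail. realloc() and memcpy() are assumed as the growth mechanism; the patent only specifies the resulting size.

```c
/* Sketch of steps 304-305: expand the first buffer by the amount of the
 * second buffer actually used, then copy that overflow to its tail. */
#include <stdlib.h>
#include <string.h>

int absorb_overflow(struct first_buffer *fb, const char *second, size_t overflow)
{
    if (overflow == 0)
        return 0;                                       /* nothing spilled into the second buffer */
    char *grown = realloc(fb->data, fb->capacity + overflow);
    if (grown == NULL)
        return -1;
    fb->data = grown;
    memcpy(fb->data + fb->capacity, second, overflow);  /* append the overflow at the tail */
    fb->capacity += overflow;                           /* expanded size = old size + overflow */
    fb->used = fb->capacity;                            /* the first buffer now holds all the initial data */
    return 0;
}
```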
Step 306: store all of the initial data in the first buffer of the current connection.
Step 307: unlock the read operation.
After the data in the underlying receive buffer of the current connection has all been received, the read operation is unlocked, so that new data can enter the underlying receive buffer of this connection and subsequent read operations can still be triggered.
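Putting steps 301 to 307 together, the skeleton below builds on the earlier sketches: it locks the read, repeats the scatter read until the underlying receive buffer is drained, absorbs any overflow into the first buffer after each pass, and finally unlocks. The pthread mutex is an assumed locking mechanism; the patent only requires that the read operation be locked and later unlocked.

```c
/* Skeleton of steps 301-307 for one current connection. scatter_read_once()
 * and absorb_overflow() are the functions sketched above. */
#include <pthread.h>
#include <sys/types.h>

ssize_t scatter_read_once(struct connection *c, char *second, size_t second_size);
int absorb_overflow(struct first_buffer *fb, const char *second, size_t overflow);

void receive_current_connection(struct connection *c, char *second, size_t second_size,
                                pthread_mutex_t *read_lock)
{
    pthread_mutex_lock(read_lock);                           /* step 301: lock the read operation */
    for (;;) {
        size_t free_before = c->first.capacity - c->first.used;
        ssize_t n = scatter_read_once(c, second, second_size);
        if (n <= 0)                                          /* drained (EAGAIN), closed, or error */
            break;
        if ((size_t)n > free_before)                         /* steps 304-305: absorb the overflow */
            absorb_overflow(&c->first, second, (size_t)n - free_before);
    }
    pthread_mutex_unlock(read_lock);                         /* step 307: unlock the read operation */
}
```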
Step 308: judge whether the data in the underlying receive buffers of all of the multiple connections has been received; if not, go to step 309.
Step 309: take any connection whose data has not been received as the current connection and perform the receiving process, until the data in the underlying receive buffers of all of the multiple connections has been received.
In this embodiment of the invention, the initial size of the first buffer can be allocated relatively small, and the capacity of the second buffer can be made equal to the size of the underlying receive buffer that the operating system allocates for a connection. This guarantees that a single read of a connection's underlying receive buffer reads out all of the data in it, so that the application can go on to receive the data of other connections, and it guarantees that when new data subsequently enters this same connection's underlying receive buffer, the operating system notifies the application, because the edge-trigger condition is met. Therefore, after the application has read the data of one connection with the method provided by this embodiment of the invention, it can go on to read the data of other connections, and the service-unavailability problem is avoided.
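One way to choose that capacity, assuming a Berkeley-sockets environment (the patent does not prescribe a specific API), is to query the size of the kernel receive buffer of a connection and allocate the shared second buffer at least that large:

```c
/* Sketch: query the size of a connection's underlying (kernel) receive
 * buffer so the shared second buffer can be allocated at least as large. */
#include <stdio.h>
#include <sys/socket.h>

int underlying_receive_buffer_size(int connfd)
{
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(connfd, SOL_SOCKET, SO_RCVBUF, &size, &len) == -1) {
        perror("getsockopt");
        return -1;
    }
    return size;   /* size the shared second buffer to at least this value */
}
```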
Further, for most connections the first buffer provided by the application is already sufficient to read out all of the data in the connection's underlying receive buffer, and in that case there is no extra memory copy, which increases the running speed of the application. On the other hand, for a connection that transfers a particularly large amount of data to the application, the application uses the second buffer when receiving data; in that case the first buffer can be enlarged according to the amount of the second buffer actually used, and the data in the second buffer is then appended to the tail of the first buffer. After the first buffer has been enlarged it keeps its enlarged size, so in subsequent read operations the probability of using the second buffer is reduced and the probability of needing a memory copy is reduced, which increases the running speed of the application. Since the second buffer is shared by all connections, it does not cause memory waste either.
From the description of the method embodiments above, those skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that contributes to the prior art can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The storage medium includes various media that can store program code, such as read-only memory (ROM), random-access memory (RAM), a magnetic disk, or an optical disc.
Corresponding to the method embodiments above, an embodiment of the present invention also provides a device for receiving data from multiple connections in edge-triggered mode, in which each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. Referring to Fig. 4, which shows a schematic structural diagram of embodiment 1 of the device for receiving data from multiple connections in edge-triggered mode, the device may comprise:
a data receiving unit 40, configured to receive the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism;
with reference to Fig. 5, the data receiving unit 40 may specifically comprise: a reading module 401, configured to read initial data from the underlying receive buffer of the current connection; a comparison module 402, configured to compare the size of the initial data with the size of the first buffer of the current connection; and a storage module 403, configured to store the initial data, in order, partly in the first buffer of the current connection and partly in the second buffer if the initial data is larger than the first buffer, and to store all of the initial data in the first buffer of the current connection if the initial data is smaller than or equal to the first buffer of the current connection;
a judging module 41, configured to judge whether the data in the underlying receive buffers of all of the multiple connections has been received; and
a triggering module 42, configured to, when the result of the judging module is negative, take any connection whose data has not been received as the current connection and trigger the data receiving unit, until the data in the underlying receive buffers of all of the multiple connections has been received.
Compared with the prior art, this embodiment of the invention not only solves the service-unavailability problem but also saves the memory usage of the application and increases its running speed. The embodiment uses the scatter-read mechanism to read the data in a connection's underlying receive buffer into the first buffer, or into the first buffer and the second buffer; and since the amount of data most connections send to the application is neither very large nor sent very frequently, in most cases the data in the underlying receive buffer can be received completely using only the first buffer (the individual buffer of each connection). Further, if a connection does send a particularly large amount of data, then after the second buffer (the buffer shared by all connections) has been used to receive part of the data from that connection's underlying receive buffer, only the part of the data stored in the second buffer needs to be copied, and the second buffer is not needed every time data is received; both the amount of data copied and the frequency of copying are therefore smaller than in the prior art, which increases the running speed of the application. There is also no need to allocate a large buffer for every connection, which saves the memory usage of the application.
Referring to Fig. 6, which shows a schematic structural diagram of embodiment 2 of the device for receiving data from multiple connections in edge-triggered mode. Each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer. The device may comprise:
a reading module 401, configured to read initial data from the underlying receive buffer of the current connection;
in this embodiment the reading module 401 may specifically comprise:
a reading submodule 601, configured to read all of the current initial data from the underlying receive buffer of the current connection; and
a locking submodule 602, configured to lock the read operation while reading;
a comparison module 402, configured to compare the size of the initial data with the size of the first buffer of the current connection;
a storage module 403, configured to store the initial data, in order, partly in the first buffer of the current connection and partly in the second buffer if the initial data is larger than the first buffer, and to store all of the initial data in the first buffer of the current connection if the initial data is smaller than or equal to the first buffer of the current connection;
an expansion module 603, configured to expand the first buffer according to the size of the storage space that the initial data occupies in the second buffer;
a copying module 604, configured to copy the part of the initial data stored in the second buffer into the expanded portion of the first buffer;
an unlocking module 605, configured to unlock the read operation after the initial data in the underlying receive buffer of the current connection has been received;
a judging module 41, configured to judge whether the data in the underlying receive buffers of all of the multiple connections has been received; and
a triggering module 42, configured to, when the result of the judging module is negative, take any connection whose data has not been received as the current connection and trigger the data receiving unit, until the data in the underlying receive buffers of all of the multiple connections has been received.
In this embodiment of the invention, the initial size of the first buffer can be allocated relatively small, and the capacity of the second buffer can be made equal to the size of the underlying receive buffer that the operating system allocates for a connection. This guarantees that a single read of a connection's underlying receive buffer reads out all of the data in it, so that the application can go on to receive the data of other connections, and it guarantees that when new data subsequently enters this same connection's underlying receive buffer, the operating system notifies the application, because the edge-trigger condition is met. Therefore, after the device provided by this embodiment of the invention has read the data of one connection, the application can go on to read the data of other connections, and the service-unavailability problem is avoided.
Further, for most connections the first buffer provided by the application is already sufficient to read out all of the data in the connection's underlying receive buffer, and in that case there is no extra memory copy, which increases the running speed of the application. On the other hand, for a connection that transfers a particularly large amount of data to the application, the application uses the second buffer when receiving data; in that case the first buffer can be enlarged according to the amount of the second buffer actually used, and the data in the second buffer is then appended to the tail of the first buffer. After the first buffer has been enlarged it keeps its enlarged size, so in subsequent read operations the probability of using the second buffer is reduced and the probability of needing a memory copy is reduced, which increases the running speed of the application. Since the second buffer is shared by all connections, it does not cause memory waste either.
It can be understood that the present invention can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The present invention can be described in the general context of computer-executable instructions executed by a computer, for example program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present invention can also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media including memory devices.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
For the device embodiments, since they correspond essentially to the method embodiments, the relevant parts may refer to the description of the method embodiments. The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only specific embodiments of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the scope of protection of the present invention.

Claims (10)

1. A method for receiving data from multiple connections in edge-triggered mode, characterized in that each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer; the method comprises:
receiving the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism, the receiving process comprising: reading initial data from the underlying receive buffer of the current connection; comparing the size of the initial data with the size of the first buffer of the current connection; if the initial data is larger than the first buffer, storing the initial data, in order, partly in the first buffer of the current connection and partly in the second buffer; and if the initial data is smaller than or equal to the first buffer of the current connection, storing all of the initial data in the first buffer of the current connection; and
judging whether the data in the underlying receive buffers of all of the multiple connections has been received, and if not, taking any connection whose data has not been received as the current connection and performing the receiving process, until the data in the underlying receive buffers of all of the multiple connections has been received.
2. The method according to claim 1, characterized in that, in the case where the initial data is larger than the first buffer of the current connection, after the initial data is stored, in order, in the first buffer of the current connection and the second buffer respectively, the method further comprises:
expanding the first buffer according to the size of the storage space that the initial data occupies in the second buffer.
3. The method according to claim 2, characterized in that it further comprises:
copying the part of the initial data stored in the second buffer into the expanded portion of the first buffer.
4. The method according to claim 1, characterized in that reading the initial data from the underlying receive buffer of the current connection using the scatter-read mechanism specifically comprises:
reading all of the current initial data from the underlying receive buffer of the current connection, and locking the read operation while reading.
5. The method according to claim 4, characterized in that it further comprises:
unlocking the read operation after the initial data in the underlying receive buffer of the current connection has been received.
6. A device for receiving data from multiple connections in edge-triggered mode, characterized in that each of the multiple connections of an application working in edge-triggered mode has a fixed-size underlying receive buffer and a first buffer of variable size, and all connections of the application share a fixed-size second buffer; the device comprises:
a data receiving unit, configured to receive the data in the underlying receive buffers of the multiple connections in turn using a scatter-read mechanism, the data receiving unit comprising: a reading module, configured to read initial data from the underlying receive buffer of the current connection; a comparison module, configured to compare the size of the initial data with the size of the first buffer of the current connection; and a storage module, configured to store the initial data, in order, partly in the first buffer of the current connection and partly in the second buffer if the initial data is larger than the first buffer, and to store all of the initial data in the first buffer of the current connection if the initial data is smaller than or equal to the first buffer of the current connection;
a judging module, configured to judge whether the data in the underlying receive buffers of all of the multiple connections has been received; and
a triggering module, configured to, when the result of the judging module is negative, take any connection whose data has not been received as the current connection and trigger the data receiving unit, until the data in the underlying receive buffers of all of the multiple connections has been received.
7. The device according to claim 6, characterized in that it further comprises:
an expansion module, configured to expand the first buffer according to the size of the storage space that the initial data occupies in the second buffer.
8. The device according to claim 7, characterized in that it further comprises:
a copying module, configured to copy the part of the initial data stored in the second buffer into the expanded portion of the first buffer.
9. The device according to claim 6, characterized in that the reading module specifically comprises:
a reading submodule, configured to read all of the current initial data from the underlying receive buffer of the current connection; and
a locking submodule, configured to lock the read operation while reading.
10. The device according to claim 9, characterized in that it further comprises:
an unlocking module, configured to unlock the read operation after the initial data in the underlying receive buffer of the current connection has been received.
CN2011103746890A 2011-11-22 2011-11-22 Method and device for receiving data of multiple connections in edge trigger mode Active CN102508719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103746890A CN102508719B (en) 2011-11-22 2011-11-22 Method and device for receiving data of multiple connections in edge trigger mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103746890A CN102508719B (en) 2011-11-22 2011-11-22 Method and device for receiving data of multiple connections in edge trigger mode

Publications (2)

Publication Number Publication Date
CN102508719A CN102508719A (en) 2012-06-20
CN102508719B true CN102508719B (en) 2013-10-09

Family

ID=46220811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103746890A Active CN102508719B (en) 2011-11-22 2011-11-22 Method and device for receiving data of multiple connections in edge trigger mode

Country Status (1)

Country Link
CN (1) CN102508719B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6662275B2 (en) * 2001-02-12 2003-12-09 International Business Machines Corporation Efficient instruction cache coherency maintenance mechanism for scalable multiprocessor computer system with store-through data cache
US7117315B2 (en) * 2002-06-27 2006-10-03 Fujitsu Limited Method and apparatus for creating a load module and a computer product thereof
JP4241175B2 (en) * 2003-05-09 2009-03-18 株式会社日立製作所 Semiconductor device
JP2005018441A (en) * 2003-06-26 2005-01-20 Mitsubishi Electric Corp Memory device

Also Published As

Publication number Publication date
CN102508719A (en) 2012-06-20


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant