CN104484287B - Non-volatile cache implementation method and device

Info

Publication number: CN104484287B
Application number: CN201410806036.9A
Authority: CN (China)
Prior art keywords: cache unit, cache line, data
Other versions: CN104484287A (Chinese, zh)
Inventors: 刘建伟, 丁杰, 刘乐乐, 周文
Current Assignee: Beijing Zeshi Technology Co.,Ltd.
Original Assignee: NETBRIC TECHNOLOGY Co Ltd
Application filed by NETBRIC TECHNOLOGY Co Ltd; priority to CN201410806036.9A
Published as CN104484287A; granted and published as CN104484287B
Legal status: Active

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a non-volatile cache implementation method and device. Physical flash storage resources are first virtualized into a flash storage pool, and three kinds of logical storage units are then created on the pool: a large cache unit, a small cache unit and a write mirror unit. The large cache unit provides conventional cache service; the small cache unit provides an acceleration service for random write operations and a temporary-storage service for the data of read operations; the write mirror unit provides a redundancy backup protection function for dirty data in the large cache and the small cache. The method avoids producing a huge cache state table as well as the redundancy backup schemes that severely hurt write performance, and can achieve very large capacity and very high performance, thereby markedly improving the read-write performance of the centralized control device.

Description

Non-volatile cache implementation method and device
Technical field
The present invention relates to the field of storage technology, and more particularly to a non-volatile cache implementation method and device applied to the centralized control device of a centralized distributed storage architecture, for improving the storage performance of the centralized control device and of the whole storage system.
Background art
With the development of semiconductor technology, the storage density of high-speed non-volatile memory devices (such as flash memory) keeps increasing, and such devices are now widely used in data centers as data-access acceleration equipment. Compared with mechanical disks, non-volatile memory devices such as flash memory offer much faster random access; compared with DRAM, they retain data after power-off and provide higher storage density.
The high storage density, non-volatility and high access speed of flash memory have led to its wide use in storage systems. One such use is as an acceleration device for a storage system, which itself takes several forms: flash accelerator cards, flash acceleration storage tiers, and flash acceleration caches.
When flash memory is used as an acceleration cache, information about each cache line must be recorded: for example, the address of the cached object, the state of the cache line (dirty, invalid, frozen, being flushed, being loaded, and so on), and the age of the cache line. The number of cache lines depends on the size of the flash cache and on the granularity of I/O requests.
When flash memory is used as an acceleration cache, data consistency must also be guaranteed between the flash cache and the back-end storage system it connects to (such as the distributed storage cluster 203); that is, the data in the flash cache must remain consistent with the data in the back-end storage system.
Among existing flash cache methods, some are used only as read caches. Others serve as read-write caches, but their acceleration of write operations is limited, because redundancy backup techniques that severely hurt write performance are used to guarantee data reliability. Moreover, in existing flash cache implementations the flash capacity does not reach the hundred-TB level.
The above disclosure of background art is provided only to aid understanding of the inventive concept and technical solution of the present invention; it does not necessarily belong to the prior art of the present application. In the absence of concrete evidence that the above content was disclosed before the filing date of the present application, the above background art shall not be used to evaluate the novelty and inventiveness of the application.
Summary of the invention
An object of the present invention is to propose a non-volatile cache implementation method that solves the technical problems of the prior art flash caches described above: the management of cache-line table entries, the huge cache state table caused by the data consistency problem, and the poor read-write performance of the control device.
To this end, the present invention proposes a non-volatile cache implementation method. Physical flash storage resources are first virtualized into a flash storage pool, and three kinds of logical storage units are then created on the storage pool: a large cache unit, a small cache unit and a write mirror unit. The large cache unit provides conventional cache service; the small cache unit provides the acceleration service for random write operations and the temporary-storage service for the data of read operations; the write mirror unit provides a redundancy backup protection function for dirty data in the large cache unit and the small cache unit;
when data is written: if the write operation hits a cache line of the small cache unit, the data is written to the small cache unit; if it misses the small cache unit but hits a cache line of the large cache unit, the data is written to the large cache unit; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, the data is written to the small cache unit; otherwise the data bypasses the flash storage resource and is written directly to the back-end storage cluster;
when data is read: if the read operation hits a cache line of the small cache unit, the data in the small cache unit is returned; if it misses the small cache unit but hits a cache line of the large cache unit, the data in the large cache unit is returned; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, data of the size of one large-cache-unit cache line is read from the back-end storage cluster, loaded into a cache line of the large cache unit, and then returned to the front-end data application unit; if both units miss and the acceleration flag is invalid but the data temporary-storage flag is valid, the cache-line data of the corresponding small cache unit is read from the back-end storage cluster, loaded into a cache line of the small cache unit, and then returned to the front-end data application unit; otherwise the data read from the back-end storage cluster bypasses the flash storage resource and is sent directly to the front-end data application unit.
Preferably, the method of the present invention may further have the following technical features:
The sizes of the large cache unit, the small cache unit and the write mirror unit satisfy the formula (Little_Size + Mirror_size) / Little_granularity + Big_Size / Big_granularity <= available_DRAM_Size / entry_size, where Big_Size is the size of the large cache unit, Little_Size is the size of the small cache unit, Mirror_size is the size of the write mirror unit, Little_granularity is the cache-line size of the small cache unit, Big_granularity is the cache-line size of the large cache unit, available_DRAM_Size is the size of the DRAM available for the cache state table, and entry_size is the size of each cache table entry.
The write mirror unit is composed of at least one logical write mirror subunit, and the large cache unit and the small cache unit are composed, respectively, of at least one logical large cache subunit and at least one logical small cache subunit.
The physical flash storage resource comprises two or more physical trays, and the large cache unit, the small cache unit and the write mirror unit span the two or more physical trays.
When data is written to the large cache unit, the small cache unit and the write mirror unit, the physical write location of the large cache unit and the physical write location of the write mirror unit are on different physical trays, and the physical write location of the small cache unit and the physical write location of the write mirror unit are also on different physical trays.
A single cache line of the small cache unit or of the write mirror unit is located within one physical tray or spans two or more physical trays, and a single cache line of the large cache unit is located within one physical tray or spans two or more physical trays.
Which physical tray a data write operation or data read operation falls on follows this principle: when a physical tray is damaged, only the operations originally mapped to that physical tray are remapped to other physical trays, and the read-write operations originally mapped to the other physical trays keep their mapping relations unchanged.
The cache lines of the large cache unit have at least a dirty state, a clean state and an invalid state. The dirty state means the data in the cache line is inconsistent with the data in the back-end storage system; the clean state means the data in the cache line is consistent with the data in the back-end storage system; the invalid state means the cache line holds no valid data. When a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request and to the clean state on receiving a clean-data load request; when a cache line is in the dirty state, it jumps to the clean state only on receiving a cache-line flush request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request and to the invalid state on receiving an invalidation request.
The cache lines of the small cache unit have at least a dirty state, a clean state, an invalid state and a frozen state. The dirty state means the data in the cache line is inconsistent with the data in the back-end storage system; the clean state means the data in the cache line is consistent with the data in the back-end storage system; the invalid state means the cache line holds no valid data; the frozen state means the cache line is frozen and can only be read, not written. When a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request and to the clean state on receiving a clean-data load request; when a cache line is in the dirty state, it jumps to the invalid state on receiving a cache-line flush request and to the frozen state on receiving a move request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request and to the invalid state on receiving a read request; when a cache line is in the frozen state, it jumps to the invalid state only on receiving the completion notification of a cache-line move.
The method further comprises a daemon unit, which flushes the dirty data in the write mirror unit to the back-end storage cluster in the background, so that the dirty data in the flash storage resource that needs redundancy backup is kept within a predetermined range.
The redundancy backup uses a write-mirror scheme.
The physical flash storage resource is flash memory or phase-change memory.
The present invention further proposes a non-volatile cache implementation device, comprising: a flash storage resource virtualization unit, used to virtualize the physical flash storage resource into a flash storage pool;
a logical storage unit creation unit, used to create three kinds of logical storage units on the storage pool: a large cache unit, a small cache unit and a write mirror unit, the large cache unit being used to provide conventional cache service, the small cache unit being used to provide the acceleration service for random write operations and the temporary-storage service for the data of read operations, and the write mirror unit being used to provide redundancy backup protection for dirty data in the large cache and the small cache;
a data write unit and a data read unit;
when the data write unit writes data: if the write operation hits a cache line of the small cache unit, the data is written to the small cache unit; if it misses the small cache unit but hits a cache line of the large cache unit, the data is written to the large cache unit; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, the data is written to the small cache unit; otherwise the data bypasses the flash storage resource and is written directly to the back-end storage cluster;
when the data read unit reads data: if the read operation hits a cache line of the small cache unit, the data in the small cache unit is returned; if it misses the small cache unit but hits a cache line of the large cache unit, the data in the large cache unit is returned; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, data of the size of one large-cache-unit cache line is read from the back-end storage cluster, loaded into a cache line of the large cache unit, and then returned to the front-end data application unit; if both units miss and the acceleration flag is invalid but the data temporary-storage flag is valid, the cache-line data of the corresponding small cache unit is read from the back-end storage cluster, loaded into a cache line of the small cache unit, and then returned to the front-end data application unit; otherwise the data read from the back-end storage cluster bypasses the flash storage resource and is sent directly to the front-end data application unit.
Preferably, the device of the present invention may further have the following technical features:
The sizes of the large cache unit, the small cache unit and the write mirror unit satisfy the formula (Little_Size + Mirror_size) / Little_granularity + Big_Size / Big_granularity <= available_DRAM_Size / entry_size, where Big_Size is the size of the large cache unit, Little_Size is the size of the small cache unit, Mirror_size is the size of the write mirror unit, Little_granularity is the cache-line size of the small cache unit, Big_granularity is the cache-line size of the large cache unit, available_DRAM_Size is the size of the DRAM available for the cache state table, and entry_size is the size of each cache table entry.
The write mirror unit can be composed of multiple logical write mirror subunits.
The physical flash storage resource comprises two or more physical trays, and the large cache unit, the small cache unit and the write mirror unit can span the two or more physical trays.
When the data write unit writes data to the large cache unit, the small cache unit and the write mirror unit, the physical write location of the large cache unit and the physical write location of the write mirror unit are on different physical trays, and the physical write location of the small cache unit and the physical write location of the write mirror unit are also on different physical trays.
A single cache line of the small cache unit or of the write mirror unit is located within one physical tray or spans two or more physical trays, and a single cache line of the large cache unit is located within one physical tray or spans two or more physical trays.
Which physical tray the operations of the data write unit and the data read unit fall on follows this principle: when a physical tray is damaged, only the operations originally mapped to that physical tray are remapped to other physical trays, and the read-write operations originally mapped to the other physical trays keep their mapping relations unchanged.
The cache lines of the large cache unit have at least a dirty state, a clean state and an invalid state. The dirty state means the data in the cache line is inconsistent with the data in the back-end storage system; the clean state means the data in the cache line is consistent with the data in the back-end storage system; the invalid state means the cache line holds no valid data. When a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request and to the clean state on receiving a clean-data load request; when a cache line is in the dirty state, it jumps to the clean state only on receiving a cache-line flush request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request and to the invalid state on receiving an invalidation request.
The cache lines of the small cache unit have at least a dirty state, a clean state, an invalid state and a frozen state. The dirty state means the data in the cache line is inconsistent with the data in the back-end storage system; the clean state means the data in the cache line is consistent with the data in the back-end storage system; the invalid state means the cache line holds no valid data; the frozen state means the cache line is frozen and can only be read, not written. When a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request and to the clean state on receiving a clean-data load request; when a cache line is in the dirty state, it jumps to the invalid state on receiving a cache-line flush request and to the frozen state on receiving a move request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request and to the invalid state on receiving a read request; when a cache line is in the frozen state, it jumps to the invalid state only on receiving the completion notification of a cache-line move.
The device further comprises a daemon unit, which flushes the dirty data in the write mirror unit to the back-end storage cluster in the background, so that the dirty data in the flash storage resource that needs redundancy backup is kept within a predetermined range.
The redundancy backup uses a write-mirror scheme.
Compared with the prior art, the beneficial effects of the present invention include: by virtualizing the physical flash storage resource into a flash storage pool, creating three kinds of logical storage units on the storage pool, and adopting the data write and read methods described above, the non-volatile cache implementation method of the present invention avoids producing a huge cache state table and avoids the redundancy backup schemes that severely hurt write performance; it can achieve very large capacity and very high performance, thereby markedly improving the read-write performance of the centralized control device, and it can provide storage service without interruption.
Description of the drawings
Fig. 1 is a schematic diagram of the overall logical structure of the flash cache of embodiment 1;
Fig. 2 is a schematic diagram of the centralized distributed storage architecture of embodiment 1;
Fig. 3 is a schematic diagram of the overall physical structure of the flash cache in embodiment 1;
Fig. 4 is the simplified state transition table of the cache lines of the large cache unit in embodiment 1;
Fig. 5 is the simplified state transition table of the cache lines of the small cache unit in embodiment 1;
Fig. 6 is the flow chart of a flash cache write operation in embodiment 1;
Fig. 7 is a flow chart of a flash cache read operation in embodiment 1;
Fig. 8 is another flow chart of a flash cache read operation in embodiment 1;
Fig. 9 illustrates the correspondence between logical modules and physical modules of the flash cache in embodiment 1.
Specific embodiments
The non-volatile memory devices (i.e. flash storage resources) in the cache implementation method disclosed by the present invention include but are not limited to flash memory, phase-change memory, and the like. The back-end storage system connected by the present invention includes but is not limited to the centralized distributed storage system (cluster) 203 shown in Fig. 2; below, the present invention is described taking a centralized distributed storage system architecture merely as an example.
In the centralized distributed storage system architecture shown in Fig. 2, the flash cache in the centralized control device must offer very large capacity and very high performance (high IOPS and low latency). This is because the storage capacity of the distributed storage cluster connected to the centralized control device is at the PB level, and the corresponding cache capacity is at the level of hundreds of TB. A flash cache of such capacity, however, faces two hard problems: the cache-line table entry management problem and the data consistency problem.
When flash memory is used as a cache, the whole storage resource has to be divided into many cache lines at a certain granularity. For every cache line, its related information must be recorded, such as where the cached data comes from and the current state of the cache line. When the capacity of the flash cache reaches hundreds of TB, for example 200 TB, and cache lines are divided at 4 KB granularity, there are 200 TB / 4 KB = 50 × 10^9 cache lines in total. Assuming each cache line needs 16 bytes to record its state, a table of 800 GB in total is needed to record the state of the whole flash cache; such a table is huge and unaffordable. The 4 KB granularity is dictated by the virtual machine 201: as the block storage device of the virtual machine 201, the block access unit of the stored data is 4 KB. The resulting huge cache state table is the cache-line table entry management problem.
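The arithmetic above is easy to verify; the following minimal Python sketch, using the decimal units the text itself uses, reproduces the 50 × 10^9 lines and 800 GB figures:

    # Cache state table size for a 200 TB flash cache with 4 KB lines,
    # using the figures given in the text.
    TB, KB = 10**12, 10**3          # decimal units, as in "200TB/4KB"
    capacity = 200 * TB             # flash cache capacity
    line_size = 4 * KB              # granularity fixed by the VM's block size
    entry_size = 16                 # bytes of state per cache line (assumed above)

    lines = capacity // line_size   # 50_000_000_000 = 50 x 10^9 cache lines
    table_bytes = lines * entry_size
    print(f"{lines:,} lines, state table = {table_bytes / 10**9:.0f} GB")  # 800 GB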
When flash memory is used as a cache, consistency must also be maintained between the data in the cache and the data in the back-end distributed storage cluster 203. When the data in the cache and the data in the distributed storage cluster 203 differ, the data in the cache needs backup protection. The most widely used protection scheme at present is RAID5/6, but RAID5/6 comes at the cost of a huge write-performance sacrifice. Another scheme is to use the flash only as a read cache: every write operation is written directly to the back-end distributed storage cluster 203 and the related data in the flash cache is set to the invalid state, so the data in the cache is always consistent with the back-end storage data and no backup protection of the cached data is needed; but such an implementation can accelerate only part of the read operations and cannot accelerate write operations at all. This is the data consistency problem and the adverse effects it brings.
The present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the present invention or its applications.
Non-limiting and non-exclusive embodiments will be described with reference to Figs. 1-9 below, where identical reference numerals indicate identical parts unless otherwise specified.
Embodiment one:
A non-volatile cache implementation method: the physical flash storage resource is first virtualized into a flash storage pool, and three kinds of logical storage units are then created on the storage pool: a large cache unit 101, a small cache unit 102 and a write mirror unit 103, as shown in Fig. 1. The large cache unit 101 provides conventional cache service; the small cache unit 102 provides the acceleration service for random write operations and the temporary-storage service for the data of read operations; the write mirror unit 103 provides a redundancy backup protection function for dirty data in the large cache unit 101 and the small cache unit 102. When data is written: if the write operation hits a cache line of the small cache unit 102, the data is written to the small cache unit 102; if it misses the small cache unit 102 but hits a cache line of the large cache unit 101, the data is written to the large cache unit 101; if both the large cache unit 101 and the small cache unit 102 miss and the acceleration flag is valid, the data is written to the small cache unit 102; otherwise the data bypasses the flash storage resource and is written directly to the back-end storage cluster 203. When data is read: if the read operation hits a cache line of the small cache unit 102, the data in the small cache unit 102 is returned; if it misses the small cache unit 102 but hits a cache line of the large cache unit 101, the data in the large cache unit 101 is returned; if both the large cache unit 101 and the small cache unit 102 miss and the acceleration flag is valid, data of the size of one large-cache-unit cache line is read from the back-end storage cluster, loaded into a cache line of the large cache unit 101, and then returned to the virtual machine 201; if both units miss and the acceleration flag is invalid but the data temporary-storage flag is valid, the cache-line data of the corresponding small cache unit 102 is read from the back-end storage cluster, loaded into a cache line of the small cache unit 102, and then returned to the virtual machine 201; otherwise the data read from the back-end storage cluster bypasses the flash cache 100 and is sent directly to the front-end virtual machine 201. Here the virtual machine 201 is merely one example of a front-end data application unit; the front-end data application unit in the present invention is not limited thereto.
In the present embodiment, the structure of the physical flash storage resource (also called flash cache 100) is shown in Fig. 3. Each tray provides physical flash storage resource, and each tray internally uses corresponding techniques to guarantee its own reliability and stability. Dividing the physical flash storage resource into the large cache unit 101 and the small cache unit 102 effectively solves the problem that the cache state table of an ultra-large-capacity flash cache is too large.
Fig. 4 illustrates the cache-line state table of the large cache unit; that is, the states of the cache lines of the large cache unit include but are not limited to the states listed in Fig. 4. After simplification, the cache lines of the large cache unit have three basic states: the dirty state, in which the data in the cache line is inconsistent with the data in the back-end storage system 203; the clean state, in which the data in the cache line is consistent with the data in the back-end storage system 203; and the invalid state, in which the cache line holds no valid data. The state transitions are: when a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request of cache-line size (for example a write request from the virtual machine 201), and to the clean state on receiving a clean-data load request (for example data loaded from the storage system 203); when a cache line is in the dirty state, it transitions to the clean state only on receiving a cache-line flush request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request, and to the invalid state on receiving an invalidation request.
Fig. 5 illustrates the cache-line state table of the small cache unit; that is, the states of the cache lines of the small cache unit include but are not limited to the states listed in Fig. 5. After simplification, the cache lines of the small cache unit have four basic states: the dirty state, in which the data in the cache line is inconsistent with the data in the back-end storage system 203; the clean state, in which the data in the cache line is consistent with the data in the back-end storage system 203; the invalid state, in which the cache line holds no valid data; and the frozen state, in which the cache line is frozen and can only be read, not written. The state transitions are: when a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request (for example a write request from the virtual machine 201), and to the clean state on receiving a clean-data load request (for example data loaded from the storage system 203); when a cache line is in the dirty state, it transitions to the invalid state on receiving a cache-line flush request, and to the frozen state on receiving a move request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request, and to the invalid state on receiving a read request; when a cache line is in the frozen state, it jumps to the invalid state only on receiving the completion notification of the move.
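For concreteness, the two simplified state machines of Fig. 4 and Fig. 5 can be written as transition tables. The following minimal Python sketch is an illustration only; the event names are shorthand for the requests described above, not identifiers from the patent:

    from enum import Enum, auto

    class LineState(Enum):
        INVALID = auto()
        CLEAN = auto()
        DIRTY = auto()
        FROZEN = auto()                        # used by the small cache unit only

    S = LineState

    # Large cache unit, Fig. 4: three basic states.
    BIG_TRANSITIONS = {
        (S.INVALID, "write"):      S.DIRTY,    # data write request
        (S.INVALID, "clean_load"): S.CLEAN,    # clean-data load request
        (S.DIRTY,   "flush"):      S.CLEAN,    # cache-line flush request
        (S.CLEAN,   "write"):      S.DIRTY,
        (S.CLEAN,   "invalidate"): S.INVALID,  # invalidation request
    }

    # Small cache unit, Fig. 5: adds the read-only frozen state.
    LITTLE_TRANSITIONS = {
        (S.INVALID, "write"):      S.DIRTY,
        (S.INVALID, "clean_load"): S.CLEAN,
        (S.DIRTY,   "flush"):      S.INVALID,  # flush invalidates rather than cleans
        (S.DIRTY,   "move"):       S.FROZEN,   # move request freezes the line
        (S.CLEAN,   "write"):      S.DIRTY,
        (S.CLEAN,   "read"):       S.INVALID,  # staged read data is served once
        (S.FROZEN,  "move_done"):  S.INVALID,  # completion of the cache-line move
    }

    def step(table, state, event):
        # Events not listed for a state leave the cache line unchanged.
        return table.get((state, event), state)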
The different states and transitions of the large and small cache units are what realize the acceleration of read and write operations. In this example, the states and transitions of the large cache unit and the small cache unit differ because their service purposes differ. Whether the large cache unit or the small cache unit is used when accelerating a read-write access depends on policy information and on the state information of the large and small cache units. Policy information includes but is not limited to service levels, hit-probability predictions, and so on; it can come directly from the centralized control device 202, or from the virtual machine 201. State information includes but is not limited to hit or miss. In this example, the large cache unit provides conventional cache service and can apply different aging policies to different cache lines according to service level; the small cache unit provides cache acceleration for write operations that miss the large cache unit, and temporary data storage for read operations that miss the large cache unit.
The cache lines of the small cache unit 102 are small, for example 4 KB; the cache lines of the large cache unit 101 are large, for example 4 MB; the cache lines of the write mirror unit 103 can be the same size as those of the small cache unit 102. The specific cache-line sizes can be adjusted to the actual situation: for example, the cache-line size of the small cache unit 102 can be decided from the storage request pattern of the virtual machine 201, and the cache-line size of the large cache unit 101 from the implementation of the back-end distributed storage cluster 203.
The sizes of the small cache unit 102, the large cache unit 101 and the write mirror unit 103, and their mutual relations, can be determined from the DRAM resources in the centralized control device 202. For example, if all the tables recording cache state are to be kept in the DRAM of the centralized control device 202, then (Little_Size + Mirror_size) / Little_granularity + Big_Size / Big_granularity <= available_DRAM_Size / entry_size must hold, where Little_Size is the size of the small cache unit 102, Mirror_size is the size of the write mirror unit 103, Little_granularity is the cache-line size of the small cache unit 102 (which in this embodiment equals the block size of the data accesses of the virtual machine 201), Big_Size is the size of the large cache unit 101, Big_granularity is the cache-line size of the large cache unit 101, available_DRAM_Size is the size of the DRAM available for the cache state table, and entry_size is the size of each cache table entry.
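A minimal sketch of this sizing check follows. The 4 KB and 4 MB line sizes are the examples given above; the unit sizes, DRAM size and entry size are assumed figures for illustration only:

    def state_table_fits(little_size, mirror_size, big_size,
                         little_gran, big_gran,
                         available_dram_size, entry_size):
        """True if the state entry of every cache line fits in available DRAM."""
        entries = (little_size + mirror_size) // little_gran \
                  + big_size // big_gran
        return entries * entry_size <= available_dram_size

    KB, MB, GB, TB = 10**3, 10**6, 10**9, 10**12

    # Illustrative configuration, not figures from the patent.
    ok = state_table_fits(little_size=2 * TB, mirror_size=1 * TB,
                          big_size=200 * TB,
                          little_gran=4 * KB, big_gran=4 * MB,
                          available_dram_size=16 * GB, entry_size=16)
    print("state table fits in DRAM:", ok)   # ~12.8 GB of entries -> True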
The write mirror unit 103 provides redundancy backup protection for the dirty data in the large cache unit 101 and the small cache unit 102. Data from the virtual machine 201 is written into the write mirror unit 103 at the same time as it is written into the large cache unit 101 or the small cache unit 102.
A preferred arrangement further comprises a daemon unit, which is responsible for flushing the dirty data in the write mirror unit 103 to the back-end storage cluster 203 in the background. Because the write mirror unit 103 backs up only the dirty data of the large cache unit 101 and the small cache unit 102, and the daemon unit continuously flushes dirty data to the back-end storage cluster 203 at a predetermined pace, the dirty data in the flash cache 100 is bounded, and there is no need to make redundant backups of all the data in the whole flash cache 100. At the same time, because the backup policy uses write mirroring, it both reduces the performance cost of redundancy backup and achieves acceleration of all write operations.
As shown in Fig. 8, the processing flow of the daemon unit is as follows. It first checks the state of the write mirror unit 103. When the write mirror unit 103 is non-empty, the daemon unit takes one dirty data item and its related information (such as address information) from the write mirror, queries the flash cache state table with that information, and obtains the flash cache state. If the cache state shows a hit in a cache line of the small cache unit 102 and no hit in the large cache unit 101, the data in the small cache unit's cache line is flushed directly to the back-end storage cluster 203. If the cache state shows hits both in a cache line of the small cache unit 102 and in a cache line of the large cache unit 101, the data is first moved from the small cache unit's cache line into the large cache unit's cache line, and the data in the large cache unit's cache line is then flushed to the back-end storage cluster 203. If the cache state shows no hit in the small cache unit 102, a hit in the large cache unit 101, and the large cache unit's cache line contains dirty data, the data in the large cache unit's cache line is flushed to the back-end storage cluster 203. If the cache state shows no hit in the small cache unit 102, a hit in the large cache unit 101, and no dirty data in the large cache unit's cache line, then no operation on the large/small cache units is needed. It is worth noting that the description here is only an example; this flow can be changed correspondingly as the state information changes. Meanwhile, the write mirror unit 103 can be composed of multiple logical write mirror subunits, each with its own daemon process.
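A minimal sketch of one pass of this daemon loop, under the four state combinations just listed; every object and method name (write_mirror.take, backend.flush, and so on) is hypothetical:

    def daemon_pass(write_mirror, cache_state_table, backend):
        """One background pass of the daemon unit over the write mirror (Fig. 8)."""
        while not write_mirror.empty():
            dirty, info = write_mirror.take()      # dirty datum plus address info
            st = cache_state_table.lookup(info)    # query the flash cache state
            if st.little_hit and not st.big_hit:
                backend.flush(st.little_line)      # small line straight to backend
            elif st.little_hit and st.big_hit:
                st.big_line.absorb(st.little_line) # move small line into big line,
                backend.flush(st.big_line)         # then flush the big line
            elif st.big_hit and st.big_line.dirty:
                backend.flush(st.big_line)         # dirty big line to backend
            # big-line hit without dirty data: nothing to do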
The correspondence between cache logical units and physical locations (physical trays) is illustrated in Fig. 9. Each logical unit, namely the large cache unit 101, the small cache unit 102 and the write mirror unit 103, can span all physical trays; the benefit of doing so is higher concurrency across the physical trays and therefore higher performance. The write mirror logical unit can be divided into multiple small write mirror logical subunits, for example one logical write mirror subunit per tray; the benefit of dividing it is that multiple write-mirror daemon units can run concurrently, raising the speed at which dirty data is flushed to the back-end storage cluster.
As shown in Fig. 9, when new write data from the virtual machine 201 is written to the large cache unit 101, the small cache unit 102 and the write mirror unit 103, the following principle can be applied: the physical location written in the large cache unit or the small cache unit and the physical location written in the write mirror unit 103 are not on the same physical tray. For example, the tray number of the write mirror unit 103 can follow a rule as simple as the tray number written in the large cache unit 101 or the small cache unit 102 plus one (though the rule is not limited to this). The benefit is that the redundant backup and the original data are guaranteed to be on different physical trays, so that when a single physical tray is damaged the flash cache 100 still has a usable copy of the data. The large-cache cache-line size given in Fig. 9 is 4 MB, but in actual use it can be adjusted to the actual situation.
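A minimal sketch of the plus-one placement rule used as the example above; the rule is the text's own example, and the function name is hypothetical:

    def mirror_tray(data_tray: int, num_trays: int) -> int:
        """Tray for the write-mirror copy: the next tray after the one holding
        the cache-unit copy, wrapping around, so the two copies never share
        a physical tray."""
        return (data_tray + 1) % num_trays

    # With four trays, no cache-unit copy ever shares a tray with its mirror.
    assert all(mirror_tray(t, 4) != t for t in range(4))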
A single cache line of the small cache unit 102 or of the write mirror unit 103 can be located within one physical tray or span two or more physical trays; a single cache line of the large cache unit can likewise span multiple physical trays or stay within one. This example is described with the single cache line of the large cache unit located within one physical tray, which more easily achieves the technical effect of continuing to provide service when a single physical tray is damaged.
Under this division into a large cache unit, a small cache unit and a write mirror unit, the following example shows how continuous service is provided when one physical tray is damaged.
Suppose tray 1 in Fig. 9 is damaged and can no longer provide service, and the write mirror backing up the dirty data of tray 1 is on tray 0. Data recovery and continuous service then proceed as follows:
Step 1: mark tray 0 and tray 1 as unable to provide free cache lines.
Step 2: traverse and flush the dirty data, with the following threads:
Thread 1: traverse the cache-line state table of tray 0; cache lines in the clean state are invalidated directly, and cache lines in the dirty state have their data flushed to the back-end storage cluster and are then invalidated.
Thread 2: traverse the cache-line state table of tray 1; cache lines in the clean state are invalidated directly, and for cache lines in the dirty state the thread waits until the state changes to clean.
Thread 3: raise the running priority of the write-mirror daemon unit on tray 2 to the highest level.
Threads 1, 2 and 3 execute concurrently.
Step 3: after the traversals of tray 0 and tray 1 have both finished, set tray 0 back to a state in which it can provide free cache lines, because under the new arrangement tray 0 is double-backed-up by the write mirror unit on tray 2.
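A minimal sketch of this recovery procedure, with the tray numbering of Fig. 9; all helper names are hypothetical:

    from concurrent.futures import ThreadPoolExecutor

    def recover_after_tray1_failure(tray0, tray1, tray2_mirror_daemon, backend):
        # Step 1: neither tray may hand out free cache lines during recovery.
        tray0.allow_free_lines = False
        tray1.allow_free_lines = False

        def drain_tray0():                        # thread 1
            for line in tray0.state_table:
                if line.is_dirty():
                    backend.flush(line)           # dirty: flush first ...
                line.invalidate()                 # ... then invalidate

        def wait_tray1():                         # thread 2
            for line in tray1.state_table:
                if line.is_clean():
                    line.invalidate()
                elif line.is_dirty():
                    line.wait_until_clean()       # mirror copy is flushed elsewhere

        def boost_mirror():                       # thread 3
            tray2_mirror_daemon.set_priority("highest")

        with ThreadPoolExecutor(max_workers=3) as pool:   # run concurrently
            for work in (drain_tray0, wait_tray1, boost_mirror):
                pool.submit(work)
        # Step 3: the pool exit waits for all three; tray 0 can again provide
        # free cache lines, double-backed by the write mirror on tray 2.
        tray0.allow_free_lines = True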
The selectable algorithms deciding which physical tray a read-write operation from the virtual machine falls on follow this principle: when a physical tray is damaged, only the operations originally mapped to that physical tray are remapped to other physical trays, and the read-write operations originally mapped to the other physical trays keep their mapping relations unchanged. Many existing algorithms meet this requirement, for example the CRUSH algorithm.
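CRUSH itself is involved, but highest-random-weight (rendezvous) hashing is a simpler, well-known algorithm with exactly the stated stability property, and the following sketch illustrates the property; it is an illustration, not the patent's own mapping algorithm:

    import hashlib

    def pick_tray(key: bytes, trays: list[int]) -> int:
        """Rendezvous hashing: each (key, tray) pair gets a score, highest
        score wins.  Removing a tray only remaps the keys whose winner was
        that tray; every other key keeps its original mapping."""
        def score(tray: int) -> int:
            digest = hashlib.sha256(key + tray.to_bytes(4, "big")).digest()
            return int.from_bytes(digest, "big")
        return max(trays, key=score)

    trays = [0, 1, 2, 3]
    before = {k: pick_tray(str(k).encode(), trays) for k in range(1000)}
    survivors = [t for t in trays if t != 1]          # tray 1 is damaged
    after = {k: pick_tray(str(k).encode(), survivors) for k in range(1000)}
    # Only keys that used to map to the failed tray 1 have moved.
    assert all(before[k] == after[k] for k in before if before[k] != 1)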
Beyond solving the basic technical problems of the present invention, namely the cache-line table entry management problem and the data consistency problem, the inventors also found that, because the granularity of the read-write operations from the virtual machine 201 matches the cache-line size of the small cache unit 102 while the cache-line size of the large cache unit 101 is much larger, a read or write can hit the large and small cache units simultaneously. This can be solved by the following methods:
As shown in Fig. 2 and Fig. 6, when a write operation from the virtual machine 201 is sent to the flash cache 100: if the write operation hits a cache line of the small cache unit 102, the data is written to the small cache unit 102; if it does not hit the small cache unit 102 but hits a cache line of the large cache unit 101, the data is written to the large cache unit 101; if neither the large nor the small cache unit is hit, the acceleration flag is queried, and if it is valid the data is written to the small cache unit 102; otherwise the data is not written to the flash cache 100 and is written directly to the back-end storage cluster 203 through the centralized control device 202. This write flow guarantees that, whenever a write operation hits a cache line of the small cache unit 102, the data in the small cache unit 102 is always the newest.
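A minimal sketch of this write flow (Fig. 6), including the simultaneous copy into the write mirror unit described earlier; all object and method names are hypothetical:

    def handle_write(req, little, big, mirror, backend, accel_flag):
        """Dispatch one write from the virtual machine (Fig. 6)."""
        if little.hit(req.addr):
            little.write(req)        # small-unit hit: small unit stays newest
            mirror.write(req)        # dirty data is mirrored on another tray
        elif big.hit(req.addr):
            big.write(req)
            mirror.write(req)
        elif accel_flag:
            little.write(req)        # miss both, acceleration flag valid
            mirror.write(req)
        else:
            backend.write(req)       # bypass the flash cache entirely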
As shown in Fig. 2 and Fig. 7, when a read operation from the virtual machine 201 is sent to the flash cache 100: if the read operation hits a cache line of the small cache unit 102, the data in the small cache unit 102 is returned; if it does not hit the small cache unit 102 but hits a cache line of the large cache unit 101, the data in the large cache unit 101 is returned; if neither the large nor the small cache unit is hit, the acceleration flag is queried, and if it is valid, data of the size of one large cache line is read from the back-end storage cluster 203, loaded into a cache line of the large cache unit 101, and then returned to the virtual machine 201; if it is invalid, the data temporary-storage flag is queried, and if that is valid, the cache-line data of the small cache unit 102 is read from the back-end storage cluster 203, loaded into a cache line of the small cache unit 102, and then returned to the virtual machine 201; otherwise the data read from the back-end storage cluster bypasses the flash cache 100 and is passed through the centralized control device 202 directly to the front-end virtual machine 201.
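A minimal sketch of the corresponding read flow (Fig. 7), under the same hypothetical objects as the write sketch:

    def handle_read(req, little, big, backend, accel_flag, staging_flag):
        """Dispatch one read from the virtual machine (Fig. 7)."""
        if little.hit(req.addr):
            return little.read(req)                      # small-unit hit
        if big.hit(req.addr):
            return big.read(req)                         # large-unit hit
        if accel_flag:
            line = backend.read(req.addr, big.line_size) # fetch one big line
            big.load(req.addr, line)                     # load it, then answer
            return big.read(req)
        if staging_flag:
            line = backend.read(req.addr, little.line_size)
            little.load(req.addr, line)                  # stage in the small unit
            return little.read(req)
        return backend.read(req.addr, req.size)          # bypass the flash cache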
With the non-volatile cache implementation method of this embodiment, the size of the state table recording cache states can be kept within a certain range, and besides accelerating read operations, all write operations can be accelerated as well. In addition, only part of the data is backed up, so the amount of backed-up data is bounded and the backup operations have little impact on performance. Furthermore, continuous service can be provided without any hot spare disk.
Embodiment two:
The device of this embodiment corresponds to the non-volatile cache implementation method of the previous embodiment.
A non-volatile cache implementation device comprises a flash storage resource virtualization unit, a logical storage unit creation unit, a data write unit and a data read unit.
The flash storage resource virtualization unit is used to virtualize the physical flash storage resource into a flash storage pool.
The logical storage unit creation unit is used to create three kinds of logical storage units on the storage pool: a large cache unit, a small cache unit and a write mirror unit. The large cache unit provides conventional cache service; the small cache unit provides the acceleration service for random write operations and the temporary-storage service for the data of read operations; the write mirror unit provides redundancy backup protection for dirty data in the large cache and the small cache.
The physical flash storage resource preferably comprises two or more physical trays, with the large cache unit, the small cache unit and the write mirror unit spanning the two or more physical trays. Preferably, a single cache line of the small cache unit or of the write mirror unit is located within one physical tray, and a single cache line of the large cache unit is located within one physical tray or spans two or more physical trays.
When the data write unit writes data: if the write operation hits a cache line of the small cache unit, the data is written to the small cache unit; if it misses the small cache unit but hits a cache line of the large cache unit, the data is written to the large cache unit; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, the data is written to the small cache unit; otherwise the data bypasses the flash storage resource and is written directly to the back-end storage cluster.
When the data write unit writes data to the large cache unit, the small cache unit and the write mirror unit, the physical write location of the large cache unit is preferably on a different physical tray from the physical write location of the write mirror unit, and the physical write location of the small cache unit is preferably also on a different physical tray from the physical write location of the write mirror unit.
When the data read unit reads data: if the read operation hits a cache line of the small cache unit, the data in the small cache unit is returned; if it misses the small cache unit but hits a cache line of the large cache unit, the data in the large cache unit is returned; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, data of the size of one large-cache-unit cache line is read from the back-end storage cluster, loaded into a cache line of the large cache unit, and then returned to the virtual machine; if both units miss and the acceleration flag is invalid but the data temporary-storage flag is valid, the cache-line data of the corresponding small cache unit is read from the back-end storage cluster, loaded into a cache line of the small cache unit, and then returned to the virtual machine; otherwise the data read from the back-end storage cluster bypasses the flash storage resource and is sent directly to the front-end virtual machine.
The sizes of the large cache unit, the small cache unit and the write mirror unit can be divided in various ways, preferably satisfying the following formula:
(Little_Size + Mirror_size) / Little_granularity + Big_Size / Big_granularity <= available_DRAM_Size / entry_size, where
Big_Size is the size of the large cache unit,
Little_Size is the size of the small cache unit,
Mirror_size is the size of the write mirror unit,
Little_granularity is the cache-line size of the small cache unit,
Big_granularity is the cache-line size of the large cache unit,
available_DRAM_Size is the size of the DRAM available for the cache state table,
entry_size is the size of each cache table entry.
In addition, the write mirror unit can be composed of multiple logical write mirror subunits.
Which physical tray the operations of the data write unit and the data read unit fall on preferably follows this principle: when a physical tray is damaged, only the operations originally mapped to that physical tray are remapped to other physical trays, and the read-write operations originally mapped to the other physical trays keep their mapping relations unchanged.
The cache lines of the large cache unit have at least a dirty state, a clean state and an invalid state. The dirty state means the data in the cache line is inconsistent with the data in the back-end storage system; the clean state means the data in the cache line is consistent with the data in the back-end storage system; the invalid state means the cache line holds no valid data. When a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request and to the clean state on receiving a clean-data load request; when a cache line is in the dirty state, it jumps to the clean state only on receiving a cache-line flush request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request and to the invalid state on receiving an invalidation request.
The cache lines of the small cache unit have at least a dirty state, a clean state, an invalid state and a frozen state. The dirty state means the data in the cache line is inconsistent with the data in the back-end storage system; the clean state means the data in the cache line is consistent with the data in the back-end storage system; the invalid state means the cache line holds no valid data; the frozen state means the cache line is frozen and can only be read, not written. When a cache line is in the invalid state, it jumps to the dirty state on receiving a data write request and to the clean state on receiving a clean-data load request; when a cache line is in the dirty state, it jumps to the invalid state on receiving a cache-line flush request and to the frozen state on receiving a move request; when a cache line is in the clean state, it jumps to the dirty state on receiving a data write request and to the invalid state on receiving a read request; when a cache line is in the frozen state, it jumps to the invalid state only on receiving the completion notification of a cache-line move.
This embodiment preferably further includes a daemon unit, which flushes the dirty data in the write mirror unit to the back-end storage cluster in the background, so that the dirty data in the flash storage resource that needs redundancy backup is kept within a predetermined range. The redundancy backup preferably uses a write-mirror scheme.
Those skilled in the art will recognize that numerous adaptations can be made to the above description, so the embodiments describe only one or more particular implementations.
Although example embodiments regarded as the present invention have been described and illustrated, those skilled in the art will appreciate that various changes and substitutions can be made to them without departing from the spirit of the present invention. Furthermore, many modifications can be made to adapt the teachings of the present invention to a particular situation without departing from the central concept of the invention described herein. Therefore, the present invention is not limited to the specific embodiments disclosed here, and may also include all embodiments falling within the scope of the invention and their equivalents.

Claims (24)

1. A non-volatile cache implementation method, characterized in that: a physical flash storage resource is first virtualized into a flash storage pool, and three kinds of logical storage units are then created on the storage pool: a large cache unit, a small cache unit and a write mirror unit; the large cache unit is used to provide conventional cache service; the small cache unit is used to provide the acceleration service for random write operations and the temporary-storage service for the data of read operations; the write mirror unit is used to provide a redundancy backup protection function for dirty data in the large cache unit and the small cache unit;
when data is written: if the write operation hits a cache line of the small cache unit, the data is written to the small cache unit; if it misses the small cache unit but hits a cache line of the large cache unit, the data is written to the large cache unit; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, the data is written to the small cache unit; otherwise the data bypasses the flash storage resource and is written directly to the back-end storage cluster;
when data is read: if the read operation hits a cache line of the small cache unit, the data in the small cache unit is returned; if it misses the small cache unit but hits a cache line of the large cache unit, the data in the large cache unit is returned; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, data of the size of one large-cache-unit cache line is read from the back-end storage cluster, loaded into a cache line of the large cache unit, and then returned to the front-end data application unit; if both units miss and the acceleration flag is invalid but the data temporary-storage flag is valid, the cache-line data of the corresponding small cache unit is read from the back-end storage cluster, loaded into a cache line of the small cache unit, and then returned to the front-end data application unit; otherwise the data read from the back-end storage cluster bypasses the flash storage resource and is sent directly to the front-end data application unit.
2. The non-volatile cache implementation method of claim 1, characterized in that the sizes of the large cache unit, the small cache unit and the write mirror unit satisfy the following formula:
(Little_Size + Mirror_size) / Little_granularity + Big_Size / Big_granularity <= available_DRAM_Size / entry_size, where
Big_Size is the size of the large cache unit,
Little_Size is the size of the small cache unit,
Mirror_size is the size of the write mirror unit,
Little_granularity is the cache-line size of the small cache unit,
Big_granularity is the cache-line size of the large cache unit,
available_DRAM_Size is the size of the DRAM available for the cache state table,
entry_size is the size of each cache table entry.
3. The non-volatile cache implementation method of claim 1, characterized in that: the write mirror unit is composed of at least one logical write mirror subunit, and the large cache unit and the small cache unit are composed, respectively, of at least one logical large cache subunit and at least one logical small cache subunit.
4. The non-volatile cache implementation method as claimed in claim 1, characterized in that: the physical flash storage resources comprise two or more physical trays, and the large cache unit, the small cache unit and the write mirror unit each span the two or more physical trays.
5. The non-volatile cache implementation method as claimed in claim 4, characterized in that: when data is written to the large cache unit, the small cache unit and the write mirror unit, the write physical location of the large cache unit and the write physical location of the write mirror unit are on different physical trays, and the write physical location of the small cache unit and the write physical location of the write mirror unit are also on different physical trays.
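As a hedged illustration of claim 5's placement rule, here is one simple scheme that guarantees a primary copy and its mirror never share a tray; the modular tray choice is an invented example, not the patent's method:

    # Hypothetical placement for claim 5: the mirror copy always lands on a
    # tray different from the primary copy's tray.
    def place_with_mirror(addr, trays):
        assert len(trays) >= 2                 # claim 4: two or more trays
        primary = hash(addr) % len(trays)
        mirror = (primary + 1) % len(trays)    # adjacent tray, never the same
        return trays[primary], trays[mirror]

    print(place_with_mirror(0x42, ["tray0", "tray1", "tray2"]))  # distinct pair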
6. The non-volatile cache implementation method as claimed in claim 4, characterized in that: a single cache line of the small cache unit or of the write mirror unit is located within one physical tray or spans two or more physical trays, and a single cache line of the large cache unit is located within one physical tray or spans two or more physical trays.
7. The non-volatile cache implementation method as claimed in claim 4, characterized in that data write operations and data read operations are distributed to physical trays according to the following principle: when a physical tray fails, only the operations originally mapped to that tray are redirected to the other physical trays, while the read and write operations already mapped to the other physical trays keep their mapping relations unchanged.
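The claim-7 principle, illustrated under assumptions (the hash-based home-tray scheme below is invented; the claim only fixes the behavior that surviving mappings stay put while traffic to a failed tray is re-routed):

    # Hypothetical realization of the claim-7 failover principle: only
    # addresses whose home tray failed are re-routed; all other mappings
    # are left untouched.
    def pick_tray(addr, trays, failed):
        home = trays[hash(addr) % len(trays)]
        if home not in failed:
            return home                        # surviving mapping stays put
        survivors = [t for t in trays if t not in failed]
        return survivors[hash(addr) % len(survivors)]  # re-route this address only

    trays = ["tray0", "tray1", "tray2", "tray3"]
    print(pick_tray(0x1234, trays, failed=set()))       # home tray: tray1
    print(pick_tray(0x1234, trays, failed={"tray2"}))   # unchanged: home survived
    print(pick_tray(6, trays, failed={"tray2"}))        # re-routed from failed tray2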
8. The non-volatile cache implementation method as claimed in claim 1, characterized in that: a cache line of the large cache unit has at least a dirty state, a clean state and an invalid state, where the dirty state indicates that the data in the cache line is inconsistent with the data in the back-end storage system, the clean state indicates that the data in the cache line is consistent with the data in the back-end storage system, and the invalid state indicates that the cache line holds no valid data;
when a cache line is in the invalid state, it transitions to the dirty state upon receiving a data write request, and to the clean state upon receiving a clean-data load request;
when a cache line is in the dirty state, it transitions to the clean state only upon receiving a cache-line flush request;
when a cache line is in the clean state, it transitions to the dirty state upon receiving a data write request, and to the invalid state upon receiving an invalidation request.
9. The non-volatile cache implementation method as claimed in claim 1, characterized in that: a cache line of the small cache unit has at least a dirty state, a clean state, an invalid state and a frozen state, where the dirty state indicates that the data in the cache line is inconsistent with the data in the back-end storage system, the clean state indicates that the data in the cache line is consistent with the data in the back-end storage system, the invalid state indicates that the cache line holds no valid data, and the frozen state indicates that the cache line is frozen and can only be read, not written;
when a cache line is in the invalid state, it transitions to the dirty state upon receiving a data write request, and to the clean state upon receiving a clean-data load request;
when a cache line is in the dirty state, it transitions to the invalid state upon receiving a cache-line flush request, and to the frozen state upon receiving a move request;
when a cache line is in the clean state, it transitions to the dirty state upon receiving a data write request, and to the invalid state upon receiving a read request;
when a cache line is in the frozen state, it transitions to the invalid state only upon receiving a move-completion return.
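The transition tables of claims 8 and 9 can be encoded directly; in the sketch below the event names (write, clean_load, flush, move, and so on) paraphrase the claim language and are not identifiers from the patent:

    # Direct encoding of the cache-line state machines of claims 8 and 9.
    LARGE_TRANSITIONS = {                      # claim 8: large cache unit
        ("invalid", "write"):      "dirty",
        ("invalid", "clean_load"): "clean",
        ("dirty",   "flush"):      "clean",
        ("clean",   "write"):      "dirty",
        ("clean",   "invalidate"): "invalid",
    }

    SMALL_TRANSITIONS = {                      # claim 9: small cache unit
        ("invalid", "write"):      "dirty",
        ("invalid", "clean_load"): "clean",
        ("dirty",   "flush"):      "invalid",  # unlike claim 8, flush invalidates
        ("dirty",   "move"):       "frozen",   # frozen lines are read-only
        ("clean",   "write"):      "dirty",
        ("clean",   "read"):       "invalid",  # per claim 9: reading a clean line invalidates it
        ("frozen",  "move_done"):  "invalid",
    }

    def step(table, state, event):
        """Apply one event; unlisted (state, event) pairs leave the state unchanged."""
        return table.get((state, event), state)

    s = "invalid"
    for ev in ("write", "move", "move_done"):  # invalid -> dirty -> frozen -> invalid
        s = step(SMALL_TRANSITIONS, s, ev)
    print(s)  # invalid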
10. The non-volatile cache implementation method as claimed in claim 1, characterized in that: the method further includes a daemon unit, which flushes the dirty data in the write mirror unit to the back-end storage cluster in the background, so as to keep the amount of dirty data requiring redundant backup in the flash storage resources within a predetermined range.
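A background flusher of the kind claim 10 describes might look like the following sketch; the watermark values and the mirror/backend helper methods (dirty_ratio, pop_oldest_dirty, mark_clean) are assumptions made for illustration:

    # Hypothetical daemon for claim 10: drain dirty data from the write
    # mirror unit to the back-end cluster whenever it exceeds a watermark.
    import time

    HIGH_WATERMARK = 0.75   # start draining above 75% dirty occupancy (assumed)
    LOW_WATERMARK  = 0.25   # stop draining below 25% dirty occupancy (assumed)

    def daemon_loop(mirror, backend, poll_interval=0.1):
        while True:
            if mirror.dirty_ratio() > HIGH_WATERMARK:
                while mirror.dirty_ratio() > LOW_WATERMARK:
                    addr, data = mirror.pop_oldest_dirty()  # choose a victim line
                    backend.write(addr, data)               # destage to the cluster
                    mirror.mark_clean(addr)                 # backup no longer needed
            time.sleep(poll_interval)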
11. The non-volatile cache implementation method as claimed in claim 10, characterized in that: the redundant backup is implemented by write mirroring.
12. The non-volatile cache implementation method as claimed in any one of claims 1-11, characterized in that: the physical flash storage resources are flash memory or phase-change memory.
13. A non-volatile cache implementation device, characterized in that it comprises:
a flash storage resource virtualization unit, configured to virtualize physical flash storage resources into a flash storage pool;
a logical storage unit creation unit, configured to create three kinds of logical storage units on the storage pool: a large cache unit, a small cache unit and a write mirror unit, wherein the large cache unit provides conventional cache service, the small cache unit provides acceleration service for random write operations and data staging service for read operations, and the write mirror unit provides redundant backup protection for dirty data in the large cache unit and the small cache unit; and
a data write unit and a data read unit;
when the data write unit writes data: if the write operation hits a cache line of the small cache unit, the data is written to the small cache unit; if it misses the small cache unit but hits a cache line of the large cache unit, the data is written to the large cache unit; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, the data is written to the small cache unit; otherwise the data bypasses the flash storage resources and is written directly to the back-end storage cluster;
when the data read unit reads data: if the read operation hits a cache line of the small cache unit, the data in the small cache unit is returned; if it misses the small cache unit but hits a cache line of the large cache unit, the data in the large cache unit is returned; if both the large cache unit and the small cache unit miss and the acceleration flag is valid, data of one large-cache-line size is read from the back-end storage cluster, loaded into a cache line of the large cache unit, and then returned to the front-end data application unit; if both the large cache unit and the small cache unit miss and the acceleration flag is invalid but the data staging flag is valid, the corresponding small-cache-line data is read from the back-end storage cluster, loaded into a cache line of the small cache unit, and then returned to the front-end data application unit; otherwise the data read from the back-end storage cluster bypasses the flash storage resources and is delivered directly to the front-end data application unit.
14. The non-volatile cache implementation device as claimed in claim 13, characterized in that: the sizes of the large cache unit, the small cache unit and the write mirror unit satisfy the following formula:
(Little_Size + Mirror_size) / Little_granularity + Big_Size / Big_granularity <= Available_DRAM_Size / entry_size
where
Big_Size is the size of the large cache unit,
Little_Size is the size of the small cache unit,
Mirror_size is the size of the write mirror unit,
Little_granularity is the cache-line size of the small cache unit,
Big_granularity is the cache-line size of the large cache unit,
Available_DRAM_Size is the size of the DRAM available for storing the cache state table, and
entry_size is the size of each cache state-table entry.
15. The non-volatile cache implementation device as claimed in claim 13, characterized in that: the write mirror unit is composed of at least one logical write-mirror subunit, and the large cache unit and the small cache unit may each be composed of one or more logical large cache subunits and logical small cache subunits, respectively.
16. The non-volatile cache implementation device as claimed in claim 13, characterized in that: the physical flash storage resources comprise two or more physical trays, and the large cache unit, the small cache unit and the write mirror unit each span the two or more physical trays.
17. The non-volatile cache implementation device as claimed in claim 16, characterized in that: when the data write unit writes data to the large cache unit, the small cache unit and the write mirror unit, the write physical location of the large cache unit and the write physical location of the write mirror unit are on different physical trays, and the write physical location of the small cache unit and the write physical location of the write mirror unit are also on different physical trays.
18. The non-volatile cache implementation device as claimed in claim 16, characterized in that: a single cache line of the small cache unit or of the write mirror unit is located within one physical tray or spans two or more physical trays, and a single cache line of the large cache unit is located within one physical tray or spans two or more physical trays.
19. The non-volatile cache implementation device as claimed in claim 16, characterized in that the operations of the data write unit and the data read unit are distributed to physical trays according to the following principle: when a physical tray fails, only the operations originally mapped to that tray are redirected to the other physical trays, while the read and write operations already mapped to the other physical trays keep their mapping relations unchanged.
20. The non-volatile cache implementation device as claimed in claim 13, characterized in that: a cache line of the large cache unit has at least a dirty state, a clean state and an invalid state, where the dirty state indicates that the data in the cache line is inconsistent with the data in the back-end storage system, the clean state indicates that the data in the cache line is consistent with the data in the back-end storage system, and the invalid state indicates that the cache line holds no valid data;
when a cache line is in the invalid state, it transitions to the dirty state upon receiving a data write request, and to the clean state upon receiving a clean-data load request;
when a cache line is in the dirty state, it transitions to the clean state only upon receiving a cache-line flush request;
when a cache line is in the clean state, it transitions to the dirty state upon receiving a data write request, and to the invalid state upon receiving an invalidation request.
21. The non-volatile cache implementation device as claimed in claim 13, characterized in that: a cache line of the small cache unit has at least a dirty state, a clean state, an invalid state and a frozen state, where the dirty state indicates that the data in the cache line is inconsistent with the data in the back-end storage system, the clean state indicates that the data in the cache line is consistent with the data in the back-end storage system, the invalid state indicates that the cache line holds no valid data, and the frozen state indicates that the cache line is frozen and can only be read, not written;
when a cache line is in the invalid state, it transitions to the dirty state upon receiving a data write request, and to the clean state upon receiving a clean-data load request;
when a cache line is in the dirty state, it transitions to the invalid state upon receiving a cache-line flush request, and to the frozen state upon receiving a move request;
when a cache line is in the clean state, it transitions to the dirty state upon receiving a data write request, and to the invalid state upon receiving a read request;
when a cache line is in the frozen state, it transitions to the invalid state only upon receiving a move-completion return.
22. The non-volatile cache implementation device as claimed in claim 13, characterized in that: the device further comprises a daemon unit, which flushes the dirty data in the write mirror unit to the back-end storage cluster in the background, so as to keep the amount of dirty data requiring redundant backup in the flash storage resources within a predetermined range.
23. The non-volatile cache implementation device as claimed in claim 22, characterized in that: the redundant backup is implemented by write mirroring.
24. The non-volatile cache implementation device as claimed in any one of claims 13-23, characterized in that: the physical flash storage resources are flash memory or phase-change memory.
CN201410806036.9A 2014-12-19 2014-12-19 Nonvolatile cache realization method and device Active CN104484287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410806036.9A CN104484287B (en) 2014-12-19 2014-12-19 Nonvolatile cache realization method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410806036.9A CN104484287B (en) 2014-12-19 2014-12-19 Nonvolatile cache realization method and device

Publications (2)

Publication Number Publication Date
CN104484287A CN104484287A (en) 2015-04-01
CN104484287B true CN104484287B (en) 2017-05-17

Family

ID=52758830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410806036.9A Active CN104484287B (en) 2014-12-19 2014-12-19 Nonvolatile cache realization method and device

Country Status (1)

Country Link
CN (1) CN104484287B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016095233A1 (en) * 2014-12-19 2016-06-23 北京麓柏科技有限公司 Method and apparatus for realizing non-volatile cache
CN107179878B (en) * 2016-03-11 2021-03-19 伊姆西Ip控股有限责任公司 Data storage method and device based on application optimization
CN108459826B (en) * 2018-02-01 2020-12-29 杭州宏杉科技股份有限公司 Method and device for processing IO (input/output) request
CN110032526B (en) * 2019-04-16 2021-10-15 苏州浪潮智能科技有限公司 A method, system and device for page caching based on non-volatile media
CN111045604B (en) * 2019-12-11 2022-11-01 苏州浪潮智能科技有限公司 Small file read-write acceleration method and device based on NVRAM
CN113010474B (en) * 2021-03-16 2023-10-24 中国联合网络通信集团有限公司 File management method, instant messaging method and storage server
CN113190473B (en) * 2021-04-30 2023-05-30 广州大学 Cache data management method and medium based on energy collection nonvolatile processor
CN118331508B (en) * 2024-06-14 2024-09-20 鹏钛存储技术(南京)有限公司 Method and system for maintaining data access consistency

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7356651B2 (en) * 2004-01-30 2008-04-08 Piurata Technologies, Llc Data-aware cache state machine

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5448719A (en) * 1992-06-05 1995-09-05 Compaq Computer Corp. Method and apparatus for maintaining and retrieving live data in a posted write cache in case of power failure
CN1862475A (en) * 2005-07-15 2006-11-15 华为技术有限公司 Method for managing magnetic disk array buffer storage
CN102713828A (en) * 2011-12-21 2012-10-03 华为技术有限公司 Multi-device mirror images and stripe function-providing disk cache method, device, and system
CN103226519A (en) * 2012-01-31 2013-07-31 Lsi公司 Elastic cache of redundant cache data

Also Published As

Publication number Publication date
CN104484287A (en) 2015-04-01

Similar Documents

Publication Publication Date Title
CN104484287B (en) Nonvolatile cache realization method and device
CN103207839B (en) Cache management method and system that track in the high-speed cache of storage is removed
US10126964B2 (en) Hardware based map acceleration using forward and reverse cache tables
US5551002A (en) System for controlling a write cache and merging adjacent data blocks for write operations
US7010645B2 (en) System and method for sequentially staging received data to a write cache in advance of storing the received data
CN105159622B (en) A kind of method and system reducing SSD read-write IO time delay
US20030105928A1 (en) Method, system, and program for destaging data in cache
US20140115235A1 (en) Cache control apparatus and cache control method
US20200042343A1 (en) Virtual machine replication and migration
US20100115193A1 (en) System and method for improving data integrity and memory performance using non-volatile media
US9037787B2 (en) Computer system with physically-addressable solid state disk (SSD) and a method of addressing the same
TWI771933B (en) Method for performing deduplication management with aid of command-related filter, host device, and storage server
CN101236482B (en) Method for processing data under degrading state and independent redundancy magnetic disc array system
US20100235568A1 (en) Storage device using non-volatile memory
US10564865B2 (en) Lockless parity management in a distributed data storage system
JP2001142778A (en) Method for managing cache memory, multiplex fractionization cache memory system and memory medium for controlling the system
KR20140111588A (en) System, method and computer-readable medium for managing a cache store to achieve improved cache ramp-up across system reboots
CN108459826A (en) A kind of method and device of processing I/O Request
CN101866307A (en) Data storage method and device based on mirror image technology
US5420983A (en) Method for merging memory blocks, fetching associated disk chunk, merging memory blocks with the disk chunk, and writing the merged data
CN109582219A (en) Storage system, computing system and its method
US10114566B1 (en) Systems, devices and methods using a solid state device as a caching medium with a read-modify-write offload algorithm to assist snapshots
KR101562794B1 (en) Data storage device
Lv et al. Zonedstore: A concurrent zns-aware cache system for cloud data storage
US8732404B2 (en) Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180829

Address after: 100013 11, 1 anding Gate Street, Chaoyang District, Beijing (anzhen incubator C218)

Patentee after: Beijing Jiangjiang science and Technology Center (limited partnership)

Address before: 100083 B-602-017 5, 1 building, 18 Zhongguancun East Road, Haidian District, Beijing.

Patentee before: NETBRIC TECHNOLOGY CO., LTD.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Nonvolatile cache realization method and device

Effective date of registration: 20200119

Granted publication date: 20170517

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: Beijing Jiangjiang science and Technology Center (limited partnership)

Registration number: Y2020990000082

PC01 Cancellation of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20210305

Granted publication date: 20170517

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: Beijing Jiangjiang science and Technology Center (L.P.)

Registration number: Y2020990000082

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210520

Address after: 100089 room 203-1, 2 / F, building 1, courtyard 1, Shangdi East Road, Haidian District, Beijing

Patentee after: Beijing Zeshi Technology Co.,Ltd.

Address before: 100013 11, 1 anding Gate Street, Chaoyang District, Beijing (anzhen incubator C218)

Patentee before: Beijing Jiangjiang science and Technology Center (L.P.)