CN110688056B - Storage medium replacement for NVM group - Google Patents
Storage medium replacement for NVM group
- Publication number
- CN110688056B CN201810730565.3A
- Authority
- CN
- China
- Prior art keywords
- logical unit
- logical
- lun
- unit
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Techniques For Improving Reliability Of Storages (AREA)
Abstract
Storage medium replacement for NVM groups is disclosed. The disclosed storage medium replacement method for an NVM group includes: selecting a source logical unit and a destination logical unit to be replaced; copying the data of the source logical unit to the destination logical unit; and updating metadata so that accesses to the source logical unit are mapped to the destination logical unit.
Description
Technical Field
The present application relates to storage devices, and more particularly to replacement or data migration of storage media of the storage devices that make up an NVM group (NVMSet).
Background
Fig. 1 illustrates a block diagram of a storage device. The solid state storage device 100 is coupled to a host to provide storage capacity for the host. The host and the solid state storage device 100 may be coupled in a variety of ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, a wireless communication network, and the like. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 100 includes an interface 110, a control unit 120, one or more NVM chips 130, and DRAM (Dynamic Random Access Memory) 140.
NAND flash memory, phase change memory, feRAM (Ferroelectric RAM, ferroelectric memory), MRAM (Magnetic Random Access Memory, magnetoresistive memory), RRAM (RESISTIVE RANDOM ACCESS MEMORY, resistive memory), etc. are common NVM.
The interface 110 may be adapted to exchange data with the host by way of, for example, SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, Fibre Channel, and the like.
Control unit 120 is used to control data transfer between the interface 110, the NVM chips 130, and the DRAM 140, and is also used for memory management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control unit 120 may be implemented in various manners including software, hardware, firmware, or a combination thereof; for example, the control unit 120 may take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or a combination thereof. The control unit 120 may also include a processor or controller in which software is executed to manipulate the hardware of the control unit 120 to process IO (Input/Output) commands. The control unit 120 may also be coupled to the DRAM 140 and may access the data of the DRAM 140. FTL tables and/or cached data of IO commands may be stored in the DRAM.
Control unit 120 includes a flash interface controller (alternatively referred to as a media interface controller, a flash channel controller) that is coupled to NVM chip 130 and issues commands to NVM chip 130 in a manner that complies with the interface protocol of NVM chip 130 to operate NVM chip 130 and to receive command execution results output from NVM chip 130. Known NVM chip interface protocols include "Toggle", "ONFI", and the like.
A memory target (Target) is one or more logical units (LUNs, Logical Units) sharing a chip enable (CE) signal within a NAND flash package. One or more dies (Die) are included within the NAND flash package. Typically, a logical unit corresponds to a single die. A logical unit may include multiple planes (Planes). Multiple planes within a logical unit may be accessed in parallel, while multiple logical units within a NAND flash memory chip may execute commands and report status independently of each other. The meanings of target, logical unit (LUN), and plane are provided in "Open NAND Flash Interface Specification (Revision 3.0)", available from http://www.micron.com/-/media/Documents/Products/Other%20Documents/ONFI3_0Gold, which is part of the prior art.
Data is typically stored and read on a storage medium on a page basis, while data is erased in blocks. A block (also called a physical block) contains a plurality of pages. Pages on the storage medium (called physical pages) have a fixed size, e.g., 17664 bytes, although physical pages may also have other sizes.
In a solid state storage device, an FTL (Flash Translation Layer) is utilized to maintain mapping information from logical addresses to physical addresses. The logical addresses constitute the storage space of the solid state storage device as perceived by upper-level software such as an operating system. A physical address is an address used to access a physical storage unit of the solid state storage device. In the prior art, address mapping may also be implemented using an intermediate address form; for example, logical addresses are mapped to intermediate addresses, which in turn are further mapped to physical addresses.
The table structure storing mapping information from logical addresses to physical addresses is called FTL table. FTL tables are important metadata in solid state storage devices. Typically, the data items of the FTL table record address mapping relationships in units of data pages in the solid-state storage device.
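As a minimal illustrative sketch (the types, field widths, and function names below are assumptions for illustration, not taken from the embodiments), an FTL table can be modeled as a flat array indexed by logical page number, with each entry recording the physical address of the corresponding data page:

```c
#include <stdint.h>

/* Hypothetical physical address layout: logical unit, block, page.
 * The field widths are illustrative assumptions only. */
typedef struct {
    uint32_t lun;    /* logical unit (LUN) number       */
    uint32_t block;  /* physical block within the LUN   */
    uint32_t page;   /* physical page within the block  */
} phys_addr_t;

/* FTL table: one entry per data page, indexed by logical page number. */
typedef struct {
    phys_addr_t *entries;
    uint64_t     num_pages;
} ftl_table_t;

/* Look up the physical address currently mapped to a logical page. */
phys_addr_t ftl_lookup(const ftl_table_t *ftl, uint64_t lpn)
{
    return ftl->entries[lpn];
}

/* Remap a logical page, e.g. after a host write or a data migration. */
void ftl_update(ftl_table_t *ftl, uint64_t lpn, phys_addr_t pa)
{
    ftl->entries[lpn] = pa;
}
```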
Fig. 2 shows a schematic diagram of a large block. A large block includes a physical block from each of a plurality of logical units (referred to as a logical unit group). Preferably, each logical unit provides one physical block for the large block. By way of example, a large block is constructed on every 16 logical units (LUNs); each large block then includes 16 physical blocks, one from each of the 16 logical units (LUNs). In the example of FIG. 2, large block 0 includes physical block 0 from each of the 16 logical units (LUNs), and large block 1 includes physical block 1 from each logical unit (LUN). There are a variety of other ways to construct large blocks. For example, Chinese patent application 2017107523210 (entitled "Variable large block based garbage collection method and apparatus") provides a way to construct large blocks conveniently.
As an alternative, page stripes are constructed within a large block, with physical pages having the same physical address in each logical unit (LUN) constituting a "page stripe". In FIG. 2, physical pages P0-0, P0-1, ..., and P0-x form page stripe 0, where physical pages P0-0, P0-1, ..., P0-14 are used to store user data, and P0-15 is used to store parity data calculated from all user data within the stripe. Similarly, in FIG. 2, physical pages P2-0, P2-1, ..., and P2-x constitute page stripe 2. Alternatively, the physical page used to store the parity data may be located anywhere in the page stripe.
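The parity page of a page stripe can be computed as the byte-wise XOR of the user-data pages in the stripe. The following is a sketch under assumed parameters (the 17664-byte page size is the example given above, and the 16-LUN stripe width follows FIG. 2); it is an illustration, not an implementation taken from the embodiment:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   17664   /* example physical page size given above          */
#define STRIPE_LUNS 16      /* 16 LUNs: 15 user-data pages plus 1 parity page  */

/* Compute the parity page of one page stripe as the byte-wise XOR of the
 * user-data pages; any single lost page of the stripe can then be rebuilt
 * by XOR-ing the remaining pages. */
void compute_stripe_parity(const uint8_t user_pages[STRIPE_LUNS - 1][PAGE_SIZE],
                           uint8_t parity[PAGE_SIZE])
{
    memset(parity, 0, PAGE_SIZE);
    for (size_t lun = 0; lun < STRIPE_LUNS - 1; lun++)
        for (size_t i = 0; i < PAGE_SIZE; i++)
            parity[i] ^= user_pages[lun][i];
}
```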
The storage device also performs wear leveling operations so that each physical block of the storage device experiences substantially the same number of erasures during use, thereby reducing the risk that lifetime exhaustion of individual physical blocks adversely affects the lifetime of the storage device.
In order to improve the quality of service of a storage device, the provision of an NVM group (NVM Set) mechanism in the storage device is under discussion (see https://www.snia.org/sites/default/files/SDCEMEA/2018/Presentations/Achieving-Predictable-Latency-Solid-State-Storage-SSD-SNIA-SDC-EMEA-2018.pdf). An NVM group is a set of nonvolatile storage media. Nonvolatile storage media in different NVM groups are independent of each other; for example, a nonvolatile storage medium belonging to one NVM group does not also belong to another NVM group. By distinguishing NVM groups in the storage device, the impact of IO commands accessing some of the nonvolatile storage media on the processing performance of IO commands accessing other nonvolatile storage media is reduced or eliminated. An endurance group (Endurance Group) is also under discussion; an endurance group may include one or more NVM groups, and there may be one or more endurance groups in a storage device.
A namespace (NS) is also defined in the NVMe protocol. A namespace of size n is a collection of logical blocks with logical block addresses from 0 to n-1. A namespace can be uniquely identified by a namespace ID (NSID). The nonvolatile storage media used by the same namespace come from the same NVM group, not from multiple NVM groups. Two or more namespaces can use nonvolatile storage media from the same NVM group.
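The relationships among endurance groups, NVM groups, and namespaces described above can be summarized by the following data-structure sketch; all names and size limits are illustrative assumptions rather than definitions taken from the NVMe specification:

```c
#include <stdint.h>

#define MAX_LUNS_PER_SET 16   /* illustrative limits, not from the specification */
#define MAX_SETS_PER_EG   8

/* An NVM group (NVM set) owns a disjoint collection of logical units. */
typedef struct {
    uint16_t set_id;
    uint16_t lun_ids[MAX_LUNS_PER_SET];
    uint16_t num_luns;
} nvm_set_t;

/* An endurance group contains one or more NVM groups; wear leveling and
 * (in the embodiments below) logical unit swapping stay inside it. */
typedef struct {
    uint16_t  eg_id;
    nvm_set_t sets[MAX_SETS_PER_EG];
    uint16_t  num_sets;
} endurance_group_t;

/* A namespace draws all of its storage from exactly one NVM group. */
typedef struct {
    uint32_t nsid;          /* namespace ID (NSID)             */
    uint16_t set_id;        /* the single NVM group backing it */
    uint64_t num_lblocks;   /* logical blocks 0 .. n-1         */
} namespace_t;
```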
Disclosure of Invention
Once the NVM groups are provided, wear imbalance will occur between NVM groups in the storage device as the NVM groups are used. There is a further need to provide wear leveling capability between NVM groups to extend the life of the storage device. Wear leveling between NVM groups is achieved by replacement of storage media such as logical units.
According to a first aspect of the present application, there is provided a first storage medium replacement method for an NVM group according to the first aspect of the present application, comprising: selecting a source logical unit and a destination logical unit to be replaced; copying the data of the source logical unit to the destination logical unit; and updating metadata so that accesses to the source logical unit are mapped to the destination logical unit.
According to the first storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a second storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: erasing the physical blocks of the source logical unit.
According to the first or second storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a third storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: setting the destination logical unit to an occupied state and setting the source logical unit to an idle state.
According to one of the first through third storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fourth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein: part of the data of the source logical unit is copied to the destination logical unit; and the method further comprises: copying yet another portion of the data of the source logical unit to a first logical unit.
According to one of the first through fourth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fifth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the virtual logical unit mapped to the source logical unit is modified, by updating metadata, to be mapped to the destination logical unit.
According to one of the first through fourth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a sixth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the logical address for accessing the copied data is modified, by updating metadata, to a logical address mapped to the destination logical unit.
According to one of the first through sixth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a seventh storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the physical address of the copied data on the destination logical unit is the same as its physical address on the source logical unit.
According to one of the first through seventh storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided an eighth storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: acquiring a first large block on the source logical unit; selecting a first logical unit from a plurality of logical units including the destination logical unit; selecting a first physical block from the first logical unit to replace the physical block provided by the source logical unit for the first large block; and copying data of the physical block provided by the source logical unit for the first large block to the first physical block.
According to the eighth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a ninth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the first logical unit is selected from the plurality of logical units at a specified probability, and each of the plurality of logical units has a probability of being selected.
According to the ninth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a tenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the probability that the first logical unit is selected is greater than the probability that logical units other than the first logical unit among the plurality of logical units are selected.
According to the eighth or ninth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided an eleventh storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: setting a probability of being selected for each logical unit.
According to the eleventh storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a twelfth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein substantially the same probability of being selected is set for each logical unit in a first phase of the lifecycle of the storage device, and different probabilities of being selected are set for the logical units in a second phase of the lifecycle of the storage device.
According to the eleventh or twelfth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a thirteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the probability of being selected that is set for one or more logical units is positively or negatively correlated with the number of erasures experienced by the logical unit.
According to one of the eleventh through thirteenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fourteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the probability of the source logical unit being selected is increased in response to the source logical unit being predicted to be replaced.
According to one of the eighth through fourteenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a fifteenth storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: in response to creating a second large block, selecting a second plurality of logical units and obtaining physical blocks from the second plurality of logical units for constructing the second large block.
According to the fifteenth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a sixteenth storage medium replacement method for an NVM group according to the first aspect of the present application, further comprising: selecting logical units at specified probabilities to obtain the second plurality of logical units.
According to the fifteenth or sixteenth storage medium replacement method for an NVM group of the first aspect of the present application, there is provided a seventeenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein each of the second plurality of logical units belongs to the same endurance group or NVM group.
According to one of the first through seventeenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided an eighteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein the source logical unit and the destination logical unit belong to the same endurance group.
According to one of the eighth through seventeenth storage medium replacement methods for an NVM group of the first aspect of the present application, there is provided a nineteenth storage medium replacement method for an NVM group according to the first aspect of the present application, wherein each of the plurality of logical units belongs to the same endurance group.
According to a second aspect of the present application, there is provided a first storage device according to the second aspect of the present application, comprising a control unit and an NVM chip, the control unit performing one of the storage medium replacement methods for an NVM group according to the first aspect of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is evident that the drawings in the following description show only some embodiments of the present application, and other drawings may be obtained from these drawings by those skilled in the art.
FIG. 1 is a block diagram of a storage device;
FIG. 2 is a schematic diagram of a large block;
FIG. 3A is a schematic diagram of an NVM group replacement non-volatile storage medium according to an embodiment of the present application;
FIG. 3B illustrates a flow chart of data migration according to an embodiment of the present application;
FIG. 4A is a schematic diagram of an NVM group replacement non-volatile storage medium according to yet another embodiment of the present application;
FIG. 4B is a schematic diagram of the result of a replacement of a nonvolatile storage medium by the NVM group in accordance with the embodiment of FIG. 4A of the present application;
FIG. 5 illustrates a flow chart of data migration according to the embodiments of FIGS. 4A and 4B of the present application;
FIG. 6A is a schematic diagram of migrating non-volatile storage media according to another embodiment of the present application; and
FIG. 6B illustrates a schematic diagram of the result of migrating a nonvolatile storage medium according to the embodiment of FIG. 6A of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without making any inventive effort are intended to be within the scope of the application.
FIG. 3A is a schematic diagram of an NVM group replacement non-volatile storage medium according to an embodiment of the present application.
According to the embodiment of FIG. 3A, the storage device includes a plurality of logical units (LUN 0 through LUN 9). The storage device provides multiple NVM groups (NVM group 310, NVM group 312, and NVM group 314). Optionally, NVM group 310 and NVM group 312 belong to endurance group 320, and NVM group 314 belongs to endurance group 322. It will be appreciated that a storage device according to other embodiments of the present application may not provide endurance groups.
According to an embodiment of the present application, nonvolatile storage media are allocated to NVM groups in units of logical units (LUNs). NVM group 310 is assigned LUN 0 and LUN 1, NVM group 312 is assigned LUN 2 and LUN 3, and NVM group 314 is assigned LUN 6 through LUN 9. Since NVM group 310 and NVM group 312 belong to the same endurance group 320, the logical units of NVM group 310 and NVM group 312 may be swapped to achieve wear leveling within endurance group 320. For example, LUN 1 is exchanged with LUN 2 so that LUN 1 is assigned to NVM group 312 and LUN 2 is assigned to NVM group 310, with data migration between LUN 1 and LUN 2 occurring as part of the exchange.
Optionally, the storage device further includes free logical units (LUN 4 and LUN 5). A free logical unit is a logical unit that has not been assigned to any NVM group. For example, when LUN 2 is excessively worn due to frequent writing, LUN 4 is assigned to NVM group 312 to replace LUN 2, and LUN 2 is removed from NVM group 312, thereby achieving wear leveling. Data migration is performed between LUN 4 and LUN 2 as part of the exchange of LUN 4 with LUN 2.
Alternatively or additionally, once a free logical unit (e.g., LUN 4) is assigned to endurance group 320, LUN 4 is no longer assigned to endurance groups other than endurance group 320 (such as endurance group 322). In this way, once a LUN has been used for a certain endurance group, it is used only for that endurance group, or as a free logical unit, for the rest of the life cycle of the storage device.
In addition to allocating nonvolatile storage media to NVM groups in units of logical units, in other embodiments nonvolatile storage media are allocated in units of NVM chips, dies, or targets.
According to embodiments of the present application, a logical unit of the storage device may be in various states, such as an occupied state, an idle state, and a migrating state. A logical unit in the idle state is not assigned to any NVM group. A logical unit assigned to an NVM group and not involved in data migration is in the occupied state. A logical unit that is undergoing data migration is in the migrating state.
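A sketch of how the logical-unit states and the transitions around a migration might be represented is given below; the type names and the encoding are assumptions for illustration only:

```c
#include <stdint.h>

/* Possible states of a logical unit (illustrative encoding). */
typedef enum {
    LUN_STATE_IDLE,       /* not assigned to any NVM group                       */
    LUN_STATE_OCCUPIED,   /* assigned to an NVM group, not involved in migration */
    LUN_STATE_MIGRATING   /* currently the source or destination of a migration  */
} lun_state_t;

typedef struct {
    uint16_t    lun_id;
    int16_t     nvm_set_id;   /* -1 while the logical unit is idle */
    lun_state_t state;
} lun_desc_t;

/* Mark a source/destination pair as migrating before the copy starts. */
void begin_migration(lun_desc_t *src, lun_desc_t *dst)
{
    src->state = LUN_STATE_MIGRATING;
    dst->state = LUN_STATE_MIGRATING;
}

/* After all (valid) data has been copied, the destination takes the
 * source's place in the NVM group and the source becomes idle. */
void finish_migration(lun_desc_t *src, lun_desc_t *dst)
{
    dst->nvm_set_id = src->nvm_set_id;
    dst->state      = LUN_STATE_OCCUPIED;
    src->nvm_set_id = -1;
    src->state      = LUN_STATE_IDLE;
}
```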
FIG. 3B illustrates a flow chart of data migration according to an embodiment of the present application.
The storage device initiates data migration based on the usage of the logical units of each NVM group, or in response to a data migration command sent by the host to the storage device.
By way of example, NVM group 312 bears a heavier write workload, so that LUN 2 is written significantly more than LUN 3 (or, for example, than other logical units of the storage device), which triggers data migration.
For data migration, referring to FIG. 3B, a source logical unit (LUN(s)) and a destination logical unit (LUN(d)) to be migrated are selected (340), wherein the source logical unit is a logical unit in the occupied state in the NVM group to be migrated, and the destination logical unit is a free logical unit. The selected source logical unit and destination logical unit are marked as being in the migrating state. By way of example, referring also to FIG. 3A, LUN 3 of NVM group 312 is selected as the source logical unit and LUN 4 is selected as the destination logical unit.
The data of the source logical unit (LUN 3) is copied to the destination logical unit (LUN 4) (350). Alternatively, the data of the source logical unit (LUN 3) may be copied to other logical units belonging to NVM group 312 (e.g., LUN 2) other than the destination logical unit (LUN 4), and the objects of the present application can still be achieved. Optionally, only the valid data in the source logical unit is migrated, which improves the efficiency of data migration and reduces the amount of data written during the data migration process.
Metadata of the storage device is also updated to record the new storage locations after the data migration (360). For example, for each piece of migrated data, its physical address in the destination logical unit or other logical units is recorded in the FTL table of the storage device. Thus, when the host accesses data at a logical address, the FTL table is looked up with the accessed logical address to obtain the corresponding physical address, which is provided by the destination logical unit (LUN 4) or other logical units (LUN 2).
In an alternative embodiment, no FTL table is present in the storage device, and the host accesses the storage device by physical address. The storage device provides virtual logical units (vLUNs) and a mapping from virtual logical units to logical units. In response to migrating the data of the source logical unit to the destination logical unit, the metadata is updated so that the virtual logical unit originally mapped to the source logical unit is mapped to the destination logical unit. During the data migration, data is copied only to the destination logical unit and not to other logical units, and the physical address of the migrated data within the logical unit is not changed. For example, data at physical address P1 of the source logical unit (LUN 3) is copied to physical address P1 of the destination logical unit (LUN 4). Alternatively or additionally, if physical address P1 of LUN 4 is a bad block, the storage device reports to the host that a bad block or data error has occurred at physical address P1 of the virtual logical unit corresponding to LUN 3, and the host initiates a data error handling procedure, for example, moving all data, or all valid data, of the block where physical address P1 is located to a new physical address.
In response to completing the data migration of all data, or all valid data, of the source logical unit (LUN 3), LUN 3 is marked as being in the idle state. Optionally, all physical blocks of LUN 3 are also erased. The destination logical unit (LUN 4) is marked as being in the occupied state.
During data migration of a source logical unit, the portion of the data that has already been migrated exists on both the source logical unit and the destination logical unit (or other logical units that belong to the same NVM group as the source logical unit and are different from the source logical unit). A read command for this portion of data can be responded to by either the source logical unit or the destination logical unit. For the portion of the data of the source logical unit that has not yet been migrated, a read command is responded to by the source logical unit. For example, the FTL table is queried with the logical address accessed by the read command to obtain the corresponding physical address, and the obtained physical address is accessed to obtain the data to be read by the read command. For a write command, a physical address is allocated from the destination logical unit (or other logical unit), and the data to be written by the write command is written to the allocated physical address.
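The following sketch combines, under simplified assumptions, the optional valid-data-only copy with the same-in-LUN-physical-address placement of the vLUN embodiment: valid pages of the source LUN are copied to the same page positions on the destination LUN, and mapping entries that referenced the source are repointed to the destination. All type names and the geometry are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGES_PER_LUN 4096   /* illustrative geometry, not from the text */
#define PAGE_SIZE     4096

/* Minimal model of a logical unit: its pages plus a validity bitmap. */
typedef struct {
    uint8_t pages[PAGES_PER_LUN][PAGE_SIZE];
    bool    valid[PAGES_PER_LUN];
} lun_t;

/* Hypothetical mapping entry: which LUN and in-LUN page currently hold
 * one piece of data of the NVM group being migrated. */
typedef struct { int lun_id; int page; } map_entry_t;

/* Copy only the valid data of the source LUN to the destination LUN,
 * keeping the same in-LUN page position, then repoint every mapping
 * entry from the source to the destination. */
void migrate_lun(const lun_t *src, lun_t *dst, int src_id, int dst_id,
                 map_entry_t *map, int map_len)
{
    for (int p = 0; p < PAGES_PER_LUN; p++) {
        if (!src->valid[p])
            continue;                          /* skip invalid data   */
        memcpy(dst->pages[p], src->pages[p], PAGE_SIZE);
        dst->valid[p] = true;
    }
    for (int i = 0; i < map_len; i++)          /* update the metadata */
        if (map[i].lun_id == src_id)
            map[i].lun_id = dst_id;
}
```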
FIG. 4A is a schematic diagram of an NVM group replacement non-volatile storage medium according to yet another embodiment of the present application.
According to the embodiment of FIG. 4A, the storage device includes a plurality of logical units (LUN 0 through LUN 9). The storage device provides multiple NVM groups (NVM group 410, NVM group 412, and NVM group 414). Optionally, NVM group 410 and NVM group 412 belong to endurance group 420, and NVM group 414 belongs to endurance group 422. It will be appreciated that a storage device according to other embodiments of the present application may not provide endurance groups. NVM group 410 is assigned LUN 0 and LUN 1, NVM group 412 is assigned LUN 2 and LUN 3, and NVM group 414 is assigned LUN 6 through LUN 9. The storage device also includes free logical units (LUN 4 and LUN 5).
According to an embodiment of the application, large blocks are provided by the individual logical units of an NVM group. The logical units that provide physical blocks for the same large block are from the same endurance group. Referring to FIG. 4A, each of large block 430, large block 432, and large block 438 includes physical blocks provided by LUN 0 through LUN 3. For a write command accessing a namespace provided by NVM group 410, for example, large block 430 is allocated, and its data is written to the physical blocks that LUN 0 and/or LUN 1 provide for large block 430, but not to the physical blocks that LUN 2 or LUN 3 provide for large block 430. Each of large block 450, large block 452, and large block 458 includes physical blocks provided by LUN 6 through LUN 9.
According to the embodiment shown in FIG. 4A, a large block is constructed from logical units belonging to the same endurance group, such that the logical units providing physical blocks for the large block are all from the same endurance group, but may be from the same or different NVM groups.
In an alternative embodiment, the chunk is constructed from logical units belonging to the same NVM group, such that each logical unit providing a physical block for the chunk is from the same NVM group.
With continued reference to FIG. 4A, physical blocks belonging to the same large block have the same physical block address within their respective logical units. Alternatively, the large block is constructed according to the technical solution provided by Chinese patent application 201610814552.5 ("Data organization method and apparatus for multi-plane flash memory"), in particular the large block construction methods provided in connection with the relevant descriptions of FIGS. 4A, 4B, 5, 6A, and 6B thereof.
With continued reference to FIG. 4A, data migration is performed between LUN 3, which belongs to NVM group 412, and the free LUN 4.
FIG. 4B is a schematic diagram of the result of a replacement of a nonvolatile storage medium by an NVM group in accordance with the embodiment of FIG. 4A of the present application.
After data migration is performed between LUN 3 and LUN 4, LUN 3 becomes a free logical unit in the storage device, and LUN 4 becomes a logical unit belonging to NVM group 412. Each of large block 430, large block 432, and large block 438 then includes the physical blocks provided by LUN 0 through LUN 2 and LUN 4.
In one embodiment, the storage device maintains a large block table in which the logical units that construct each large block are recorded. For example, in response to the data migration from LUN 3 to LUN 4, the logical unit numbers that construct large block 430, large block 432, and large block 438 are updated in the large block table.
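A minimal sketch of such a large block table, and of the update performed when a source logical unit is replaced by a destination logical unit, might look as follows (sizes and names are assumptions for illustration):

```c
#include <stdint.h>

#define MAX_CHUNKS     1024   /* illustrative sizes, not from the text */
#define LUNS_PER_CHUNK    4

/* Large block table: for each large block, the logical units that
 * contribute one physical block to it. */
typedef struct {
    uint16_t lun_ids[MAX_CHUNKS][LUNS_PER_CHUNK];
    uint32_t num_chunks;
} chunk_table_t;

/* After migrating a source LUN to a destination LUN, rewrite every
 * table entry that referenced the source so it references the
 * destination instead (e.g. LUN 3 -> LUN 4 in FIGS. 4A and 4B). */
void chunk_table_replace_lun(chunk_table_t *t, uint16_t src, uint16_t dst)
{
    for (uint32_t c = 0; c < t->num_chunks; c++)
        for (int i = 0; i < LUNS_PER_CHUNK; i++)
            if (t->lun_ids[c][i] == src)
                t->lun_ids[c][i] = dst;
}
```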
In yet another embodiment, the large block is constructed from virtual logical units. Thus, although the data of LUN 3 is migrated to LUN 4, the same virtual logical unit now refers to LUN 4; therefore, when a large block is constructed or accessed according to virtual logical units, the data migration between LUN 3 and LUN 4 is not perceived.
The data recorded in each physical block of a large block may be user data or parity data. During the data migration process, if the physical block provided by the source logical unit for the large block stores user data, the physical address of the user data is updated in the FTL table after the data migration. If the physical block provided by the source logical unit for the large block stores parity data, the FTL table need not be updated for it.
According to the embodiment shown in FIGS. 4A and 4B, while the logical units providing physical blocks for large block 430, large block 432, and large block 438 belong to two NVM groups, these logical units belong to the same endurance group 420. Garbage collection operations of the storage device occur within an endurance group. For example, valid data recovered from large block 438 is written to a large block constructed from the logical units of endurance group 420. Further, valid data from NVM group 410 in large block 438 is still written to the physical blocks of the constructed large block provided by the logical units belonging to NVM group 410, while valid data from NVM group 412 in large block 438 is still written to the physical blocks of the constructed large block provided by the logical units belonging to NVM group 412. This ensures that user data belonging to NVM group 410 is written neither to physical blocks belonging to NVM group 412 nor to physical blocks belonging to NVM group 414.
FIG. 5 illustrates a flow chart of data migration according to the embodiments of FIGS. 4A and 4B.
The storage device initiates data migration based on the usage of the logical units of each NVM group or as a response to a data migration command sent by the host to the storage device.
By way of example, the storage device receives an indication from the host that data migration is to be performed. By way of example, the host also indicates that the selected source logical unit is LUN 3 and the destination logical unit is LUN 4 (see also FIGS. 4A and 4B) (510). The selected source logical unit and destination logical unit are marked as being in the migrating state.
Each large block on the source logical unit (LUN 3) is acquired (530). A large block on a logical unit is a large block to which a physical block of that logical unit belongs. For each large block on the source logical unit, it is identified whether the portion stored on the source logical unit is user data or parity data. For physical blocks on the source logical unit (LUN 3) that do not belong to any large block, no data migration is performed. For example, each large block on the source logical unit is retrieved from a large block table maintained by the storage device.
For each large block on the source logical unit (LUN 3), the retrieved portion of data belonging to the source logical unit is copied to the destination logical unit (LUN 4) (540), and the migrated data has the same physical address in the destination logical unit as in the source logical unit. The metadata, including the large block table and the FTL table, is then updated (550). The updated large block table records, for each large block in which data migration occurred, the new physical block provided by the destination logical unit (LUN 4). The updated FTL table records the physical address, on the destination logical unit (LUN 4), of the user data for which data migration occurred. If the data for which migration occurred is the parity data of a large block, the FTL table need not be updated for it.
According to the embodiment of FIG. 5, the portion for which data migration occurs is migrated whether it is valid data or invalid data, so that the new large block obtained after the data migration still satisfies the data check (parity) rule.
In an alternative embodiment, no FTL table is present in the storage device, and the host accesses the storage device by physical address. The storage device provides virtual logical units (vLUNs) and a mapping from virtual logical units to logical units. In response to migrating the data of the source logical unit to the destination logical unit, the metadata is updated so that the virtual logical unit originally mapped to the source logical unit (LUN 3) is mapped to the destination logical unit (LUN 4). During the data migration, data is copied only to the destination logical unit and not to other logical units, and the physical address of the migrated data within the logical unit is not changed.
In response to completing the data migration of all data of the source logical unit (LUN 3), LUN 3 is marked as being in the idle state. Optionally, all physical blocks of LUN 3 are also erased. The destination logical unit (LUN 4) is marked as being in the occupied state.
FIG. 6A is a schematic diagram of migrating non-volatile storage media according to another embodiment of the present application.
According to the embodiment of FIG. 6A, the storage device includes multiple logical units (LUNs 620-628) belonging to the same NVM group or endurance group. The storage device also includes a free logical unit (LUN (d)). The source logical unit LUN(s) 620 is swapped with the destination logical unit LUN (d) through data migration.
In FIG. 6A, a dashed box indicates a physical block, and the numeral in the dashed box indicates the large block to which the physical block belongs. According to the embodiment of FIG. 6A, the individual physical blocks that make up a large block need not have the same physical address, but may be located anywhere in their respective logical units. As an example, the physical blocks of a large block follow the rule that no two of the physical blocks constituting the large block come from the same logical unit; in other words, each physical block constituting the large block comes from a different logical unit.
Still by way of example, chunk 1 includes physical blocks provided from each of LUN 620, LUN 622, and LUN 624, and chunk 4 includes physical blocks provided from each of LUN 620, LUN 622, and LUN 626. Alternatively, each chunk need not include the same number of physical blocks.
Optionally, the storage device maintains a large block table in which the physical blocks that construct each large block are recorded. Alternatively, information about all the physical blocks constituting a large block is recorded in each physical block of the large block.
In order to swap the source logical unit LUN(s) 620 with the destination logical unit LUN(d), the data of the physical blocks that have been used to construct large blocks in the source logical unit LUN(s) 620 (indicated by the dashed boxes labeled 1, 2, 3, 4, and 5 in LUN(s) 620) must be migrated to the destination logical unit LUN(d) and/or to other logical units (LUN 622, LUN 624, LUN 626, or LUN 628) of the same NVM group or endurance group to which the source logical unit LUN(s) 620 belongs.
FIG. 6B illustrates a schematic diagram of the result of replacing a non-volatile storage medium according to the embodiment of FIG. 6A.
In FIG. 6B, the data of each physical block of the source logical unit LUN(s) 620 (indicated by the hatched dashed boxes) has been migrated to other logical units, while the physical blocks that are the targets of the data migration are indicated by dashed boxes labeled with primed numerals (e.g., 1', 4').
The physical blocks indicated by numerals 1, 2, and 3 of the source logical unit LUN(s) 620 are migrated to the destination logical unit LUN(d); the physical block indicated by numeral 4 of the source logical unit LUN(s) 620 is migrated to logical unit LUN 628; and the physical block indicated by numeral 5 of the source logical unit LUN(s) 620 is migrated to logical unit LUN 624.
In this way, the data to be migrated in the source logical unit LUN(s) 620 is written into a plurality of logical units (the destination logical unit LUN(d), logical unit LUN 628, and logical unit LUN 624), which reduces the amount of data written to the destination logical unit LUN(d) and disperses the written data across a plurality of logical units, so that the write operations caused by the data migration are processed in parallel on multiple logical units, which also speeds up the data migration process.
By way of example, to migrate the data of the physical block indicated by numeral 4 in the source logical unit LUN(s) 620, a physical block is selected to carry the data to be migrated. The physical block that carries the migrated data is selected in consideration of a variety of policies. For example: (1) Based on the large block in which the migrated data is located, the selected physical block must satisfy the conditions for constructing the large block. Referring to FIG. 6A, before the data migration, the physical blocks of large block 4 are provided by LUN(s) 620, LUN 622, and LUN 626, so the newly selected physical block must avoid these logical units in order to satisfy the conditions for constructing the large block. (2) The destination logical unit LUN(d) has already provided physical blocks for large block 1, large block 2, and large block 3 (see FIG. 6B, indicated by the dashed boxes where numerals 1', 2', and 3' are located), so the newly selected physical block must also avoid the destination logical unit LUN(d). (3) The remaining selectable logical units are LUN 624 and LUN 628, and one of them is selected, e.g., at random, to carry physical block 4 of the source logical unit LUN(s) 620. In the example of FIG. 6B, the physical block provided by LUN 628, indicated by the dashed box where numeral 4' is located, is selected to carry the data to be migrated.
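A sketch of the selection policy in (1) through (3) above is given below: a carrier LUN must not already contribute a physical block to the large block being migrated, and any additional exclusions (such as the destination LUN in policy (2)) are handled by leaving them out of the candidate list. All names and limits are assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_CHUNK_LUNS 8   /* illustrative limit, not from the text */

/* Logical units that already provide a physical block for the large
 * block being migrated (excluding the source LUN whose block moves). */
typedef struct {
    uint16_t lun_ids[MAX_CHUNK_LUNS];
    int      count;
} chunk_members_t;

bool lun_in_chunk(const chunk_members_t *m, uint16_t lun)
{
    for (int i = 0; i < m->count; i++)
        if (m->lun_ids[i] == lun)
            return true;
    return false;
}

/* Pick a LUN to carry the migrated block: it must not already provide a
 * block for this large block, so that no two blocks of the large block
 * share a LUN. Here the first legal candidate is taken; a random or
 * probability-weighted choice (described below) is an alternative. */
int select_carrier_lun(const chunk_members_t *chunk,
                       const uint16_t *candidates, int num_candidates)
{
    for (int i = 0; i < num_candidates; i++)
        if (!lun_in_chunk(chunk, candidates[i]))
            return candidates[i];
    return -1;   /* no legal carrier LUN available */
}
```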
Alternatively or additionally, when selecting a logical unit to carry the data to be migrated, each candidate logical unit is selected with a different probability. Still referring to FIGS. 6A and 6B, to migrate the data of the physical block of the source logical unit LUN(s) 620 where numeral 5 is located, the logical units that can be selected to carry the data to be migrated are LUN(d), LUN 622, and LUN 624. Based on the remaining life of logical unit LUN 624 being longer (e.g., its erase count being lower), the physical block provided by LUN 624 is selected to carry the data to be migrated. Selecting logical units with longer remaining life to carry the data to be migrated helps achieve wear leveling among the logical units of the storage device.
The inventors have also appreciated that uniform wear of the individual logical units helps extend the service life of the NVM group or endurance group, but is unfavorable for swapping a logical unit belonging to the NVM group or endurance group with a free logical unit, because the wear level difference between the free logical unit and the occupied logical units may be large, and a significant difference in the life of the logical units of the NVM group or endurance group may appear after the swap. For this reason, as another embodiment, to migrate the data of the physical block of the source logical unit LUN(s) 620 where numeral 5 is located, the logical units that can be selected to carry the data to be migrated are LUN(d), LUN 622, and LUN 624. Based on the remaining life of logical unit LUN 624 being shorter (e.g., its erase count being higher), the physical block provided by LUN 624 is selected to carry the data to be migrated, so that the life of LUN 624 is consumed faster than that of the other logical units (see FIG. 6A, LUN(s) 620, LUN 622, LUN 626, and LUN 628). Further, after LUN 624 is swapped with a free logical unit in the future, the difference in life between the free logical unit and the other logical units of the NVM group or endurance group will not be excessive.
Still alternatively, when selecting a logical unit to carry the data to be migrated, the probability with which each candidate logical unit is selected is specified, so that each logical unit in the NVM group or endurance group is used with the specified probability, and thus the remaining life of the logical units diverges with use. For example, the candidate logical units LUN 624, LUN 626, and LUN 628 are selected with probabilities of 20%, 30%, and 50%, respectively, so that the life of LUN 628 is consumed fastest and the life of LUN 624 is consumed slowest. Further, when a logical unit needs to be swapped, LUN 628 is preferentially swapped with a free logical unit.
Still alternatively, in addition to specifying the probability with which each candidate logical unit is selected when a logical unit carrying data to be migrated is selected, the probability with which each candidate logical unit is selected for constructing a large block is also specified when a large block is constructed, further causing the remaining life of the logical units to diverge as the storage device is used. This facilitates selecting a suitable logical unit to be swapped out of the NVM group or endurance group when a logical unit needs to be swapped.
In still alternative embodiments, when large blocks are constructed during the initial use of the storage device, the logical units are selected to construct the large blocks with substantially the same probability, to achieve wear leveling of the storage media of the storage device. As the storage device is used, and/or when it is predicted that an occupied logical unit will be exchanged with a free logical unit, the probability of each logical unit being selected when constructing a large block and/or performing data migration is changed, so that some logical units are used more for constructing large blocks while other logical units are used relatively less. For example, the sum of the erase counts of the physical blocks of a logical unit divided by the total number of erasures that have occurred in the storage device is used as the probability of selecting that logical unit, so that the probability that a logical unit is selected to construct a large block or to carry migrated data is positively correlated with the number of times that logical unit has been erased.
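A sketch of the erase-count-proportional selection described above: a logical unit is chosen with probability equal to its share of the total erase count, falling back to a uniform choice early in life when no erasures have yet occurred. The candidate count and names are assumptions for illustration:

```c
#include <stdint.h>
#include <stdlib.h>

#define NUM_LUNS 5   /* illustrative candidate count, not from the text */

/* Pick a candidate logical unit with probability proportional to the
 * number of erasures it has already experienced, so heavily worn LUNs
 * keep being consumed faster and become clear candidates for a later
 * swap with a free LUN (the positively correlated policy above). */
int pick_lun_weighted(const uint64_t erase_counts[NUM_LUNS])
{
    uint64_t total = 0;
    for (int i = 0; i < NUM_LUNS; i++)
        total += erase_counts[i];
    if (total == 0)
        return rand() % NUM_LUNS;        /* early life: uniform choice */

    uint64_t r = (uint64_t)rand() % total;
    for (int i = 0; i < NUM_LUNS; i++) {
        if (r < erase_counts[i])
            return i;
        r -= erase_counts[i];
    }
    return NUM_LUNS - 1;                 /* not reached */
}
```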
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (20)
1. A storage medium replacement method for an NVM group, comprising:
selecting a source logical unit and a destination logical unit to be replaced;
copying the data of the source logical unit to the destination logical unit; and
updating metadata such that accesses to the source logical unit are mapped to the destination logical unit;
further comprising: constructing a large block from logical units of the endurance group or the NVM group, or from virtual logical units;
wherein, when selecting a logical unit to carry data to be migrated, the probability of each candidate logical unit being selected is specified, and when constructing a large block, the probability of each candidate logical unit for constructing the large block being selected is also specified, each candidate logical unit being selected with a different probability.
2. The method of claim 1, further comprising:
the physical blocks of the source logical unit are erased.
3. The method of claim 1 or 2, further comprising:
the destination logical unit is set to the occupied state and the source logical unit is set to the idle state.
4. The method according to claim 1, wherein:
Part of the data of the source logical unit is copied to the destination logical unit;
the method further comprises the steps of: yet another portion of the data of the source logical unit is copied to the first logical unit.
5. The method of claim 1, wherein a virtual logical unit mapped to the source logical unit is modified to be mapped to the destination logical unit by updating metadata.
6. The method of claim 1, wherein the logical address to access the replicated data is modified by updating metadata to a logical address mapped to the destination logical unit.
7. The method of claim 1, wherein
The physical address of the copied data on the destination logical unit is the same as the physical address on the source logical unit.
8. The method of claim 1, further comprising:
acquiring a first large block on the source logical unit;
selecting a first logical unit from a plurality of logical units including the destination logical unit;
selecting a first physical block from the first logical unit to replace the physical block provided by the source logical unit for the first large block; and
copying the data of the physical block provided by the source logical unit for the first large block to the first physical block.
9. The method of claim 8, wherein
A first logical unit is selected from the plurality of logical units at a specified probability, and each of the plurality of logical units has a probability of being selected.
10. The method of claim 9, wherein
The probability with which the first logical unit is selected is greater than the probability with which the other logical units of the plurality of logical units are selected.
11. The method of claim 8, further comprising setting a probability of being selected for each logical unit.
12. The method of claim 11, wherein substantially the same probability of being selected is set for each logical unit at a first stage of a lifecycle of the storage device; and setting different probabilities of being selected for each logical unit in a second phase of the lifecycle of the storage device.
13. The method of claim 11, wherein the probability of being selected for one or more logic cells is positively or negatively correlated with the number of erasures experienced by the logic cell.
14. The method of claim 11, wherein the probability that the source logical unit is selected is increased in response to the source logical unit being predicted to be replaced.
15. The method of claim 8, further comprising:
in response to creating the second large block, a second plurality of logical units is selected and a physical block from the second plurality of logical units is obtained for building the second large block.
16. The method of claim 15, further comprising: selecting the logic units according to the designated probability to obtain the second plurality of logic units.
17. The method of claim 15, wherein each of the second plurality of logic cells belongs to a same endurance group or NVM group.
18. The method of claim 1, wherein the source logical unit and the destination logical unit belong to the same endurance group.
19. The method of claim 8, wherein each of the plurality of logic cells belongs to a same endurance group.
20. A memory device comprising a control component and an NVM chip, the control component performing the method of one of claims 1-19.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810730565.3A CN110688056B (en) | 2018-07-05 | 2018-07-05 | Storage medium replacement for NVM group |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810730565.3A CN110688056B (en) | 2018-07-05 | 2018-07-05 | Storage medium replacement for NVM group |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110688056A CN110688056A (en) | 2020-01-14 |
| CN110688056B true CN110688056B (en) | 2024-10-01 |
Family
ID=69106673
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810730565.3A Active CN110688056B (en) | 2018-07-05 | 2018-07-05 | Storage medium replacement for NVM group |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110688056B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116756091A (en) * | 2023-08-22 | 2023-09-15 | 深圳富联富桂精密工业有限公司 | Snapshot management method, electronic device and storage medium |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102103629A (en) * | 2010-12-14 | 2011-06-22 | 西北工业大学 | Online data migration method |
| CN105009085A (en) * | 2013-03-18 | 2015-10-28 | 株式会社东芝 | Information processing system, control program and information processing equipment |
| CN107807788A (en) * | 2016-09-09 | 2018-03-16 | 北京忆恒创源科技有限公司 | The data organization method and device of more planar flash memories |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN100337224C (en) * | 2003-12-03 | 2007-09-12 | 华为技术有限公司 | Method of local data migration |
| US7343467B2 (en) * | 2004-12-20 | 2008-03-11 | Emc Corporation | Method to perform parallel data migration in a clustered storage environment |
| JP5124103B2 (en) * | 2006-05-16 | 2013-01-23 | 株式会社日立製作所 | Computer system |
| US7809912B1 (en) * | 2006-09-29 | 2010-10-05 | Emc Corporation | Methods and systems for managing I/O requests to minimize disruption required for data migration |
| JP4930934B2 (en) * | 2006-09-29 | 2012-05-16 | 株式会社日立製作所 | Data migration method and information processing system |
| US8484414B2 (en) * | 2009-08-31 | 2013-07-09 | Hitachi, Ltd. | Storage system having plurality of flash packages |
| US9223502B2 (en) * | 2011-08-01 | 2015-12-29 | Infinidat Ltd. | Method of migrating stored data and system thereof |
-
2018
- 2018-07-05 CN CN201810730565.3A patent/CN110688056B/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102103629A (en) * | 2010-12-14 | 2011-06-22 | 西北工业大学 | Online data migration method |
| CN105009085A (en) * | 2013-03-18 | 2015-10-28 | 株式会社东芝 | Information processing system, control program and information processing equipment |
| CN107807788A (en) * | 2016-09-09 | 2018-03-16 | 北京忆恒创源科技有限公司 | The data organization method and device of more planar flash memories |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110688056A (en) | 2020-01-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11481144B1 (en) | Techniques for directed data migration | |
| US9298534B2 (en) | Memory system and constructing method of logical block | |
| CN106708424B (en) | Apparatus and method for performing selective underlying exposure mapping on user data | |
| US10102118B2 (en) | Memory system and non-transitory computer readable recording medium | |
| WO2016067328A1 (en) | Storage apparatus having nonvolatile memory device, and nonvolatile memory device | |
| JP7392080B2 (en) | memory system | |
| CN110554833B (en) | Parallel processing IO commands in a memory device | |
| CN108877862B (en) | Data organization of page stripes and method and apparatus for writing data to page stripes | |
| CN108228470B (en) | Method and equipment for processing write command for writing data into NVM (non-volatile memory) | |
| CN109558334B (en) | Garbage data recovery method and solid-state storage device | |
| KR20210028729A (en) | Logical vs. physical table fragments | |
| AU2016397188A1 (en) | Storage system and system garbage collection method | |
| US20240012580A1 (en) | Systems, methods, and devices for reclaim unit formation and selection in a storage device | |
| CN108628762B (en) | A solid-state storage device and method for processing IO commands | |
| TWI714975B (en) | Data storage device and control method for non-volatile memory | |
| CN109840048A (en) | Store command processing method and its storage equipment | |
| US11347637B2 (en) | Memory system and non-transitory computer readable recording medium | |
| CN110688056B (en) | Storage medium replacement for NVM group | |
| CN111338975B (en) | Multi-stream-oriented garbage recycling method and storage device thereof | |
| TWI724550B (en) | Data storage device and non-volatile memory control method | |
| CN112181274B (en) | Large block organization method for improving performance stability of storage device and storage device thereof | |
| CN113485948B (en) | NVM bad block management method and control part | |
| CN109840219B (en) | Address translation system and method for mass solid state storage device | |
| CN110968520B (en) | Multi-stream storage device based on unified cache architecture | |
| CN110968525B (en) | FTL provided cache, optimization method and storage device thereof |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB02 | Change of applicant information | ||
| CB02 | Change of applicant information |
Address after: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd. Address before: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd. |
|
| GR01 | Patent grant | ||
| GR01 | Patent grant |