CN118155694A - Low-delay DDR dual inline memory module, memory system and operation method thereof - Google Patents
- Publication number
- CN118155694A (application CN202211558413.2A)
- Authority
- CN
- China
- Prior art keywords
- ddr
- delay
- memory module
- clock latch
- latch driver
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
- G11C29/70—Masking faults in memories by using spares or by reconfiguring
- G11C29/72—Masking faults in memories by using spares or by reconfiguring with optimized replacement algorithms
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
- G11C29/04—Detection or location of defective memory elements, e.g. cell construction details, timing of test signals
- G11C29/08—Functional testing, e.g. testing during refresh, power-on self testing [POST] or distributed testing
- G11C29/12—Built-in arrangements for testing, e.g. built-in self testing [BIST] or interconnection details
- G11C29/18—Address generation devices; Devices for accessing memories, e.g. details of addressing circuits
Landscapes
- Dram (AREA)
Abstract
The invention discloses a low-delay DDR dual inline memory module, a memory system and an operating method thereof. The low-delay DDR dual in-line memory module comprises two sub-channels, a clock latch driver and an SPD data memory, wherein each sub-channel comprises a plurality of groups of DRAM chips; the clock latch driver is used for replacing bad row addresses in the DRAM chips; the SPD data memory is used for storing SPD data, and the SPD data is configured to support an independent delay setting for the DDR activate (ACT) command, the independent delay setting meeting the timing requirements of the DRAM chips. According to the invention, an independent delay setting is applied to the ACT command, the delays of commands related to the ACT command are compensated, and other commands remain under the control of the normal delay adder, so that the DDR DIMM gains a row replacement function and the efficiency of the DDR5 system is effectively improved.
Description
Technical Field
The invention belongs to the technical field of memories, and in particular relates to a low-delay DDR dual-in-line memory module, a memory system and an operation method thereof.
Background
The Double Data Rate (DDR) Synchronous Dynamic Random Access Memory (SDRAM) standard, currently in widespread use and applied to DDR4 and DDR5 memories, provides a channel that can support Dual In-line Memory Module (DIMM) devices, which transfer data on both the rising and falling edges of the clock.
DDR5 DRAM supports failed-row address repair; post-package repair (PPR) provides simple and easy repair methods for use in DIMM systems. Two methods are provided in DDR5: hard post-package repair (hPPR) for permanent row repair and soft post-package repair (sPPR) for temporary row repair. With hard post-package repair, DDR5 can correct at least one row address in each bank group (BG). With soft post-package repair, DDR5 can repair one row address per BG. If a BG's hard post-package repair resources are used up, that BG has no more resources available for soft post-package repair.
With the continued development of DRAM technology, yield becomes an increasingly important problem due to the higher density of DDR5. The yield problem is twofold: 1) defects are more likely to be introduced during DRAM fabrication; 2) some DRAM cells may fail over the life cycle of the DIMM. Even though DRAM manufacturers may apply stricter rules to avoid scenario 1 above, it is not easy for them to avoid scenario 2.
The industry is developing failed-row replacement techniques to mitigate the above yield issues. With this technique, the system replaces the failed row in the DRAM chip. The RCD records the failed-row information of the DRAM in the RDIMM or LRDIMM, and each time the DRAM needs to be accessed, the RCD searches its recorded failed-row table to perform row address replacement. However, for DDR5 the RCD must increase its latency to receive the full command and, additionally, to allow for the lookup-table search time, which reduces the efficiency of the overall DDR5 system.
Disclosure of Invention
To address the above defects and improvement demands of the prior art, the invention provides a low-delay DDR dual in-line memory module that can apply an independent delay setting to the ACT command and its related commands, thereby effectively avoiding the loss of overall system efficiency when bad-row replacement is performed.
To achieve the above object, according to one aspect of the present invention, there is provided a low-latency DDR dual inline memory module, comprising: two sub-channels, a clock latch driver and an SPD data memory, wherein each sub-channel comprises a plurality of groups of DRAM chips; the clock latch driver is used for replacing bad row addresses in the DRAM chips; the SPD data memory is used for storing SPD data, and the SPD data is configured to support an independent delay setting for the DDR activate command, wherein the independent delay setting meets the timing requirements of the DRAM chips.
In some implementations, the SPD data is further configured to apply a uniform delay setting to commands that are not related to the DDR activate command.
In some embodiments, the independent delay setting is longer than the uniform delay setting.
In some embodiments, the SPD data is further configured to add, to the timing parameters associated with the DDR activate command, the difference between the length of the independent delay setting and the length of the uniform delay setting.
In some embodiments, the clock latch driver has a delay adder for delaying commands received by the clock latch driver by a preset length of time.
In some implementations, the two sub-channels have the same structure, and the clock latch driver is common to the two sub-channels.
In some embodiments, the clock latch driver has a lookup table for performing memory address replacement of a failed memory space in a DRAM chip; the clock latch driver may be a standard clock latch driver, a high bandwidth clock latch driver, or a multiplexed clock latch driver.
In some embodiments, the SPD data is used to delay a command sent by a CPU or memory controller.
According to another aspect of the present invention, there is provided a memory system including a CPU and the low latency DDR dual inline memory module described above.
According to still another aspect of the present invention, there is provided a method of operating a memory system including a memory controller and a low latency DDR dual inline memory module, the method of operating comprising:
The memory controller reads SPD data in an SPD data memory of the low-delay DDR dual-inline memory module, wherein the SPD data is configured to support a clock latch driver of the low-delay DDR dual-inline memory module to set independent delay on a DDR activation command;
when a failed memory space exists in the DRAM chips of the low-delay DDR dual-inline memory module, the clock latch driver is used for replacing the memory address of the failed memory space;
The clock latch driver is also used for independently delaying the received DDR activation command.
In some embodiments, the clock latch driver is further configured to apply a uniform delay to commands not related to the DDR activate command; the SPD data is further configured to add, to the timing parameters associated with the DDR activate command, the difference between the length of the independent delay setting and the length of the uniform delay setting.
In some embodiments, the memory system further comprises a system management bus for accessing the SPD data memory.
In some embodiments, the clock latch driver is further configured to search for an access address when the DDR activate command is received, and if the access address is a failed row address, replace the access address with a reserved address.
In some embodiments, the low latency DDR dual inline memory module may be a standard DDR dual inline memory module, a high bandwidth DDR dual inline memory module, or a multiplexer combined ranks (MCR) dual inline memory module.
In general, compared with the prior art, the above technical solutions conceived by the present invention have the following beneficial effects: by applying an independent delay setting to the ACT command and compensating the delays of commands related to the ACT command, while other commands remain under the control of the normal delay adder, the DDR DIMM avoids the limitation that all commands must have their latency extended when the row replacement technique is used, effectively improving the efficiency of the DDR5 system.
Drawings
FIG. 1 is a schematic diagram of a low latency DDR5 memory system according to one embodiment of the invention;
FIG. 2 is a schematic waveform diagram without an ACT independent delay setting, in accordance with one embodiment of the present invention;
FIG. 3 is a schematic waveform diagram with ACT independent delay settings according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of a low latency high bandwidth DDR5 memory system according to one embodiment of the invention;
FIG. 5 is a schematic diagram of the architecture of a low latency multiplexer combined ranks (MCR) DDR5 memory system according to one embodiment of the present invention;
FIG. 6 is a flow chart of a method of operation of a low latency memory system in accordance with one embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
FIG. 1 is a schematic diagram of a DDR5 memory system 960 according to an embodiment of the invention. As shown in FIG. 1, DDR5 memory system 960 includes CPU 970, MC (Memory Controller) 980, and a DDR5-DIMM that meets JEDEC standards. The DDR5-DIMM includes two sub-channels, one of which is shown at 300 in FIG. 1. As shown in FIG. 1, sub-channel 300 includes 5 DRAM chipsets 341-345, a first set of data buses 301-305, and a clock latch driver (RCD) 330. Specifically, the 5 DRAM chipsets 341-345 comprise a first DRAM chipset 341, a second DRAM chipset 342, a third DRAM chipset 343, a fourth DRAM chipset 344, and a fifth DRAM chipset 345, each of the first to fifth DRAM chipsets 341-345 containing 4 DRAM chips, so that the 5 DRAM chipsets 341-345 contain 20 DRAM chips in total; the first set of data buses 301-305 specifically includes a first data bus 301, a second data bus 302, a third data bus 303, a fourth data bus 304, and a fifth data bus 305.
Wherein the first set of data buses 301-305 is between the external host interface of the sub-channel 300 and the DRAM chip sets 341-345, in particular the first data bus 301 connects the external host interface and the first DRAM chip set 341; the second data bus 302 connects the external host interface and the second DRAM chipset 342; the third data bus 303 connects the external host interface to the third DRAM chipset 343; the fourth data bus 304 connects the external host interface and the fourth DRAM chipset 344; the fifth data bus 305 connects the external host interface and the fifth DRAM chipset 345.
In some implementations, the DDR5-DIMM further includes the RCD 330, which is shared by the two sub-channels (the other sub-channel is not shown in FIG. 1); the command/address bus 320 connects the MC 980 and the RCD 330.
In some embodiments, the DDR5-DIMM further includes an SPD data memory for storing SPD data, such as SPD (Serial Presence Detect)/SPD_HUB 350, which may be an EEPROM chip or a HUB-enabled SPD EEPROM, for storing description information associated with the DDR5-DIMM. DDR5 memory system 960 also includes a system management bus (SMBus) for accessing and reading the related information stored in SPD/SPD_HUB 350.
It is understood that in embodiments of the present invention, a memory system (or computer system) may include a CPU, MC, and DDR DIMM having the first and second sub-channels described above. DIMMs (dual in-line memory modules) include a plurality of random access memory (DRAM) chips on a single small circuit board, storing programs and data that the CPU needs to execute. The CPU manages and reads and writes the DIMM through the MC. The RCD is used for signal conditioning to mediate between the host and the DRAM chip.
It is appreciated that the second sub-channel of the DDR5-DIMM may have the same structure as the first sub-channel; for example, in DDR5 memory system 960 as shown in FIG. 1, the second sub-channel correspondingly includes its own sets of data buses and a second set of DRAM chipsets, and the clock latch driver is shared by the first sub-channel and the second sub-channel. For details of the second sub-channel, reference may be made to the first sub-channel, and details thereof will not be repeated here.
When a DDR DIMM detects a row failure in a DRAM chipset, for example a row failure due to a DRAM manufacturing defect, the DDR DIMM will repair the failed row according to the relevant JEDEC specifications. However, common DDR4/DDR5 DRAM has limited capability for repairing failed row addresses, which affects the reliability of common DDR DIMMs. In some embodiments, as shown in FIG. 1, a plurality of spare memory spaces are reserved in each DRAM chipset 341-345 for memory address replacement when a row failure of the memory is detected.
In some embodiments, the RCD 330 has a lookup table for memory address replacement for the failed memory space in the 5 DRAM chipsets 341-345. The lookup table is configured to use the addresses of the spare memory spaces in the DRAM chipset 341-345 to perform a memory address replacement for the address of the failed memory space when the current DRAM access address is detected as the address of the failed memory space. On DDR DIMMs with RCDs, the DRAM access addresses will all pass through the RCD before fanning out to the DRAM. If the failed row address is stored in a lookup table of the RCD, the RCD can check whether the current address is for the row with the error, and then dynamically replace the failed row address with the spare memory space address, thereby effectively improving the failed row repair capability of the DDR DIMM.
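The lookup-and-replace behavior described above can be sketched as follows. This is an illustrative model only; the class and field names (`RowReplacementTable`, `record_failed_row`, and the example addresses) are hypothetical, not taken from the patent, and a real RCD implements this in hardware.

```python
# Hypothetical sketch of the RCD's failed-row lookup table. When an
# incoming row address hits the table, the spare row address is
# substituted before the command fans out to the DRAM.

class RowReplacementTable:
    def __init__(self):
        # maps (bank_group, bank, failed_row) -> spare_row
        self._table = {}

    def record_failed_row(self, bank_group, bank, failed_row, spare_row):
        self._table[(bank_group, bank, failed_row)] = spare_row

    def translate(self, bank_group, bank, row):
        # Return the spare row on a hit; otherwise pass the address through.
        return self._table.get((bank_group, bank, row), row)

table = RowReplacementTable()
table.record_failed_row(bank_group=0, bank=2, failed_row=0x1A3F, spare_row=0x7FF0)

assert table.translate(0, 2, 0x1A3F) == 0x7FF0  # failed row is redirected
assert table.translate(0, 2, 0x0001) == 0x0001  # healthy rows pass through
```

Because every DRAM access address flows through the RCD, a hit in this table transparently redirects the ACT command to the reserved spare row, without any involvement of the host.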
When the RCD has a row replacement function, for each received ACT command (DDR row activate command) the RCD searches its lookup table, and if the address hits, the address of the failed memory space is replaced with the address of the spare memory space. For DDR5, the ACT command is a 2-UI command: the RCD must wait two UIs to receive the full command before it can search the lookup table to check the row address. Taking the lookup-table search time into account, the RCD would have to program its delay adder to a value of 2 or more, which would affect all other commands, especially read/write commands, thereby reducing the efficiency of the overall DDR5 system.
In some implementations, the SPD data is configured to support the clock latch driver in setting an independent delay for DDR activate commands. In an embodiment of the present invention, a separate independent delay is provided for ACT commands in the RCD 330: a dedicated latency is applied to ACT commands while other commands, unrelated to the ACT command, keep the normal delay, so the latency of all commands need not be increased, reducing the overall efficiency degradation caused by ACT commands when the row replacement technique is used.
In some implementations, the clock latch driver is further configured to apply a uniform delay setting to commands that are not related to the DDR activate command. Specifically, the timing of commands that are unrelated to the ACT command is configured to conform to the uniform delay setting of the protocol.
In some embodiments, the independent delay setting is longer than the uniform delay setting. Specifically, the delay setting parameter for a command may be expressed in clock cycles, with the independent delay for the ACT command greater than the uniform delay for other commands. It is understood that the specific duration of the independent delay setting may be chosen according to actual requirements, which the present invention does not limit.
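The per-command delay selection can be sketched as below. This is an illustrative model, not the patent's implementation: the constant names and the cycle counts (2 for ACT, 1 for everything else) are assumed example values.

```python
# Illustrative sketch: the RCD applies a longer, independent delay only
# to ACT commands (covering the 2-UI decode plus the lookup-table
# search), while all other commands keep the uniform protocol delay.

ACT_DELAY_CYCLES = 2      # independent delay setting for ACT (assumed value)
UNIFORM_DELAY_CYCLES = 1  # normal delay adder setting for other commands

def command_delay(command):
    # Select the delay applied by the clock latch driver per command type.
    return ACT_DELAY_CYCLES if command == "ACT" else UNIFORM_DELAY_CYCLES

assert command_delay("ACT") == 2  # ACT gets the independent delay
assert command_delay("RD") == 1   # reads/writes keep the uniform delay
assert command_delay("WR") == 1
```

The key point is that read/write commands never pay the lookup-table penalty; only ACT, which actually needs the table search, is delayed further.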
In some implementations, the SPD data is further configured so that the difference between the length of the independent delay setting and the length of the uniform delay setting is added to the timing parameters associated with the DDR activate command. Specifically, after applying a larger delay for the ACT command, the following DRAM-side timing parameters need to be increased by the difference between the ACT command delay and the normal delay adder setting:
(1) ACT to internal read or write latency: tRCD;
(2) ACT to PRE command period: tRAS;
(3) ACT-to-ACT or REF command period: tRC;
(4) Other parameters related to timing between the ACT and other commands.
It will be appreciated that the above DRAM-side timing parameters related to the ACT command are given by way of example only, and the invention is not limited thereto. In some implementations, these timing parameters should be stored in the SPD/SPD_HUB device on the DIMM. Once the independent delay setting of the ACT command is applied, the affected related timing parameters stored in the SPD/SPD_HUB device are updated accordingly. After the related timing parameters are updated, the overall delay reduction can be achieved without further changes to the BIOS or MC.
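The SPD update step above amounts to adding the delay difference to every ACT-relative parameter. The sketch below is a hedged illustration: the dictionary-based SPD representation, the function name, and the clock counts are all assumptions for demonstration, not the SPD5 byte layout.

```python
# Hedged sketch: when the independent ACT delay is applied, the
# DRAM-side timing parameters stored in SPD/SPD_HUB that span ACT and a
# later command (tRCD, tRAS, tRC, ...) grow by the difference between
# the ACT delay and the uniform delay.

def update_spd_timings(spd, act_delay, uniform_delay):
    extra = act_delay - uniform_delay
    updated = dict(spd)  # leave the original SPD image untouched
    for param in ("tRCD", "tRAS", "tRC"):  # ACT-relative parameters
        updated[param] = spd[param] + extra
    return updated

spd = {"tRCD": 39, "tRAS": 76, "tRC": 115}  # example clock counts
new_spd = update_spd_timings(spd, act_delay=2, uniform_delay=1)
assert new_spd == {"tRCD": 40, "tRAS": 77, "tRC": 116}
```

Because the MC learns its timing from SPD at boot, publishing the enlarged values there is what lets the scheme work without BIOS or MC changes.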
In some embodiments, the clock latch driver has a delay adder (not shown in FIG. 1) for delaying commands received by the clock latch driver by a preset length of time. Specifically, the independent delay setting of the ACT command is realized by the delay adder, and for other commands unrelated to the ACT command the delay adder applies the normal delay that meets the JEDEC protocol standard. It is understood that in the JEDEC standard specification for the RCD, the delay adder setting applies to all commands; thus, for the other commands not related to the ACT command, the preset uniform delay setting applies.
FIG. 2 is a schematic waveform diagram for one embodiment of the present invention without an ACT independent delay setting. As shown in FIG. 2, without a dedicated independent latency setting for the ACT command, the tRCD/tRAS/tRC timing is the same between the RCD host interface and the DRAM interface. FIG. 3 is a schematic waveform diagram with an ACT independent delay setting according to one embodiment of the present invention. As shown in FIG. 3, the RCD host interface and the DRAM interface differ in tRCD/tRAS/tRC timing because of the dedicated independent latency setting applied to the ACT command. In the embodiment of the present invention, assuming that the ACT independent delay is set to 2 and the normal uniform delay is set to 1, the tRCD/tRAS/tRC timing in FIG. 3 will be 1 cycle shorter than that in FIG. 2. Therefore, to compensate for the effect of the ACT command independent delay setting, the tRCD/tRAS/tRC timing parameters need to be increased accordingly.
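The one-cycle shrink described for FIGS. 2-3 can be checked arithmetically. The cycle numbers below (delays of 2 and 1, a host-side tRCD of 39) are illustrative assumptions consistent with the example in the text.

```python
# Worked check of the FIG. 2 / FIG. 3 timing: delaying ACT by 2 cycles
# but RD by only 1 shortens the ACT-to-RD spacing seen at the DRAM
# interface by the 1-cycle difference, which is why the tRCD published
# in SPD must be enlarged by that same difference.

ACT_DELAY, RD_DELAY = 2, 1
host_act, host_rd = 0, 39          # host issues RD 39 cycles (tRCD) after ACT
dram_act = host_act + ACT_DELAY    # ACT arrives at the DRAM 2 cycles late
dram_rd = host_rd + RD_DELAY       # RD arrives only 1 cycle late
dram_trcd = dram_rd - dram_act

assert dram_trcd == 39 - (ACT_DELAY - RD_DELAY)  # spacing shrinks by 1 cycle
```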
In some embodiments, the independent delay setting for the ACT command does not affect the timing parameters between ACT commands, including:
(1) Command period between ACT commands to different bank groups in the same logical rank: tRRD_S_slr;
(2) Command period between ACT commands to the same bank group in the same logical rank: tRRD_L_slr;
(3) Command period between ACT commands: tRRD;
(4) Four-activate window for ACT commands to the same logical rank: tFAW.
It will be appreciated that the above-described time parameters between ACT commands are given by way of example only, and the present invention is not limited thereto.
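Why the ACT-to-ACT parameters need no compensation follows from a cancellation: every ACT receives the same independent delay, so the shift drops out of any ACT-to-ACT interval. The cycle values below are illustrative assumptions.

```python
# Sketch showing why ACT-to-ACT parameters (tRRD, tFAW, ...) are
# unaffected: both ACT commands are shifted by the same independent
# delay, so their spacing is preserved end to end.

ACT_DELAY = 2
host_act1, host_act2 = 0, 8        # tRRD = 8 cycles at the host interface
dram_act1 = host_act1 + ACT_DELAY  # each ACT shifted by the same amount
dram_act2 = host_act2 + ACT_DELAY

assert (dram_act2 - dram_act1) == (host_act2 - host_act1)  # tRRD unchanged
```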
FIG. 4 is a schematic diagram of a low latency High Bandwidth DDR5 memory system 930 according to one embodiment of the invention. As shown in FIG. 4, the low latency high bandwidth DDR5 memory system 930 includes a CPU 940, an MC 950, and a DDR5-HBDIMM (or another hybrid DIMM) that meets the JEDEC standard. The high bandwidth DDR5-DIMM includes two sub-channels, one of which is shown at 200 in FIG. 4. As shown in FIG. 4, sub-channel 200 includes 5 DRAM chipsets 241-245 (containing 20 or 18 DRAM chips in total), 5 High Bandwidth Data Buffers (HBDB) 211-215, and a high bandwidth clock latch driver (HBRCD) 230. The HBRCD 230 is shared by the two sub-channels (the other sub-channel is not shown in FIG. 4); the command/address bus 220 connects the MC 950 and the HBRCD 230; a first set of data buses 201-205 runs between the external host interface of sub-channel 200 and the HBDBs 211-215; a second set of data buses 221-225 runs between the HBDBs 211-215 and the DRAM chipsets 241-245; and the second set of data buses 221-225 runs at half the data rate of the first set of data buses 201-205, e.g., the data rate of the first set of data buses 201-205 is 6400 MT/s while the data rate of the second set of data buses 221-225 is 3200 MT/s.
In some embodiments, the low latency high bandwidth DDR5 memory system 930 further includes an SPD data memory, such as SPD (Serial Presence Detect)/SPD_HUB 250, which may be an EEPROM chip or a HUB-enabled SPD EEPROM, for storing the SPD description information associated with the high bandwidth DDR5-DIMM. The low latency high bandwidth DDR5 memory system 930 also includes a system management bus (SMBus) for accessing and reading the related information stored in SPD/SPD_HUB 250.
In some embodiments, the low latency high bandwidth DDR5 memory system 930 further includes a second sub-channel (not shown in fig. 4), which may have the same structure as the first sub-channel, e.g., correspondingly, the second sub-channel includes a second set of data buffers and a second set of DRAM chips. Wherein the high bandwidth clock latch driver is shared by the first sub-channel and the second sub-channel. For details of the second sub-channel, reference may be made to the first sub-channel, and details thereof will not be repeated here.
In some embodiments, the high bandwidth clock latch driver (HBRCD 230) of the high bandwidth DDR5-DIMM has a lookup table for implementing the row replacement function. A dedicated independent delay setting is therefore employed for the ACT command, so that the commands associated with the ACT command have a different delay setting than the other commands. The DRAM-side timing parameters associated with the ACT command are stored in the SPD/SPD_HUB device on the DIMM. Once the independent delay setting of the ACT command is applied, the affected related timing parameters stored in the SPD/SPD_HUB device are updated accordingly. It will be appreciated that the independent latency setting of the ACT command and the uniform latency setting of the other commands may be understood with reference to the low latency DDR5 memory system 960 and have similar advantages as described above, and they will not be repeated here.
FIG. 5 is a schematic diagram of a low latency Multiplexer Combined Ranks (MCR) DDR5 memory system 900 according to one embodiment of the invention. As shown in FIG. 5, the low latency MCR DDR5 memory system 900 includes a CPU 910, an MC 920, and a DDR5-MCRDIMM (or another hybrid DIMM) that meets the JEDEC standard. The MCR DDR5-DIMM includes two sub-channels, one of which is shown at 100 in FIG. 5. As shown in FIG. 5, sub-channel 100 includes 5 DRAM chipsets 141-145 (containing 20 DRAM chips in total), 5 Multiplexed Data Buffers (MDB) 111-115, and a multiplexed clock latch driver (MRCD) 130. The MRCD 130 is shared by the two sub-channels (the other sub-channel is not shown in FIG. 5); the command/address bus 120 connects the MC 920 and the MRCD 130; a first set of data buses 101-105 runs between the external host interface of sub-channel 100 and the MDBs 111-115; a second set of data buses 121-125 runs between the MDBs 111-115 and the DRAM chipsets 141-145; and the second set of data buses 121-125 runs at half the data rate of the first set of data buses 101-105, e.g., the data rate of the first set of data buses 101-105 is 6400 MT/s while the data rate of the second set of data buses 121-125 is 3200 MT/s.
In some embodiments, the low latency MCR DDR5 memory system 900 also includes an SPD data memory, such as SPD (Serial Presence Detect)/SPD_HUB 150, which may be an EEPROM chip or a HUB-enabled SPD EEPROM, for storing the SPD description information associated with the DDR5-MCRDIMM. The low latency MCR DDR5 memory system 900 also includes a system management bus (SMBus) for accessing and reading the related information stored in SPD/SPD_HUB 150.
In some embodiments, the low latency MCR DDR5 memory system 900 also includes a second sub-channel (not shown in FIG. 5), which may have the same structure as the first sub-channel; correspondingly, the second sub-channel includes a second set of data buffers and a second set of DRAM chipsets. The multiplexed clock latch driver is shared by the first sub-channel and the second sub-channel. For details of the second sub-channel, reference may be made to the first sub-channel, and details thereof will not be repeated here.
In some embodiments, the multiplexed clock latch driver (MRCD) of the MCR DDR5-DIMM has a lookup table for implementing the row replacement function. A dedicated independent delay setting is therefore employed for the ACT command, so that the commands associated with the ACT command have a different delay setting than the other commands. The DRAM-side timing parameters associated with the ACT command are stored in the SPD/SPD_HUB device on the DIMM. Once the independent delay setting of the ACT command is applied, the affected related timing parameters stored in the SPD/SPD_HUB device are updated accordingly. It will be appreciated that the independent latency setting of the ACT command and the uniform latency setting of the other commands may be understood with reference to the low latency DDR5 memory system 960 and have similar advantages as described above, and they will not be repeated here.
With the low-delay DDR dual-inline memory module provided by the embodiments of the invention, an independent delay setting is applied to the ACT command, the delays of commands related to the ACT command are compensated, and other commands remain under the control of the normal delay adder; the DDR DIMM thus avoids the limitation that the latency of all commands must be extended when the row replacement technique is used, effectively improving the efficiency of the DDR5 system.
FIG. 6 is a flow chart of a method of operating a low latency memory system including a memory controller and a low latency DDR dual inline memory module according to one embodiment of the invention, comprising the steps of:
Step S1: The memory controller reads SPD data in an SPD data memory of the low-latency DDR dual inline memory module, the SPD data configured to support the clock latch driver of the low-latency DDR dual inline memory module in setting an independent delay for the DDR activate command. In some embodiments, the clock latch driver is further configured to apply a uniform delay to commands not related to the DDR activate command; the SPD data is further configured to add, to the timing parameters associated with the DDR activate command, the difference between the length of the independent delay setting and the length of the uniform delay setting. Specifically, the independent delay setting of the ACT command is realized by the delay adder, and for other commands unrelated to the ACT command the delay adder applies the normal delay that meets the JEDEC protocol standard.
In the embodiment of the invention, the memory controller can read these parameters, and when sending commands it extends the related timing so that the timing requirements of the commands are not violated, ensuring the stability and reliability of system operation. Specifically, once the independent delay setting of the ACT command is applied, the affected related timing parameters stored in the SPD/SPD_HUB device are updated accordingly. After the related timing parameters are updated, the overall delay reduction can be achieved without further changes to the BIOS or MC.
Step S2: when a failed memory space exists in the DRAM chip particles of the low-delay DDR dual inline memory module, the clock latch driver of the low-delay DDR dual inline memory module is used to replace the memory address of the failed memory space. In some embodiments, the clock latch driver is further configured to look up the access address when the DDR activate command is received and, if the access address is a failed row address, to replace the access address with a reserved address. Specifically, the clock latch driver has a lookup table for performing memory address replacement for failed memory spaces in the plurality of DRAM chip groups. The lookup table is configured so that, when the current DRAM access address is detected to be the address of a failed memory space, the address of a spare memory space in the DRAM chip group is used to replace it. It will be appreciated that the clock latch driver may be a standard clock latch driver, a high-bandwidth clock latch driver, a multiplexed clock latch driver, or another suitable form of clock latch driver, and the invention is not limited in this respect.
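The lookup-table substitution in Step S2 amounts to a small address map consulted on every ACT. The following is an illustrative sketch only; the table contents and row addresses are invented for the example, and a real clock latch driver would implement this in hardware within the independent ACT delay window.

```python
# Illustrative sketch of lookup-table row replacement: when an incoming
# ACT command targets a row recorded as failed, the clock latch driver
# substitutes the address of a reserved spare row; otherwise the access
# address passes through unchanged. All addresses are made-up examples.

FAILED_TO_SPARE = {      # hypothetical lookup table: failed row -> spare row
    0x1A3F: 0xFFF0,
    0x2B10: 0xFFF1,
}

def translate_row(row_address: int) -> int:
    """Return the spare row address if the target row is marked failed."""
    return FAILED_TO_SPARE.get(row_address, row_address)
```

Because the substitution happens inside the module, the CPU and memory controller keep using the original (logical) row address and need no knowledge of the repair.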
Step S3: the clock latch driver is also used to delay the received DDR activate command.
In some embodiments, the memory system further includes a system management bus (SMBus) for accessing the SPD data memory to read DRAM-related information.
In some embodiments, the low latency DDR dual inline memory module includes standard DDR dual inline memory modules, high bandwidth DDR dual inline memory modules, multiplexed combined array dual inline memory modules, and other applicable forms of DDR dual inline memory modules, as the invention is not limited in this regard.
In the embodiment of the present invention, the DRAM memory particles may be DDR4, DDR5 or DDR6 particles, may be LPDDR particles such as LPDDR4, LPDDR5 or LPDDR5X, may be GDDR particles, or may be DRAM memory particles of other forms; the invention is not limited in this respect.
It will be appreciated that in embodiments of the present invention, the above-described row replacement functions may be performed by an Operating System (OS) or otherwise implemented, as the invention is not limited in this regard.
By adopting this operation method of the memory system, row replacement is performed by the clock latch driver, an independent delay setting is applied to the ACT command through the SPD data memory, the delays of commands related to the ACT command are compensated, and other commands remain under the control of the normal delay adder. The memory system can thus realize the row replacement function while avoiding the limitation that the waiting time must be extended for all commands, effectively improving the efficiency of the memory system.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification, and the features of the different embodiments or examples, may be combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. And the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending upon the functionality involved.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the embodiments described above may be performed by a program that instructs the associated hardware; when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that various changes and substitutions are possible within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.
Claims (14)
1. A low latency DDR dual inline memory module, comprising: two sub-channels, a clock latch driver and an SPD data memory, wherein each sub-channel comprises a plurality of groups of DRAM chip particles; the clock latch driver is used for replacing bad row addresses in the DRAM chip particles; the SPD data memory is used for storing SPD data, and the SPD data is configured to support independent delay setting of DDR activation commands, wherein the independent delay setting meets the timing requirement of the DRAM chip particles.
2. The low latency DDR dual inline memory module of claim 1, wherein the SPD data is further configured to have a uniform latency setting for commands that are not related to the DDR activate command.
3. The low latency DDR dual inline memory module of claim 2, wherein a time period of the independent delay setting is longer than a time period of the unified delay setting.
4. The low latency DDR dual inline memory module of claim 2, wherein the SPD data is further configured to add, to the delay associated with the DDR activate command, an additional difference between the duration of the independent delay setting and the duration of the unified delay setting.
5. The low latency DDR dual inline memory module of claim 1, wherein the clock latch driver has a delay adder for delaying a command received by the clock latch driver by a preset length of time.
6. The low latency DDR dual inline memory module of claim 1, wherein the two sub-channels have the same structure, the clock latch driver being common to the two sub-channels.
7. The low latency DDR dual inline memory module of any one of claims 1 to 6, wherein the clock latch driver has a lookup table for memory address replacement of a failed memory space in the DRAM chip particles, and wherein the clock latch driver comprises a standard clock latch driver, a high bandwidth clock latch driver, or a multiplexed clock latch driver.
8. The low latency DDR dual inline memory module of any of claims 1 to 6, wherein the SPD data is used to cause a CPU or memory controller to delay a command to be sent.
9. A memory system comprising a CPU, a memory controller and a low latency DDR dual inline memory module as claimed in any one of claims 1 to 8.
10. A method of operation of a memory system comprising a memory controller and a low latency DDR dual inline memory module, the method of operation comprising:
The memory controller reads SPD data in an SPD data memory of the low-delay DDR dual-inline memory module, wherein the SPD data is configured to support a clock latch driver of the low-delay DDR dual-inline memory module to set independent delay on a DDR activation command;
when a failed memory space exists in the DRAM chip particles of the low-delay DDR dual inline memory module, the clock latch driver is used to replace the memory address of the failed memory space;
The clock latch driver is also used for independently delaying the received DDR activation command.
11. The method of operation of claim 10, wherein the clock latch driver is further used to set a uniform delay for commands that are not related to the DDR activate command; and the SPD data is further configured to add, to the delay associated with the DDR activate command, an additional difference between the length of the independent delay setting and the length of the unified delay setting.
12. The method of operation of claim 10, wherein the memory system further comprises a system management bus for accessing the SPD data memory.
13. The method of operation of claim 10, wherein the clock latch driver is further configured to look up an access address upon receipt of the DDR activate command, and to replace the access address with a reserved address if the access address is a failed row address.
14. The method of operation of any of claims 10 to 13, wherein the low latency DDR dual inline memory module comprises a standard DDR dual inline memory module, a high bandwidth DDR dual inline memory module, or a multiplexed combined array dual inline memory module.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211558413.2A CN118155694A (en) | 2022-12-06 | 2022-12-06 | Low-delay DDR dual inline memory module, memory system and operation method thereof |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211558413.2A CN118155694A (en) | 2022-12-06 | 2022-12-06 | Low-delay DDR dual inline memory module, memory system and operation method thereof |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118155694A true CN118155694A (en) | 2024-06-07 |
Family
ID=91289224
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211558413.2A Pending CN118155694A (en) | 2022-12-06 | 2022-12-06 | Low-delay DDR dual inline memory module, memory system and operation method thereof |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118155694A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||