CN113421879B - Cache content addressable memory and memory chip package structure
- Publication number
- CN113421879B (application CN202110971320.1A)
- Authority
- CN
- China
- Prior art keywords
- memory
- content addressable
- tri
- random access
- dynamic random
- Prior art date
- Legal status
- Active
Classifications
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L25/00—Assemblies consisting of a plurality of semiconductor or other solid state devices
- H01L25/18—Assemblies consisting of a plurality of semiconductor or other solid state devices the devices being of the types provided for in two or more different main groups of the same subclass of H10B, H10D, H10F, H10H, H10K or H10N
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C15/00—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
- G11C15/04—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L23/00—Details of semiconductor or other solid state devices
- H01L23/28—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection
- H01L23/31—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection characterised by the arrangement or shape
- H01L23/3107—Encapsulations, e.g. encapsulating layers, coatings, e.g. for protection characterised by the arrangement or shape the device being completely enclosed
Landscapes
- Engineering & Computer Science (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Physics & Mathematics (AREA)
- Condensed Matter Physics & Semiconductors (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Power Engineering (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Semiconductor Memories (AREA)
Abstract
The invention provides a cache content addressable memory and a memory chip package structure. The cache content addressable memory comprises a tri-state content addressable memory, a packaging layer, and at least two dynamic random access memory dies. The dynamic random access memory dies are stacked to form a stacked memory; the tri-state content addressable memory is electrically connected to one dynamic random access memory die in the stacked memory; the tri-state content addressable memory is electrically connected to the packaging layer, and one dynamic random access memory die in the stacked memory is electrically connected to the packaging layer. By encapsulating the tri-state content addressable memory and at least two dynamic random access memory dies in the cache content addressable memory, the storage and read speed of the memory can be effectively increased and its capacity effectively expanded.
Description
Technical Field
The invention relates to the technical field of semiconductor memory, and in particular to a cache content addressable memory and a memory chip package structure.
Background
A TCAM (ternary content addressable memory) is mainly used for fast lookup of entries such as ACLs (Access Control Lists) and routes. When a hardware TCAM performs a lookup, all entries in the table space are queried simultaneously, so the lookup speed is unaffected by the size of the table space and each lookup completes in one clock cycle. On average the lookup is about 6 times faster than an algorithm-based lookup using SRAM (Static Random-Access Memory), and in the worst case it can be up to 128 times faster.
At present, TCAMs are mainly realized in two ways: as hardware TCAMs, or as software emulation based on SRAM storage.
The hardware TCAM memory developed from the CAM (content addressable memory). In an ordinary CAM, each bit has only two states, 0 or 1; in a TCAM, each bit has a third "don't care" state in addition to 0 and 1, hence the name "tri-state" (ternary). The don't-care state is implemented with a mask, and it is this third state that allows a TCAM to perform not only exact-match lookups but also fuzzy (wildcard) lookups.
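The value/mask matching semantics described above can be sketched in software. The following is an illustrative model only and is not part of the patent; the function name and the (value, mask) encoding are assumptions, and the loop merely models the matching logic that hardware performs on all entries in parallel:

```python
def tcam_match(entries, key):
    """Return the index of the first entry matching key, or None.

    Each stored entry is a (value, mask) pair; a mask bit of 0 marks
    that bit position as "don't care". A real TCAM compares every
    entry in parallel within one clock cycle.
    """
    for i, (value, mask) in enumerate(entries):
        # A bit matches if it is masked out, or equal to the key bit.
        if (key & mask) == (value & mask):
            return i
    return None

# Entry 0: exact match on 0b10100000; entry 1: match on the top 4 bits only.
entries = [(0b10100000, 0b11111111), (0b11000000, 0b11110000)]
print(tcam_match(entries, 0b10100000))  # → 0 (exact match)
print(tcam_match(entries, 0b11001010))  # → 1 (low 4 bits are "don't care")
print(tcam_match(entries, 0b00000001))  # → None (no entry matches)
```

The second lookup illustrates the fuzzy match: entry 1 matches any key whose upper four bits are 1100, which is how longest-prefix routing rules are typically stored in a TCAM.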
Each hardware TCAM memory cell contains two SRAM cells and a comparison circuit. Its storage density is therefore much lower than that of SRAM, and for the same chip area a TCAM's capacity is far smaller than an SRAM's. As a result, the cost and power consumption of a TCAM are tens of times those of an ordinary SRAM, and a large storage capacity cannot be achieved.
Disclosure of Invention
In view of the above, it is desirable to provide a cache content addressable memory and a memory chip package structure.
A cache content addressable memory, comprising: a tri-state content addressable memory, a packaging layer, and at least two dynamic random access memory dies;
the dynamic random access memory dies are stacked to form a stacked memory and are electrically connected in sequence;
the tri-state content addressable memory is electrically connected to one dynamic random access memory die in the stacked memory;
the tri-state content addressable memory is electrically connected to the packaging layer, one dynamic random access memory die in the stacked memory is electrically connected to the packaging layer, and the packaging layer is provided with a connection bump for electrical connection to the outside.
In one embodiment, the cache content addressable memory further comprises a logic die, and the packaging layer is a first interposer;
the tri-state content addressable memory is electrically connected to the first interposer;
the logic die is disposed between the stacked memory and the first interposer, and the dynamic random access memory die closest to the first interposer in the stacked memory is electrically connected to the first interposer through the logic die.
In one embodiment, the tri-state content addressable memory and the logic die are electrically connected through a bus disposed on the first interposer.
In one embodiment, the tri-state content addressable memory is electrically connected to the first interposer through a first micro bump, and the logic die is electrically connected to the first interposer through a second micro bump.
In one embodiment, a logic die is integrated in the packaging layer;
the dynamic random access memory die closest to the packaging layer in the stacked memory is electrically connected to the packaging layer through a third micro bump;
the tri-state content addressable memory is electrically connected to the packaging layer through a fourth micro bump.
In one embodiment, the tri-state content addressable memory is electrically connected to the dynamic random access memory die closest to the packaging layer in the stacked memory through a bus disposed on the packaging layer.
In one embodiment, the packaging layer is a logic die;
the tri-state content addressable memory is stacked with the stacked memory;
the location of the tri-state content addressable memory is set as follows:
the tri-state content addressable memory is disposed on the side of the stacked memory away from the logic die, the tri-state content addressable memory is electrically connected to the dynamic random access memory die farthest from the logic die in the stacked memory, and the dynamic random access memory die closest to the logic die in the stacked memory is electrically connected to the logic die;
or, the tri-state content addressable memory is disposed between the stacked memory and the logic die, and the dynamic random access memory die closest to the logic die in the stacked memory is connected to the logic die through the tri-state content addressable memory.
In one embodiment, the tri-state content addressable memory is disposed on the side of the stacked memory away from the logic die, the tri-state content addressable memory is electrically connected to the dynamic random access memory die farthest from the logic die in the stacked memory by a TSV, and the dynamic random access memory die closest to the logic die in the stacked memory is electrically connected to the logic die through a fifth micro bump.
A memory chip package structure, comprising a processor, a second interposer, and the cache content addressable memory of any of the above embodiments, wherein the connection bumps of the packaging layer of the cache content addressable memory are electrically connected to the second interposer, and the processor is electrically connected to the second interposer.
In one embodiment, the cache content addressable memory and the processor are disposed on a first side of the second interposer, and a second side of the second interposer is electrically connected to a package substrate.
In the cache content addressable memory and memory chip package structure above, encapsulating the tri-state content addressable memory and at least two dynamic random access memory dies in the cache content addressable memory effectively improves the storage and read speed of the memory and effectively increases its capacity.
Drawings
FIG. 1 is a schematic structural diagram of a memory chip package structure in one embodiment;
FIG. 2 is a block diagram of a cache content addressable memory in one embodiment;
FIG. 3 is a diagram of a cache content addressable memory in another embodiment;
FIG. 4 is a diagram of a cache content addressable memory in yet another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not represent the only embodiments.
Example one
In this embodiment, as shown in fig. 1 and fig. 2, a memory chip package structure is provided, which includes a cache content addressable memory 102, a processor 101, and a second interposer 108; the connection bumps of the packaging layer of the cache content addressable memory 102 are electrically connected to the second interposer 108, and the processor 101 is electrically connected to the second interposer 108.
In this embodiment, the processor 101 is any one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or an SoC (System on Chip).
In this embodiment, the second interposer 108 is a redistribution-layer (RDL) interposer, used to connect the processor 101 and the cache content addressable memory 102 and also to connect to external devices.
In this embodiment, the connection bumps of the packaging layer of the cache content addressable memory 102 are micro bumps. The cache content addressable memory 102 and the processor 101 are disposed on one side of the second interposer 108 and are each electrically connected to it through a plurality of micro bumps 110; the other side of the second interposer 108 is connected to the package substrate 109. It should be understood that the connections between the cache content addressable memory 102, the processor 101, and the second interposer 108 may use flip-chip bump technology, such as controlled collapse chip connection (C4) or chip connection (C2).
In one embodiment, the memory chip package structure further includes a package substrate 109; the cache content addressable memory 102 and the processor 101 are disposed on a first side of the second interposer 108, and a second side of the second interposer 108 is electrically connected to the package substrate 109.
In this embodiment, the package substrate 109 is used for connection with devices outside the memory chip package structure, and the second side of the second interposer 108 is electrically connected to the package substrate 109 through the micro bumps 111. In this way, the cache content addressable memory 102, the processor 101, and the second interposer 108 can be packaged together to form a memory chip package structure that is electrically connected to external components through the package substrate 109. Specifically, the side of the package substrate 109 facing away from the second interposer 108 is provided with bumps 112 for connection to the outside. In addition, the connection between the second interposer 108 and the package substrate 109 may also use flip-chip bump technology, such as controlled collapse chip connection (C4) or chip connection (C2).
In one embodiment, as shown in fig. 2, the cache content addressable memory 102 includes a tri-state content addressable memory 105, a packaging layer 106, and at least two dynamic random access memory dies 103. The dynamic random access memory dies 103 are stacked to form a stacked memory and are electrically connected in sequence. The tri-state content addressable memory 105 is electrically connected to one of the dynamic random access memory dies 103 in the stacked memory; the tri-state content addressable memory 105 is electrically connected to the packaging layer 106, one dynamic random access memory die 103 in the stacked memory is electrically connected to the packaging layer 106, and the packaging layer 106 is provided with connection bumps for electrical connection to the outside.
In this embodiment, the cache content addressable memory 102 may also be referred to as a hybrid cache content addressable memory; it contains two kinds of memory, the tri-state content addressable memory 105 and the dynamic random access memory dies 103. The tri-state content addressable memory 105 is a TCAM die, and each dynamic random access memory die 103 is a DRAM die. The stacked memory formed by stacking the dynamic random access memory dies 103 is an HBM (High Bandwidth Memory).
In this embodiment, the cache content addressable memory 102 further includes a package casing connected to the packaging layer 106. The casing and the packaging layer 106 enclose a package cavity in which the tri-state content addressable memory 105 and the dynamic random access memory dies 103 are encapsulated. The tri-state content addressable memory 105 and the dynamic random access memory dies 103 are connected to external elements through the packaging layer 106; that is, they are electrically connected to the second interposer 108 through the packaging layer 106 and can thereby be electrically connected to the processor 101.
In this embodiment, the dynamic random access memory dies 103 are stacked in sequence on the packaging layer 106 and electrically connected to form the stacked memory, and the dynamic random access memory die 103 closest to the packaging layer 106 is electrically connected to the packaging layer 106. The tri-state content addressable memory 105 may be stacked with the dynamic random access memory dies 103, or may be disposed alongside the stacked memory on the side of the packaging layer 106 facing away from the second interposer 108.
In the above embodiment, by encapsulating the tri-state content addressable memory 105 and at least two dynamic random access memory dies 103 in the cache content addressable memory 102, the storage and read speed of the memory can be effectively increased, its capacity effectively expanded, and the capacity problem of the emulator solved.
In one embodiment, the cache content addressable memory 102 further includes a logic die 104, and the packaging layer 106 is a first interposer; the tri-state content addressable memory 105 is electrically connected to the first interposer; the logic die 104 is disposed between the stacked memory and the first interposer, and the dynamic random access memory die 103 closest to the first interposer in the stacked memory is electrically connected to the first interposer through the logic die 104.
In this embodiment, the tri-state content addressable memory 105 is a TCAM die, the first interposer is a redistribution interposer, and each dynamic random access memory die 103 is a DRAM die. The dynamic random access memory dies 103 are electrically connected to each other using TSVs (Through-Silicon Vias), and the logic die 104 is the logic die of the dynamic random access memory dies 103, used to perform their management tasks. Specifically, the logic die 104 and the bottommost dynamic random access memory die 103 in the stacked memory are connected using TSVs.
The logic die 104 is electrically connected to the bottommost dynamic random access memory die 103 in the stacked memory, thereby connecting to each dynamic random access memory die 103 and implementing storage management for all of them; the logic die 104 is also electrically connected to the first interposer, so each dynamic random access memory die 103 can be electrically connected to external elements through the first interposer.
In one embodiment, the tri-state content addressable memory 105 and the logic die 104 are electrically connected through a bus 107 disposed on the first interposer.
In this embodiment, the first interposer is provided with a wiring slot in which the bus 107 is disposed. The bus 107 is an Input/Output (I/O) bus comprising a plurality of data lines and one or more control and address lines, and the logic die 104 is electrically connected to the tri-state content addressable memory 105 through it. Thus the tri-state content addressable memory 105 can communicate with the logic die 104 through the bus 107, and hence with the dynamic random access memory dies 103.
In this embodiment, the tri-state content addressable memory 105 is electrically connected to the first interposer through a first micro bump, and the logic die 104 is electrically connected to the first interposer through a second micro bump.
In this embodiment, as shown in fig. 2, the tri-state content addressable memory 105 and the logic die 104 are each electrically connected to the first interposer through micro bumps 113, enabling communication with external devices. The packaging layer is provided with micro bumps 110 for electrical connection to the second interposer 108.
In one embodiment, as shown in fig. 3, the packaging layer 106 has a logic die integrated therein; one dynamic random access memory die 103 in the stacked memory, which is closest to the packaging layer 106, is electrically connected to the packaging layer 106 through a third micro bump; the tri-state content addressable memory 105 is electrically connected to the package layer 106 through a fourth micro bump.
In this embodiment, the dynamic random access memory die 103 closest to the packaging layer 106 in the stacked memory and the tri-state content addressable memory 105 are each electrically connected to the packaging layer 106 through micro bumps 113. The difference from the above embodiments is that a logic die, namely the logic die of the dynamic random access memory dies 103, is integrated in the packaging layer 106; this logic die integrates the HBM controller, the search engine, and the synchronization engine. Specifically, the search engine receives search tasks from the external processor 101 and performs task scheduling; if it does not find a rule matching the search keyword in the buffer, it queries the TCAM emulator. The synchronization engine counts the number of times keywords miss in the TCAM cache and, based on these counts, synchronizes the corresponding rules stored in the emulator into the TCAM.
In this embodiment, integrating the logic die in the packaging layer 106 effectively simplifies the structure of the cache content addressable memory 102 and reduces its volume.
In one embodiment, the tri-state content addressable memory 105 is electrically connected to one of the dynamic random access memory dies 103 in the stacked memory closest to the packaging layer 106 through a bus 107 disposed on the packaging layer 106.
In this embodiment, the bus 107 is an I/O bus comprising a plurality of data lines and one or more control and address lines; the bottommost dynamic random access memory die 103 in the stacked memory is electrically connected to the tri-state content addressable memory 105 through the bus 107, enabling communication between the tri-state content addressable memory 105 and the dynamic random access memory dies 103.
In one embodiment, as shown in fig. 4, the encapsulation layer 106 is a logic die; the tri-state content addressable memory 105 is in a stacked arrangement with the stacked memory; the tri-state content addressable memory 105 is disposed on a side of the stacked memory away from the logic die, the tri-state content addressable memory 105 is electrically connected to the dynamic random access memory die 103 farthest from the logic die in the stacked memory, and the dynamic random access memory die 103 closest to the logic die in the stacked memory is electrically connected to the logic die.
In this embodiment, 3D packaging is adopted and the logic die serves as the packaging layer 106: it both manages each dynamic random access memory die 103 in the stacked memory and connects the cache content addressable memory 102 to external elements, which further simplifies the structure of the cache content addressable memory 102. In addition, the tri-state content addressable memory 105, as a TCAM die, is stacked on top of the stacked memory; it is connected to the topmost dynamic random access memory die 103, and the bottommost dynamic random access memory die 103 is connected to the logic die.
In this embodiment, the tri-state content addressable memory is disposed on a side of the stacked memory away from the logic die, the tri-state content addressable memory is electrically connected to the dynamic random access memory die farthest from the logic die by using a TSV, and the dynamic random access memory die closest to the logic die in the stacked memory is electrically connected to the logic die by using a fifth micro bump.
In this embodiment, TSVs maximize the stacking density of the dies in the three-dimensional direction, minimize the overall size, and greatly improve chip speed and power efficiency, thereby effectively increasing the speed of the cache content addressable memory 102 and reducing its power consumption.
In this embodiment, the tri-state content addressable memory 105 is connected to the topmost dynamic random access memory die 103 of the stacked memory through TSVs, and the bottommost dynamic random access memory die 103 is connected to the logic die through micro bumps 113; this connects the tri-state content addressable memory 105 to the dynamic random access memory dies 103, and both can reach external elements through the logic die.
In this embodiment, stacking the tri-state content addressable memory 105 with the stacked memory and using the packaging layer 106 as a logic die further reduces the volume of the cache content addressable memory.
In one embodiment, the packaging layer is a logic die; the tri-state content addressable memory is stacked with the stacked memory; the tri-state content addressable memory is disposed between the stacked memory and the logic die, and the dynamic random access memory die closest to the logic die in the stacked memory is connected to the logic die through the tri-state content addressable memory.
In this embodiment, the logic die serves as the packaging layer, managing each dynamic random access memory die in the stacked memory and connecting the cache content addressable memory to external elements. The tri-state content addressable memory is disposed between the bottom of the stacked memory and the logic die: it is connected to the bottommost dynamic random access memory die through TSVs and to the logic die through micro bumps, so that each dynamic random access memory die in the stacked memory is connected to the logic die.
In other embodiments, the packaging layer is a logic die; the tri-state content addressable memory is located between any two dynamic random access memory dies in the stacked memory and connected to both of them, and the dynamic random access memory die closest to the logic die in the stacked memory is electrically connected to the logic die.
Specifically, in this embodiment, the tri-state content addressable memory is connected to each of the two dynamic random access memory dies through TSVs, and the bottommost dynamic random access memory die in the stacked memory is electrically connected to the logic die through micro bumps.
Example two
In this embodiment, a hybrid cache content addressable memory is provided, comprising a high bandwidth memory (HBM) that stores the search rule entries other than those held in the hardware TCAM and on which a DRAM-based TCAM emulator runs;
the hardware TCAM stores a subset of high-frequency search rule entries as a buffer;
the HBM and the TCAM are packaged in the same chip and disposed on the same silicon interposer;
the search engine schedules tasks between the TCAM emulator and the TCAM; if the search engine does not find a rule matching the search keyword in the buffer, it queries the TCAM emulator;
the synchronization engine counts the number of times keywords miss in the TCAM cache and, based on these counts, synchronizes the corresponding rules stored in the emulator into the TCAM.
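The interaction between the search engine, the hardware TCAM buffer, and the DRAM-based emulator described above can be modeled in software. The sketch below is illustrative only and is not part of the patent: the class and parameter names are assumptions, and an exact-match dictionary stands in for real ternary matching:

```python
class HybridTcam:
    """Toy model: small TCAM buffer backed by a DRAM-based emulator."""

    def __init__(self, cache_capacity, promote_threshold=2):
        self.cache = {}       # hardware TCAM buffer: hot rules only
        self.emulator = {}    # DRAM-based TCAM emulator: full rule table
        self.misses = {}      # per-rule miss counts (synchronization engine)
        self.capacity = cache_capacity
        self.threshold = promote_threshold

    def insert(self, key, result):
        # All rules live in the DRAM emulator; hot ones get cached.
        self.emulator[key] = result

    def lookup(self, key):
        # Search engine: query the TCAM buffer first.
        if key in self.cache:
            return self.cache[key]
        # Miss: fall back to the DRAM-based emulator.
        result = self.emulator.get(key)
        if result is not None:
            # Synchronization engine: count misses; once a rule is hot
            # enough and the buffer has room, copy it into the TCAM.
            self.misses[key] = self.misses.get(key, 0) + 1
            if self.misses[key] >= self.threshold and len(self.cache) < self.capacity:
                self.cache[key] = result
        return result


t = HybridTcam(cache_capacity=2, promote_threshold=2)
t.insert("10.0.0.0/8", "port1")
t.lookup("10.0.0.0/8")             # first miss: served by the emulator
t.lookup("10.0.0.0/8")             # second miss: rule promoted into the TCAM
print("10.0.0.0/8" in t.cache)     # → True
```

The promotion threshold stands in for the patent's "synchronize according to the rules": only entries that miss repeatedly are worth the scarce, power-hungry hardware TCAM capacity.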
As shown in fig. 1, 101 is a processor, such as a CPU, GPU, FPGA, ASIC, or SoC, and 102 is the hybrid cache content addressable memory;
101 and 102 are electrically connected to the interposer 108 through the micro-bumps 110;
109 is a package substrate with lower electrical connections 112, such as solder balls;
110 and 111 may employ a suitable flip-chip bump technology, such as controlled collapse chip connection (C4) or chip connection (C2); 112 are micro bumps for connection to the interposer below.
Example 1:
as shown in fig. 2, an internal diagram of 102: 103 are stacked DRAM dies; 104 is a logic die that performs DDR memory management tasks; 105 is a TCAM die; 104 and 105 are electrically connected to an interposer 106 through micro bumps; 105 is electrically connected to 104 through an I/O bus 107, which may run through the interposer 106 and includes a plurality of data lines and one or more control and address lines; a search engine and a synchronization engine are integrated in 106.
Example 2:
as shown in fig. 3, unlike example 1, 106 is not an interposer but a logic die that integrates an HBM controller, a search engine, and a synchronization engine.
Example 3:
as shown in fig. 4, this embodiment uses 3D packaging: the TCAM 105 may be placed on top of the HBM DRAM dies, or between the DRAM dies and the logic die 106, with TSV technology connecting the dies in between.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A cache content addressable memory, comprising: a tri-state content addressable memory, a packaging layer, and at least two dynamic random access memory dies;
the dynamic random access memory dies are stacked to form a stacked memory and are electrically connected in sequence;
the tri-state content addressable memory is electrically connected to one dynamic random access memory die in the stacked memory;
the tri-state content addressable memory is electrically connected to the packaging layer, one dynamic random access memory die in the stacked memory is electrically connected to the packaging layer, and the packaging layer is provided with a connection bump for electrical connection to the outside.
2. The cache content addressable memory of claim 1, further comprising a logic die, the packaging layer being a first interposer;
the tri-state content addressable memory is electrically connected with the first interposer;
the logic die is arranged between the stacked memory and the first interposer, and the dynamic random access memory die in the stacked memory that is closest to the first interposer is electrically connected with the first interposer through the logic die.
3. The cache content addressable memory of claim 2, wherein the tri-state content addressable memory is electrically coupled to the logic die via a bus disposed on the first interposer.
4. The cache content addressable memory of claim 2, wherein the tri-state content addressable memory is electrically coupled to the first interposer via a first microbump, and wherein the logic die is electrically coupled to the first interposer via a second microbump.
5. The cache content addressable memory of claim 1, wherein a logic die is integrated within the packaging layer;
the dynamic random access memory die in the stacked memory that is closest to the packaging layer is electrically connected with the packaging layer through a third micro bump;
the tri-state content addressable memory is electrically connected with the packaging layer through a fourth micro bump.
6. The cache content addressable memory of claim 5, wherein the tri-state content addressable memory is electrically connected, by a bus disposed on the packaging layer, to the dynamic random access memory die in the stacked memory that is closest to the packaging layer.
7. The cache content addressable memory of claim 1, wherein the packaging layer is a logic die;
the tri-state content addressable memory is stacked with the stacked memory;
the location of the tri-state content addressable memory is set to:
the tri-state content addressable memory is arranged on the side of the stacked memory away from the logic die, the tri-state content addressable memory is electrically connected with the dynamic random access memory die in the stacked memory that is farthest from the logic die, and the dynamic random access memory die in the stacked memory that is closest to the logic die is electrically connected with the logic die;
or, the tri-state content addressable memory is arranged between the stacked memory and the logic die, and the dynamic random access memory die in the stacked memory that is closest to the logic die is connected with the logic die through the tri-state content addressable memory.
8. The cache content addressable memory of claim 7, wherein the tri-state content addressable memory is disposed on the side of the stacked memory away from the logic die, the tri-state content addressable memory is electrically connected by a TSV to the dynamic random access memory die in the stacked memory that is farthest from the logic die, and the dynamic random access memory die in the stacked memory that is closest to the logic die is electrically connected to the logic die by a fifth micro bump.
9. A memory chip package structure comprising a processor and a second interposer, further comprising the cache content addressable memory of any of claims 1-8, wherein the connection bumps of the package layer of the cache content addressable memory are electrically connected to the second interposer, and wherein the processor is electrically connected to the second interposer.
10. The memory chip package structure of claim 9, further comprising a package substrate, wherein the cache content addressable memory and processor are disposed on a first side of the second interposer, and wherein a second side of the second interposer is electrically connected to the package substrate.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110971320.1A CN113421879B (en) | 2021-08-24 | 2021-08-24 | Cache content addressable memory and memory chip package structure |
| PCT/CN2022/090480 WO2023024562A1 (en) | 2021-08-24 | 2022-04-29 | Cache content addressable memory and memory chip encapsulation structure |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110971320.1A CN113421879B (en) | 2021-08-24 | 2021-08-24 | Cache content addressable memory and memory chip package structure |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113421879A CN113421879A (en) | 2021-09-21 |
| CN113421879B true CN113421879B (en) | 2021-12-28 |
Family
ID=77719769
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110971320.1A Active CN113421879B (en) | 2021-08-24 | 2021-08-24 | Cache content addressable memory and memory chip package structure |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN113421879B (en) |
| WO (1) | WO2023024562A1 (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113421879B (en) * | 2021-08-24 | 2021-12-28 | 浙江毫微米科技有限公司 | Cache content addressable memory and memory chip package structure |
| CN115036303A (en) * | 2022-06-20 | 2022-09-09 | 西安微电子技术研究所 | Calculation microsystem based on TSV primary integration and LTCC secondary integration |
| WO2025107848A1 (en) * | 2023-11-24 | 2025-05-30 | 北京硅升科技有限公司 | Chip packaging body, working assembly and computing device |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102651234A (en) * | 2011-02-24 | 2012-08-29 | 株式会社东芝 | Content addressable memory |
| CN104038416A (en) * | 2014-06-17 | 2014-09-10 | 上海新储集成电路有限公司 | Network processor |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2005032991A (en) * | 2003-07-14 | 2005-02-03 | Renesas Technology Corp | Semiconductor device |
| CN101217468A (en) * | 2007-12-28 | 2008-07-09 | 华为技术有限公司 | Routing look-up table system, tri-state content-addressable memory and network processor |
| US10672744B2 (en) * | 2016-10-07 | 2020-06-02 | Xcelsis Corporation | 3D compute circuit with high density Z-axis interconnects |
| CN113421879B (en) * | 2021-08-24 | 2021-12-28 | 浙江毫微米科技有限公司 | Cache content addressable memory and memory chip package structure |
- 2021-08-24: CN application CN202110971320.1A granted as patent CN113421879B (status: active)
- 2022-04-29: PCT application PCT/CN2022/090480 published as WO2023024562A1 (status: ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| CN113421879A (en) | 2021-09-21 |
| WO2023024562A1 (en) | 2023-03-02 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |