
Lu et al., 2021 - Google Patents

A runtime reconfigurable design of compute-in-memory based hardware accelerator

Document ID: 5722306819253295930
Authors: Lu A, Peng X, Luo Y, Huang S, Yu S
Publication year: 2021
Publication venue: 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)


Snippet

Compute-in-memory (CIM) is an attractive solution to address the “memory wall” challenges for the extensive computation in machine learning hardware accelerators. Prior CIM-based architectures, though can adapt to different neural network models during the design time …
Continue reading at past.date-conference.com (PDF)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C13/00 Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00 - G11C25/00
    • G11C13/0002 Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00 - G11C25/00 using resistance random access memory [RRAM] elements
    • G11C13/0021 Auxiliary circuits
    • G11C13/0023 Address circuits or decoders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 Computer-aided design
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C15/00 Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
    • G11C15/04 Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/56 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/52 Multiplying; Dividing
    • G06F7/523 Multiplying only
    • G06F7/53 Multiplying only in parallel-parallel fashion, i.e. both operands being entered in parallel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/12 Computer systems based on biological models using genetic models
    • G06N3/126 Genetic algorithms, i.e. information processing using digital simulations of the genetic system
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C8/00 Arrangements for selecting an address in a digital store
    • G11C8/10 Decoders

Similar Documents

Peng et al. Optimizing weight mapping and data flow for convolutional neural networks on RRAM based processing-in-memory architecture
Sun et al. Fully parallel RRAM synaptic array for implementing binary neural network with (+1, −1) weights and (+1, 0) neurons
Peng et al. Optimizing weight mapping and data flow for convolutional neural networks on processing-in-memory architectures
Peng et al. DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training
Yu et al. Compute-in-memory with emerging nonvolatile-memories: Challenges and prospects
Peng et al. DNN+NeuroSim: An end-to-end benchmarking framework for compute-in-memory accelerators with versatile device technologies
Bavikadi et al. A review of in-memory computing architectures for machine learning applications
US11507808B2 (en) Multi-layer vector-matrix multiplication apparatus for a deep neural network
Reuben et al. Memristive logic: A framework for evaluation and comparison
Chen et al. Design and optimization of FeFET-based crossbars for binary convolution neural networks
CN114298296B (en) Convolutional neural network processing method and device based on storage and computing integrated array
Zhu et al. MNSIM 2.0: A behavior-level modeling tool for processing-in-memory architectures
Chen et al. Partition SRAM and RRAM based synaptic arrays for neuro-inspired computing
Zhang et al. The application of non-volatile look-up-table operations based on multilevel-cell of resistance switching random access memory
Ma et al. In-memory computing: The next-generation AI computing paradigm
Shim et al. Architectural design of 3D NAND flash based compute-in-memory for inference engine
Lu et al. A runtime reconfigurable design of compute-in-memory based hardware accelerator
Lu et al. Benchmark of the compute-in-memory-based DNN accelerator with area constraint
Liu et al. Bit-Transformer: Transforming bit-level sparsity into higher performance in ReRAM-based accelerator
Sharma et al. A 64 kb reconfigurable full-precision digital ReRAM-based compute-in-memory for artificial intelligence applications
Chen et al. Hidden-ROM: A compute-in-ROM architecture to deploy large-scale neural networks on chip with flexible and scalable post-fabrication task transfer capability
Bavandpour et al. aCortex: An energy-efficient multipurpose mixed-signal inference accelerator
Lu et al. A runtime reconfigurable design of compute-in-memory-based hardware accelerator for deep learning inference
CN119495341A (en) A matrix computing device based on flexible RRAM storage and computing array
Sun et al. Efficient processing of MLPerf mobile workloads using digital compute-in-memory macros