Spintronic memory cells for neural networks

In recent years, researchers have proposed a wide variety of hardware implementations for feed-forward artificial neural networks. These implementations comprise three key components: a dot-product engine that computes convolution and fully-connected layer operations, memory elements that store intermediate inter- and intra-layer results, and components that compute non-linear activation functions.

Dot-product engines, which are essentially high-efficiency accelerators, have so far been successfully implemented in hardware in many different ways. In a study published last year, researchers at the University of Notre Dame in Indiana used dot-product circuits to design a cellular neural network (CeNN)-based accelerator for convolutional neural networks (CNNs).

The same team, in collaboration with other researchers at the University of Minnesota, has now developed a CeNN cell based on spintronic (i.e., spin electronic) elements with high energy efficiency. This cell, presented in a paper pre-published on arXiv, can be used as a neural computing unit.

The cells proposed by the researchers, called Inverse Rashba-Edelstein Magnetoelectric Neurons (IRMENs), resemble standard cells of cellular neural networks in that they are based around a capacitor, but in IRMEN cells, the capacitor represents an input mechanism rather than a true state. To ensure that the CeNN cells are able to sustain the complex operations typically performed by CNNs, the researchers also proposed the use of a dual-circuit neural network.
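The standard cellular-neural-network cell the IRMEN design resembles is the classic Chua-Yang model, in which a capacitor voltage evolves under a resistive leak plus weighted feedback and input terms, and the output is a piecewise-linear saturation of that voltage. As a rough illustration only (this is the generic CeNN template, not the authors' IRMEN device model, and the feedback, input and bias values A, B and z below are arbitrary), a single isolated cell can be simulated with simple Euler integration:

```python
def cenn_output(x):
    # Standard piecewise-linear CeNN activation: y = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (abs(x + 1) - abs(x - 1))

def simulate_cell(u, x0=0.0, A=1.5, B=1.0, z=0.0, R=1.0, C=1.0,
                  dt=0.01, steps=2000):
    """Euler-integrate one isolated Chua-Yang CeNN cell:
         C * dx/dt = -x/R + A*y(x) + B*u + z
    The self-feedback A, input gain B and bias z are illustrative values,
    not parameters from the paper."""
    x = x0
    for _ in range(steps):
        dx = (-x / R + A * cenn_output(x) + B * u + z) / C
        x += dt * dx
    return cenn_output(x)

# A positive constant input drives the cell into its +1 saturation region
print(simulate_cell(u=0.5))
```

With A > 1 the cell is bistable and settles into one of its saturated states, which is what lets a CeNN grid perform the thresholded, locally coupled computations used in the accelerator.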

The team carried out a series of simulations using HSPICE and MATLAB to determine whether their spintronic memory cells could enhance the performance, speed and energy efficiency of a neural network in an image classification task. In these tests, IRMEN cells outperformed purely charge-based implementations of the same neural network, consuming ≈ 100 pJ in total per image processed.

"The performance of these cells is simulated in a CeNN-accelerated CNN performing image classification," the researchers wrote in their paper. "The spintronic cells significantly reduce the energy and time consumption relative to their charge-based counterparts, needing only ≈ 100 pJ and ≈ 42 ns to compute all but the final fully-connected CNN layer, while maintaining a high accuracy."

Essentially, compared to previously proposed approaches, IRMEN cells can save a substantial amount of energy and time. For instance, a purely charge-based version of the same CeNN used by the researchers requires over 12 nJ to compute all convolution, pooling and activation stages, while the IRMEN CeNN needs less than 0.14 nJ.
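Taken at face value, those reported bounds imply close to a ninety-fold energy reduction. A quick back-of-the-envelope check, using only the figures quoted in the article:

```python
# Reported bounds for all convolution, pooling and activation stages
charge_based_nj = 12.0   # charge-based CeNN needs over 12 nJ (lower bound)
irmen_nj = 0.14          # IRMEN CeNN needs under 0.14 nJ (upper bound)

ratio = charge_based_nj / irmen_nj
print(f"energy reduction: at least {ratio:.0f}x")
```

Because 12 nJ is a lower bound and 0.14 nJ an upper bound, the true ratio is at least this large.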

"With the growing importance of neuromorphic computing and beyond-CMOS computation, the search for new devices to fill these roles is crucial," the researchers concluded in their paper. "We have proposed a novel magnetoelectric analog memory element with a built-in transfer function that also allows it to act as the cell in a CeNN."

The findings gathered by this team of researchers suggest that applying spintronics in neuromorphic computing could have remarkable advantages. In the future, the IRMEN memory proposed in their paper could help to enhance the performance, speed and energy efficiency of neural networks in a variety of classification tasks.


More information: Nonvolatile spintronic memory cells for neural networks. arXiv:1905.12679 [cs.ET]. arxiv.org/abs/1905.12679

A mixed signal architecture for convolutional neural networks. arXiv:1811.02636v1. arxiv.org/abs/1811.02636

© 2019 Science X Network

Citation: Spintronic memory cells for neural networks (2019, June 14) retrieved 18 August 2019 from https://techxplore.com/news/2019-06-spintronic-memory-cells-neural-networks.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

User comments

Jun 14, 2019
I would love to know what the expected cell size of this circuit would be, along with how many cells should be able to fit on a single die. That depends of course on the die size and the technology used to implement the cell, but given the specs in this article this design appears to be blindingly fast. If it were also quite small, along with the low energy footprint, this could be very promising for hardware-accelerated neural network designs.

Jun 15, 2019
along with how many cells should be able to fit on a single die


That's an irrelevant metric, because the interconnecting wiring takes up some of the die area, and the amount of wiring needed depends on the number of cells you have and what you want to do with them. The silicon analog to a neural network is somewhat hampered by the fact that the "neurons" can't grow their dendrites around to reach other cells, so you need glue logic and a routing system to connect them already in place. With the potential to connect any neuron to any neuron, the routing complexity grows quadratically with the number of cells until most of your chip area is just wires. After all, the point of a neural network is that it's a network - not that it has many neurons per se.

You can stuff individual cells very densely in a grid, but they won't be doing anything useful.
