Researchers create first neural-network chip built just with memristors

A memristive neural network. The cartoon depicts a fragment of Prezioso and colleagues’ artificial neural network, which consists of crossing horizontal and vertical wires that have memristor devices (yellow) at the junctions. Input voltages V1 to V3 (the network inputs) drive currents through the memristors, and these currents are summed up in the vertical wires. Artificial neurons (triangles) process the difference between currents in neighbouring wires to produce outputs f1 and f2. The plus and minus symbols on the neurons indicate that the output depends on current differences. Credit: Nature 521, 37–38 (07 May 2015) doi:10.1038/521037a
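The arithmetic in the figure can be written out directly: each memristor contributes a current equal to its conductance times the input voltage (Ohm's law), each vertical wire sums the currents of its junctions (Kirchhoff's current law), and each artificial neuron responds to the difference between two neighbouring column currents. The short sketch below illustrates that computation in software; the conductance values, input voltages and the tanh activation are illustrative assumptions, not parameters taken from the paper.

import numpy as np

# Input voltages V1..V3 applied to the horizontal wires (volts, illustrative).
V = np.array([0.1, -0.1, 0.1])

# Illustrative memristor conductances (siemens); G[i, j] is the conductance
# at the junction of horizontal wire i and vertical wire j.
G = np.array([
    [1.0e-4, 2.0e-4, 0.5e-4, 1.5e-4],
    [2.5e-4, 1.0e-4, 2.0e-4, 0.5e-4],
    [0.5e-4, 1.5e-4, 1.0e-4, 2.0e-4],
])

# Ohm's law per device plus Kirchhoff's current law per column:
# each vertical wire carries the sum of its junction currents.
I = V @ G                       # column currents I1..I4

# Each neuron takes the difference of two neighbouring column currents,
# so an effective synaptic weight is the difference of two conductances.
diffs = I[0::2] - I[1::2]       # (I1 - I2), (I3 - I4)

# An illustrative saturating activation stands in for the neuron's response.
f = np.tanh(diffs / 1e-4)
print(f)                        # outputs f1, f2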
(Phys.org)—A team of researchers working at the University of California, Santa Barbara (along with a colleague from Stony Brook University) has for the first time created a neural-network chip built using only memristors. In their paper published in the journal Nature, the team describes how they built the chip and what it is capable of.

Memristors may sound like something from a sci-fi movie, but they actually exist: they are analog electronic memory devices that have been likened to the neurons and synapses of the human brain. Some believe human consciousness is, in reality, nothing more than an advanced form of memory retention and processing, and that it is analog, as opposed to computers, which are of course digital. The idea of the memristor was first proposed by University of California, Berkeley professor Leon Chua back in 1971, but it was not until 2008 that a team working at Hewlett-Packard first built one. Since then, a great deal of research has gone into the technology, but until now, no one had built a neural-network chip based exclusively on memristors.

Until now, most neural networks have been software-based. Google, Facebook and IBM, for example, are all working on computer systems that run such learning networks, mostly meant to pick faces out of a crowd or to answer questions phrased in natural language. While the gains in such technology have been obvious, the limiting factor is the hardware: as neural networks grow in size and complexity, they begin to tax the abilities of even the fastest computers. The next step, most in the field believe, is to replace transistors with memristors, each of which is able to learn on its own, in ways similar to how neurons in the brain learn when presented with something new. Putting them on a chip would, of course, reduce the overhead needed to run such a network.

The new chip, the team reports, was created using transistor-free metal-oxide memristor crossbars and represents a basic neural network able to perform just one task: learning to recognize patterns in very simple 3 × 3-pixel black-and-white images. The experimental chip, they add, is an important step towards the creation of larger neural networks that tap the real power of memristors. It also raises the possibility of building such computers in lock-step with advances in research into how exactly our neurons work at their most basic level.
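As a rough software analogue of what the chip does in hardware, the sketch below trains a single-layer perceptron to classify 3 × 3 black-and-white patterns into three classes using a sign-based ("coarse-grain") weight update, in the spirit of the training rule described in the paper. The example patterns, learning schedule and activation are illustrative assumptions, not the authors' actual data or procedure.

import numpy as np

# Three illustrative 3x3 black/white patterns (stylized letters), flattened row by row.
# Black pixels are encoded as +1, white as -1; a constant +1 bias input is appended.
patterns = [
    [+1, +1, +1,  +1, -1, -1,  +1, +1, +1],   # a rough "C"
    [+1, -1, +1,  +1, +1, +1,  +1, -1, +1],   # a rough "H"
    [+1, +1, +1,  -1, +1, -1,  -1, +1, -1],   # a rough "T"
]
X = np.array([p + [1] for p in patterns], dtype=float)   # shape (3 patterns, 10 inputs)
T = 2 * np.eye(3) - 1                                    # targets in {-1, +1}, one class per pattern

rng = np.random.default_rng(0)
W = rng.uniform(-0.1, 0.1, size=(3, 10))                 # one weight row per output neuron

# Coarse-grain (sign-based) delta rule: every weight is nudged by a fixed step
# whose direction is the sign of the ordinary delta-rule update.
step = 0.05
for epoch in range(50):
    Y = np.tanh(X @ W.T)            # outputs of the three neurons for all patterns
    dW = (T - Y).T @ X              # ordinary delta-rule update direction
    W += step * np.sign(dW)         # fixed-magnitude, sign-only adjustment

print(np.argmax(np.tanh(X @ W.T), axis=1))   # class assignments; should print [0 1 2]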



More information: Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature 521, 61–64 (07 May 2015) doi:10.1038/nature14441

Abstract
Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex, with its approximately 10¹⁴ synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks based on circuits combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one or several crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits, including first demonstrations of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks. Very recently, such experiments have been extended to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors, whose nonlinear current–voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm to perform the perfect classification of 3 × 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks.
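For readers unfamiliar with the training rule, the ordinary delta rule and the coarse-grain, sign-only variant mentioned in the abstract (often called the Manhattan update rule) can be written as follows. The notation (weights w_ij, inputs x_j, targets t_i, outputs y_i, learning rate η, fixed step Δ) is standard textbook notation and is assumed here rather than taken from the paper:

\Delta w_{ij} = \eta \,(t_i - y_i)\, x_j \qquad \text{(delta rule)}

\Delta w_{ij} = \Delta \cdot \operatorname{sign}\!\big[(t_i - y_i)\, x_j\big] \qquad \text{(coarse-grain, sign-only variant)}

The sum over a batch of training patterns may be taken inside the sign before the update is applied. In a memristive implementation, a fixed-size step of this kind can be realized with identical programming pulses whose polarity follows the sign of the required update, which is what makes the coarse-grain variant attractive for in-situ training.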

© 2015 Tech Xplore

Citation: Researchers create first neural-network chip built just with memristors (2015, May 7) retrieved 23 March 2019 from https://techxplore.com/news/2015-05-neural-network-chip-built-memristors.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

User comments

May 07, 2015
Do the bars labeled V1, V2, V3 imply that three different voltages are needed for that array? If so, would you need, say, 12 different voltages for a 12×12 array, and so forth?


Those are the inputs. Inputs are pretty much always going to be different and changing.
In the diagram, there were three inputs. If there are 12 inputs, then yeah, 12 different and varying voltages. The number of outputs doesn't have to match the number of inputs; forcing it to would actually be kind of useless.
Neural nets will either take a large number of inputs and process down to only a few outputs (e.g. an array of pixels processed down to a simple output like "Face", "Toaster", etc.) or will take a small number of inputs and spread them out into discrete values. A prism is a kind of processor in that regard. A neural net can filter a noisy input into cleaner discrete values.

May 07, 2015
they are electronic analog memory devices that are modeled on human neurons and synapses.


No, this is not true - but it was probably not the author's fault.

The memristor was a mathematical model in the '70s: the last (fourth) theorized type of non-linear passive two-terminal electrical component, relating electric charge and magnetic flux linkage. The others are the resistor, which passively resists current; the inductor, which resists changes in current; and the capacitor, which stores charge and, relevant in this context, blocks direct current while allowing alternating current.

They are the tower of power ;-)
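For context, the four passive elements the comment refers to each link a pair of the four circuit variables (voltage v, current i, charge q, flux linkage φ), and the memristor was proposed as the missing link between charge and flux. The relations below are the standard textbook definitions, not anything specific to this paper:

R = \frac{dv}{di} \quad\text{(resistor)}, \qquad
C = \frac{dq}{dv} \quad\text{(capacitor)}, \qquad
L = \frac{d\varphi}{di} \quad\text{(inductor)}, \qquad
M(q) = \frac{d\varphi}{dq} \quad\text{(memristor)}

Since \varphi = \int v\,dt and q = \int i\,dt, the memristance M(q) behaves as a resistance whose value depends on the charge that has previously flowed through the device.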

May 07, 2015
Maybe they have the right idea. The things we learn as infants are not text, but images and sounds. Perhaps neural nets will need to be trained, or else imaged from another one that has been trained.

May 07, 2015
they are electronic analog memory devices that are modeled on human neurons and synapses


They're nothing of the sort.

The memristor's electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past.


That is completely unlike how a human neuron operates.

Furthermore, it wasn't made to replicate one either. The memristor was a theoretical proposition predicted by circuit theory in 1971, and its function in the theory was to link the accumulation of charge to magnetic flux, which shows up as the described behaviour.

The memristors that were later discovered don't necessarily meet the definition, because they operate on completely different mechanisms such as growing metal dendrites through an insulator - which merely mimics the theoretical memristor behaviour.
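To make the "resistance depends on history" point concrete, here is a minimal sketch of an idealized charge-controlled memristor, loosely in the spirit of the HP linear drift model; the parameter values and the linear state update are illustrative assumptions, not measurements of any real device.

import numpy as np

# Idealized memristor: resistance interpolates between R_ON and R_OFF
# according to an internal state x in [0, 1] that tracks accumulated charge.
R_ON, R_OFF = 100.0, 16000.0     # ohms (illustrative values)
K = 1e4                          # state change per coulomb (illustrative)

def simulate(voltage, dt=1e-4, x0=0.5):
    """Return the resistance history for an applied voltage waveform."""
    x = x0
    resistances = []
    for v in voltage:
        r = R_ON * x + R_OFF * (1.0 - x)        # present resistance
        i = v / r                               # Ohm's law
        x = np.clip(x + K * i * dt, 0.0, 1.0)   # state follows the charge that flows
        resistances.append(r)
    return np.array(resistances)

t = np.arange(0, 0.1, 1e-4)
r = simulate(np.sin(2 * np.pi * 50 * t))        # 50 Hz sine drive
print(r[0], r[-1])   # resistance at the start vs. after the charge history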

May 07, 2015
It's not actually cut and dried that we have true memristors at all.

We have devices that change their resistance depending on the current passed through them, but whether those are memristors is another matter. HP simply called their discovery a memristor for patenting reasons.

May 08, 2015
For some reason this article reminded me of electron SPIN and anaesthetics. So I googled it, and lo and behold, found a PhysOrg article on it. Now completely COINCIDENTALLY I am also the very first poster in that thread!!! Now that WAS WEIRD!!!!

http://phys.org/n...sia.html

So my opinion is that consciousness arises from electron spin. So we need to replicate that behaviour, not just an analogue memristor system.

May 08, 2015
Build a deep learning stack with it and colour me interested. For details, see Geoffrey Hinton on YouTube.

