Despite their name, neural networks are only distantly related to the sorts of things you'd find in a brain. While their organization and the way they transfer data through layers of processing may share some rough similarities with networks of actual neurons, the data they handle, and the computations performed on it, would look entirely familiar to a standard CPU.
But neural networks aren’t the only way that people have tried to take lessons from the nervous system. There’s a separate discipline called neuromorphic computing that’s based on approximating the behavior of individual neurons in hardware. In neuromorphic hardware, calculations are performed by lots of small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from others.
On Thursday, Intel released the newest iteration of its neuromorphic hardware, called Loihi. The new release comes with the sorts of things you’d expect from Intel: a better processor and some basic computational enhancements. But it also comes with some fundamental hardware changes that will allow it to run entirely new classes of algorithms. And while Loihi remains a research-focused product for now, Intel is also releasing a compiler that it hopes will drive wider adoption.
To make sense out of Loihi and what’s new in this version, let’s back up and start by looking at a bit of neurobiology, then build up from there.
From neurons to computation
The foundation of the nervous system is the cell type called a neuron. All neurons share a few common functional features. At one end of the cell are structures called dendrites, which you can think of as receivers. This is where the neuron receives inputs from other cells. Nerve cells also have axons, which act as transmitters, connecting with other cells to pass along signals.
How does this process encode and manipulate information? That’s an interesting and important question, and one we’re only just starting to answer.
One of the ways we’ve gone about answering it has been through what’s called theoretical neurobiology (or computational neurobiology). This involves building mathematical models that reflect the behavior of nervous systems and neurons, in the hope that doing so will let us identify some underlying principles. Neural networks, which focus on the organizational principles of the nervous system, are one of the efforts that came out of this field. Spiking neural networks, which attempt to build up from the behavior of individual neurons, are another.
Spiking neural networks can be implemented in software on traditional processors. But it’s also possible to implement them through hardware, as Intel is doing with Loihi. The result is a processor very much unlike anything you’re likely to be familiar with.
In earlier iterations of Loihi, a spike simply carried a single bit of information; a neuron only registered that it had received one.
Unlike a normal processor, there’s no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of recent activity, and a list of all the other neurons that spikes are sent to.
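The architecture described above can be sketched in software with a leaky integrate-and-fire model, the classic textbook abstraction of a spiking neuron. This is a minimal illustration, not Intel's actual design: the thresholds, leak rates, and weights below are invented for the example, and real neuromorphic hardware updates all neurons in parallel rather than in a Python loop. But it captures the key ideas from the article: spikes carry no payload, each neuron keeps its own input weights and its own list of spike targets, and computation happens only when spikes arrive.

```python
# A sketch of a spiking neural network using a leaky integrate-and-fire
# neuron. All parameters here are illustrative assumptions, not values
# from Loihi.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0       # membrane potential
        self.threshold = threshold # fire when potential crosses this
        self.leak = leak           # fraction of potential kept each step
        self.weights = {}          # local memory: weight per input neuron
        self.targets = []          # local memory: neurons we spike to

    def receive(self, source):
        # A spike carries no data; only its arrival matters, scaled by
        # the weight this neuron stores for that source.
        self.potential += self.weights.get(source, 0.0)

    def step(self):
        # Leak some charge, then fire (and reset) if over threshold.
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True            # emit a spike
        return False

# Wire up a tiny two-neuron chain: n0 drives n1.
n0, n1 = Neuron(), Neuron()
n0.targets.append(n1)
n1.weights[n0] = 0.6

# Drive n0 with a constant external input and run a few time steps,
# recording (time, neuron) for every spike.
spikes = []
for t in range(10):
    n0.potential += 0.5            # external stimulus each step
    for cell in (n0, n1):
        if cell.step():
            spikes.append((t, cell))
            for tgt in cell.targets:
                tgt.receive(cell)
```

Note that there is no shared memory anywhere in the sketch: every weight and every connection lives inside the neuron that uses it, mirroring the chip's design.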
One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips come out well ahead. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it was clocked at a leisurely kilohertz, and it used less than 0.0001 percent of the power that would be required to emulate a spiking neural network on traditional processors. Mike Davies, director of Intel’s Neuromorphic Computing Lab, said Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. “We’re routinely finding 100 times [less energy] for SLAM and other robotic workloads,” he added.