Scientists Create First Artificial Neurons

Analogies of the human brain as a sort of organic computer (or vice versa, in the case of artificial intelligence) abound. But it remains a matter of debate in some circles whether a truly thinking machine is feasible and, if so, whether it would operate along the same lines as our biological brains.

According to The Economist, an international team of researchers has bridged the gap between “biological brains and electronic ones”.

The general idea of building computers to resemble brains is called neuromorphic computing, a term coined by Carver Mead, a pioneering computer scientist, in the late 1980s. There are many attractions. Brains may be slow and error-prone, but they are also robust, adaptable and frugal. They excel at processing the sort of noisy, uncertain data that are common in the real world but which tend to give conventional electronic computers, with their prescriptive arithmetical approach, indigestion. The latest development in this area came on August 3rd, when a group of researchers led by Evangelos Eleftheriou at IBM’s research laboratory in Zurich announced, in a paper published in Nature Nanotechnology, that they had built a working, artificial version of a neuron.


Dr Eleftheriou’s invention consists of a tiny blob of germanium antimony telluride sandwiched between two electrodes. Germanium antimony telluride is what is known as a phase-change material. This means that its physical structure alters as electricity passes through it. It starts off as a disordered blob that lacks any regular atomic structure, and which conducts electricity poorly. If a low-voltage electrical jolt is applied, though, a small portion of the stuff will heat up and rearrange itself into an ordered crystal with much higher conductivity. Apply enough such jolts and most of the blob will become conductive, at which point current can pass through it and the neuron fires, just like the real thing. A high-voltage current can then be applied to melt the crystals back down and reset the neuron.
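The accumulate-and-fire behaviour described above can be sketched in a few lines of Python. This is an illustrative toy model only, not IBM's actual device physics: the class name, the `threshold` and `step` parameters, and the idea of tracking a single "crystalline fraction" are all simplifying assumptions made here for clarity.

```python
# Toy model (not IBM's): each low-voltage pulse crystallises a bit more
# of the phase-change blob, raising its conductivity. Once enough of the
# material is crystalline, the neuron "fires", and a high-voltage reset
# melts it back to the disordered, poorly conducting state.

class PhaseChangeNeuron:
    def __init__(self, threshold=1.0, step=0.2):
        self.threshold = threshold   # crystalline fraction needed to fire
        self.step = step             # fraction crystallised per input jolt
        self.crystalline = 0.0       # fraction of the blob that is ordered

    def pulse(self):
        """Apply one low-voltage jolt; return True if the neuron fires."""
        self.crystalline += self.step
        if self.crystalline >= self.threshold:
            self.reset()             # high-voltage melt back to amorphous
            return True
        return False

    def reset(self):
        self.crystalline = 0.0

neuron = PhaseChangeNeuron()
fires = [neuron.pulse() for _ in range(5)]
# integrates four sub-threshold jolts, then fires on the fifth
```

The key point the sketch captures is that the device integrates inputs over time, like a biological neuron, rather than responding to each pulse independently.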

This arrangement mimics real neurons in another way, too. Neurons are unpredictable. Fluctuations within the cell mean a given input will not always produce the same output. To an electronic engineer, that is anathema. But, says Tomas Tuma, the paper’s lead author, nature makes clever use of this randomness to let groups of neurons accomplish things that they could not if they were perfectly predictable. They can, for instance, jiggle a system out of a mathematical trap called a local minimum where a digital computer’s algorithms might get stuck. Software neurons must have their randomness injected artificially. But since the precise atomic details of the crystallisation process in IBM’s ersatz neurons differ from cycle to cycle, their behaviour is necessarily slightly unpredictable.
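The cycle-to-cycle variability described above can also be sketched. Again this is a hedged, illustrative model rather than the paper's: the Gaussian jitter on each crystallisation step, and the specific numbers (`mean_step`, `jitter`), are assumptions chosen here to show the qualitative effect, namely that the number of pulses needed to fire is not fixed.

```python
import random

# Illustrative sketch of stochastic firing: the amount crystallised per
# pulse varies slightly from cycle to cycle, so the pulse count at which
# the neuron fires is itself slightly random.

def pulses_to_fire(rng, threshold=1.0, mean_step=0.2, jitter=0.05):
    """Count pulses until the accumulated crystalline fraction fires."""
    crystalline, pulses = 0.0, 0
    while crystalline < threshold:
        crystalline += rng.gauss(mean_step, jitter)
        pulses += 1
    return pulses

rng = random.Random(42)
counts = [pulses_to_fire(rng) for _ in range(10)]
# counts cluster around five pulses but vary from trial to trial
```

In a software neural network this jitter would have to be injected deliberately with a random-number generator, as here; in the phase-change device it comes for free from the atomic-scale details of crystallisation.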

As the Economist article points out, neurons do most of the “heavy lifting” in the brain, transmitting information through various chemical and electrical signals. While artificial, simulated versions of them have been around for some time (indeed, in various forms they power functions such as facial recognition and personalised advertising on social media), this most recent development goes further than ever before in creating something closer to the real deal.

The team have put their electronic neurons through their paces. A single artificial neuron, hooked up to the appropriate inputs, was able, reliably, to identify patterns in noisy, jittery test data. Dr Tuma is confident that, with modern chip-making techniques, his neurons can be made far smaller than the equivalent amount of conventional circuitry—and that they should consume much less power.

The next step, says Dr Eleftheriou, is to experiment with linking such neurons into networks. Small versions of these networks could be attached to sensors and tuned to detect anything from, say, unusual temperatures in factory machinery, to worrying electrical rhythms in a patient’s heart, to specific types of trade in financial markets. Bigger versions could be baked onto standard computer chips, offering a fast, frugal co-processor designed to excel at pattern-recognition tasks—like speech- or face-recognition—now performed by slower, less efficient software running on standard circuitry. Do that and the conceptual gap between artificial brains and real ones will shrink a little further.

Pretty amazing stuff, to say the least. What do you think?
