
This is the fifth installment in a series. Read Part 1, Part 2, Part 3 and Part 4.

In the previous installment of this series, I gave arguments and evidence to refute the notion that the human brain works like a digital computer. Among other things, it is clear that neurons have nothing to do with the switching elements on a computer chip. 

Faced with this situation, some people might answer: “Of course, you are right. Neurons are very complicated analog-digital systems. But given enough data processing power, it will be no problem to emulate them. It is just a matter of time before we will be able to upload a whole brain.” 

In this context, brain research today finds itself in a paradoxical situation. 

An example: scientists at the “Neural Computation Lab” at University College London have carried out sophisticated experimental studies of the functions of dendrites in the nervous system. Dendrites are elaborate, tree-like structures extending from the body of a neuron, where the majority of the synapses are located and through which the neuron receives impulses from other neurons – typically from thousands of them.

Although they were previously regarded as passive signal collectors, modern investigations reveal the dendrites to be highly dynamic structures that actively transform the incoming impulses and can also change their own structure in the course of minutes. The behavior of the dendrites exemplifies the extraordinary plasticity that’s characteristic of all living processes.

Schematic of a neuron with axon and dendritic connections. Modified from Wiki Commons.

On the one hand, experimental work at the Neural Computation Lab and elsewhere shows with ever-increasing clarity how absurd it is to compare the brain, or a single neuron, to man-made data processing systems. On the other hand, scientists continue to frame their results in the language and concepts of computer science.

Consider the introduction to a fascinating paper, “Dendritic Computation,” co-authored by the Neural Computation Lab’s principal investigator, Professor Michael Häusser:

“Brains compute. This means that they process information, creating abstract representations of physical entities and performing operations on this information in order to execute tasks. One of the main goals of computational neuroscience is to describe these transformations as a sequence of simple elementary steps organized in an algorithmic way.”

I wonder at these expressions. Is it really appropriate to apply the term “computation” to what the brain and its cells are doing? The term normally signifies mathematical calculation with numbers or algebraic symbols.

This is what digital computers do. Humans, such as engineers, physicists or schoolchildren learning arithmetic, can also do calculations if they are trained to do so. But the rest of the time their mental activity has nothing to do with computing.

Is it appropriate, as the authors do, to speak of the brain “performing operations on information” and to expect to be able to reduce this to a succession of “simple elementary steps”? 

One could get the impression that computational neuroscience, as characterized by the cited authors, is more concerned with implementing a pre-existing conceptual scheme – one conducive to computer modeling of the brain – than with discovering the truth about how the brain really works. Unfair, perhaps, but I think raising the issue is justified.

As a matter of fact, computer models, particularly when applied to very small portions of nerve tissue, can prove extremely useful for scientific purposes. A good example is the work of Eve Marder and her collaborators on the 30-neuron stomatogastric ganglion of the lobster. (See further discussion below.) But the picture that emerges from her work is quite different from that of a computer carrying out a succession of algorithmic operations.

Häusser and his co-author do argue convincingly for the need to take better account of the actual biophysics of neuron function, dispensing with the simplistic comparison of neurons with the components of a digital computer. They write:

“Traditionally, relatively simple computational properties have been attributed to the individual neuron, with the complex computations that are the hallmark of brains being performed by the network of these simple elements…[W]e argue that this model is oversimplified in view of the properties of real neurons and the computations they perform.

“Rather, additional linear and nonlinear mechanisms in the dendritic tree are likely to serve as computational building blocks, which, combined, play a key role in the overall computation performed by the neuron…Standard neuronal models do not incorporate many nonlinear computations which are done in parallel processing and locally in each dendrite and its branches…This new variety of dendritic computations leads to model a neuron similarly to a feedforward two-layer network with nonlinear hidden units.”

The last phrase is formulated in the technical language of artificial neural networks. Why? The authors propose, in effect, that every single neuron in the brain should be treated as a tiny neural network in its own right. Now not only the brain as a whole but the individual neurons and pieces of them are supposed to be doing computations!
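To make the comparison concrete, here is a minimal sketch in Python – my own illustration, not code from the cited paper – of the kind of two-layer abstraction being proposed: each dendritic branch is treated as a nonlinear “hidden unit” that sums its own synapses, and the soma pools the branch outputs to decide whether the neuron fires. The numbers of synapses and branches, and all the weights, are arbitrary.

```python
import numpy as np

# Minimal, illustrative sketch: a single neuron modeled as a two-layer
# feedforward network. Each "hidden unit" stands for a dendritic branch
# that nonlinearly sums its own synaptic inputs; the soma then pools the
# branch outputs and decides whether to fire. All numbers are arbitrary.

rng = np.random.default_rng(0)

n_synapses = 1000   # a cortical neuron receives inputs from thousands of synapses
n_branches = 20     # hypothetical number of dendritic branches ("hidden units")

# Random synaptic weights, with each synapse assigned to one branch
branch_of_synapse = rng.integers(0, n_branches, size=n_synapses)
weights = rng.normal(0.0, 1.0, size=n_synapses)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(synaptic_input, threshold=0.5):
    """Two-layer abstraction: branch nonlinearities, then a somatic threshold."""
    branch_sums = np.zeros(n_branches)
    np.add.at(branch_sums, branch_of_synapse, weights * synaptic_input)
    branch_outputs = sigmoid(branch_sums)         # nonlinear "hidden units"
    somatic_drive = branch_outputs.mean()         # soma pools the branches
    return 1 if somatic_drive > threshold else 0  # fire / don't fire

# Example: a random pattern of active synapses (1 = spike arrived, 0 = silent)
pattern = rng.integers(0, 2, size=n_synapses)
print("neuron fires:", neuron_output(pattern))
```

Even a glance at such a sketch shows how much is thrown away: the continuous membrane dynamics, the geometry of the dendritic tree and the structural changes that occur over minutes – precisely the plasticity described above.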

In any case, going down to the subcellular level to include “computation mechanisms” in the dendritic tree adds complexity to the models of the brain. Over the last 15 years “dendritic computation” has blossomed into a large and rich field of research.  

Meanwhile, the models of single neurons have become even more complicated. Recently, for example, a paper has appeared with the title “Single Cortical Neurons as Deep Artificial Neural Networks,” proclaiming:

“We introduce a novel approach to study neurons as sophisticated I/O [input-output] information processing units by utilizing recent advances in the field of machine learning. We trained deep neural networks (DNNs) to mimic the I/O behavior of a detailed nonlinear model of a layer 5 cortical pyramidal cell, receiving rich spatio-temporal patterns of input synapse activations.”   
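To give a rough idea of what this means in practice, here is a heavily simplified sketch – my own illustration, not the authors’ code – of fitting a small deep network (here built with the PyTorch library) to reproduce the input-output behavior of a “detailed” neuron model. For brevity the detailed model is replaced by an arbitrary stand-in function; in the actual study it is a full biophysical simulation of a pyramidal cell.

```python
import torch
import torch.nn as nn

n_synapses = 200  # arbitrary; real pyramidal cells receive thousands of inputs

def detailed_neuron_model(x):
    # Stand-in for a biophysically detailed simulation: just an arbitrary
    # nonlinear function mapping a synaptic input pattern to a response.
    return torch.tanh(x[:, :100].sum(dim=1) - x[:, 100:].sum(dim=1)).unsqueeze(1)

# "Training data": random synaptic activation patterns and the detailed
# model's responses to them.
X = torch.rand(5000, n_synapses)
y = detailed_neuron_model(X)

# A small deep network serving as the surrogate ("neuron as a deep ANN").
surrogate = nn.Sequential(
    nn.Linear(n_synapses, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(X), y)
    loss.backward()
    optimizer.step()

print("final fitting error:", loss.item())
```

The point to notice is that the surrogate network only reproduces the behavior of another model – itself already a drastic simplification of a living cell.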

This is all very interesting. But are we getting any closer to understanding the principles of how the brain actually functions? I would suggest that it is stupid to keep thinking of the brain and its neurons in terms of computation and information processing when reality calls for something very different.   

Microscope image of rat neurons and glial cells grown in culture and stained using fluorescent antibodies to a microtubule-associated protein (green, reflecting axons), a glial fibrillary protein (red, showing astrocyte “feet”) and DNA in the cell nuclei (blue). Source: Wikimedia Commons

Enter the supercomputers

Compared with the supercomputers coming online these days, our brains might at first seem like old-fashioned horse-and-buggy technology.

Recent years have witnessed intense competition between the US and China to build the largest and fastest supercomputers. China’s Sunway TaihuLight, described by the US Department of Energy in 2018 as the world’s fastest computer, has a peak computing performance of 125 petaflops (1 petaflop = one thousand million million, or 10^15, floating-point operations per second) and over 1,310,000 GB of primary memory.

In the meantime, the US has raced back into the lead with two new supercomputers, the SUMMIT computer at Oak Ridge National Laboratory and the somewhat slower Sequoia, a third-generation IBM Blue Gene machine at Lawrence Livermore National Laboratory.

SUMMIT has a maximum processing speed of 200 petaflops and 250,000,000 GB of storage capacity. 

These machines require large amounts of electric power, making energy a major part of their operating costs. SUMMIT has a power consumption of 13 megawatts, enough to supply about 10,000 households. 

By contrast, the power consumption of the human brain is only about 25 watts – a half-million times less. That alone ought to suggest that the brain operates on a completely different principle.  
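The arithmetic is simple, using the figures quoted above:

```python
# Quick check of the power comparison, using the figures quoted in the text
summit_watts = 13_000_000   # SUMMIT: about 13 megawatts
brain_watts = 25            # human brain: about 25 watts
print(summit_watts / brain_watts)   # 520000.0 - roughly half a million
```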

Despite their speed and huge numbers of computing elements, today’s supercomputers are extremely simple systems compared with the human brain or even the tiny brain of an insect. In fact, a single neuron is already vastly more complicated, in terms of the physical processes involved in its activity, than the largest supercomputer. 

The reason why computers are relatively so simple is straightforward. To get a physical system to behave in a strictly repeatable, predictable manner, you must drastically reduce its effective degrees of freedom. You must “enslave” nature, so to speak.

Naturally, on the atomic and subatomic level, the transistors on a microchip are also vastly complex physical systems, possessing innumerable sorts of fluctuations and variations beyond our control. Hence we have designed and produced these systems in such a way that most of the variability is “irrelevant” from the standpoint of the mechanistic behavior we need. 

As the study of the human nervous system progresses, scientists are discovering that human cognitive functions depend upon more and more complex forms of variability and plasticity – from the brain as a whole down to the individual cellular and even molecular levels. As usual in biology, the more you look, the more you find. 

Living organisms display a kind of irreducible complexity that defies the tactic of gross simplification – the elimination of all but a tiny number of “relevant variables” – without which modern computer systems could not emulate even a single molecule of water.

Although simplified models are useful and indispensable, one should never confuse models with reality.

The experimental neurobiologist Eve Marder has spent her entire 40-year scientific career studying the network of 30 neurons constituting the stomatogastric ganglion in the nervous system of a lobster. This supposedly “simple” system displays extraordinarily complex oscillatory behavior.

Marder’s group at Brandeis University and collaborating groups around the world use microelectrode techniques to directly measure the electrical activity of individual neurons and investigate the role of dozens of neuromodulator peptides, which are secreted by neurons and act upon the electrical characteristics of the cell membranes.

Marder’s work has generated much useful knowledge, in part using computer models to test hypotheses about the biological processes involved. At the same time, such models are very far from being able to accurately predict the actual activity of the 30 neurons in a live ganglion. That is also not the claim or purpose of her research.  

Uploading the brain

Against this background one can appreciate the degree of hype involved in suggestions that it will become possible, in the foreseeable future, to simulate the whole human brain on a supercomputer.

A prominent figure in this area is the Israeli brain researcher Henry Markram. In 2005 he initiated the Blue Brain Project, which aimed to build large-scale “bottom-up” numerical simulations of a group of about 100,000 neurons from the neocortical column of a rat.

An interesting project, although one might rightly doubt how accurate such a simulation could be, given the difficulty of simulating even 30 neurons. 

In a 2006 article entitled “The Blue Brain Project,” Markram wrote: 

“Alan Turing (1912-1954) started off by wanting to ‘build the brain’ and ended up with a computer. In the 60 years that have followed, computation speed has gone from 1 floating point operation per second (FLOPS) to over 250 trillion — by far the largest man-made growth rate of any kind in the 10,000 years of human civilization….

“As calculation speeds approach and go beyond the petaFLOPS range, it is becoming feasible to make the next series of quantum leaps to simulating networks of neurons, brain regions and, eventually, the whole brain. Turing may, after all, have provided the means by which to build the brain….

“A model of the entire brain at the cellular level will probably take the next decade. A molecular-level model of a [neocortical column of the brain] is probably feasible within five years, and linking biochemical pathways to models of the genome will probably have been achieved within 10 years. There is no fundamental obstacle to modelling the brain and it is therefore likely that we will have detailed models of mammalian brains, including that of man, in the near future.”

Estimates by various authors of the computer performance needed to imitate the human brain “at various levels of detail,” compared with the increasing processing speed of computers in FLOPS (red line). Source: Wikimedia Commons.

Markram reaffirmed his optimism in a notable TED Global talk in Oxford in 2009:

“We have the mathematics to make neurons come alive…There literally are only a handful of equations that you need to simulate the activity of the neocortex. But what you do need is a very big computer…So now we have this Blue Gene supercomputer. We can load up all the neurons, each one on to its processor, and fire it up, and see what happens…I hope that you are at least partly convinced that it is not impossible to build a brain. We can do it within 10 years.”

IBM Blue Gene/P supercomputer at Argonne National Laboratory, USA. Source: Wikimedia Commons

Markram subsequently obtained a $1 billion grant from the European Commission for his grandiose “Human Brain Project.” The HBP was launched in October 2013 as one of the two “flagship” projects in the EC’s Future and Emerging Technologies program. It soon ran into trouble, however.

In September 2014 the journal Nature published a commentary by two senior neuroscientists, Yves Frégnac and Gilles Laurent, entitled “Neuroscience: Where is the brain in the Human Brain Project?”

The authors, both of whom had initially been engaged in the project, described it as a “brain wreck.” They called attention to an open letter sent to the European Commission on July 7, 2014, in which neuroscientists from around Europe, as well as from Israel, declared that they would boycott a call for so-called “partnering projects” that were supposed to raise about half of the HBP’s total funding.

In the meantime, the letter had gathered over 750 signatures.

The authors note: “Contrary to public assumptions that the HBP would generate knowledge about how the brain works, the project is turning into an expensive database-management project with a hunt for new computing architectures. In recent months, the HBP executive board revealed plans to drastically reduce its experimental and cognitive neuroscience arm, provoking wrath in the European neuroscience community.” 

In 2015 the Human Brain Project underwent a review. The executive committee, led by Markram, was dissolved and replaced by a more representative body. The project continues today, with a broader orientation and less hype.

As an outsider, I cannot help but suspect that the lobby of computer and IT companies somehow played a role in the “brain wreck” referred to above.   

Needless to say, simulation of living processes means a gigantic market for supercomputers and related IT products in the fields of scientific research, the pharmaceutical industry, psychology and clinical medicine. Large-scale funding from governments and the private sector will assuredly continue.  

But how much of the money will be spent on really useful purposes – and how much to promote stupidity – remains to be seen. 

Part 6: Life is not digital

Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.
