This is the seventh installment in an 8-part series. Read Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6.

In the previous installment of this series, I showed that – contrary to assumptions made by pioneers of artificial intelligence such as John von Neumann – the human brain bears virtually no resemblance to a digital computer.

Von Neumann in particular should have realized this. He had a broad knowledge of physical science and was in contact with leading researchers in the areas of biophysics and neurophysiology.

But he and others banked on the discrete, “all-or-nothing” nature of neural impulses, which superficially appear analogous to the “0” and “1” of digital computers. He assumed this would allow researchers to ignore the complicated biology and biophysics of real, live neurons and treat the brain as a digital system.

If the goal is to understand how the brain actually works, that was a stupid mistake. But it did inspire early successes in developing computers and primitive AI systems, including the artificial neural network approach that eventually led to today’s “deep learning” systems.

The paradigm “brain = digital computer” is seductive when dealing with philosophical issues concerning the nature of intelligence, cognition and the mind itself.  It also makes the task of artificial intelligence appear quite straightforward. 

The logic is simple: The brain provides the physical substrate for mental processes. Hence, if the brain functions like a digital computer, then the human mind is equivalent to a suitably-programmed computer. 

The hardware is already there. The number of digital elements in the latest supercomputers already exceeds that of neurons in the human brain by over a thousand times. Soon there will be more on a single computer chip.

The new Nvidia A100 chip, unveiled this year, has 54 billion transistors, compared with about 86 billion neurons in the brain. Artificial intelligence would thus simply be a matter of developing the right software. 

Psychology would become just a different language to describe the calculations going on in the brain. Psychologists and philosophers would turn into computer scientists.

In the meantime, the results of neurobiological research have demolished the notion that the brain functions like a digital computer. It is hard to find two objects that are more different from each other than a transistor on a microchip and a living neuron in the brain. 

The reality is taking some time to sink in. The fields of “cognitive neuroscience” and “computational neuroscience” continue to use the term “computation” to describe what happens in the brain. But this is only possible by expanding the meaning of “computation” to embrace – by definition – practically any kind of physical process. 

Can computers emulate the brain? 

“OK, OK,” some readers might respond, “but after all, as a physical system the brain has to obey the laws of physics. That means the brain is still a kind of machine, right? It is just a matter of time before we will be able to simulate it on a computer.”  

Brain and electroencephalogram. Source: Wikimedia Commons

This question raises points that are worth examining in some depth.

Physics has changed greatly since the days of Descartes and Newton. Newtonian physics was literally mechanistic. The universe was supposed to operate like a giant algorithm: From the positions and velocities of each of its particles at a given moment of time, one could (in principle) calculate their positions and velocities at the “next” moment of time, using mathematical equations. 

In this scenario, the universe computes itself step by step, from each moment to the next. The laws of physics are the computer program. The brain would just be a sub-unit of the cosmic computer. The same would naturally hold for individual neurons. 

But what does it mean to talk about “the next” moment of time? Time is continuous, not discrete. Struggling with this issue led Leibniz to introduce the idea of infinitely small intervals of time, and invent what he called infinitesimal calculus to deal with processes of a continuous character. 

Mathematicians today prefer to formulate the calculus in terms of limits, avoiding the paradoxes raised by Leibniz’s infinitesimals. This story is not over, however. Some day the ordinary number system may actually be expanded to include some sort of infinitely small numbers.

One way to do this, called nonstandard analysis, was put forward in the 1960s by Abraham Robinson. Others may turn out to be more interesting. In any case, introducing infinitesimal quantities into mathematics would raise interesting questions concerning their physical meaning.

Chopping up time

Von Neumann and others sought to avoid the continuity problem by assuming the nervous system operates in a discrete fashion. For an ideal Turing machine, time is discrete. Nothing happens between the successive steps. 

This is not quite true for a modern digital computer. Electrical pulses use the time between successive computation cycles, regulated by the CPU clock, to propagate through the system. It also takes time for heat to be dissipated from the transistors. 

Nevertheless, as far as the end result is concerned, a sufficiently large computer can emulate any other computer with 100% accuracy. These systems are discretized to begin with.
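
To make the notion of discrete time concrete, here is a minimal, purely illustrative Python sketch of an idealized Turing machine; the transition table (a binary incrementer) is invented for the example. The machine's tape changes only at whole-numbered steps, and nothing whatsoever happens between them.

```python
# Minimal Turing machine operating in purely discrete time.
# The transition table (a binary incrementer) is invented for illustration.
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=100):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
        # Between one step and the next, nothing happens: time is discrete.
    return "".join(cells[i] for i in sorted(cells))

# Add 1 to a binary number written with its least significant bit first.
rules = {
    ("start", "0"): ("1", "R", "halt"),   # no carry: flip 0 to 1 and stop
    ("start", "1"): ("0", "R", "start"),  # carry: flip 1 to 0 and keep going
    ("start", "_"): ("1", "R", "halt"),   # ran off the end: write the carried 1
}

print(run_turing_machine("110", rules))  # "110" (= 3, LSB first) becomes "001" (= 4)
```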

On the other hand, if we want to simulate a continuous physical process on a digital computer, there is no choice but to “discretize” it. 

This is typically done by choosing some fixed “step interval” of time and some discrete grid of data points. Next, one approximates the equations describing the actual process (differential equations) by so-called difference equations involving the discrete parameters.

The computer calculates the solution stepwise, from one time-step to the next. The steps can be so small that the result appears as a continuous curve or motion. 
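
To make the procedure concrete, here is a minimal, purely illustrative Python sketch. It discretizes the simple differential equation dx/dt = -x (whose exact solution is a decaying exponential) using a fixed step h and the difference equation x[n+1] = x[n] - h·x[n], then solves it stepwise:

```python
import math

# Forward-Euler discretization of dx/dt = f(x), here f(x) = -x,
# using the difference equation x[n+1] = x[n] + h * f(x[n]).
def euler(f, x0, h, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] + h * f(xs[-1]))
    return xs

h = 0.1
xs = euler(lambda x: -x, x0=1.0, h=h, n_steps=20)

# Compare the stepwise (discretized) solution with the exact one, exp(-t).
for n in (0, 10, 20):
    t = n * h
    print(f"t = {t:.1f}   euler = {xs[n]:.4f}   exact = {math.exp(-t):.4f}")
```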

Computer simulation of wave propagation. Source: Wikimedia Commons

Applied to real continuous processes, the discretization method – which is forced on us by the nature of digital computers – does not always work out. 

As already mentioned, the result is at best only an approximation. In order to improve accuracy, we have to reduce the size of the time-steps and increase the density of the data-point grid.

Doing so, however, greatly increases the number of computations that must be performed. In many problems, the required computer time becomes so large that even supercomputers no longer suffice to produce a reliable prediction.
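
The trade-off can be seen even in the toy dx/dt = -x example above (the numbers below come from that illustrative calculation, not from any real simulation): each halving of the time step roughly halves the error at a fixed time, but doubles the number of steps the computer has to execute.

```python
import math

# Error of forward Euler for dx/dt = -x at t = 1, as the step size shrinks.
# Halving h roughly halves the error but doubles the number of steps.
def euler_error_at_1(h):
    n_steps = round(1.0 / h)
    x = 1.0
    for _ in range(n_steps):
        x += h * (-x)
    return abs(x - math.exp(-1.0)), n_steps

for h in (0.1, 0.05, 0.025, 0.0125):
    err, steps = euler_error_at_1(h)
    print(f"h = {h:<7}  steps = {steps:<4}  error = {err:.5f}")
```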

The butterfly effect

Butterfly and hurricane. Source: Wikimedia Commons

A second problem with the discretization method is that we don’t capture what is going on in the spaces between the time steps and data points. This can lead to huge errors.

A typical example is the famous “butterfly effect” in weather prediction: Given the highly nonlinear nature of weather processes, a localized event – on a smaller scale than the data grid – can change an entire weather pattern.

Theoretically, if conditions were unstable enough, a single butterfly flapping its wings could unleash a chain of events leading to a hurricane. Hence the notorious unreliability of weather predictions beyond a day or so, despite the use of supercomputers and sophisticated AI-based methods.
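
The underlying mathematics can be demonstrated with a deliberately simple stand-in for the weather, the chaotic logistic map (a toy model, not a meteorological one): a disturbance of one part in a billion in the starting value (the numerical “butterfly”) grows until, after a few dozen steps, the two runs have nothing to do with each other.

```python
# Sensitivity to initial conditions in the chaotic logistic map x -> 4x(1 - x).
# A perturbation of one part in a billion grows until the two runs are unrelated.
def logistic_trajectory(x0, n_steps):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-9, 60)   # the "butterfly": a tiny perturbation

for n in (0, 20, 40, 60):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}   (difference {abs(a[n] - b[n]):.2e})")
```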

Even when the resulting computer algorithm seems to imitate the original process quite well, the two are never identical. Sooner or later a real physical system will always diverge from its discretized model. Living systems do so very rapidly.

There is every reason to expect that butterfly effects occur all the time in the brain. In fact, they are essential to its function. For a silently waiting hunter, for example, the slightest noise or glimpse of something out of the corner of the eye can trigger a combination of quick thinking, body movements, emotions, etc., involving whole areas of the brain.

“Suddenly getting an idea” looks like another example, hardly understood in neurological terms, where the triggering event is somehow generated from the inside. 

Indeterminism

So far, I have left out a key piece of the picture: quantum physics – the scientific revolution that occurred a century ago. As every student learns, quantum physics is the basis of chemistry. That also holds for the complex behavior of proteins and other macromolecules in cells. The brain is a quantum system.

Among other things, quantum theory dictates that elementary physical events, such as the moment of transfer of an electron from one orbit to another in an atom or molecule, have a fundamentally “indeterministic” character: We cannot predict them but can only calculate probabilities. At the same time, such events may be coupled with one another in a coherent manner over entire regions of the brain.
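
What “calculating probabilities” means can be shown with a deliberately simple two-level system; the amplitudes below are invented for the example, and the sketch illustrates the Born rule in miniature rather than anything specific to the brain. The theory fixes only the probabilities of the two outcomes; which one occurs in any single event is not determined.

```python
import random

# Born rule for a two-level quantum system: the state's amplitudes determine
# only the probabilities of the measurement outcomes, not individual events.
amp_0 = complex(0.6, 0.0)   # illustrative amplitudes, normalized: |a0|^2 + |a1|^2 = 1
amp_1 = complex(0.0, 0.8)

p0 = abs(amp_0) ** 2        # probability of outcome 0  -> 0.36
p1 = abs(amp_1) ** 2        # probability of outcome 1  -> 0.64
print(f"p(0) = {p0:.2f}, p(1) = {p1:.2f}")

# Individual measurement events can only be simulated as random draws;
# the theory predicts the long-run frequencies, nothing more.
outcomes = [0 if random.random() < p0 else 1 for _ in range(10_000)]
print("observed frequency of outcome 0:", outcomes.count(0) / len(outcomes))
```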

There is good reason to expect that cognition involves these sorts of quantum effects. One can also speculate that the above-mentioned “quantum indeterminacy” plays a role in the spontaneous behavior of organisms and the phenomenon of free will. 

Plus, quantum physics may not be the last word. Maybe the physics needed to fully understand the workings of the brain has not yet been developed. Perhaps it never will be. Maybe something will always be missing.

Conclusion: Forget about emulating the human brain on a computer! It could only work if, by some miracle, one could omit 99.99% of the physics and still get the right answers.

But that is how von Neumann and the others started. It didn’t work out.

From brain to mind

One response, akin to behaviorist psychology, would be to say, “We don’t care how the brain works. We’ll just treat it as a black box and use deep learning systems to simulate its behavior.”

In other words, use one black box to imitate another black box. This would be one way to avoid dealing with the physics of the brain. However, the apparent agreement between the outputs of two black boxes does not mean they are identical on the inside. 

Conceptual image of Deep Learning. Image: Humans Are Free

The wild and unpredictable errors made by image recognition systems based on Deep Learning provide a telling example. Even AI experts have been shocked at the success of adversarial examples in provoking such errors, revealing that the Deep Learning systems are “tuned” to completely different features of an image than the human observers they are supposed to imitate.
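
The mechanism is easy to illustrate on a drastically simplified stand-in for an image classifier. The linear “network” and random “image” below are invented for the example; real gradient-based attacks on deep networks (such as FGSM) work analogously, by nudging every pixel slightly in the direction that most disturbs the model.

```python
import numpy as np

# Toy adversarial perturbation of a linear "classifier" that labels a
# 784-pixel image by the sign of the score s(x) = w . x.
# Weights and image are random stand-ins, purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=784)             # classifier weights
x = rng.uniform(0.0, 1.0, size=784)  # an input image, pixels in [0, 1]

score = float(w @ x)
label = np.sign(score)

# A uniform per-pixel change in the sign-of-gradient direction, just large
# enough to flip the classifier's decision, while each pixel changes only slightly.
eps = 1.01 * abs(score) / np.sum(np.abs(w))
x_adv = x - label * eps * np.sign(w)

print(f"per-pixel change needed to flip the label: {eps:.4f}")
print(f"original score: {score:+.2f}   perturbed score: {float(w @ x_adv):+.2f}")
```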

So-called symbolic artificial intelligence pursues a completely different strategy.  

Symbolic AI focuses on mental processes, as they are revealed in logical reasoning, in the use of language and in other ways, in order to discern the underlying “structures of thought.” The results, it is hoped, will provide the basis for creating computer programs and architectures that can imitate human cognition.    

By doing so, Symbolic AI need not worry about the physics of the brain. It goes all the way to the highest level of organization – to the mind itself.

Part 8: Symbolic AI

Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.