This is the fourth installment in a series. Read Part 1, Part 2 and Part 3.

Pursuing the weaknesses of present-day artificial intelligence – what I have called the “stupidity problem” – takes us into the fascinating field of neurobiology, which in recent times has experienced a series of revolutionary discoveries. These discoveries have overturned many of the dogmas about brain function which shaped the early development of AI, while suggesting radically new directions for AI in the future.

AI, the brain and the mind

How does the human brain function? Needless to say, attempts to answer this question have shaped the development of artificial intelligence from its beginnings in the 1940s and 1950s until today. The same goes for the somewhat different question: how does the human mind work?

The early expectation that one might actually be able to build machines possessing human-like intelligence found encouragement in three main directions.

Firstly, evidence that the functioning of the human brain and nervous system, while staggeringly complicated from a biological point of view, is based on elementary “all-or-nothing” processes of the sort that can easily be imitated by digital electronic circuits (see below).

Diagram of a single voltage spike (action potential) generated by a neuron
Illustration: Wiki Commons

Secondly, development of symbolic logic and formal languages able to express large parts of higher mathematics, suggesting that all human reasoning might be ultimately reduced to the equivalent of manipulating strings of symbols according to sets of rules. Such formal operations can likewise easily be emulated by a digital computer.

Symbols from the logical calculus invented by Gottlob Frege (Begriffsschrift, 1879) and equivalents used in mathematical logic today. Illustration: Wikimedia
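To make the second point concrete, here is a minimal sketch – purely my own illustration, not any historical system – of how a rule of logical inference can be carried out as blind manipulation of symbol strings, exactly the kind of operation a digital computer can emulate:

```python
# Illustrative sketch only: logical inference as rule-based manipulation
# of symbol strings. Representing "P -> Q" as a plain string is an
# assumption made for simplicity, not Frege's actual notation.

def modus_ponens(premises):
    """From P and "P -> Q", derive Q by purely symbolic pattern matching."""
    derived = set()
    for p in premises:
        for q in premises:
            prefix = p + " -> "
            if q.startswith(prefix):          # q has the form "p -> X"
                derived.add(q[len(prefix):])  # so X follows
    return derived

premises = {"it_rains", "it_rains -> street_is_wet"}
print(modus_ponens(premises))  # {'street_is_wet'}
```

The point is that the machine derives “street_is_wet” without attaching any meaning to the symbols; it merely applies a formal rule.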



Thirdly, the prospect of building ever faster electronic calculating devices. In this respect, progress since the 1950s has hardly been disappointing: the density of switching elements on present-day microchips exceeds that of neurons in the brain.

Note that the first point pertains to the brain, while the second pertains to the mind. They correspond to the two main orientations which AI has tended to follow in the subsequent period: artificial neural networks and machine learning on the one side, and so-called symbolic artificial intelligence on the other.

The former does not concern itself much with the structural aspects of thought, which are supposed to “emerge” somehow from the training process of the system. By contrast, symbolic AI orients to the putative structure of human thought and language. For the latter purpose there is no need to try to imitate the brain as an organ. One could in principle use any sort of hardware.

The present trend in AI is toward hybrid systems which combine both approaches, while keeping to digital computers as the technological basis for realizing them. Such systems remain without exception mathematically equivalent to Turing machines and can therefore be classified as stupid (see Part 2).

The wrong paradigm

However successful – and even indispensable in many practical spheres today – the dominant approaches to artificial intelligence remain rooted in false conceptions about the nature of the mind and of the brain as a biological organ. 

Unfortunately, the simplistic models of the brain and mind which were the original point of departure for AI have since become the paradigm for nearly all of what is now called cognitive science, as well as a large part of neurobiology. It has become standard practice to impose methods, concepts, models and vocabulary from the fields of artificial intelligence, computer science and information theory onto the study of the brain and the mind. It is hard to find a scientific paper on these subjects which is not full of terms like “computing,” “processing,” “circuits,” “storage and retrieval of information,” “encoding,” “decoding,” etc.

Are such terms really appropriate for describing what the human brain and the mind actually do?

In science we should always try to fit our concepts and methods as closely as possible to the nature of the objects being studied; and at the very least not to ignore their most essential features.

To this it might be objected: how can we ever know what the “true nature” or the “most essential features” of anything might be?

Certainly we cannot ever know for sure, in some absolute way. Nevertheless I would assert that the human mind is in fact capable of gaining insights into the natures of things. Or at the very least of recognizing – however belatedly – when a given conceptual model is totally off track compared to the reality it is supposed to represent.

Exactly this kind of insight is weak or lacking in the phenomenon of stupidity. Very often, in science and elsewhere, people pursuing a wildly inappropriate approach are not deterred by mounting evidence to that effect. Instead, they just modify their theories and explanations to account for the unexpected phenomena. The theories become more and more complicated, while the basic assumptions remain unchanged.   

Living cells vs. dead microchips

An integrated circuit (left) and live neurons in a rat brain. Photos: Wikimedia

On the level of biology and physics the brain has virtually nothing in common with digital processing systems. Why are they so often treated as analogous? Why are concepts of computer science so often used in brain research?

The notion that neurons in the brain might function as digital elements, and the brain as a digital computer, goes back to the discovery of the so-called “all-or-nothing principle” of nerve functioning in the late 19th century. Neurons generate discrete “spikes” of electricity, separated by periods of – apparent! – electrical inactivity. The “firing” of a neuron would correspond to a “1,” as opposed to the resting state (“no pulse,” or 0). The impulse propagates down the axons of the neuron, which branch out to as many as ten thousand other neurons. At the points of contact, the synapses, the voltage spike causes the release of so-called neurotransmitter substances, which in turn communicate the signal to the target neurons. In the state “0,” supposedly, nothing is communicated. Today even schoolchildren learn this picture.

Rhythmical spikes generated by a Purkinje neuron from the cerebellar cortex of the brain in response to a single DC impulse, as recorded by microelectrodes at different positions on the dendrites and cell body.

The key issue is how neurons react to incoming signals. In engineering language: what is their “input-output” relationship? It was assumed that this relationship could be represented by a mathematical function, thereby permitting the behavior of a network of interconnected neurons to be simulated by computers in a strictly algorithmic fashion.
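In its simplest form, such an input-output function looks like the following sketch – a schematic illustration with assumed weights and threshold values, not a model of any real neuron:

```python
import numpy as np

# Schematic "neuron as a mathematical function": a weighted sum of binary
# inputs passed through an all-or-nothing threshold. The weights and the
# threshold value are arbitrary assumptions for illustration.

def model_neuron(inputs, weights, threshold=1.0):
    """Return 1 (a "spike") if the weighted input reaches the threshold, else 0."""
    return 1 if np.dot(inputs, weights) >= threshold else 0

x = np.array([1, 0, 1])           # incoming signals: 1 = spike, 0 = silence
w = np.array([0.6, 0.9, 0.5])     # synaptic "weights" (assumed values)
print(model_neuron(x, w))         # prints 1: the weighted sum 1.1 exceeds 1.0
```

Everything about such a unit’s behavior is fixed by this formula; it is precisely this reduction that makes strictly algorithmic simulation possible.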

Starting with the pioneering work of McCulloch and Pitts (1943), countless mathematical models of this sort have been developed, some of which form the basis for Deep Learning AI systems. An important step was to take account of the fact that the characteristics of real neuron synapses change during their interaction. For this purpose, the synapses in artificial neural networks are assigned variable numerical weights, whose values are determined in the course of a “learning” process (see Part 2). This typically occurs according to an algorithm based on the so-called Hebbian Rule, first put forward in 1949 by the neuropsychologist Donald Hebb.
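As a rough sketch of what such a weight-adjustment step looks like – an illustration of the Hebbian idea that “cells that fire together wire together,” not the actual training procedure of any particular Deep Learning system:

```python
import numpy as np

# Hebbian-style weight update: each connection is strengthened in proportion
# to the product of pre-synaptic and post-synaptic activity. The learning
# rate and activity values are illustrative assumptions.

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Return updated weights for connections from `pre` units to `post` units."""
    return weights + learning_rate * np.outer(post, pre)

weights = np.zeros((2, 3))            # 3 input neurons feeding 2 output neurons
pre = np.array([1.0, 0.0, 1.0])       # pre-synaptic activity
post = np.array([1.0, 0.0])           # post-synaptic activity
print(hebbian_update(weights, pre, post))
# Only the connections where pre and post were active together are strengthened.
```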

The effort to imitate the presumed structure of the brain in this way has proved extremely useful for AI. But what about the real human brain?

Mathematical model of a neuron (left). A model of a neural network (center). A network of real neurons (right).


It is remarkable that in their writings about the human brain, the pioneers of artificial intelligence – John von Neumann, Alan Turing, Marvin Minsky, John McCarthy and others – all failed to recognize the implications of the fact that neurons in the brain are living cells.

It would be very strange if this fact were irrelevant to understanding the phenomena of human cognition!

I don’t mean anything mysterious or esoteric, but merely the essential characteristics of living processes, which ought to be familiar to everyone.

For example, it would be folly to assume that live neurons would behave in the rigidly deterministic manner suggested by a comparison to the elements of a computer or other machine. Living cells would never submit to strict algorithmic procedures unless artificially forced to do so. Is it not probable that the properties of neurons as living individuals – as opposed to dead circuit elements – are key to human cognition?

Like all other cells in the body, neurons deserve to be regarded as individual organisms in their own right. It is well-established that single-celled organisms display, in embryonic form, much of the intelligent behavior we find in multicellular animals: spontaneous behavior of a purposeful as well as playful sort; perception and recognition; and some forms of learning. Like every other multicellular organism, the human body exists as a society composed of living individuals. Thinking corresponds to a social process going on among brain cells. Little or nothing remains fixed, and little or nothing obeys rules of a rigidly mathematical sort.

Dogmas bite the dust

In this context, discoveries in neurobiology have overturned, one-by-one, nearly all the mechanistic dogmas which prevailed at the time AI was born. Here are some of them:

Dogma 1. The human brain is “hard-wired”: from a certain age onward the “circuits” formed by the neurons and their interconnections remain fixed.  

No. Today it is well-known that in the adult brain new connections are constantly being formed (synaptogenesis) as well as removed (“pruned”). Neuroplasticity, which includes not only synaptogenesis but also constant changes in the morphology of existing synapses and the dendritic trees to which they are attached, plays a central role in learning and other cognitive processes.

Dogma 2. In the adult brain neurons can die, but no new neurons are born.

No. In the hippocampus in particular – a cortical region identified as essential to learning and memory as well as emotional processes – new neurons are constantly being born (neurogenesis). These new neurons move around, migrating through the tissue before settling down into some suitable location and forming connections with other neurons. Neurogenesis appears to be necessary for the healthy functioning of this part of the brain.

Dogma 3. Neurons communicate in a strictly “all-or-nothing” fashion, via the generation and propagation of discrete voltage spikes.

No. Neurons possess so-called “subthreshold membrane oscillations.” These are complex oscillations of the electrical potential of their membranes, which are too weak to trigger spikes, but which modify the spiking behavior of the neuron and can be communicated to other neurons without spikes. Among other things, subthreshold membrane oscillations appear to play an important role in synchronization of neuron activity. This discovery has revolutionary implications. The continuous variability of these oscillations, and their propagation from neuron to neuron, contradicts the notion that the brain operates like a digital system.

Dogma 4. All communication between neurons occurs via the network of axons and synapses.  

No. It is now well-established that neurons also communicate via the release of specialized molecules into the extracellular space, and their action on so-called extrasynaptic receptors carried by other neurons. This so-called “volume transmission” constitutes a second system of communication, alongside the so-called “wired transmission” via axons and synapses.

Dogma 5. The brain activity underlying cognition is based entirely on the interactions between neurons.

No. It has been established that, in addition to the neurons, the glial cells (astrocytes) in the brain play an active role in perception, memory, learning and the control of conscious activity. Glial cells outnumber neurons in the brain by a ratio of about 3:2. This discovery of the role of glial cells in cognition marks a revolution in neuroscience. The totality of these cells is sometimes referred to as a “second brain,” although the glial cells are so intimately connected to neurons metabolically and electrically that one can hardly separate the two.

For documentation, the reader can consult the following sources:

1. The Impact of Studying Brain Plasticity

2. What Do New Neurons in the Brains of Adults Actually Do?, Ashley Yeager, The Scientist, May 1, 2020

3. Generation and Propagation of Subthreshold Waves in a Network of Inferior Olivary Neurons 

4. Extracellular-vesicle type of volume transmission and tunnelling-nanotube type of wiring transmission add a new dimension to brain neuro-glial networks and Extrasynaptic exocytosis and its mechanisms: a source of molecules mediating volume transmission in the nervous system
(The latter two are rather technical publications, but give a good impression of the nearly unimaginable complexity of the cell-to-cell interactions which underlie brain activity.)

5. “The Other Brain” by R. Douglas Fields (Simon & Schuster, 2009), and Astrocytes and human cognition: Modeling information integration and modulation of neuronal activity