This is the 9th installment in a series on the stupidity of artificial intelligence. Read Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8.

So-called symbolic artificial intelligence aims at imitating the human mind, rather than the brain as a physical system. But what is the mind? The concept is nebulous: difficult, perhaps even impossible, to define.

To be sure, “mind” should include, in addition to logical reasoning, memory and thinking – plus perception, feeling, emotion, intention, intuition, imagination and so on.

But mental activity goes beyond consciousness as such. Psychologists today, for the most part, reject Freud’s characterizations of unconscious thoughts and wishes, but it is generally accepted that some sort of “cognitive processing” takes place outside the scope of “cognitive awareness.”

Meanwhile, the more you look into your own mind, the more you discover. Much is difficult to describe in words. “Inner experiences,” as one might call them, don’t fit into clear-cut categories. In this respect, the mind is open-ended; you never know what kinds of mental experiences you can have until you have them. 

It is even hard to define exactly what we mean by “thinking.” One might try to narrow it down to logical reasoning, as in mathematical proofs. But in real life, most thinking is not strictly logical. People often make decisions in an intuitive way.

‘Thinking’ is not easily defined. Image: Wikimedia

Ironically, some of the most serious errors are made when people put too much trust in logical reasoning, ignoring the fact that such reasoning is always based on premises, which might be wrong. Often it is intuition, insight or even “feeling” that alerts us when logic is leading in the wrong direction.

Can we draw precise dividing lines to isolate thinking (or intelligence), emotion, imagination and the other aspects of mental activity I listed above?

This situation is unpleasant for people accustomed to the clarity and precision of mathematics – not least for people in the fields of computer science and artificial intelligence. The pioneers of artificial intelligence were mostly mathematicians by training. 

One can understand the impulse to clean away the subjective, introspective aspects of the mind, and to focus on only that which is objectively definable, describable, observable and verifiable in exact scientific terms. 

Taken to extremes, the attitude is: “What you can’t express in numbers doesn’t exist.” Indeed, programming digital computers does not leave room for much else.   

On August 31, 1955, AI pioneers John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon sent the Rockefeller Foundation “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” This now-famous document began:

“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

Naturally, the proposal to describe human intelligence “so precisely that a machine can be made to simulate it” automatically excludes those features of the mind that do not fit into neat categories and cannot be expressed in a formal-mathematical framework. 

What AI ends up imitating is not the actual human mind, but only a tiny fragment left over when the mind has been chopped down to fit the authors’ criterion of “precise description.” 

Almost nothing remains of the mind – or at least the mind of a child – if we accept Alan Turing’s 1950 proposal:

“Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s.

“Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed.” 

An alchemist creating a homunculus – an artificial human being – in a scene from Goethe’s Faust. Source: Wikimedia

Always 50 years ahead

In the 1950s and 1960s, many people working in AI had unrealistic expectations about whether, and how soon, digital computer-based artificial intelligence could match human intelligence in various fields. Some went so far as to suggest that “human-level AI” – now called “artificial general intelligence” – might be realized within a few decades.

In 1957, AI pioneer Herbert Simon declared: “There are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until in a visible future the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

In 1961, AI pioneer Marvin Minsky stated: “Within our lifetime machines may surpass us in general intelligence.”

In 1967, he seemed even more sure: “We are now immersed in a new technological revolution concerned with the mechanization of intellectual processes,” he said. “Within a generation, I am convinced, few compartments of intellect will remain outside the machine’s realm. The problem of creating ‘artificial intelligence’ will be substantially solved.” 

Marvin Minsky at MIT Lab. Photo: Massachusetts Institute of Technology

In 1971, the AI researcher Donald Michie carried out a survey of 67 British and American computer scientists “working in, or close to, machine intelligence.” The scientists were asked, among other things, when they thought “computing systems exhibiting intelligence at adult human level” would be built. Among them, 17 (26%) expected this to happen in 20 years or sooner, and 20 (30%) thought it would happen in 20-50 years.

That was 50 years ago.

Interestingly, nearly half of those surveyed believed the risk was “substantial” that intelligent machines would eventually take control over human affairs.

Here is a link to an informative critical review of “past AI forecasts and seasons of optimism and pessimism in the field.”

In fairness, let us acknowledge that, from the very beginning until today, prominent voices in the AI community have cautioned against predictions of “human-level AI.” Some even consider – correctly, in my opinion – that human-level AI is no closer today than it was in the 1950s.

Naturally, this does not mean that AI has not made progress in other respects. On the contrary, AI has achieved tremendous breakthroughs in areas such as expert systems, speech and pattern recognition, automatic translation, control of complex technical processes, assessment of medical data and robotics. 

For reasons I hope to make clear, the fundamental limits of AI relative to human intelligence have never changed, and will persist at least as long as AI is confined to digital technology and algorithm-based procedures (equivalent to a Turing machine). 

AI remains inherently stupid.    

A scene from ‘Rossum’s Universal Robots,’ the 1921 play by the Czech playwright Karel Capek in which the term ‘robot’ was first used to designate human-like automata. Source: Wikimedia

AI winter

Exaggerated expectations have repeatedly hindered the development of AI. The most famous case is the collapse in funding for AI research in the second half of the 1960s, known as the (first) “AI winter.”

The AI community still suffers from this trauma. Many shudder at the thought that something similar might occur again due to hyped-up claims and overly optimistic projections concerning what AI can accomplish in the short- and medium-term. Some even speak of an “AI bubble” waiting to pop.  

What happened then? In the 1950s and early 1960s, a large portion of the funding for AI went into developing automated computer systems for translation of human languages, such as from Russian into English.

For obvious reasons, the US military and intelligence community was greatly interested in this area. In the 1950s, there was great optimism that high-speed, high-quality translation of arbitrary texts could be realized in the near term. 

The optimism was based on some early superficial successes using symbolic AI, and the belief that the grammar and semantics of human language could fit into a precise mathematical structure.

Artificial intelligence research suffered a collapse in funding in the late 1960s, often referred to as an ‘AI winter.’ Image: Facebook

Despite lavish funding by the US government, it gradually became clear that the problem was vastly more difficult than expected. No viable machine translation systems were built.

A critical 1966 appraisal by the Automatic Language Processing Advisory Committee of the National Academy of Sciences – now often described as the “infamous” or “notorious” ALPAC report – led to a drastic cut in funding, not only for translation but for AI in general.

As the British machine translation (MT) expert W. John Hutchins wrote in a 1979 review: “It is unfortunate that the public image of MT has been formed by the disastrous and grossly expensive mistakes of the early work on MT. There is perhaps no other scientific enterprise in which so much money has been spent for so little return.”

Since Hutchins wrote those words, machine translation has made tremendous progress. It is today one of the most widely used applications of AI. The breakthrough, however, came with the advent of deep learning systems specialized for this purpose.

Deep learning systems are essentially elaborate curve-fitting techniques. Unlike symbolic AI, they make no attempt to simulate how the human mind actually generates and understands language.
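To make the point concrete, here is a minimal sketch in Python: a tiny one-hidden-layer network trained by gradient descent to fit the curve y = sin(x) from noisy samples. This is my own illustration, not code from any actual translation system; the architecture and parameter values are arbitrary choices. Real deep learning translators are vastly larger, but the principle is the same: adjust numerical parameters until prediction error is minimized, with no model of meaning anywhere in the process.

```python
# A minimal sketch of "deep learning as curve fitting" (illustrative only).
# A tiny neural network is fitted to noisy samples of sin(x) by gradient
# descent. The network never "understands" the sine function; it merely
# adjusts its parameters to reduce prediction error.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of the "unknown" function y = sin(x)
x = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(x) + 0.05 * rng.standard_normal((200, 1))

# One hidden layer of 32 tanh units; sizes chosen arbitrarily
W1 = 0.5 * rng.standard_normal((1, 32))
b1 = np.zeros((1, 32))
W2 = 0.5 * rng.standard_normal((32, 1))
b2 = np.zeros((1, 1))

lr = 0.05  # learning rate
for step in range(5000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2            # network output
    err = pred - y
    loss = np.mean(err ** 2)      # mean squared error

    # Backward pass: gradients of the loss with respect to each parameter
    d_pred = 2 * err / len(x)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0, keepdims=True)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Gradient-descent update: pure curve fitting, nothing more
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(f"final mean squared error: {loss:.5f}")
```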

Part 10: The philosophical roots of AI

Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.