After a period of euphoria, criticisms of the deep learning-centered approach to artificial intelligence have been growing. We seem to be entering a new episode of the manic-depressive cycles that have afflicted AI from the very beginning, correlated with the ebbs and flows of funding from government agencies and investors.
Many today see the future of artificial intelligence in a revival of so-called symbolic AI – the early approach to AI which aimed at reaching a truly “human” level of intelligence using methods of symbolic (mathematical) logic.
The basic idea is to program the system with a set of axioms (rules), an array of predicates (symbols representing objects, relations, attributes, fields, properties, functions and concepts), and rules of inference, so that the system can carry out logical reasoning of the sort humans do.
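The recipe just described – facts, rules and an inference procedure – can be illustrated in miniature. The following sketch is purely my own illustration, not code from any actual symbolic AI system; the predicates and rule are invented for the example. It implements simple forward chaining: rules are applied to the known facts until no new facts can be derived.

```python
# A toy forward-chaining inference engine: a minimal sketch of the
# symbolic-AI recipe (facts, rules, inference). All predicate and
# rule names here are invented for illustration.

# Ground facts, stored as tuples: predicate name followed by arguments.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# A rule is (premises, conclusion); terms beginning with "?" are variables.
# This one encodes: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
rules = [
    ([("parent", "?x", "?y"), ("parent", "?y", "?z")],
     ("grandparent", "?x", "?z")),
]

def match(pattern, fact, binding):
    """Try to match a pattern against a ground fact, extending binding.
    Returns the extended binding, or None if the match fails."""
    if len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:   # variable already bound to something else
                return None
            b[p] = f
        elif p != f:               # constants must agree exactly
            return None
    return b

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            bindings = [{}]
            for prem in premises:
                bindings = [b2 for b in bindings for fact in facts
                            if (b2 := match(prem, fact, b)) is not None]
            for b in bindings:
                new = tuple(b.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

derived = forward_chain(facts, rules)
print(("grandparent", "alice", "carol") in derived)
```

Even this toy makes the scaling problem visible: every fragment of knowledge must be spelled out explicitly as a fact or a rule – which is precisely what made projects like the one discussed below so labor-intensive.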
Early attempts to develop AI in this direction were laborious, and achieved useful results mainly in the area of so-called expert systems. In 1983 the AI pioneer John McCarthy noted that expert systems’ “performance in their specialized domains are often very impressive.
Nevertheless, hardly any of them have certain common-sense knowledge and ability possessed by any non-feeble-minded human. This lack makes them ‘brittle.’ Common-sense facts and methods are only very partially understood today, and extending this understanding is the key problem facing AI.”
By far the most ambitious attempt to solve this problem is the “Cyc” project launched by the AI specialist Douglas Bruce Lenat around 1984. Gradually Lenat and his team built a gigantic AI system, which by 2017 had 1,500,000 terms (416,000 categories of objects, more than 1 million individual objects), 42,500 predicates (relations, attributes, fields, properties, functions), 2,093,000 facts, and 24 million common-sense rules and assertions.
Many of the rules were written individually by members of Lenat’s group, taking over 1000 man-years of work.
The heart of Cyc is an “inference engine” which derives conclusions from statements built up from the terms and predicates in accordance with the rules. For this purpose, Cyc employs tools of mathematical logic such as the second-order predicate calculus, modal logic and context logic.
This is all very impressive. Leaving aside the issue of Cyc’s performance in practice, which I am not in a position to judge, some important questions suggest themselves:
Does the structure of Cyc correspond to how common sense is acquired and used by human beings? Or is it more like a very sophisticated form of curve-fitting, like trying to square the circle?
In the effort to approximate a circle by polygons, we are obliged to keep adding more and more sides. But the sides of the polygon are still straight line segments; we never get anything curved. The stupid polygons have more and more corners, whereas the circle has none. In this respect they become more and more unlike the circle.
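The arithmetic behind the analogy is easy to check. A regular n-sided polygon inscribed in a circle of radius 1 has perimeter 2n·sin(π/n), which converges to the circumference 2π as n grows – even as the number of corners grows without bound. A small calculation (my own illustration, not from the original text):

```python
import math

def polygon_perimeter(n):
    """Perimeter of a regular n-gon inscribed in a circle of radius 1.
    Each of the n sides subtends an angle 2*pi/n at the center, so its
    length is 2*sin(pi/n), giving a total perimeter of 2*n*sin(pi/n)."""
    return 2 * n * math.sin(math.pi / n)

for n in (6, 96, 10_000):
    print(f"{n:>6} sides: perimeter = {polygon_perimeter(n):.10f}")

# The perimeter converges to 2*pi = 6.2831853..., yet every approximant
# is built entirely of straight segments and has n corners, while the
# circle itself has none.
```

The numbers converge; the *kind* of object does not – which is exactly the author's point about approximation versus genuine likeness.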
By analogy, the complexity and database volume of AI systems could grow indefinitely into the future, without ever getting to the Promised Land of “human-like intelligence.” This, however, would not prevent AI from becoming an ever more valuable instrument for human beings, assuming they remain intelligent enough to use it properly.
Stupidity of the AI pioneers
As this series continues I intend to address, in some detail, the second dimension of the AI stupidity problem, which goes back to pioneers of AI and pervades the field still today.
Here are some hints to whet the reader’s appetite and round out some points made earlier in this series.
As emphasized at the outset, I do not mean to imply that the pioneers of AI were stupid people. That would be silly. Von Neumann and Alan Turing, for example, were exceptionally brilliant individuals, and that goes for many others in the field up to today.
Rather, what I have in mind is the stupidity of asserting or believing that human cognition is essentially algorithmic in nature, and/or is based on elementary neural processes of a digital sort, whose outcomes could be exactly reproduced by a large enough computer. More concisely, that the brain is a biological version of a digital computer.
Why was it stupid to make this sort of assertion? Why is it stupid to keep doing that today?
In installments to come I shall focus on two main reasons.
Real living neurons behave completely differently from the switching elements that make up a digital computer. Among many, many other things, living neurons – like all other living cells – have their own spontaneous activity. Like birds singing in the trees, neurons often emit pulses and rhythmic bursts of pulses in the absence of any signals from neurons connected with them.
Real networks of neurons in the brains of humans and animals display nothing of the rigidly algorithmic behavior implied by the early mathematical models of neural networks. Nor do they behave anything like the artificial neural nets upon which today’s “deep learning” AI systems are based.
As we continue this series I shall present fascinating discoveries in neurobiology in recent decades, discoveries that demolish the remains of the “biological computer” concept of the brain.
The modern data were of course not available to AI pioneers like John von Neumann and Alan Turing, nor to McCulloch and Pitts – authors of the first mathematical neural net models. The conceptual basis for AI was laid in the 1940s and 1950s.
But there was plenty of evidence of the spontaneous behavior of neurons, their pulsatile bursts, so-called frequency coding in neuromotor control, the existence of chemical forms of communication within the nervous system, the presence of membrane oscillations, etc.
However interesting and fruitful it was for the early development of AI, from a biological point of view the neural net model was nonsense from the very start. Nevertheless, the AI pioneers kept on keeping on with the stupid notion (S 1; see the first installment for the list of four) that the brain is essentially a digital computing system.
In believing they were about to solve the mysteries of the brain and mind, they grossly overestimated the power of their own mathematical methods and ways of thinking (S 3) – methods that, it is true, had been successful in building the first atomic bomb, electronic computing and code-breaking machines, radar, automatic guidance systems and so forth during World War II and the immediate postwar period.
The nature of meaning
The meanings of essential concepts, as they actually occur in human cognitive activity, cannot be adequately defined or represented in formal, combinatorial terms. They cannot be stored in a computer database or incorporated into a software architecture.
The pioneers of artificial intelligence should have recognized this fact, even without the 1930s results of Kurt Gödel in mathematical logic, with which von Neumann, Turing and others were thoroughly familiar. But Gödel’s arguments leave no reasonable doubt concerning the inexhaustibility of meaning, even for such supposedly simple concepts of mathematics as that of a “finite set” or “truth” as it applies to propositions of mathematics.
Judging from their writings, one gets the impression that the pioneers of AI did not understand (S 4) the significance of Gödel’s work.
It is not necessary, however, to study mathematical logic in order to recognize that “meaning” lies outside the universe of combinatorial relationships. No one has to be a “frog at the bottom of the well.”
Next: Neurobiological discoveries demolish the remains of the “biological computer” concept of the brain.
Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.