
This is the 8th installment in a series. Read Part 1, Part 2, Part 3, Part 4, Part 5, Part 6 and Part 7.

I have repeatedly been challenged to “prove” that human-level artificial intelligence is impossible. My position is that the burden of proof lies on the opposite side. 

No serious evidence has been given in support of the thesis that human-level AI is possible. There certainly is no reason to expect that it would be.

And there are very good reasons to believe it will never be possible – in any case, not for an AI system based on a digital computer, or more precisely, one mathematically equivalent to a Turing machine.

Nevertheless, many people nowadays regard it as virtually self-evident that computers will sooner or later become as intelligent as human beings – and, thereafter, become much more intelligent. The futurist Ray Kurzweil boosts his media ratings regularly with predictions to this effect.

In a 2017 interview, he stated that “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

Instead of delving into the “Turing test” that Kurzweil refers to, it is more interesting to quote Alan Turing himself, from a BBC radio broadcast in 1951:

“I think it is probable, for instance, that at the end of the century it will be possible to program a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine… This only represents my opinion; there is plenty of room for others.”

Alan Turing in a file photo. Image: www.gov.uk

Note that Turing does not suggest the machine would match the human mind completely, but only that it would be “extremely difficult to guess” the difference. Actually, he had a much more playful and humorous attitude to these questions than one is accustomed to nowadays. In a 1951 lecture, “Intelligent Machinery, A Heretical Theory,” Turing said:

“My contention is that machines can be constructed which will simulate the behavior of the human mind very closely. They will make mistakes at times, and at times they may make new and very interesting statements, and on the whole the output of them will be worth attention to the same sort of extent as the output of a human mind… 

“Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration from the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job.” 

At the end of his BBC talk Turing added an interesting reflection: 

“The whole thinking process is still rather mysterious to me, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.”    

ELIZA – the AI psychiatrist

What we know for sure is that AI systems can fool people into believing the machines are human – at least for a while. Even primitive AI systems can achieve this.

The reason has less to do with intelligence itself than with the tendency of people to unconsciously project humanness onto almost everything they interact with. 


In the mid-1960s MIT researcher Joseph Weizenbaum created the first chatbot – a system called ELIZA that permitted the operator to type in statements in English and receive answers from the computer.

Weizenbaum had the brilliant idea to program the computer to imitate a psychotherapy session. For this purpose, he chose the dialog approach of the famous psychologist Carl Rogers.

In his 1976 book Computer Power and Human Reason: From Judgment to Calculation, Weizenbaum relates:

“For my first experiment, I gave ELIZA a script designed to permit it to play (I should really say parody) the role of a Rogerian psychotherapist engaged in an initial interview with a patient. The Rogerian psychotherapist is relatively easy to imitate because much of his technique consists of drawing his patient out by reflecting the patient’s statements back to him.” 
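
The mechanics are surprisingly easy to reproduce. Here is a minimal sketch, in Python, of the kind of keyword matching and pronoun reflection ELIZA relied on. The rules and replies below are illustrative inventions, not Weizenbaum's original script:

```python
# Minimal ELIZA-style responder (illustrative, not Weizenbaum's original script).
# Strategy: match a keyword pattern, then reflect the patient's own words back
# with first- and second-person pronouns swapped.
import random
import re

# Pronoun reflection table: "my wife" -> "your wife", "i am" -> "you are"
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# (pattern, possible replies); "{0}" is filled with the reflected match
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Do you often feel {0}?", "Why do you feel {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Your {0}?", "Tell me more about your {0}."]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?"]),
]

# Stock replies when no keyword matches
DEFAULTS = ["What does that suggest to you?", "Please go on.", "I see."]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's reply; otherwise a stock reply."""
    statement = statement.rstrip(".!? ")  # ignore trailing punctuation
    for pattern, replies in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(replies).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I feel depressed."))           # e.g. "Do you often feel depressed?"
print(respond("My wife is a psychiatrist."))  # e.g. "Your wife is a psychiatrist?"
```

Weizenbaum's actual program was considerably more elaborate – it ranked competing keywords and kept a memory of earlier statements – but the reflection principle is the same.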

Weizenbaum first described his work in a 1966 paper entitled “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man and Machine.” The result caused a sensation and is regarded as one of the turning points in the history of artificial intelligence.

To give an impression of how ELIZA works, I decided to play the patient in a short therapy session with ELIZA. Here is how the session began:

ELIZA: Is something troubling you?

Tennenbaum: I feel depressed.

ELIZA: Do you often feel depressed?

Tennenbaum:  Sometimes. 

ELIZA: What does that suggest to you?

Tennenbaum: Maybe I should get treatment. 

ELIZA: You aren’t sure?

Tennenbaum: No. 

ELIZA: Why “no”?

Tennenbaum: Because I don’t like psychiatrists. 

ELIZA: What is the connection, do you suppose?

Tennenbaum: My wife is a psychiatrist. 

ELIZA: Your wife?

Tennenbaum: Yes, she told me to go to you. 

ELIZA: I see.

Tennenbaum: Please help me.

ELIZA: I understand.

Interested readers can try their own session via this link.

Weizenbaum notes in his 1976 book that “DOCTOR, as ELIZA playing psychiatrist came to be known, soon became famous around the Massachusetts Institute of Technology, where it first came into existence, mainly because it was an easy program to demonstrate.”

He adds: “Soon copies of DOCTOR, constructed on the basis of my published description of it, began appearing at other institutions in the United States.”

At first he was astonished at the reaction “DOCTOR” produced around the country. Gradually, he became more and more worried about what that reaction implied about society and about the possible future impact of AI. He went on to become one of the most outspoken critics of AI.

Joseph Weizenbaum in a file photo. Image: Facebook

In his 1976 book, Weizenbaum emphasizes the things that particularly shocked him. They are worth briefly commenting on.

“The shocks I experienced as DOCTOR became widely known and ‘played’ were due principally to three distinct events,” he writes.

“1. A number of practicing psychiatrists seriously believed the DOCTOR computer program could grow into a nearly completely automatic form of psychotherapy.”

It astonished Weizenbaum that a primitive system like ELIZA – whose program had a length of only about 20 kilobytes! – could provoke such grandiose expectations. Evidently, he was also appalled at the prospect of entrusting human beings to the care of machines. 

Today automated psychotherapy via chatbot is a reality, and its use is rapidly growing. One of the best-established versions, the smartphone app “Woebot,” carries out over 4 million therapy conversations per week.

Woebot was developed by a group of psychologists at Stanford University in cooperation with AI experts. Utilizing the principles of cognitive behavioral therapy, the app is designed to engage its users once per day in short (10-minute) conversations.

At the beginning, it asks simple questions about how the user is feeling and provides helpful suggestions and supportive comments. Gradually, as the system analyzes data from the conversations and monitors the user’s mood, the conversations become more specific. In this process the chatbot becomes like a friend and confidant to the user.
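
To make the idea concrete, here is a hypothetical sketch, in Python, of such a daily check-in loop. It is not Woebot’s actual implementation – the class, scores and prompts are invented for illustration – but it shows how even a simple mood log can steer the conversation:

```python
# Hypothetical sketch of a daily mood check-in (NOT Woebot's actual code;
# all names, scores and prompts are invented for illustration).
from statistics import mean

class CheckInBot:
    def __init__(self):
        self.mood_log = []  # self-reported scores, 1 (very low) to 5 (very good)

    def record(self, mood: int) -> str:
        """Log today's mood score and return the next conversation prompt."""
        self.mood_log.append(mood)
        return self.next_prompt()

    def next_prompt(self) -> str:
        # With little data, stay generic; once a trend is visible, get specific.
        if len(self.mood_log) < 4:
            return "How are you feeling today?"
        if mean(self.mood_log[-3:]) < mean(self.mood_log):
            return "Your mood has dipped lately. What thought keeps coming back?"
        return "Things look steadier. What went well today?"

bot = CheckInBot()
for mood in (4, 3, 2, 2):
    print(bot.record(mood))  # prompts become more specific as the log grows
```

A real system would, of course, analyze free-text dialogue rather than a single daily score, but the underlying loop – collect data, detect a trend, adapt the prompt – remains the same.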

Studies indicate that Woebot and a few analogous systems can achieve significant positive results, particularly with depression. 


Back to Weizenbaum for another shocking event: “2. I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it.”

Weizenbaum retells how this happened with his own secretary. She knew very well that DOCTOR was not a person, but a computer program written by her boss. Nevertheless, she became so emotionally involved that she asked Weizenbaum to leave the room while she was conversing with DOCTOR. 

Anthropomorphization is a natural human tendency, extending not only to living things such as pets but often even to inanimate objects. However, as AI develops more and more human-like traits, the tendency to anthropomorphize AI systems automatically and unreflectively will grow.

As I stated in the opening installment of this series: “The danger is not that AI systems will become more intelligent than humans. Rather, people may become so stupid that they can no longer recognize the difference.”

And Weizenbaum’s final shocking event: “3. Another widespread, and to me surprising, reaction to the ELIZA program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language.

“In my paper, I had tried to say that no general solution to that problem was possible, ie, that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent and that consequently even people are not embodiments of any such general solution. But these conclusions were often ignored.”

At the time Weizenbaum wrote ELIZA, natural language comprehension and translation were probably the hottest issues in AI.

The US Defense Department supported much of the initial work in this field, including Weizenbaum’s and that of Noam Chomsky. There was a great deal of hype – evidently not created by Weizenbaum – concerning near-term practical applications.

In insisting on the importance of “contextual frameworks,” Weizenbaum put his finger on a decisive issue in artificial intelligence and human language – one I shall discuss in some depth in coming articles.

Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.