Watching the remarkable progress in software development, it is tempting to extrapolate that computers’ artificial intelligence will supersede human activity. Photo: Pixabay / Comfreak

The term “artificial intelligence” was coined in 1956 by a young professor, John McCarthy, with the hypothesis that such new software would explore “every aspect of learning or any other feature of intelligence that can in principle be so precisely described that a machine can be made to simulate it.”

Then in 1958, Frank Rosenblatt's idea of building software "neural" networks – inspired by the way the brain works – led to the parallel processing of data that is now called "deep learning."

It was a big idea. In 1958, The New York Times quoted Rosenblatt as saying such a machine would be the first to think like the human brain.

How has that vision fared – will computers ever emulate the human brain?

As computing capabilities have steadily improved since the 1950s, so have the capabilities of the software that is now an essential part of all activities. For examples of assistance in medicine, see The physician’s helper: AI software, Asia Times, December 13, 2019.

Watching this remarkable progress, it is tempting to extrapolate that computers’ artificial intelligence will supersede human activity. Imagine autonomous robots running the industrial world. Can that happen, or are there fundamental limits to computer “intelligence” that make human intelligence unique?

A recent review discusses the limitations of AI software based on massive experience. Quoting Yoshua Bengio, the scientific director of Mila – Quebec AI Institute: “In terms of how much progress we’ve made over the past two decades: I don’t think we’re anywhere close today to the level of intelligence of a two-year-old child. But maybe we have algorithms that are equivalent to lower animals for perception.” 

Artificial intelligence will behave differently in different cultures. Photo: Wikimedia Commons

Among the important shortcomings of AI systems discussed in the review article are the following:

  1. Small deviations from the programmed task can lead to errors. For example, a change to a single pixel in an image can cause the image to be misrecognized. This limits the use of AI software in autonomous driving of vehicles, for example. 
  2. The software cannot reliably quantify uncertainty. Hence human involvement is needed to obtain reliable results. This has been the case, for example, in the use of images to detect cancerous conditions. Left unsupervised, the error rate can be very high because of the software’s inability to fully match actual images to the ones it was trained on.
  3. The lack of “common sense.” For example, computer models were tested for their capability to assemble a logically coherent sentence from a group of words. Researchers found success rates of only around 30% because the programs were unable to apply logical connections consistently.
  4. Doing math is problematic. For example, researchers trained AI software with neural networks on hundreds of thousands of math problems. Yet when tested on 12,500 problems, only 5% of the answers were correct. This limits the use of such AI software in scientific applications. 
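The first shortcoming – brittleness under tiny input changes – can be illustrated with a toy sketch. The code below is a hypothetical example, not one of the systems discussed in the article: a simple nearest-neighbor classifier on 3x3 binary "images" whose answer flips when a single pixel is altered.

```python
# Toy illustration of single-pixel brittleness (hypothetical data).
# A 1-nearest-neighbor classifier labels a 3x3 binary image by finding
# the reference pattern with the smallest Hamming distance.

def hamming(a, b):
    """Number of pixels on which two images disagree."""
    return sum(p != q for p, q in zip(a, b))

def classify(image, references):
    """Return the label of the reference pattern closest to the image."""
    return min(references, key=lambda label: hamming(image, references[label]))

# Two reference patterns that differ in only one pixel (index 7).
references = {
    "cross": (0, 1, 0,
              1, 1, 1,
              0, 1, 0),
    "tee":   (0, 1, 0,
              1, 1, 1,
              0, 0, 0),
}

image = references["cross"]
print(classify(image, references))             # prints "cross"

perturbed = list(image)
perturbed[7] = 0                               # flip a single pixel
print(classify(tuple(perturbed), references))  # prints "tee"
```

Real image classifiers operate on millions of pixels rather than nine, but the failure mode is analogous: when two learned categories lie close together in the input space, a tiny perturbation can push an input across the boundary.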

AI applications to robots are receiving a great deal of attention. Researchers are trying to create robots with AI able to make decisions and control a physical body in an unpredictable real world.

As described by Tom Chivers in How to Train an All-Purpose Robot (IEEE Spectrum, October 2021, page 35), the search for autonomous robots that can truly replace humans in industry, the military, the home and agriculture is elusive.

The AI limitations described above stand in the way, and the best that has been achieved so far are machines that can perform tasks on their own in a controlled environment where unpredictable events are not expected. Otherwise, human involvement is needed. 

For example, robots have been tested in military conditions to detect and remove hazardous obstacles. But these require human control to provide the basic “intelligence” needed to deal with the unexpected. The conclusion to date is that well-designed robots are great tools, but don’t expect them to deal with the unknown. 

So, in summary: We will make these robotic machines more useful with better software, but not necessarily “smarter.”

Henry Kressel is a technologist, inventor and long-term Warburg Pincus private equity investor. Among his technological achievements is the pioneering of the modern semiconductor laser device that enables modern communications systems.