The problem with symbolic AI

So-called symbolic artificial intelligence aims to imitate the human mind, rather than the brain as a physical system. But what is the mind? The concept is nebulous: not only difficult but perhaps impossible to define.

To be sure, “mind” should include not only logical reasoning but also memory and thinking, as well as perception, feeling, emotion, intention, intuition, imagination, and so on.

But mental activity goes beyond consciousness as such. Psychologists today, for the most part, reject Freud’s characterizations of unconscious thoughts and wishes, but it is generally accepted that some sort of “cognitive processing” takes place outside the scope of “cognitive awareness.”

Meanwhile, the more you look into your own mind, the more you discover. Much of it is difficult to describe in words. “Inner experiences,” as one might call them, don’t fit into clear-cut categories. In this respect, the mind is open-ended; you never know what kinds of mental experiences you can have until you have them.

It is even hard to define exactly what we mean by “thinking.” One might try to narrow it down to logical reasoning, as in mathematical proofs. But in real life, most thinking is not strictly logical. People often make decisions in an intuitive way.