The subtle art of language: why artificial general intelligence might be impossible

Try being ironic with Siri — it won’t work.

Consciousness is arguably the most mysterious problem humans have ever encountered. Many famous philosophical essays regard it as unsolvable. Yet, as we speak, engineers and cognitive scientists are putting their noses to the grindstone to develop consciousness in artificial intelligence (AI) systems.

Typically, this project is referred to as the development of “artificial general intelligence” (AGI), a term that covers the wide range of cognitive and intellectual abilities humans possess. Thus far, this project, being conducted globally across 72 independent research efforts, has not produced conscious robots. Rather, as it stands, we have AI that is superhuman at specific tasks but, on the whole, very narrow in its abilities.

One-trick pony

For example, the best human chess players are utterly demolished in chess matches against computers like IBM’s Deep Blue. To quote author and grandmaster chess player Andrew Soltis, “Right now, there’s just no competition. The computers are just much too good.” However, Deep Blue is only good at chess. We have yet to create an AI system that can outpace or even keep up with general human cognition.

Even Sophia, the famous humanoid robot granted citizenship in Saudi Arabia in 2017, does not demonstrate consciousness or artificial general intelligence. To be sure, some of what Sophia is capable of is astonishingly sophisticated. For instance, Sophia receives visual information, which she can use to recognize individual faces and sustain eye contact. Likewise, Sophia can process language to the extent that she can hold trivial conversations with people. Moreover, Sophia can make over 60 different facial expressions during those conversations. This certainly makes it feel like one is in the presence of a conscious being.

Language is the key to artificial general intelligence

Sophia’s amazing abilities might sound sufficient for consciousness, but the impression is only superficial. The reason is rooted in language. Human language is profoundly complex. One major distinguishing feature of human communication is that what we mean often isn’t conveyed explicitly by the literal meaning of our sentences. Instead, the meaning of our words often goes beyond what we expressly assert.

Irony is a good example. Consider going to a Broadway show where the lead actor shows up drunk and puts on a terrible performance. One could jokingly say that the show displayed “peak professionalism and wit.” The average person immediately understands these words to represent the opposite of their literal meaning. In fact, a great deal of human communication is indirect. Sarcasm, metaphor, and hyperbole often convey meaning with greater persuasiveness than literal assertions.

Much of the time, we imply or hint at what we mean, rather than say it directly. Indeed, human communication would be quite bland without our frequent appeal to figures of speech. Poetry and literature essentially would be non-existent. The subtle art of language, in some sense, is part of what makes us human.

A chatbot with a face

Human consciousness, in other words, in part consists of understanding abstract and indirect meanings. And it is precisely this sort of understanding that artificial intelligence lacks. Sophia can talk, but the conversation is trivial. Indeed, many computer scientists see Sophia as nothing more than a chatbot with a face.

Christopher Hitchens once aptly stated that “the literal mind is baffled by the ironic one, demanding explanations that only intensify the joke.” Such literal-mindedness is what characterizes artificial intelligence’s relationship with language. If, for example, Sophia were to hear the earlier Broadway joke, even in context, she might respond, “I don’t know what you’re talking about. The actor was unprofessional and drunk.” In other words, she doesn’t get it.

Even detecting such complex concepts as drunkenness or professionalism would be a tall order for Sophia. Unlike humans and even some animals, sophisticated AI systems like Sophia cannot detect other creatures’ emotional or mental states. Hence, they can only comprehend the word-for-word meaning of sentences. Try being ironic with Siri, for instance. It won’t work. Heck, ask her to find something that isn’t McDonald’s. She can’t do that either.
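To make the point concrete, here is a deliberately crude illustration. The short Python sketch below (a hypothetical toy for this article, not anything Sophia or Siri actually runs) judges a sentence purely by counting words from hand-picked “positive” and “negative” lists, the most literal reading imaginable. Fed the ironic Broadway review, it confidently reports praise.

```python
# A toy lexicon-based "sentiment" scorer: the crudest possible
# word-for-word reading of a sentence. Purely illustrative; real
# AI systems are far more sophisticated, yet irony still trips them up.

POSITIVE = {"peak", "professionalism", "wit", "great", "brilliant"}
NEGATIVE = {"terrible", "drunk", "unprofessional", "awful", "boring"}

def literal_sentiment(sentence: str) -> str:
    # Strip basic punctuation and split into lowercase words.
    words = sentence.lower().replace(".", "").replace(",", "").split()
    # Literal score: positive hits minus negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "praise" if score > 0 else "criticism" if score < 0 else "neutral"

# The ironic review of the drunk lead actor's terrible performance:
print(literal_sentiment("The show displayed peak professionalism and wit."))
# -> "praise": the literal reading is the exact opposite of what was meant.
```

Nothing in the words themselves signals the reversal. Grasping the irony requires context (the drunk lead actor) and a model of the speaker’s intent, which is precisely what a word-for-word reading cannot supply.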

Theory of mind

We understand other people and their minds by analogy, a capacity psychologists call “theory of mind”: we attribute beliefs, intentions, and feelings to others by drawing on our own. Unfortunately, this capacity is something engineers and cognitive scientists have failed to program into artificial intelligence. This is because the human ability to reliably understand each other indirectly is itself a mystery. Our ability to think abstractly and creatively, in other words, is quite challenging to understand. And it is impossible to code for something we don’t understand. That is why novels written by AI fail to sustain a coherent plot, and AI-written poems read as mostly nonsense.

Artificial general intelligence (robot consciousness) might be possible in the distant future. But until we achieve a comprehensive understanding of language and its countless nuances, AGI will remain out of reach.

This article was reprinted with permission of Big Think, where it was originally published.
