How AI could learn from Aesop’s fables 

Analogies are a powerful element of human reasoning — but they’re surprisingly hard to master.

Humans are constantly facing situations that seem completely new and uncertain on the surface, from starting a new job to traveling to a new country. As we encounter these scenarios, it can be hard to escape that familiar feeling of “I have no idea what I’m doing.”

Yet although we might not realize it on a conscious level, our brains are well equipped for handling new situations. Humans have a natural ability, known as reasoning by analogy, to find similarities between novel circumstances and those we have encountered in the past. It’s what allows us to adapt, and even thrive, in strange environments.

Analogical reasoning is a vital skill in many aspects of society, from engineering and medicine to education and advertising. Drawing links between past events and the steps in new projects and procedures allows us to avoid mistakes and reproduce past successes — although our ability to make robust analogies is far from perfect.

Recently, researchers have begun to explore how AI could be trained to follow the same logic, while also avoiding common human reasoning errors.

So far, however, these algorithms have been limited in the complexity of analogies they can make. To understand how far AI can go in following human reasoning, a team of researchers at the University of Southern California (USC) has turned to some of our oldest written wisdom: fables.

The challenge: Analogical reasoning comes in many different forms. It can be descriptive — linking a fire engine and a tomato by their color — or figurative, like comparing a lover to a summer’s day. 

Alternatively, analogies can identify similarities in the relationships between different objects — like the moon revolving around the Earth and the Earth revolving around the sun.

Even further, they can involve picking out parallels between events and their causes — like the Wall Street crash of 1929 and the 2008 financial crisis.

Reading fables: Before it can tackle such an immense diversity of analogies, AI will need to get better at drawing these kinds of links with human-level accuracy.

To understand AI’s limitations in more detail, the researchers examined Aesop’s fables: short stories, originating in ancient Greece, that use analogies to convey simple moral messages.

Even though their characters and settings may be completely different, the morals presented in Aesop’s fables are often very similar: some warn against the dangers of greed, naivety, or laziness, while others encourage traits like friendship, generosity, and respect. 

In their article, released as a preprint, Jay Pujara and colleagues at USC began by dissecting how a human mind approaches the analogies presented in the stories. Altogether, they identified several key factors that a reader might pick up on while making moral judgments and inferences about the stories, and they mapped out links between pairs of fables with similar messages.

Under this framework, Aesop’s fables could be paired according to similar relationships between the stories’ characters, as well as the characters’ physical qualities and personality traits.

The team also considered links between events in the fables, and similarities between their consequences. Finally, they showed how stories with compatible morals could be aligned with each other: for example, the moral “know thyself” corresponds to the lesson “know your worth.”
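To make the idea concrete, here is a minimal sketch in Python of how fables might be encoded along those dimensions and compared. Every class, field name, and annotation below is a hypothetical illustration of the kind of representation the paper describes, and the simple overlap score is a crude stand-in for the alignment tasks the team proposes, not their actual method.

```python
from dataclasses import dataclass, field

# Hypothetical representation of a fable along the dimensions of analogy
# described above. This is an illustrative sketch, not the USC team's code.

@dataclass
class Fable:
    title: str
    relations: set[str] = field(default_factory=set)     # relationships between characters
    traits: set[str] = field(default_factory=set)        # physical qualities and personalities
    events: set[str] = field(default_factory=set)        # key events in the plot
    consequences: set[str] = field(default_factory=set)  # outcomes of those events
    moral: str = ""                                      # the stated lesson

def analogy_score(a: Fable, b: Fable) -> int:
    """Count feature overlaps across each dimension of analogy.

    A real system would need semantic matching (so that "know thyself"
    aligns with "know your worth"); exact string overlap is a crude proxy.
    """
    return (len(a.relations & b.relations)
            + len(a.traits & b.traits)
            + len(a.events & b.events)
            + len(a.consequences & b.consequences)
            + int(a.moral == b.moral))

# Two toy fables encoded by hand (hypothetical annotations).
tortoise = Fable("The Tortoise and the Hare",
                 relations={"rivals"}, traits={"boastful", "persistent"},
                 events={"contest"}, consequences={"upset victory"},
                 moral="slow and steady wins the race")
crow = Fable("The Crow and the Pitcher",
             relations={"seeker-goal"}, traits={"persistent", "clever"},
             events={"obstacle"}, consequences={"goal achieved"},
             moral="necessity is the mother of invention")

print(analogy_score(tortoise, crow))  # overlap only on the "persistent" trait -> 1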

Artificial analogical reasoning: Building on this analysis, Pujara’s team next proposed a sequence of tasks that AI could follow to make these comparisons and analogies.

But their approach wasn’t perfect: as they drew their dimensions of analogy together, the researchers found it harder than they expected to agree on which fables should be paired.

Their struggle suggested that analogical reasoning, despite its great power, is surprisingly subjective: two people can interpret the message of the same fable in very different ways. The team argues that these differences in interpretation ultimately stem from the unique knowledge, skills, and life experiences that each individual has accumulated.

In future studies, they hope to explore these complex nuances of the human mind in more detail and, ultimately, learn how to integrate them into AI.

The study represents promising first steps toward an AI that can match our own remarkable ability to draw analogies between past events and new, unfamiliar scenarios — and even pick up on connections so complex that they escape the sharpest human minds.
