How AI could learn from Aesop’s fables 

Analogies are a powerful element of human reasoning — but they’re surprisingly hard to master.

Humans constantly face situations that seem completely new and uncertain on the surface, from starting a new job to traveling to a new country. As we encounter these scenarios, it can be hard to escape that familiar feeling of “I have no idea what I’m doing.”

Yet although we might not realize it on a conscious level, our brains are well equipped for handling new situations. Humans have a natural ability, reasoning by analogy, to find similarities between novel circumstances and those we’ve encountered in the past. It’s the key to allowing us to adapt and even thrive in strange environments.

Analogical reasoning is a vital skill in many aspects of society: from engineering and medicine to education and advertising. Drawing links between past events and the steps in new projects and procedures allows us to avoid mistakes and reproduce past successes — although our ability to make robust analogies is far from perfect.


Recently, researchers have begun to explore how AI could be trained to follow the same logic, while also avoiding common human reasoning errors.

So far, however, these algorithms have been limited in the complexity of analogies they can make. To understand how far AI can go in following human reasoning, a team of researchers at the University of Southern California (USC) has turned to some of our oldest written wisdom: fables.

The challenge: Analogical reasoning comes in many different forms. It can be descriptive — linking a fire engine and a tomato by their color — or figurative, like comparing a lover to a summer’s day. 

Alternatively, analogies can identify similarities in the relationships between different objects — like the moon revolving around the Earth, and the Earth revolving around the sun. 

Even further, they can involve picking out parallels between events and their causes — like the Wall Street crash of 1929 and the 2008 financial crisis.


Reading fables: Before it can tackle such an immense diversity of analogies, AI will need to get better at drawing these kinds of links with human-level accuracy.

To understand AI’s limitations in more detail, a team of researchers examined Aesop’s fables, short stories originating in ancient Greece, which use analogies to convey simple moral messages.

Even though their characters and settings may be completely different, the morals presented in Aesop’s fables are often very similar: some warn against the dangers of greed, naivety, or laziness, while others encourage traits like friendship, generosity, and respect. 


In their article, released as a preprint, Jay Pujara and colleagues at USC began by dissecting how a human mind approaches the analogies presented in the stories. Altogether, they identified several key factors that readers might pick up on as they make moral judgements and inferences about the stories, and they mapped out links between pairs of fables with similar messages.

According to this framework, Aesop’s fables could be paired according to similar relationships between the stories’ characters, as well as characters’ physical qualities and personality traits. 

The team also considered links between events in the fables, and similarities between their consequences. Finally, they showed how stories with compatible morals could be aligned with each other: for example, the moral message “know thyself” corresponds to the lesson “know your worth.”

Artificial analogical reasoning: Building on this analysis, Pujara’s team next proposed a sequence of tasks that AI could follow to make these comparisons and analogies.

But their approach wasn’t perfect: as they drew their dimensions of analogy together, the researchers found it more difficult than they expected to agree on which fables should be paired together.


Their struggle suggested that analogical reasoning, despite its great power, is surprisingly subjective: two different people could approach the message of a fable in two very different ways. The team suggests that these differences in interpretation ultimately stem from the unique knowledge, skills, and life experiences that each individual has accumulated.

In future studies, they now hope to explore these complex nuances of the human mind in more detail, and ultimately learn how to integrate them with AI. 

The study represents promising first steps towards an AI that can match our own remarkable ability to draw analogies between past events and new, unfamiliar scenarios — and perhaps even pick up on connections so complex that they escape the sharpest human minds.
