AI can now understand animal behavior 

It could improve the lives of animals in zoos, the wild, and the lab.

Instead of analyzing days, weeks, or months of recorded footage for animal behavior studies, researchers can now turn the task over to an open-source algorithm capable of picking up even subtle actions.  

Not only could this save researchers time, it could also improve the lives of animals in zoos, the wild, and the lab.  

Animal behavior: Animals can’t talk to humans (yet), but they can communicate through their behavior — an animal that’s eating less than normal may be experiencing depression or illness, for example.

Being able to analyze animal behavior is hugely important for both people and animals.

By carefully examining the behavior of lab rats, pharmaceutical researchers might learn that a medication has the potential to cause stress or relieve it, for example. Zoos can study the behavior of a new elephant to see if it’s integrating well into the herd.

The challenge: Much of what we can learn from animal behavior requires studying animals over extended periods. To do that, researchers often record footage of the animals and watch it back later, manually flagging noteworthy actions.

This is not only time-consuming but also susceptible to human error and subjectivity — one researcher might flag a behavior that another would ignore, either by mistake or because they interpret it differently.

The algorithm: Researchers at ETH Zurich and the University of Zurich have now developed an algorithm for analyzing animal behavior in recorded footage, and they've released the code online for anyone to access.

The software can distinguish individual animals within a group and identify behaviors linked to stress, fear, curiosity, and more. It can also pick up on subtle changes in an animal’s behavior over time and analyze interactions — such as grooming — between multiple animals.
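Tools like this generally work by tracking an animal's position frame by frame, turning the track into movement features, and then classifying stretches of activity. As a rough illustration of that idea — not the researchers' actual method, and with a made-up feature, window size, and threshold — here is a toy sketch:

```python
# Toy sketch of video-based behavior classification: given per-frame
# (x, y) positions of one tracked animal, compute movement speed over
# sliding windows and label each window. All values are illustrative.

def speeds(positions):
    """Per-frame displacement magnitudes from a list of (x, y) points."""
    return [
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    ]

def label_windows(positions, window=5, threshold=2.0):
    """Label each window of frames 'active' or 'resting' by mean speed."""
    s = speeds(positions)
    labels = []
    for start in range(0, len(s) - window + 1, window):
        mean_speed = sum(s[start:start + window]) / window
        labels.append("active" if mean_speed > threshold else "resting")
    return labels

# Example: an animal that moves steadily, then stays still.
track = [(i * 3.0, 0.0) for i in range(6)] + [(15.0, 0.0)] * 5
print(label_windows(track))  # → ['active', 'resting']
```

A real system replaces the hand-picked speed threshold with a learned classifier over many such features, which is what lets it pick up subtle behaviors like grooming rather than just coarse activity levels.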

One size fits all: The algorithm was trained using footage of mice and macaques, but the creators say it can analyze the behavior of all types of animals.

“Interest has been particularly high among primate researchers, and our technology is already being used by a group that is researching wild chimpanzees in Uganda,” said lead author Markus Marks.

The tech could also be useful for analyzing animal behavior in the lab, according to researcher Mehmet Fatih Yanik.

“Our method can recognize even subtle or rare behavioral changes in research animals, such as signs of stress, anxiety, or discomfort,” he said. “Therefore, it can not only help to improve the quality of animal studies but also reduce the number of animals and the strain on them.”

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].
