Open-source “Davids” are taking on GPT-4 and other Goliaths

Open-source LLMs are a possible antidote to Microsoft and Google’s control of chatbots like GPT-4

Despite the name, OpenAI’s AI is, well, not very open. What’s beneath the hood of some of Big Tech’s best-performing and highest-profile large language models (LLMs) — including OpenAI’s heavyweight chatbot champ GPT-4 and Google’s Bard — is increasingly opaque.

“It doesn’t even release details about its training data and model architecture anymore,” TechTalks’ Ben Dickson points out. That means not only are the specifics of GPT-4’s architecture unknown, but also exactly what data is being used to train it.

Add to that the sheer amount of computing power it takes to develop and run models with untold billions of parameters, and the most likely future looked like one dominated by proprietary AIs, with high costs and barriers to entry.

Instead, though, a new wave of open-source LLMs, available to anyone, is knocking down those barriers. And funnily enough, it was another tech juggernaut, Meta, whose AI helped kickstart the field. After Meta released its model, called LLaMA, other researchers began building on it, Dickson writes. That work eventually led to other open-source LLMs, like Alpaca 7B, which are lightweight enough to run on home equipment.

Various blogs, GitHub developers, and even the machine-learning platform Hugging Face now share, highlight, and rank lists of open-source LLMs available to researchers, hobbyists, and anyone else who wants to dive in.

Hugging Face’s Open LLM Leaderboard ranks open-source LLMs on four benchmarks: a reasoning challenge based on grade-school science questions; a test of “commonsense inference”; a test of accuracy across dozens of subjects, from elementary math to law; and a test of how likely a model is to repeat false information commonly found online.
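
Those benchmarks come from EleutherAI’s open-source LM Evaluation Harness, so anyone can score a model on the same tests themselves. Below is a minimal sketch of what that looks like in Python; the model checkpoint is only an example, and task names and arguments differ between versions of the harness, so treat it as illustrative rather than definitive.

```python
# Minimal sketch: scoring an open model with EleutherAI's LM Evaluation Harness
# (pip install lm-eval). Task names and model arguments have changed between
# releases, so check your installed version's task list before running.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # load the model through Hugging Face transformers
    model_args="pretrained=tiiuae/falcon-7b-instruct",  # example checkpoint, not a recommendation
    tasks=["arc_challenge", "hellaswag", "truthfulqa_mc2"],  # three of the leaderboard's benchmarks
    num_fewshot=0,
)

# Print each task's scores (accuracy and related metrics).
for task, scores in results["results"].items():
    print(task, scores)
```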

The current leader? A model called Falcon-40B-Instruct, made by the UAE’s Technology Innovation Institute. 

The open-source LLM field is “exploding,” Stella Biderman, head of research at EleutherAI — which developed the benchmarks Hugging Face uses in its rankings — told Nature. Having open-source models could be important for both academic and economic reasons, leveling the playing field by preventing the behemoths from being the only game in town.

Open-source LLMs also allow researchers to peek into the guts of the model and try to figure out why the AI may sometimes spit out false, strange, or toxic responses, Brown computer science professor Ellie Pavlick told Nature. (Pavlick also works for Google AI.)

Having more red-teamers and jailbreakers pushing these systems to their limits could help us better understand how they work, why they fail, and where their safety measures and controls fall short.

“One benefit is allowing many people — especially from academia — to work on mitigation strategies,” Pavlick said. “If you have a thousand eyes on it, you’re going to come up with better ways of doing it.”

Having open-source LLMs available also democratizes these powerful tools, letting programmers and researchers with less capital and computing power take advantage of their abilities.

That’s partly because open-source developers have been driving down the amount of money and horsepower needed to make and run effective LLMs. 

The success of small open-source LLMs has shown that you can create a powerful chatbot with “only” a few billion parameters — compared to hundreds of billions for the big models — if it is trained on a large enough dataset, TechTalks’ Dickson writes. Those models can also be fine-tuned with less data and money than first thought, and they can even be run on local devices, like laptops and phones, instead of powerful servers.
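
For a sense of how low the barrier has gotten, here is a minimal sketch of running a small open-source instruction-tuned model locally with Hugging Face’s transformers library. The model name is just one example of an openly released checkpoint; in practice you would pick whatever fits your hardware.

```python
# Minimal local-inference sketch using Hugging Face transformers
# (pip install transformers accelerate). The checkpoint below is one example
# of an openly released model; swap in any open model that fits in memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b-instruct"  # example open checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",       # load weights in the checkpoint's native precision
    device_map="auto",        # place weights on a GPU if available, otherwise CPU
    trust_remote_code=True,   # Falcon's original release shipped custom modeling code
)

prompt = "In one sentence, why do open-source language models matter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation settings are illustrative, not tuned.
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```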

And finally, because anyone can play, open-source LLMs are improving rapidly, and no one needs to pay the big tech companies for access.
