Open-source “Davids” are taking on GPT-4 and other Goliaths

Open-source LLMs are a possible antidote to Microsoft and Google’s control of chatbots like GPT-4

Despite the name, OpenAI’s AI is, well, not very open. What’s beneath the hood of some of Big Tech’s best-performing and highest-profile large language models (LLMs) — including OpenAI’s heavyweight chatbot champ GPT-4 and Google’s Bard — is increasingly opaque.

“It doesn’t even release details about its training data and model architecture anymore,” TechTalks’ Ben Dickson points out. That means not only are the specifics of GPT-4’s architecture unknown, but so is exactly what data was used to train it.

Add to that the sheer amount of computing power it takes to develop and run models with untold billions of parameters, and the most likely future looked like one dominated by proprietary AIs, with high costs and high barriers to entry.

Instead, though, a new wave of open-source LLMs, available to anyone, is knocking down those barriers. And funnily enough, it was another tech juggernaut, Meta, whose AI helped kickstart the field. After Meta released its model, called LLaMA, other researchers began building on it, Dickson writes. That work eventually led to other open-source LLMs, like Alpaca 7B, which are so lightweight they can be run on home equipment.
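
To get a feel for what “running on home equipment” looks like in practice, here is a minimal sketch of loading and prompting a small open-source model with the Hugging Face transformers library; the model name, prompt, and settings are illustrative rather than anything specific to the models mentioned above.

```python
# A minimal sketch, assuming the Hugging Face transformers and accelerate
# libraries are installed. The model name and prompt are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # a ~7B-parameter open model
    device_map="auto",                  # place layers on whatever hardware is available
    torch_dtype="auto",                 # use reduced precision where supported
)

reply = generator(
    "Explain in one sentence why open-source language models matter.",
    max_new_tokens=60,
)
print(reply[0]["generated_text"])
```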

Various blogs, GitHub developers, and even the machine-learning platform Hugging Face now share, highlight, and rank lists of open-source LLMs available to researchers, hobbyists, and anyone else who wants to dive in.

Hugging Face’s Open LLM Leaderboard ranks open-source LLMs on four benchmarks: a reasoning challenge built on grade-school science questions (ARC); a test of “commonsense inference” (HellaSwag); a test of knowledge and accuracy across dozens of subjects (MMLU); and a test of how likely a model is to repeat false information commonly found online (TruthfulQA).
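
Under the hood, most of these benchmarks are multiple-choice, and models are typically scored by comparing how likely they find each answer option. Here is a rough sketch of that idea using the transformers library; the tiny model, question, and options are illustrative and do not come from the leaderboard itself.

```python
# A rough sketch of how multiple-choice LLM benchmarks are commonly scored:
# compare the model's log-likelihood of each answer option and pick the highest.
# The model, question, and options here are illustrative, not from any benchmark.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"   # a tiny open model, small enough for a laptop
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

question = "Q: Which gas do plants absorb for photosynthesis?\nA:"
options = [" carbon dioxide", " oxygen", " nitrogen", " helium"]

def option_logprob(prompt: str, option: str) -> float:
    """Sum of the model's log-probabilities over the tokens of one answer option."""
    ids = tok(prompt + option, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # predictions for tokens 1..N
    targets = ids[0, 1:]
    # Only count positions that belong to the answer option, not the prompt.
    return sum(log_probs[i, targets[i]].item() for i in range(prompt_len - 1, len(targets)))

scores = {opt: option_logprob(question, opt) for opt in options}
print(max(scores, key=scores.get))       # the option the model finds most likely
```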

The current leader? A model called Falcon-40B-Instruct, made by the UAE’s Technology Innovation Institute. 

The open-source LLM field is “exploding,” Stella Biderman, head of research at EleutherAI (which built the evaluation framework Hugging Face uses for its rankings), told Nature. Having open-source models could be important for both academic and economic reasons, leveling the playing field by preventing a few behemoths from being the only game in town.

Open-source LLMs also allow researchers to peek into the guts of the model and try to figure out why the AI may sometimes spit out false, strange, or toxic responses, Brown computer science professor Ellie Pavlick told Nature. (Pavlick also works for Google AI.)

Having more red-teamers and jailbreakers who are pushing these systems to their limits could help us better understand how they work and why they fail, and identify flaws in their safety measures and controls.

“One benefit is allowing many people — especially from academia — to work on mitigation strategies,” Pavlick said. “If you have a thousand eyes on it, you’re going to come up with better ways of doing it.”

Having open-source LLMs available also democratizes these powerful tools, letting programmers and researchers with less capital and computing power take advantage of their abilities.

That’s partly because open-source developers have been driving down the amount of money and horsepower needed to make and run effective LLMs. 

The success of small open-source LLMs has proven that you can create a powerful chatbot with “only” a few billion parameters — compared to hundreds of billions for the big models — if trained on a large enough dataset, TechTalks’ Dickson writes. Those models can also be fine-tuned with less data and money than first thought, and they can even be run on local devices, like laptops and phones, instead of super powerful servers.
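
One reason fine-tuning has gotten cheaper is parameter-efficient methods such as LoRA, which train a small set of added weights instead of the whole model. Below is a hedged sketch of what that setup looks like with Hugging Face’s PEFT library; the base model, module names, and hyperparameters are illustrative assumptions, not details from the article.

```python
# A sketch of parameter-efficient fine-tuning (LoRA) with the PEFT library,
# one technique behind cheap fine-tuning of open-source LLMs. The base model,
# target module names, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct")

lora = LoraConfig(
    r=8,                                  # rank of the small adapter matrices
    lora_alpha=16,                        # scaling factor for the adapter updates
    target_modules=["query_key_value"],   # attention projections (architecture-specific)
    lora_dropout=0.05,
    task_type=TaskType.CAUSAL_LM,
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()        # typically well under 1% of the full model
# From here, `model` can be trained with the usual transformers Trainer on a
# modest instruction dataset -- far less data and compute than full fine-tuning.
```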

And finally, because anyone can play, the open-source LLM ecosystem is innovating rapidly, with no one needing to pay the big tech companies for access.
