The most impressive language generator yet

OpenAI’s GPT-3 language generator is lighting up the internet.

OpenAI’s new machine learning language generator, GPT-3, is currently an internet darling. With a catholic knowledge of the English canon, drawn from the vast corners of the internet, the AI can generate writing that, at times, reads as well as anything a human could compose.

It’s a considerable step beyond AI’s usual dabbling in language, and perhaps a hint that machine learning could soon assume powerful roles in writing, much like those it is already assuming in medicine and robotics.

GPT-3 is the most powerful language generator yet made, according to MIT’s Technology Review. Inside any neural network’s black box are parameters, the values the model tunes as it learns from its training data: GPT-2, released last year, had a massive 1.5 billion parameters; GPT-3, by comparison, has 175 billion, one of those hard-to-comprehend numbers.
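For the curious, here’s what “parameters” means in practice: a minimal Python sketch, using the Hugging Face transformers library, that counts the learned weights in GPT-2’s smallest release (an imperfect stand-in, since GPT-3’s weights aren’t publicly available):

```python
# Parameters are the learned weights inside the network. GPT-3's weights
# aren't public, so we count those of the freely downloadable small GPT-2.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # the 124M-parameter GPT-2
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # ~124 million; GPT-3 has roughly 1,400x more
```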

Sucking its training data, Charybdis-like, into its maw, GPT-3 can now spit out, based on all it has “read,” whatever words should follow a given prompt.

Styles and Shortcomings

GPT-3 uses its vast training data to make a statistical prediction about which words, and in what order, will best complete a given prompt. Some of the results have been deeply impressive, despite the ghostly quality AI writing always seems to carry.
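To see what prompt completion looks like in code, here’s a minimal sketch in Python, again leaning on the openly available GPT-2 via the Hugging Face transformers library, since GPT-3 itself is reachable only through OpenAI’s invite-only API:

```python
# A toy demonstration of prompt completion: the model repeatedly predicts
# a likely next word given everything written so far.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The secret to good writing is"
result = generator(prompt, max_length=50, num_return_sequences=1)
print(result[0]["generated_text"])
```

GPT-3 does the same trick, just with a vastly larger model and vastly more training text.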

If given a prompt with a sci-fi bent — say, the opening line to George Orwell’s 1984, from the Guardian’s Alex Hern — it will return a suitably sci-fi result.

“It was a bright cold day in April, and the clocks were striking thirteen,” Orwell begins.

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run,” the language generator continued. “I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

It’s a hair disjointed, but taken as a whole, it does feel like a particularly promising freshman comp assignment.

People playing with GPT-3 have used it to produce poetry and prose in the styles of particular authors, compose music, and write code. It’s even written an article about itself. (Your correspondent will fight an AI to keep his job.)

Little wonder that the Twitter cage is rattling.

GPT-3 is not a perfect mimic, however, nor an artificial general intelligence. Kevin Lacker’s Turing test of GPT-3 reveals some of its shortcomings.

The language generator has little trouble answering trivia-style questions; it knows who won the 1995 World Series, how many eyes a giraffe has, and the human life expectancy in the United States.

It proved pretty good at answering common-sense questions, too, knowing that an elephant is heavier than a mouse, and a pop can heavier than a paperclip. But give it a prompt that’s a bit odd — Lacker asked, is a pencil heavier than a toaster? — and you can trip it up.

(GPT-3’s response: the pencil is heavier. This is likely because the literature on comparing pencils and toasters is pretty … light.)

Interestingly, it won’t admit when it’s wrong, and it won’t call out a question that’s nonsensical. When asked how to sporgle a morgle, it replied “with a sporgle” (uh, duh).

But even OpenAI’s co-founder and CEO, Sam Altman, tried to temper things a bit.

“The GPT-3 hype is way too much,” he tweeted. “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

Still, while it’s true that GPT-3 is not an artificial general intelligence — or even intelligent, in, like, a philosophical sense (“Music is the most advanced form of mathematics” is a sophomoric, one-blunt insight, GPT-3) — that does not mean it’s as simple as a chatbot toy.

With the ability to write, with varying degrees of convincingness, everything from fiction to music, poetry to technical information, journalism to code, GPT-3 could prove to be a powerful tool. Any task that requires the written word could be augmented — or even automated — by a high-quality language generator.

And GPT-3 might just be the lede.
