Sam Altman on the future of AI

In a session called "Technology in a Turbulent World," the OpenAI CEO explained where he sees AI heading.

Sam Altman has a sign above his desk that reads: “No-one knows what happens next.”

But as the CEO of OpenAI, the creator of ChatGPT, he’s better placed than most to predict where artificial intelligence is heading – and to address what it can and can’t do.

Here are some of Altman’s key quotes from the session ‘Technology in a Turbulent World’.

Productivity gains will grow as people use AI

“Even with its very limited current capability and its very deep flaws, people are finding ways to use [this tool] for great productivity gains or other gains and understand the limitations.

“People understand tools and the limitations of tools more than we often give them credit for. People have found ways to make ChatGPT super useful to them and understand what not to use it for, for the most part.”

“AI has been somewhat demystified because people really use it now. And that’s always the best way to pull the world forward with a new technology.”

Sam Altman, CEO, OpenAI

AI will be able to explain its reasoning to us

Part of being able to trust technology involves understanding how it works. But Altman says truly understanding how generative AI operates will be “a little different” from what people think now.

“I can’t look in your brain to understand why you’re thinking what you’re thinking. But I can ask you to explain your reasoning and decide if that sounds reasonable to me or not.

“I think our AI systems will also be able to do the same thing. They’ll be able to explain to us in natural language the steps from A to B, and we can decide whether we think those are good steps, even if we’re not looking into it to see each connection.”

AI will not replace our human care for each other

When IBM’s chess computer Deep Blue beat World Champion Garry Kasparov in 1997, commentators said it would be the end of chess and that no-one would bother to watch or play again because a computer had won.

But “chess has never been more popular than it is now”, said Altman, and “almost no-one watches two AIs play each other; we’re very interested in what humans do”.

“When I read a book that I love, the first thing I do when I finish is find out everything about the author’s life; I want to feel some connection to that person that made this thing that resonated with me.”

“Humans know what other humans want. Humans are going to have better tools. We’ve had better tools before, but we’re still very focused on each other.”

Sam Altman, CEO, OpenAI

Humans will deal more with ideas

While AI is widely expected to bring both job growth and job losses, Altman predicts it will change certain roles by giving people more space to come up with ideas and to curate decisions.

“When I think about my job, I’m certainly not a great AI researcher. My role is to figure out what we’re going to do, think about that and then work with other people to coordinate and make it happen.

“I think everyone’s job will look a little bit more like that. We will all operate at a little bit higher of a level of abstraction. We will all have access to a lot more capability. We’ll still make decisions. They may trend more towards curation over time, but we’ll make decisions about what should happen in the world.”

There’s room for optimism on AI values alignment

“The technological direction we’ve been trying to push this in is one we believe we can make safe,” said Altman.

Iterative deployment means that society can get used to the technology, and that “our institutions have time to have these discussions to figure out how to regulate this, how to put some guardrails in place”.

Altman said there had been “massive progress” between GPT-3 and GPT-4 in how well the model can align itself to a set of values.

But the harder question is: “Who gets to decide what those values are and what the defaults are, what the bounds are? How does it work in this country versus that country? What am I allowed to do with it or not? That’s a big societal question.”

“From the technological approach, there’s room for optimism,” he said, adding that the current alignment techniques would not scale to much more powerful systems, so “we’re going to need to invent new things”.

He welcomed the scrutiny AI technology was receiving.

“I think it’s good that we and others are being held to a high standard. We can draw on lessons from the past about how technology has been made to be safe and how different stakeholders have handled negotiations about what safe means.”

Sam Altman, CEO, OpenAI

And he said it was the tech industry’s responsibility to get input from society on decisions such as what the values and safety thresholds should be, so that the benefits outweigh the risks.

“I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us… We have our own nervousness, but we believe that we can manage through it and the only way to do that is to put the technology in the hands of people.

“Let society and the technology co-evolve and sort of step by step with a very tight feedback loop and course correction, build these systems that deliver tremendous value while meeting safety requirements.”

New economic models for content will develop

Altman drew a distinction between LLMs displaying content and training on it.

“When a user says, ‘Hey, ChatGPT, what happened at Davos today?’ we would like to display content, link out to brands of places like the New York Times or the Wall Street Journal or any other great publication and say, ‘Here’s what happened today’ and then we’d like to pay for that. We’d like to drive traffic for that,” said Altman, adding it’s not a priority to train models on that data, just display it.

In the future, LLMs will be able to take in smaller amounts of higher-quality data during training, think harder about it, and learn more from it.

When content is used for training, Altman said we need new economic models that would compensate content owners.

“If we’re going to teach someone else physics using your textbook and using your lesson plans, we’d like to find a way for you to get paid for that. If you teach our models, I’d love to find new models for you to get paid based off the success of that… The current conversation is focused a little bit at the wrong level, and I think what it means to train these models is going to change a lot in the next few years.”

Panellists in the session also included Marc Benioff, Chair and CEO of Salesforce; Julie Sweet, Chair and CEO of Accenture; Jeremy Hunt, UK Chancellor of the Exchequer; and Albert Bourla, CEO of Pfizer.

This article is republished from the World Economic Forum under a Creative Commons license. Read the original article.
