Google AI exec: “The mistake would be thinking this is hype.”

A live conversation with Steven Johnson, bestselling author and editorial director of Google Labs.

On May 29, 2025, Freethink Media brought together visionary technologists, founders, and futurists in San Francisco to explore how breakthrough technologies — from AI to clean energy to synthetic biology — can help reinvent America for the 21st century. Curated by Peter Leyden in partnership with Freethink, “The Great Progression Begins Now” featured bold ideas, dynamic conversations, and a celebration of the innovators shaping what comes next. It was a night to frame the future — and to meet the people building it.

Steven Johnson, bestselling author and editorial director of Google Labs, was one of three remarkable guests to take the stage that night.

During his conversation with Leyden, Johnson talked about recognizing early on that large language models represented something genuinely transformative. “The mistake would be thinking this is hype,” he said, explaining how his initial reporting on OpenAI’s GPT-3 eventually led him to help build NotebookLM, a tool that helps people explore complex ideas by grounding AI in curated research. His favorite question to ask it? “What’s the most surprising fact in this material?” Sometimes, he says, it shows him something he never would have found on his own.


The following transcript has been lightly edited for length and readability:

Peter Leyden: The author of literally 15 books, many of them New York Times best-sellers. He’s a writer and author, but he is now the editorial director of Google Labs. Google Labs is an important piece of Google’s whole AI ecosystem, and they came up with this thing called NotebookLM, which some of you may have heard about. I hear a few whistles. He did a cover story for the New York Times Sunday Magazine in the spring of ’22. ChatGPT, built on GPT-3.5, came out in November of ’22, and he basically said what?

Steven Johnson: In the summer of ’21, I’d seen an interview with Greg Brockman from OpenAI in which he kind of demoed some of GPT-3, which was the model before ChatGPT. Brockman said this interesting thing in the interview: OpenAI was shifting to this hybrid structure that was going to be part for-profit and part non-profit, which turned out to be kind of a big deal many years later, but at the time, I was in the audience wearing my journalist hat, and I was like, “Oh, nobody knows about this. Everybody knows about GPT-3, but nobody knows about this interesting new hybrid structure.”

And so I went to the Times, and I said, “Hey, I want to write a piece, not really about the AI side, but about the org chart basically over at OpenAI,” and for some reason they said, “Go ahead and do that.”

So I got access to GPT-3, which wasn’t publicly available at that point in October of ’21. I was like, “I at least have to know how this thing works.” I sat down with it, and I remember it so vividly: it was like the first time I saw the web in ’93, ’94. Very similar. Got access to GPT-3 and was like, “Oh shit. This is so much bigger than I realized. I’ll mention this org structure thing in my piece, but I’m going to write a 10,000-word piece about how computers are mastering language in a true, almost innate way, and that is going to change everything about computing, and a whole host of things are going to become possible.”

And yes, there are problems. Yes, it hallucinates. Yes, there are alignment issues. Yes, there are safety issues. There are a whole host of things to figure out, but what this is not is hype. What was clear to me is that we have to take this seriously. The one mistake you can’t make is to just say, “Oh, that’s just stochastic parrots. That’s just autocomplete on steroids. It’s not real.”

“It was actually kind of a low point in my career in a way.”

Steven Johnson

So, I wrote this piece, which came out in April of 2022 and basically made that argument, and it was literally the most controversial piece I’ve ever written in my life. I’m very proud of that piece, but I did not enjoy a second of it being out in the world because Twitter was filled with people being like, “Oh, Steven Johnson fell for the hype. So naive. He just doesn’t understand this stuff. It’s not real.”

It was actually kind of a low point in my career, in a way, but there were two people at Google Labs, which had been newly kind of respawned inside of Google: Clay Bavor, who’s since left, and Josh Woodward, who now runs Labs and the Gemini app.

They’d been reading my stuff for years. They knew about my obsession with tools for thought and augmenting human intelligence and all that stuff. They’d read this article, and apparently, Clay turned to Josh and was like, “Maybe we could get Steven to come and give an inspirational speech here at Labs.” And Josh was like, “Maybe we could call up Steven and say, ‘Would you like to build your dream writing and research tool that you’ve been kind of chasing your whole life with a bunch of really smart people here in Labs using our language models?'”

And so they just called me out of the blue and said, “Hey, long-time reader, first-time caller or whatever. Would this be interesting?” And because I’m not an idiot, I was like, “Yes, I would like to do that.” And I don’t know. Here we are. Thanks to an amazing cast of characters who helped out.

Two men sit on stage having a discussion in front of an audience; a screen displays “Steven Johnson, Editorial Director of Google Labs.”
Kathleen Sheffer Photography

Leyden: Explain what it does.

Johnson: The core idea is that you would not just be having a chat with a general-purpose model, but you would always be “grounding” the model, in the language that we tried to develop, in whatever documents you provided. And so you weren’t just saying, “Hey, you know a little bit about everything, ChatGPT.” You would say, “Hey, I’m working on this particular project.”

For me, it was like, “I’m an author researching this book. Here’s my research material. I want you, the AI, to answer all my questions based on this material. I want you to stick to the facts in this material.”

And over time, we built tools, like citations — it gives you an answer, and it actually points you directly to the passage that it used in answering the question so that you can trust, you can fact check, and you’re always one step away from the actual human knowledge, the original source of that knowledge. You can always read the original wherever you are in NotebookLM. It’s not just back in the training data in some amorphous way.
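
What Johnson calls grounding follows a pattern common to retrieval-style tools: the user’s own documents travel with the question, and the model is asked to answer only from those documents and to cite the passage it used. Here is a minimal sketch of that pattern in Python; the function name, prompt wording, and sample sources are hypothetical illustrations, not NotebookLM’s actual implementation.

```python
# Hypothetical sketch of grounded prompting with citations.
# This is not NotebookLM's implementation, just an illustration of the idea
# Johnson describes: answer only from supplied sources, citing the passage used.

def build_grounded_prompt(question: str, sources: dict[str, str]) -> str:
    """Assemble a prompt that restricts the model to the user's own documents."""
    numbered = [
        f"[{i}] {title}\n{text}"
        for i, (title, text) in enumerate(sources.items(), start=1)
    ]
    return (
        "Answer the question using ONLY the numbered sources below. "
        "Cite the source number for every claim, e.g. [2]. "
        "If the sources do not contain the answer, say so.\n\n"
        "SOURCES:\n" + "\n\n".join(numbered) + "\n\n"
        f"QUESTION: {question}"
    )

# Example with made-up research notes; the resulting prompt string would then
# be sent to whatever language model the tool is built on.
sources = {
    "Franklin letter, 1783": "A scanned letter thanking Franklin for his help in Paris...",
    "Newspaper clipping, 1784": "A report on a balloon flight witnessed in the city...",
}
print(build_grounded_prompt("When was the balloon flight reported?", sources))
```

In a real system, the bracketed source numbers in the model’s answer would be linked back to the original passages, which is the citation behavior Johnson describes.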

And over time, we began to think of it as a tool for understanding things, really. We don’t have a lot of tools for making sense of things — we have tools for writing, and we have tools for making slide decks and things like that. But everything in Notebook is designed to help you understand and explore the ideas and material in whatever you’ve loaded in there, whether you’re a student, an author, or a knowledge worker.

“This is like one of those crazy AI moments.”

Steven Johnson

Over time, we started thinking, “Hey, maybe we can transform the information that you’re trying to understand into different formats.” And that’s when, about eight months ago, a team inside of Labs that was working on a separate project — some of whom I think may be here — came up with this idea: “What if you could turn your sources into a simulated podcast conversation between two AI hosts?”

And that became this thing, Audio Overviews, which is the most viral thing I’ve ever been involved with in my life. There was a whole skit on “Saturday Night Live” making fun of it. That’s how crazy it got. For those of you who’ve heard it, there’s an immediate feeling of, “Wow, that’s amazing.” It’s a bit like hearing Siri for the first time. You’re like, “That really sounds like a human,” except it’s two people in conversation.

One funny story: a few months after we launched it, we added this feature where you can kind of call into the show. They talk for about 15 minutes normally. Their whole program is to pull out the most interesting ideas from your material, whatever that material is, and explain it to you — turn it into an interesting story or an interesting set of anecdotes. “Make anything interesting” is kind of one of the slogans we have.

We added this interactive feature, and — this is like one of those crazy AI moments — it turned out that when you called in to the show to ask a question, you could kind of interrupt them and say, “Hey, I’d like to hear more about Thomas Edison, actually.” They’re designed to kind of be like, “Oh, sure, we’ll answer that question for you.” But it turned out that when you did that in the first version, the hosts sounded a little bit irritated that they’d been interrupted. The tone was like, “Yeah, I mean, we were going to get to that later in the show, but I guess we could talk about it now.”

So we literally had on our roadmap of bug fixes what we called “friendliness tuning,” just to make the hosts a little less like jerks. It’s a weird world we’re in.

Two men stand at a table as one signs a paper; a "Big Think" event backdrop is in the background.
Kathleen Sheffer Photography

Leyden: So you write books. You’ve designed this tool to write books. When I was talking to him before, I said, “Well, how much more productive would it make you?” And you said, “Well, it’s about 10 to 100 times more productive in writing a book.” Explain to people what you meant by that. 100 times more productive.

Johnson: I’ve written 15,000 books over the last three years since I’ve been at Google. It’s incredible. [Laughs] That’s not entirely true. What I was saying is that there are certain parts of the workflow of being a nonfiction author, in my case, where there are things that Notebook does that are genuinely 10 to 100 times faster.

I’ll give you one example.

I often write historical nonfiction, and one of the things you do when you’re writing historical nonfiction is manage a timeline. There are a lot of events happening. The last book that I wrote, “The Infernal Machine,” had multiple threaded narratives spanning 60 years. Just keeping track of when each event happened in the sequence and where everyone was on the timeline is very hard. I would spend hours and hours and hours — because that book was built out of literally 500 separate newspaper articles that were part of the research — just trying to get the timeline right.

“That’s the worst part of the work that you have to do, right? That’s not the intellectual creative part of it.”

Steven Johnson

With Notebook now — in fact, I was just doing this today — I have an archive of old letters that were written to Ben Franklin in French in the 1780s that are just raw scans of the handwriting, some of which have dates. They have not been OCRed, so it’s just a grainy old image of French handwriting.

I can put those into a Notebook, and I can say, “Organize these chronologically, summarize each of the notes, and explain what it is in English.” And it’ll do it in 10 seconds. Seriously. At an incredible level of accuracy, with citations, all that kind of stuff. So you’re trying to write your biography of Ben Franklin, and you’re like, “I have all this stuff in French that I don’t understand. It’s all out of order, and I want to know what’s happening in there.” How long would that take you to do? Like a week? Infinity if you don’t speak French?

It’s the worst part of the work that you have to do, right? That’s not the intellectual creative part of it. That’s just understanding what is there in the core text part of the work, and so Notebook just dramatically increases the speed with which I can do things like that.

Leyden: How applicable do you think what you’re doing there is to all knowledge workers? And do you think we can start calculating, like, “Anyone in knowledge work will be twice as productive or three times”?

Johnson: To the extent that your work as a knowledge worker involves synthesizing information that is scattered across multiple documents — 300 open tabs like we all have on our browsers — and trying to find that information and remember that information, that’s a huge part of what knowledge work is in whatever field you’re in. You’re kind of moving bits around in your mind and through these documents. I just think that these tools, if we have the right UI for them, which is what we’re really trying to do with Notebook as well — there’s underlying AI, but also there’s the surface that you use to interact with it, and chat is only a part of that — that just feels like a very general-purpose kind of problem to solve.

In the early days of Notebook, we would often get kind of like, “Who is it for? What’s the target? Is this for students? Is this for like nonfiction authors? Like pick a lane.” And we were always like, “We don’t really want to pick a lane. It feels like a pretty broad avenue. We think it’s a pretty big market.”

A group of people stand and talk near large arched windows overlooking water; one man in front holds a drink and smiles.
Kathleen Sheffer Photography

Leyden: You’ve written many great books, but one was “Where Good Ideas Come From,” about how we innovate. What is the impact of AI on innovation, particularly if we’re heading into 25 years of crazy big challenges?

Johnson: One of the things that was important in “Good Ideas” is connecting across disciplines. The whole history of innovation is about that kind of cross-pollination between fields. That’s one of the places where I’ve spent a lot of time in my interactions with Notebook, trying to figure out how capable it is of making those kinds of connections.

A big thing that I often ask, which I think is just amazing that we can ask a computer this, is I’ll load up a bunch of information, and I’ll say, “What is the most surprising fact in this information? What’s the thing that, knowing what you know about me, will surprise me? Will expand my horizons in some way or make some new conceptual link that I wouldn’t have been able to make without the AI kind of introducing me to this idea?” I see very promising signs in that way, so I’m starting to use it more as a creative augmentation tool and not just like: “organize my timeline for me.”

I have a new Notebook called “The Next Book,” and I just fill this with documents where I’m kind of like, “Hey, maybe I’ll write about the Gold Rush,” or “Maybe I’ll write about the anti-nukes movement in the 1960s” or “Maybe I’ll write about this.” It’s just filled with this stuff, and I’ll just sit down and be like, “So, if I did write about the anti-nukes movement in the ’60s and ’70s, what would be the opening scene?” And it’ll be like, “Well it could be this or could be this or could be this.”

These tools are amazing at generating possibilities and spinning out new permutations of it. It’s not like it’s writing the opening scene for me yet, but I’m kind of like, “Oh that’s interesting. Let’s dive into that a little more.” I work as a kind of duet with this technology in a way that was just unimaginable, you know, two years ago. That’s what I think is encouraging.

Leyden: Is there going to be a point where you’re going to say, “Hey, write me a book in the style of Steven Johnson, and forget about talking to Steven”? Where do you go with that?

Johnson: Yeah, I mean, if you can generate a podcast now that’s quite convincing — sounds like it might be on NPR or whatever — with one click in Notebook, is there a future where there’s a one-click “Write me a Steven Johnson book on this topic. I’m interested. I like his style. Just figure it out”?

I think, crazily enough, that is an imaginable future now.

If we do get to that future, the fate of nonfiction authors like myself will be the least of our problems. It will mean that we will have invented something really profoundly close to like super intelligence, not that I’m super intelligent. It would be an amazing kind of leap to be able to build an entire, you know, 300-page book of historical research and all that kind of stuff autonomously in 10 minutes.

If it has that capability, then all of society will have to be restructured with that capability inside of it. I don’t worry too much about me in that scenario. I worry more about the rest of us.

