AI is already in the classroom. It’s time colleges caught up.

"The rise of the internet brought about similar fears, yet it ultimately made learning richer and more accessible."
Over the past three years, large language models like ChatGPT have gone from curiosities to everyday tools on college campuses around the world. Some professors embrace them; others ban them. Many institutions fall somewhere in between, setting vague policies or relying on detection tools that have repeatedly proven unreliable. AI may be the most significant innovation in education since the personal computer, and although this revolution differs from earlier ones in important ways, it echoes past battles over calculators and computers — battles that were ultimately lost by those who tried to resist.

Technology doesn’t wait for policy, and as a current undergraduate student, I believe that the sooner schools catch up, the better we can use these tools to improve learning rather than undermine it. Still, an important question remains: Is it fair to compare AI with past innovations like calculators and the early internet, or is this a fundamentally different challenge?

AI is not the first technology to disrupt higher education. In the 1970s, the pocket calculator triggered a wave of backlash in schools: Teachers warned that it would weaken students’ arithmetic skills, and some schools tried to ban calculators altogether. But others saw the potential: If students no longer had to do long division by hand, they could focus on bigger-picture math problems. Eventually, calculators became standard classroom tools, shifting students’ attention from manual computation to understanding formulas and solving higher-level, conceptual problems. Studies show that calculators can improve conceptual understanding when used well.

This same cycle repeated with personal computers and the early internet. In the 1990s and early 2000s, critics feared that spell-check and copy-paste would erode writing skills, and that search engines like Google and communal encyclopedias like Wikipedia would replace real research. And yes, some students misused those tools. But once schools embraced the technology and taught students how to use it well, evaluate sources, and cite correctly, their academic work improved. Students were no longer limited to the outdated books in their campus libraries; suddenly they had access to a wealth of books, articles, and datasets in multiple languages, at any time.

This cycle of resistance followed by delayed acceptance recurs in large institutions, especially those with long-standing educational traditions, such as Columbia University. Universities responsible for educating millions of Americans cannot afford to change course without serious caution. Even when faculty are eager to adapt, such as by updating policies on AI use in student essays, their efforts are often delayed by complex bureaucracy and layered approval processes. These systems are designed to ensure thoughtful decision-making, but they struggle to keep pace with rapid technological change: A 2024 global survey by the Digital Education Council found that 86% of students already use AI in their studies, underscoring how quickly and widely the technology has been adopted across disciplines.

However, it’s clear that the AI revolution is broader and more complex than past technological shifts. Instead of simply speeding up our work, AI can perform tasks that once required deep thinking and creativity, such as writing code or entire essays. Steve Jobs famously called the personal computer a “bicycle for the mind” in 1981, believing it could enhance human intelligence, and he saw education as the area where it would have the greatest impact.

But if personal computers were a bicycle for the mind, today’s AI tools are more like self-driving race cars: They don’t just help us think faster — they can take over the wheel entirely.

The debate about integrating AI into education mirrors those earlier debates, but it feels louder and more urgent. ChatGPT can help students draft essays, debug code, explain complex concepts, or practice new languages. Its capabilities dwarf those ushered in by calculators and the internet. That power makes it easy to misuse, but banning it outright, as many universities have attempted, is a battle lost before it even started. Telling students not to use a tool that is nearly undetectable and freely available won’t stop its use; it will only push it underground and widen the gap between students who know how to use it effectively and those who don’t.

Moreover, the AI detectors that many universities rely on as a first line of defense have proven deeply flawed; some have flagged writing from international students, for example, because their sentence structure tends to be simpler. The current tension reveals a deeper problem. An experienced English professor probably doesn’t need software to spot AI-generated essays: The tone, structure, and sudden leap in fluency are often glaring to a trained reader. But without empirical proof, there is no ethical way to penalize the student. Intuition, no matter how informed, cannot serve as formal evidence. This leaves educators in an impossible position: They can either ignore the changes they notice or act on suspicion using imperfect AI detectors.

At the same time, my anecdotal experience suggests a strange double standard is emerging. In one of my classes, for example, the professor explicitly banned the use of AI but told us the assignment would be made harder because he assumed we’d use it anyway. On the other hand, some students who are unfamiliar with AI or choose not to use it are falling behind because the expectations for writing and coding have quietly shifted. Rather than fostering an environment of uncertainty and mistrust, universities should redirect their energy toward adaptation. That means adjusting assignments, rethinking evaluation, and integrating AI use transparently so the focus remains on learning, not on detection.

Professors can start by building trust and treating students as partners rather than suspects. Many serious students still want to develop strong writing, communication, and critical thinking skills, especially the ability to read and write at an academic level. Instructors can tap into that motivation by designing tasks that AI can assist with but not complete on its own. They can ask students to compare chatbot drafts with their own revisions, explain how they used AI in their writing process, reflect on the strengths and weaknesses of AI-generated responses, or even participate in short oral exams. These steps can make AI use more productive and, most importantly, keep the focus where it belongs: on human learning. While it’s true that AI may have a greater impact than tools of the past, it can still be incorporated into the learning process if it is approached with care, creativity, and a clear purpose.

Artificial intelligence is here to stay, and it will only grow more powerful with time. Detecting it will become harder, its capabilities will expand, and its presence will become even more embedded in student life. But this shouldn’t be seen as a threat or the end of education as we know it. We saw how the rise of the internet brought about similar fears, yet it ultimately made learning richer and more accessible. Resistance to change is part of human nature, and large institutions like universities often move slowly. But whether they choose to lift AI restrictions or not, one thing is clear: The current “in-between” approach is failing both students and faculty. It’s time for schools to stop pretending this technology can be “defeated” and instead begin building an education system that works with AI, not against it.

This article was reprinted with permission of Big Think, where it was originally published.
