New AI detection tool measures how “surprising” word choices are

The system rarely flags human writing as AI generated, but is “rarely” good enough for use in schools?

A new AI detection tool is reportedly far less likely to falsely flag original human writing as being AI generated — but is “less likely” good enough for use in schools?

The challenge: It’s relatively easy these days for teachers to figure out if a student is guilty of plagiarism — they can just drop a sentence or two from a suspicious essay into Google to see if the text was pulled from somewhere on the internet.

However, it’s far harder for them to figure out if the student “outsourced” their writing to a large language model (LLM), like ChatGPT — because these AIs generate brand-new content on demand, the text they produce isn’t going to show up in a web search.

“These tools sometimes suggest that human-written content was generated by AI.”


Some groups have developed AI detection tools they claim can tell whether a human or an AI wrote something, but ChatGPT developer OpenAI says they aren’t reliable enough, especially given that a false accusation of cheating could have lasting consequences for a student.

“One of our key findings was that these tools sometimes suggest that human-written content was generated by AI,” the company wrote, adding that a detector it trained itself gave an AI credit for writing Shakespeare and the Declaration of Independence.

What’s new? A team led by researchers at the University of Maryland (UM) has now developed a new AI detection tool, called Binoculars, that it says accurately identified more than 90% of writing samples that were AI generated.

It also had a false-positive rate of just 0.01%, meaning that for every 10,000 human-written samples it analyzed, only one was incorrectly flagged as being written by AI.

Over the course of an academic year, it would incorrectly flag hundreds of student essays as AI creations.

For comparison, software company Turnitin’s AI detection tool — which was previously used by Vanderbilt, Michigan State, and several other major universities — has a false-positive rate of 1%, meaning one out of every 100 human-written essays is wrongly flagged as AI generated.

Vanderbilt stopped using the tool because this false positive rate was high enough that, over the course of an academic year, it would incorrectly flag hundreds of student essays as AI creations.

How it works: Binoculars looks at a piece of writing through two “lenses.” 

The first is an “Observer” LLM. It’s trained to measure “perplexity,” or how unpredictable a text is. LLMs are trained on vast amounts of published material, and they generate text by predicting what word is most likely to come next in a sentence, so the text they write tends to have lower perplexity scores than human-written content.

The second is a “Performer” LLM. It predicts what the next word should be at every point in the text, based on the words that came before it — essentially doing what an AI like ChatGPT would do. The Observer AI then measures the perplexity of the Performer’s choices. 

If there’s little difference between the two scores, Binoculars predicts that the text was likely written by AI.
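In essence, the tool compares two numbers: how surprising the text is to the Observer, and how surprising the Performer’s own predictions are to the Observer. Here’s a minimal numeric sketch of that comparison using toy next-word probability distributions — the tiny vocabulary, the probabilities, and the scoring function are illustrative assumptions, not the actual models or formula from the research:

```python
import numpy as np

def log_perplexity(token_probs):
    # Average surprise (negative log-probability) the Observer assigns
    # to the words that actually appear in the text.
    return -np.mean(np.log(token_probs))

def cross_perplexity(performer_dists, observer_dists):
    # For each position, how surprised the Observer is, on average,
    # by the Performer's next-word distribution (a cross-entropy).
    scores = [-np.sum(p * np.log(o))
              for p, o in zip(performer_dists, observer_dists)]
    return np.mean(scores)

def binoculars_score(observed_token_probs, performer_dists, observer_dists):
    # A low ratio means the text is barely more surprising than what
    # an LLM would write itself -> likely AI generated.
    return (log_perplexity(observed_token_probs)
            / cross_perplexity(performer_dists, observer_dists))

# Toy example: a 3-word vocabulary where both models strongly favor word 0.
dists = [np.array([0.8, 0.1, 0.1])] * 4

ai_like = [0.8, 0.8, 0.8, 0.8]      # text always picks the predicted word
human_like = [0.1, 0.8, 0.1, 0.1]   # text often picks unlikely words

print(binoculars_score(ai_like, dists, dists))     # lower score: AI-like
print(binoculars_score(human_like, dists, dists))  # higher score: human-like
```

In the real system, both quantities would come from full LLM next-token probabilities over an actual vocabulary, with a threshold tuned on labeled samples deciding whether a text gets flagged.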

Schools might be hesitant to use an AI detection tool that could deliver any false positives.

Looking ahead: Binoculars worked on a variety of sample types, including news articles and student essays, and on text generated by several AIs, including OpenAI’s ChatGPT and Meta’s LLaMA-2-7B, which could make it more useful than other, narrower AI detection tools.

The research still needs to be peer reviewed, but even if it holds up under scrutiny, schools might be hesitant to use it due to the risk of any false positives, even if that risk is far lower than with currently available AI detection tools. There’s also a question of how long such a tool would remain effective, as AI models could be tuned to write less predictably and evade this kind of checker.

Binoculars researcher Abhimanyu Hans told Business Insider his own team is “conflicted” about whether their system should be used by schools, but they do believe it could be valuable for other applications, such as detecting AI-written content on websites and social media platforms.

As for where that leaves teachers, their only option may be to rework their curriculums to accommodate LLMs, rather than trying to punish kids for using the powerful new tools.

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].
