
Spot the bot: uncovering AI-generated text

AI-powered text generators — including ChatGPT, Bing, and Bard — are revolutionizing the way we create written content. People use these tools for brainstorming, outlining, and streamlining their writing process. However, not everyone is using these tools responsibly. Some companies may turn to these tools to replace human writers, and people may try to pass off AI-written materials as their own work. 

As more people use generative AI for writing, a critical question has emerged: Is it possible to detect AI-written content? 

This article examines some of the methods used to detect AI-generated writing and why being able to identify human versus machine-authored content matters.

Why identifying the author is important

In many cases, you might not care whether a human or an AI wrote a story; the content and entertainment value may be all that matters. In other situations, however, knowing who (or what) created the content can be essential. Here are a handful of examples.

Depending on your industry, it’s not difficult to imagine needing to ask if something was written by an applicant, a colleague, or an employee — or by ChatGPT.


How good are automated detection tools today? 

A number of tools on the market claim to detect whether or not something has been written by AI.

These tools, and others like them, are already being used in schools to determine whether or not a student has cheated. Likewise, employers are turning to these tools to verify that their employees wrote the materials they submit, and freelance clients may use them to confirm who wrote the work they're paying for.

The problem is, all of the existing AI-detection tools are problematic at best. The Washington Post and TechCrunch both ran tests on AI text detectors. The results suggested that the detectors are unreliable: in some cases they fail to identify AI-generated text, and in others they incorrectly flag human-written text as AI-generated.

In its test, the Washington Post reviewed the software Turnitin, which has been deployed in thousands of schools all across the US. The paper found that the software misidentified the likely author more than half the time, often enough to make its results questionable at best. Most troubling, the software frequently flagged legitimate student writing as likely to have been written by ChatGPT. In real life, such conclusions could lead to terrible consequences.

TechCrunch had similar results when running its own tests. On some styles of writing, the detectors were reasonably accurate, but on shorter samples, the tools were wrong more often than they were right.

In my own tests, the detectors did mostly identify my original work as not being written by AI but faltered on a sample that was written by AI but edited by me. And in some cases, they also did not properly identify AI-generated writing.

Why aren’t AI detectors more accurate? 

Like ChatGPT and other generators, the AI detectors also rely on large language models. The goal of this type of software is to determine the statistical likelihood that something is AI-generated. 

As the Washington Post reported, the software tries to determine if the content is “extremely consistently average,” which, according to Eric Wang, Turnitin’s vice president of AI, will suggest AI wrote it.

However, at least some of the time, human writing may fall into that statistical “average.” Wang even admits that writing in certain fields, such as science and math, strives for exactly that kind of uniformity, with rigid style expectations that can trip up the detectors.

Furthermore, this “average” is a moving target as the AI generators improve, so unless the software is constantly tweaked and updated, it will fall behind and become even less accurate. As the Washington Post noted, “Detectors are being introduced before they’ve been widely vetted, yet AI tech is moving so fast, any tool is likely already out of date.”
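To make the “consistently average” idea concrete, here is a toy illustration. This is not how Turnitin or any commercial detector actually works; it simply measures the variance in sentence length (sometimes called “burstiness”) as one crude stand-in for how uniform a passage is. All function names and sample texts below are invented for the example.

```python
import statistics


def burstiness_score(text: str) -> float:
    """Return the population variance of sentence lengths (in words).

    Low variance means the sentences are all roughly the same length,
    i.e. "consistently average" -- one crude signal associated with
    machine-generated text. Higher variance is more typical of human
    writing, which mixes short and long sentences.
    """
    # Naive sentence split on terminal punctuation (toy example only).
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # Not enough sentences to measure variation.
    return statistics.pvariance(lengths)


# A "human-like" passage mixing very short and very long sentences:
human = ("I ran. Then the storm hit us with a ferocity nobody in town "
         "had ever seen before. Silence.")
# A "machine-like" passage of uniform, average-length sentences:
machine = ("The weather was bad today. The storm was strong yesterday. "
           "The rain will continue tomorrow.")
```

Real detectors use far richer statistics from large language models (such as token-level probabilities), but the principle is the same: score how close the text sits to a statistical average, then threshold that score.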

Another problem with detectors is the inherent bias that comes with the process. As New Scientist described, researchers at the University of Copenhagen tested multiple detection options using content written by both native and non-native English speakers. They discovered that the detectors were much more accurate when judging the work from native English speakers. The researchers attributed the bias to the fact that AI systems are usually trained on large text databases that contain predominantly native English content.

The MIT Technology Review noted that it may be impossible to ever get a tool that can identify AI text with complete certainty because the whole point of AI language models is to create text that sounds human. 

Recognizing the potential for error, most AI detection software comes with some sort of warning that you shouldn’t rely on the results alone. However, it’s not difficult to imagine that some people will take the results at face value, which is quite disturbing. 


Hope on the horizon?

One idea that could improve detection is watermarking. As Search Engine Journal described it, this process “involves cryptography in the form of embedding a pattern of words, letters and punctuation in the form of a secret code.” 

Currently, watermarking does not exist within AI generators, but it may be coming, in part to fend off potential legal or government challenges to the technology. However, even with watermarking, detectors will stumble whenever a writer reworks some of the output, since a mix of human and AI text is much more difficult to flag.
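One watermarking approach proposed in research works roughly like this: at each step, a secret hash of the previous word selects a “green list” of preferred words, and the generator is nudged toward them; a verifier who knows the scheme can then count how often each word falls on its predecessor’s green list. The sketch below is purely illustrative, not any deployed system, and every name in it is invented.

```python
import hashlib

# A toy vocabulary; a real system would use the model's full token set.
VOCAB = ["the", "a", "storm", "rain", "was", "strong", "today", "quiet"]


def green_list(prev_token: str, vocab=VOCAB) -> set:
    """Deterministically pick roughly half the vocabulary as 'green',
    seeded by a hash of the previous token (the shared secret)."""
    digest = hashlib.sha256(prev_token.encode()).digest()
    return {w for i, w in enumerate(vocab)
            if digest[i % len(digest)] % 2 == 0}


def green_fraction(tokens: list) -> float:
    """Fraction of tokens that fall on their predecessor's green list.

    Watermarked text (whose generator favored green words) scores well
    above 0.5; ordinary text hovers near 0.5, since each green list
    covers about half the vocabulary.
    """
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev))
    return hits / len(pairs)
```

Because the verifier only needs the secret hashing scheme, not the original model, detection is cheap. But as the article notes, human editing dilutes the signal: every replaced word is a coin flip, so a heavily reworked passage drifts back toward the 0.5 baseline.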


Jackie Dana

Jackie has been writing since childhood. As the Namecheap blog’s content manager and regular contributor, she loves bringing helpful information about technology and business to our customers. In her free time, she enjoys drinking copious amounts of black tea, writing novels, and wrangling a gang of four-legged miscreants. More articles written by Jackie.
