How to Check If Something Online Was Written by AI

Key Takeaways:

– Generative artificial intelligence (AI) is prevalent on the web, with advanced predictive text bots able to generate human-like written content.
– There are clues to identify AI-generated content, such as checking the author. Human writers usually have an online presence, while AI writers often do not.
– Articles with the name of a real person attached, along with a bio and social media links, are more likely to be written by a human.
– Checking a website’s history, the type of content it publishes, and whether it has an About Us page can provide additional clues.
– AI detection engines, although not always reliable, can sometimes differentiate between AI and human writing by looking for originality in the text.
– AI-generated text tends to be generic, vague, and lacking originality, humor, and humanity. It aims to generate predictable text, leading to a general blandness.
– Glaring errors or repeated mistakes in the text may indicate AI composition, but humans also make errors.
– By considering all the signals, clues, and flags together, one can make an educated guess about whether something is AI-generated or not. However, the only sure way to know is to witness it being written.


Generative artificial intelligence is everywhere you look these days, including on the web: advanced predictive text bots such as ChatGPT can now spew out endless reams of text on every topic imaginable, and that text reads naturally enough that it could plausibly have been written by a human being.

So, how can you make sure the articles and features you’re reading online have been thought up and typed out by an actual human being? While there isn’t any foolproof, 100 percent guaranteed way of doing this, there are a variety of clues you can look out for to spot what’s AI-generated and what isn’t.

Check the Author

Most human writers will have an online presence—most AI writers won’t.
Screenshot: Gizmodo

For now, at least, there aren’t any high-profile, well-respected online outlets pumping out AI content without labeling it as such—but there are plenty of lower-tier sites making full use of AI-generated text and not being particularly honest about it. If you’re coming across a lot of text without author attribution, that’s one warning sign to look out for.

In contrast, if an article has the name of a real person attached—better still, a real person with a bio and social media links—then you’re more likely to be reading something that has been put together by a human. You probably won’t have time to background check everything you read online, but it’s worth it when you really need to know a piece’s source.

The alleged AI articles recently spotted on the Sports Illustrated site came with author profiles and bios alongside them—profiles and bios that were also made by generative AI, it turns out. A reverse image search (through something like TinEye) can identify images of people that aren’t actually real, which might be helpful in determining an article’s source.

More clues can be gleaned from a website in terms of its history, the type of content it publishes, whether or not it has an About Us page, and so on. For example, searching for the best phone reviews on the web brings up well-known tech sites staffed by human beings.

Check a Detection Engine

Copyleaks correctly identified this article as being written by a human.
Screenshot: Copyleaks

There’s plenty of debate about whether or not AI text detection works. OpenAI says it doesn’t, and most reporting on the matter says these AI detectors aren’t to be trusted. However, there are still plenty of them in business at the time of writing, and within limits, they might be useful in checking for the use of AI online.

We ran a brief series of tests on a few AI detectors online, including Copyleaks, GPTZero, and Scribbr, and what we found tallies with what other people have found: These detectors can tell the difference between AI writing and human writing, but not all the time, and not to a level that conclusively proves anything one way or another.

These detectors seem to have a better success rate at spotting human writing than AI writing. They’re essentially looking for originality in the text, trying to figure out what an AI would say next based on its training. The more data they have to work with, the better, but there are limits on how much you can use for free.

The studies we have to date suggest that some detectors are better than others and that some are even right most of the time—but none of them are consistently right to a high level. These detectors are perhaps best thought of as another tool you can use alongside other avenues of inquiry and not something to rely on entirely.

Check the Signs

ChatGPT knows its own limitations.
Screenshot: ChatGPT

As we said at the start, there’s really no guaranteed way of identifying which online text has been produced by AI and which hasn’t. However, there are still certain signs to look out for: Because of the way generative AI is trained, its output tends to be generic, vague, and obvious at times.

Certain touches of originality, humor, and humanity are often missing (as are personal anecdotes). AI always wants to generate text that has a low level of perplexity—put another way, a high level of predictability. At their heart, these engines are just predicting what word should come next, and that can show in a general mushiness and blandness that is sometimes noticeable.
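To make the perplexity idea concrete, here is a minimal sketch (not how commercial detectors actually work) that scores how predictable a passage is under a toy bigram word model: for each word, it asks how likely that word was given the previous one, then averages. The `corpus` and both sample sentences are invented for illustration; real detectors use large language models rather than bigram counts, but the principle—predictable text scores high, surprising text scores low—is the same.

```python
from collections import Counter, defaultdict

def predictability(text: str, corpus: str) -> float:
    """Average probability a toy bigram model (trained on `corpus`)
    assigns to each next word of `text`. Higher = more predictable,
    i.e. lower perplexity. Unknown words score 0."""
    words = corpus.lower().split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

    toks = text.lower().split()
    scores = []
    for prev, nxt in zip(toks, toks[1:]):
        total = sum(bigrams[prev].values())
        scores.append(bigrams[prev][nxt] / total if total else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

# Tiny invented training corpus, purely for demonstration.
corpus = "the cat sat on the mat and the cat sat on the rug"

print(predictability("the cat sat on the mat", corpus))       # high: formulaic
print(predictability("purple elephants juggle quietly", corpus))  # low: surprising
```

A bland, formulaic sentence scores high because the model has seen its word pairs before; an unexpected one scores near zero. Detectors run the same comparison with far more sophisticated models and far more training data.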

You can also look out for glaring errors (such as hallucinations), but of course, human beings make errors in their writing, too. AI text may get something significantly wrong, or make the same kind of mistake multiple times in different ways, but even repeated errors don’t prove that AI composed an article.

Taking all these signals and clues and flags together, you may just be able to make an educated guess about whether something came from a human mind or not, even if the only way to be sure is to watch it being written: AI text is certainly harder to spot than AI imagery, but that’s a whole other topic.
