"If you want, I can write you the same text using typical GPT phrases, and then a modified version in a more human style, so you can see the difference. Want to try it?"
This is how ChatGPT, the most famous AI assistant, answers the question: "How do I recognize whether a text was written by ChatGPT?"
We accepted the assistant's offer and received two fictional texts on whether social networks affect mental health.

Photo: Printscreen/ChatGPT
How ChatGPT sees its writing style:
The first text is supposed to be the typical AI version: no emotion, dry facts, a distinctly professional writing style. The second, meant to represent a more "human" version, is full of emotions, metaphors and other stylistic figures, in an attempt to imitate a text signed by a human hand.

Photo: Printscreen/ChatGPT
What it looked like after the humanization attempt:
However, can a more "human" version of AI text really pass the test of humanity? The average media consumer would probably read this text, like any other, without thinking that there is no human being behind it. But it is unlikely that someone who uses artificial intelligence on a daily basis would not suspect that the mentioned text was written by an AI.
Paradoxically, ChatGPT fell into a trap of its own making. In trying to make the second text as believable as possible, it embellished it so heavily with metaphors and adjectives that the text meant to exemplify typical AI writing ends up seeming the more natural of the two.
Still, everything we have described so far comes down to gut feeling. There is no foolproof way to be certain what was written by a human and what by a machine. So how do you recognize the writing style of artificial intelligence, which improves every day? Or, conversely: if you use artificial intelligence in your own writing, how do you keep a trained eye from noticing?
How useful are detectors?
A Google search turns up a number of websites that claim to detect AI text.
We tested several of the most popular detectors using AI-generated text about social media and mental health.
"GPTZero," which calls itself "the leading AI detector," was 90 percent sure that our text was written by a chatbot. This percentage would be just fine if we didn't try something else: we removed the last question mark in the text and ran it through the program again. This fooled the detector, which now claimed with 72 percent certainty that the text was written entirely by a human.
By the way, "GPTZero" is one of the most used AI detectors among professors, at least in the English-speaking world.
Sidekicker estimated that 90 percent of our text was written by artificial intelligence. When we ran the same text through the software again, the figure dropped to 85 percent; after a third attempt, to 77 percent.
Justdone initially estimated that 84 percent of our text was the product of AI. After the second attempt, that number dropped to 71 percent.
However, none of these programs are fully free: premium versions cost from $20 to $30 per month. For now, at least for text in Serbian, they do not appear to be worth the money.
The free detectors that a Google search offers fared even worse, mostly rating our text, untouched by human hands, as "entirely human-written."
Em dash, en dash and hyphen
If we can't trust the computer, the only thing left is to get to know artificial intelligence and its writing style.
The first indication that a text may have been generated by AI is long dashes in places where the writing could easily do without them.
"So maybe it's time to ask: do we manage the networks — or do they manage us?" is a typical example of a linguistic construct used by AI. The line is used here to add drama, but in a spirit that suits English more than the Serbian language. Additionally, the dashes used by ChatGPT (—) are longer than the dashes normally used in writing (–), but should not be confused with the hyphen (-), which is a completely different punctuation mark.
So a dash in a piece of writing does not by itself mean that artificial intelligence is behind the text, because it is a popular mark in Serbian as well. But certain signs can point to a computer author. For example, if a dash appears immediately before the last word of a sentence, it is quite possible that AI was used: artificial intelligence "loves" to emphasize certain words.
Additionally, if the dash sits where the average human would intuitively use a comma, that too can be a sign of AI in the writing.
Here are two typical examples:
- "Technology is developing rapidly — faster than many can keep up with — and raises many ethical questions."
- "The problem isn't just the amount of time we spend online — it's how we use that time."
The second sentence is particularly interesting because, in addition to the em dash, it uses another very common giveaway of artificial intelligence as the author: a construction we can call "not this, but that."
Not this, but that.
The French "Le Mond" has already written about this "signature" of artificial intelligence. He relies on the frequent use of two basic stylistic figures.
The first involves stringing together two syntagmatic statements of opposite meanings, in the construction of the type: "It's not that, but this", or: "It's not only..., but also...". The first half creates an expectation, assumption, or belief; the other knocks him down.
Another structure, as "Le Mond" writes, is based on a tripartite rhythm: a sequence of three statements that reinforce or complement each other.
It can be three verbs, three short sentences, or a gradation in three steps leading to a more complex idea: "It is a system that restrains, that strengthens, that locks."
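Both figures are regular enough that even a crude pattern match catches simple cases. The sketch below is our own illustration with made-up regular expressions, not Le Monde's method, and it will miss many phrasings and flag some human ones:

```python
import re

# Toy patterns for the two stylistic figures described above (English text).
# "not (only/just) X, but Y" — expectation set up, then knocked down.
NOT_BUT = re.compile(r"\bnot (?:only |just )?.{1,60}?,?\s*but\b", re.IGNORECASE)
# A simple serial triple "X, Y, and Z" as a stand-in for the tripartite rhythm.
TRIPLET = re.compile(r"\b(\w+), (\w+),? (?:and|or) (\w+)\b", re.IGNORECASE)

def flag_signatures(sentence: str) -> list:
    """Return the names of any 'AI signature' patterns found."""
    hits = []
    if NOT_BUT.search(sentence):
        hits.append("not-this-but-that")
    if TRIPLET.search(sentence):
        hits.append("tripartite rhythm")
    return hits

print(flag_signatures("It is not just the time we spend online, but how we use it."))
# ['not-this-but-that']
print(flag_signatures("The app is fast, simple, and cheap."))
# ['tripartite rhythm']
```

Because both constructions are also perfectly normal human rhetoric, a match is at most a nudge to look closer, never a verdict.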
We asked ChatGPT to write a text consisting of these characteristic signatures. Here's what we got:

Photo: Printscreen/ChatGPT
ChatGPT "loves" drama
Some more indicators
Other indicators that there may not be a person behind a text are stock phrases and clichés, but also the avoidance of a personal stance.
Artificial intelligence will generally try to present both sides of an issue: "although some experts claim one thing, others point to...". The exception, of course, is when the person writing the prompt asks the chatbot to clearly take a certain position.
In addition, ChatGPT uses words and phrases that soften the claims: probably, maybe, in most cases, it cannot be said with certainty, it is possible…
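The softening vocabulary lends itself to a simple frequency count. The phrase list below is our own illustration assembled from the examples in this article, not an official OpenAI vocabulary, and the threshold is arbitrary:

```python
# Crude hedge-word counter; the phrase list and any cutoff are our own
# illustrative assumptions, not a validated detection method.
HEDGES = [
    "probably",
    "maybe",
    "in most cases",
    "it is possible",
    "cannot be said with certainty",
]

def hedge_count(text: str) -> int:
    """Count occurrences of hedging phrases, case-insensitively."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in HEDGES)

sample = ("Social networks probably affect mental health, and in most "
          "cases more screen time is linked to worse sleep, although it "
          "cannot be said with certainty.")
print(hedge_count(sample))  # 3
```

Careful human writers hedge too, so a high count only says the text is cautious, not that a machine wrote it.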
As the artificial intelligence itself explains, this language is part of OpenAI's safety design, intended to reduce harm and inaccurate information.
And since a chatbot is not human, it has no feelings and cannot recount personal anecdotes. It does not know how to enrich a text with authentic examples and experiences; everything is too polished, with no personal stamp.
After all, a computer can't really tell a joke, and that is where the hardest test for artificial intelligence lies. Finally, here is a joke created by ChatGPT:
"Why is artificial intelligence never late for a meeting? Because it already knows everything you're going to say — and has a ready answer in three styles."