CXOTECH
A Recent Study Finds AI-generated Text Detection Impossible

By Ali Ömer Yıldız
May 17, 2023
in News

The suffocating hype surrounding generative algorithms, along with their largely unchecked development, has pushed many people to seek a reliable solution to the challenge of identifying AI-generated text. According to a new study, the problem may be fundamentally unsolvable.

While Silicon Valley corporations tweak their business models around new, ubiquitous buzzwords like machine learning, ChatGPT, generative AI, and large language models (LLMs), others are trying to avert a future in which no one can distinguish statistically composed text from text assembled by actual human intelligence.

According to a study by five computer scientists at the University of Maryland, however, that future may already be here. "Can AI-Generated Text Be Reliably Detected?" the researchers asked. Their conclusion: in practical scenarios, text generated by LLMs cannot be reliably detected, either theoretically or empirically.

The scientists note that unregulated use of LLMs can lead to "malicious consequences" such as plagiarism, fake news, and spam, so reliable detection of AI-generated writing would be a vital component of ensuring the responsible use of services like ChatGPT and Google's Bard.

The study examined current state-of-the-art LLM detection methods and demonstrated that a simple "paraphrasing attack" is sufficient to fool them all. A competent (or even malevolent) LLM service can "break a whole range of detectors" by applying a light paraphrase, a mild rearrangement of the words, to the originally generated text.
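To illustrate the failure mode, consider a toy detector (purely hypothetical, not one of the systems the study evaluated) that flags text sharing many 3-grams with a known model output. A light reword drops the overlap to zero:

```python
# Toy illustration of a "paraphrasing attack" on a naive detector.
# The detector and its threshold are hypothetical, chosen only to
# demonstrate the failure mode described in the study.

def ngrams(text, n=3):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def detector(candidate, known_ai_text, threshold=0.5):
    """Flag text as AI-generated if its 3-gram overlap with a known
    model output exceeds the threshold."""
    cand, known = ngrams(candidate), ngrams(known_ai_text)
    overlap = len(cand & known) / max(len(cand), 1)
    return overlap >= threshold

ai_text = "the quick brown fox jumps over the lazy dog near the river"
paraphrase = "close to the river the fast brown fox leaps over the idle dog"

print(detector(ai_text, ai_text))      # True: verbatim output is caught
print(detector(paraphrase, ai_text))   # False: light rewording evades it
```

Real detectors are statistical classifiers or watermark checks rather than n-gram matchers, but the study shows they degrade under paraphrasing in much the same way.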

Even watermarking techniques and neural-network-based scanners fall short: detecting LLM-generated text is, the authors argue, "empirically" impossible. In the worst case, paraphrasing reduced detection accuracy from 97 percent to 57 percent. At that point, the scientists concluded, a detector performs no better than a "random classifier," i.e., a coin flip.
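To put those numbers in perspective, here is a quick simulated baseline (the benchmark size and its 50/50 balance are assumptions for illustration): a coin-flip classifier on a balanced set of human and AI texts is right about half the time, which is what 57 percent approaches.

```python
# Back-of-the-envelope check of the study's numbers: on a balanced
# benchmark (half human, half AI text; sizes here are hypothetical),
# a coin-flip classifier is right about 50% of the time, so a detector
# at 57% accuracy after paraphrasing is only marginally better.
import random

random.seed(0)
labels = ["ai"] * 5000 + ["human"] * 5000

hits = sum(random.choice(["ai", "human"]) == label for label in labels)
coin_flip = hits / len(labels)

print(f"random classifier accuracy:  {coin_flip:.3f}")  # ~0.50
print("detector before paraphrasing: 0.97")
print("detector after paraphrasing:  0.57")
```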

Watermarking methods, which add an invisible signature to AI-generated text, are not only erased by paraphrasing but also introduce an additional security risk. According to the researchers, a hostile (human) actor may "infer hidden watermarking signatures and add them to their generated text," causing harmful, spammy, or fake text to be attributed to the LLM.
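A simplified sketch of the spoofing idea, loosely modeled on "green list" soft watermarking (the vocabulary-split rule and the threshold below are illustrative stand-ins, not the actual scheme the paper analyzes):

```python
# Toy "green list" watermark: the generator prefers tokens from a
# secret "green" half of the vocabulary, and the detector flags text
# whose green fraction is unusually high. Everything here is a
# simplified illustration, not the scheme studied in the paper.

def is_green(token: str) -> bool:
    # Stand-in for a keyed pseudorandom split of the vocabulary;
    # a real scheme would hash (key, context, token).
    return token[0].lower() <= "m"

def green_fraction(text: str) -> float:
    toks = text.split()
    return sum(is_green(t) for t in toks) / max(len(toks), 1)

def watermark_detector(text: str, threshold: float = 0.75) -> bool:
    # Human text lands near a 50% green fraction; watermarked text is higher.
    return green_fraction(text) >= threshold

# Spoofing: an attacker who has inferred which words are "green" can
# assemble spam entirely from green tokens, so the detector wrongly
# attributes the text to the watermarked model.
vocabulary = "buy cheap pills now click this link for free money today".split()
green_words = [w for w in vocabulary if is_green(w)]
spam = " ".join(green_words)

print(watermark_detector(spam))  # True: the spam is attributed to the LLM
```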

According to Soheil Feizi, one of the study’s authors, we just need to accept that “we may never be able to reliably say if a text is written by a human or an AI.”

One answer to the bogus-text problem could be a stronger effort to authenticate the source of textual information. The scientist notes that social platforms have begun verifying accounts at scale, which may make distributing AI-generated misinformation more difficult.

Tags: AI, AI-Generated Text, ChatGPT, Google, Bard, LLM, Soheil Feizi
Previous Post: Scotland Launches the World’s First Driverless Bus Service
Next Post: ChatGPT CEO: Artificial Intelligence Regulation is a Must

© 2023 CXO MEDYA