Most People Don’t Realize When a Personal Message Was Written by AI, Even If They Use AI Themselves

Most people cannot detect when a personal text message is AI-generated, even when they use AI tools daily, according to new behavioral research from the University of Michigan published this week in Computers in Human Behavior. The result points to a trust gap: undisclosed AI use goes unnoticed, while transparency triggers negative judgments about sincerity and effort.

The study, led by researchers Jiaqi Zhu and Andras Molnar, surveyed over 1,300 U.S. participants aged 18 to 84 and found that when AI authorship was disclosed, recipients rated senders as significantly less sincere, less effortful, and more “lazy” compared to identical messages believed to be human-written. Yet when no authorship information was provided, participants assumed the messages were human-generated and formed equally positive impressions, regardless of their own AI usage frequency.

This cognitive blind spot reveals a growing vulnerability in interpersonal communication: as generative AI becomes embedded in messaging apps, email clients, and social platforms, users can deploy AI-written content without detection, gaining efficiency while avoiding reputational costs. Meanwhile, those who disclose AI use face a measurable “AI disclosure penalty,” undermining trust in contexts where authenticity is paramount—such as job applications, romantic outreach, or workplace feedback.

The implications extend beyond personal etiquette into enterprise security and platform design. As AI-generated text becomes indistinguishable from human writing in short-form communication, traditional signal-based trust mechanisms, like judging effort through message length or personal detail, begin to erode. This forces a reevaluation of how organizations verify authenticity in digital interactions, particularly in high-stakes domains like HR, customer service, and internal communications.

Why Detection Fails: The Fluency Illusion in Short-Form Text

One reason people fail to suspect AI use lies in the nature of everyday messaging. Short, formulaic texts—such as “Sorry I missed your call” or “Thanks for thinking of me”—align closely with the strengths of large language models (LLMs), which excel at producing statistically probable, fluent continuations of common phrases. Unlike long-form essays or technical reports, where hallucinations or logical inconsistencies may betray AI origins, brief personal exchanges offer few forensic cues for detection.
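To see why such messages offer so few cues, it helps to look at how confidently a language model completes a stock phrase. Below is a minimal sketch, using the small open GPT-2 model purely for illustration (larger chat models behave similarly), that inspects the probability the model assigns to the next token after “Sorry I missed your”:

```python
# A minimal sketch of the "fluency" point, using the small open GPT-2
# model purely for illustration; larger chat models behave similarly.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Sorry I missed your"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the token that follows the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.2%}")
# Expect mundane completions like ' call' to dominate: the model is
# not so much generating this phrase as recalling it.
```

When one or two completions soak up most of the probability mass, the model’s output is effectively the same text most humans would send, leaving nothing for a recipient to detect.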

Modern LLMs like GPT-4o and Claude 3 Opus are trained on vast corpora of informal dialogue, including Reddit threads, customer service chats, and SMS logs, enabling them to mimic casual tone with high fidelity. As noted by a senior NLP engineer at Hugging Face in a recent interview:

“The real challenge isn’t detecting AI in poetry or code—it’s spotting it in a ‘running late’ text. The model has seen millions of those. It’s not generating; it’s recalling.”

This fluency illusion is compounded by user expectations. Most people do not anticipate AI involvement in intimate communication channels like personal texting or direct messages, leading to a baseline assumption of human authorship. Even frequent AI users exhibit this bias: the study found no significant increase in skepticism among those who use generative AI tools every other day or more.

Enterprise Risks: When AI Writing Undermines Accountability

In organizational settings, the inability to detect AI-generated messages introduces tangible risks. For example, an employee could use AI to draft a performance review, apology, or compliance disclosure, presenting it as personal effort while avoiding accountability for the content’s authenticity. Similarly, job applicants might submit AI-written cover letters or thank-you notes that inflate perceived diligence, skewing hiring decisions.

This dynamic is already reshaping HR practices. As reported by the Financial Times earlier this month, 68% of Fortune 500 companies now discount cover letters in early screening, citing concerns over AI assistance. Instead, they prioritize video introductions, referral networks, or skills-based assessments, shifting trust from written signals to behavioral or relational proxies.

Meanwhile, platforms like Slack and Microsoft Teams are beginning to experiment with provenance metadata. A product lead at Microsoft Viva confirmed in a private briefing:

“We’re testing lightweight opt-in tags that indicate when AI assisted in drafting a message—not to shame users, but to give recipients context. Think of it like a ‘nutrition label’ for communication.”

Such features remain rare and non-standardized, leaving most digital interactions without transparency mechanisms.
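To make the idea concrete, here is one hypothetical shape such an opt-in tag could take when attached to a message payload. The schema and every field name below are illustrative assumptions, not any platform’s actual format:

```python
# Hypothetical provenance tag attached to a chat message payload.
# The schema and field names are illustrative only; no platform has
# standardized such a format yet.
import json

message = {
    "sender": "jordan@example.com",
    "text": "Congrats on the promotion! Well deserved.",
    "ai_assistance": {
        "assisted": True,
        "extent": "drafted",      # e.g. "none" | "suggested" | "drafted"
        "model": "assistant-v1",  # placeholder model identifier
        "human_edited": True,     # sender reviewed and revised the draft
    },
}

print(json.dumps(message, indent=2))
```

A client could render such a tag as the “nutrition label” described above while leaving untagged messages untouched.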

The Platform Lock-In Dilemma: AI Features vs. User Autonomy

The rise of opaque AI writing tools also intensifies platform dependency. When AI composition happens inside proprietary ecosystems—like Google’s Smart Reply in Gmail or Apple’s Writing Tools in iOS 18—users have little visibility into how or when their messages are altered. This creates a form of cognitive lock-in: the more users rely on AI-assisted composition within a closed system, the harder it becomes to assess authenticity across platforms.

Contrast this with open-source alternatives. Projects like Ollama and Hugging Face Transformers offer transparent, locally hosted models where users retain full control over data and generation logic. Yet adoption remains low among non-technical users due to setup complexity and lack of integration with default messaging apps.
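For readers curious what the local route looks like in practice, the sketch below drafts a message through Ollama’s REST API. It assumes Ollama is running on its default port with a model such as llama3 already pulled:

```python
# A minimal sketch of drafting a reply with a locally hosted model via
# Ollama's REST API. Assumes Ollama is running on its default port
# (11434) with the "llama3" model already pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Write a short, casual text apologizing for missing a call.",
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
# Everything runs on the user's machine: neither the draft nor the fact
# that AI was used leaves the device.
```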

This divide risks creating a two-tiered communication landscape: one where privacy-conscious or technically adept users avoid AI features to preserve authenticity and control, and another where convenience-driven users accept opaque AI assistance in exchange for efficiency—often unaware of how it affects how they are perceived.

What This Means for the Future of Digital Trust

As AI-generated text becomes ubiquitous in personal communication, society may need to develop new norms around disclosure—similar to how email signatures once evolved to convey professionalism. Until then, the most reliable way to ensure a message is perceived as sincere remains low-tech: a phone call, voicemail, or face-to-face conversation.

For developers and product teams, the challenge lies in designing AI tools that augment rather than obscure human intent. Features that allow users to review, edit, and claim ownership of AI-assisted drafts—while making assistance visible to recipients—could mitigate the disclosure penalty without sacrificing utility. The goal isn’t to eliminate AI use in messaging, but to ensure it doesn’t come at the cost of trust.
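As one illustration of that pattern, a compose flow could track whether AI helped with a draft and whether the sender revised it, then derive a disclosure label from those facts. The sketch below is hypothetical; every class, field, and label is an assumption, not any product’s API:

```python
# Hypothetical "visible assistance" pattern: the draft records whether
# AI helped and whether the human revised it, so a client can surface
# that context to the recipient. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    ai_assisted: bool
    human_edited: bool = False

    def edit(self, new_text: str) -> None:
        # Any manual revision marks the draft as human-edited.
        self.text = new_text
        self.human_edited = True

    def disclosure_label(self) -> str:
        if not self.ai_assisted:
            return "Written by sender"
        if self.human_edited:
            return "AI-assisted, revised by sender"
        return "AI-drafted"

draft = Draft(text="Thanks so much for thinking of me!", ai_assisted=True)
draft.edit("Thanks so much for thinking of me. It made my week!")
print(draft.disclosure_label())  # -> AI-assisted, revised by sender
```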


Sophie Lin - Technology Editor

Sophie is a tech innovator and acclaimed tech writer recognized by the Online News Association. She translates the fast-paced world of technology, AI, and digital trends into compelling stories for readers of all backgrounds.
