Why People React Negatively to AI-Edited Texts Even When They Are Good

A growing number of readers are showing a strong negative reaction to texts labeled as AI-generated, even when the content is accurate and well-written. Research indicates that this response is not necessarily about the quality of the text itself, but rather about perceptions of authorship and human involvement.

Studies published in 2024 and 2025 reveal a consistent pattern. Experiments conducted with participants in the US and UK showed that when a headline was marked as AI-generated, readers rated it as less trustworthy and were less likely to share it, regardless of whether the headline was factual or written by a human. The researchers emphasize that the issue is not that AI equals falsehood, but that the label triggers assumptions of full automation with no human oversight.

The frustration stems from the impression that a machine did all the work while the human merely pressed a button; readers fail to see AI as a supportive tool in the creative process. Additional findings from the Journal of Business Research show similar effects with emotionally charged marketing messages: AI attribution reduces perceived authenticity and willingness to recommend, though the effect is weaker if AI is seen as an editor rather than the author, or if the content is factual rather than emotional.

Skepticism toward AI is not without basis. Research in Technology in Society warns of increased content homogeneity over time, even as individual productivity initially rises. Some studies, including work with BCG consultants, indicate that AI improves performance in tasks within its capabilities but can harm quality when applied beyond its “zone of competence.” Both blind enthusiasm and outright rejection of AI oversimplify a complex reality.

On the other hand, AI has demonstrated tangible benefits. A study of 5,179 customer support employees found that generative AI increased overall productivity by 14%, with a 34% boost for less experienced staff, while improving customer satisfaction and retention. Experiments with 758 consultants using GPT-4 showed over 40% gains in task quality, a 25% increase in speed, and higher completion rates for creative and text-based work within the AI’s effective scope. Similarly, research in Nature Human Behaviour found that AI-assisted brainstorming improved the creativity of ideas compared to working without technology or using standard web searches, particularly for incremental improvements rather than radical innovations.

AI assistance has also shown benefits for writing. A 2025 study in AI & Society found that AI tools improved essay quality across participants, especially for weaker writers, suggesting a leveling effect rather than widening performance gaps. Overall, AI proves capable of enhancing workflow, refining structure, and improving baseline quality, with the key factor being how humans employ it.

Creative work remains the most contentious area. Users often experience moral discomfort when AI is involved in text, ideas, or art. While AI outputs can be nonsensical or banal, they can also spark new directions and insights that a human author might not have considered, offering a unique form of creative support when guided properly.

The takeaway for the market is that AI tools can provide significant productivity and quality boosts, but success depends on thoughtful integration rather than blind reliance. Competitors and content creators who understand how to leverage AI effectively while maintaining human oversight are likely to gain a meaningful advantage.
