College Papers & AI: The Secret Way Professors Might Be Spotting ChatGPT Essays
NORWICH, England – Think your college professor can't tell when you used ChatGPT to write that essay? A new scientific study suggests they probably can, and it explains why. While AI models are getting remarkably good at mimicking proper grammar and sentence structure, they seem to miss a crucial human element: the ability to genuinely connect with the reader. Research shows that writing produced by ChatGPT often feels like a one-way transmission rather than the natural, engaging conversation humans build into their written arguments.
This study, published in the journal Written Communication, found that while AI can certainly generate grammatically correct and logically structured academic text, it falls short when it comes to the personal, interactive style common in student writing. This key difference could be the reason some instructors can still identify AI-generated assignments, even as the technology improves.
“The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don’t yet have tools to reliably detect AI-created texts,” stated study co-author Ken Hyland from the University of East Anglia.
How Human Writers Engage Readers
The researchers analyzed a large collection of essays and found clear patterns in how human writers naturally structure arguments compared to AI. They discovered that essays written by students contained many more instances and types of techniques designed to engage the reader, making the writing feel more interactive and persuasive.
The study specifically looked at what they call "engagement markers." These are basically rhetorical tools writers use to connect with their audience, draw them into the discussion, and guide their thinking towards specific conclusions. Think of them as the verbal cues or conversational touches that make you feel like the writer is talking directly to you or sharing their thought process along the way.
Comparing 145 essays written by university students in the UK to 145 similar essays generated by ChatGPT on the same subjects, the results were clear: the students used over three times more engagement features than the AI. Students frequently incorporated questions, included small personal side comments (asides), and directly addressed the reader to create a sense of shared exploration of the topic.
For example, student essays often posed questions like whether scientists should take responsibility for global issues, or made personal observations about British identity in relation to its geography and Europe. These sorts of elements help build a rapport or conversational relationship with the reader that was consistently absent in the essays generated by the AI.
Why AI Writing Feels Different
While ChatGPT is skilled at producing text that is technically correct and well-structured, it seems to struggle with the more human aspects of writing that are key to persuasion. The AI model tends to rely heavily on presenting facts and referencing commonly known information. However, it rarely includes the personal touches or direct interaction with the reader that make academic arguments truly compelling to a human audience.
Professor Hyland, a prolific author in the field, explained that experienced human writers instinctively create a mental picture of their potential readers and adapt their writing style accordingly. ChatGPT, despite its impressive capabilities in language processing, cannot genuinely understand its audience or anticipate questions or disagreements from a reader without very specific prompting instructions.
"The AI essays mimicked academic writing conventions, but they were unable to inject text with a personal touch or to demonstrate a clear stance," Hyland noted.
The study found that ChatGPT completely avoided using personal asides – those short comments or thoughts where a writer steps briefly away from the main point to share something personal or reflective. This absence creates what the researchers described as a text that is "dialogically closed," meaning it doesn't feel open to a back-and-forth with the reader; it comes across as impersonal or lacking a certain depth. The research team suspects this limitation stems from how models like ChatGPT are trained, with training that emphasizes clarity and conciseness rather than authentic conversation.
Additionally, the study observed that ChatGPT essays avoided appeals to logical reasoning, suggesting the model may be better at restating existing factual information than at building complex logical arguments or developing original ideas. This finding aligns with other research indicating that current AI models still face challenges with higher-level cognitive tasks like true critical thinking.
Integrating AI in Education
For students who might be tempted to use AI to write their papers, this study provides concrete evidence that current AI output is missing key human elements that educators are used to seeing (perhaps even subconsciously). For college professors and teachers, it offers potential markers they can look for to help identify AI-generated content without relying solely on specialized detection software, which has proven to be unreliable at times.
Instead of seeing AI purely as a tool for cheating, the researchers suggest that tools like ChatGPT could actually become useful resources for teaching writing. By having students compare text generated by AI with examples of effective human writing, they could learn to identify and incorporate persuasive engagement strategies, helping them develop their own distinct voice and critical thinking skills, even when using AI tools responsibly.
“When students come to school, college, or university, we’re not just teaching them how to write, we’re teaching them how to think – and that’s something no algorithm can replicate,” added Professor Hyland.
For the time being, producing genuinely engaging academic writing remains a skill uniquely associated with humans. While ChatGPT can certainly arrange facts and follow established structures, it appears to lack the intuitive understanding that good writing is essentially a conversation with the person reading it. Until AI can truly predict and interact with the human mind on the other side of the page, the most convincing arguments will still come from people, not programs.
Study Details & Methodology
Methodology
Analyzed two sets of argumentative essays: 145 written by second-year British university students and 145 generated by ChatGPT 4.0 on identical prompts.
Used computational text analysis (corpus linguistics tools) to search for approximately 100 types of pre-defined "engagement markers."
Each potential instance was manually checked to confirm it functioned as an engagement marker.
Data was normalized (converted to occurrences per 1,000 words) for fair comparison. Statistical differences were assessed using log-likelihood tests.
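The two statistical steps above – normalizing raw counts to occurrences per 1,000 words, then testing the difference with a log-likelihood test – can be sketched in a few lines of Python. This is an illustrative sketch only: it assumes the standard Dunning log-likelihood (G²) formula commonly used in corpus linguistics, and the corpus sizes and raw counts below are hypothetical numbers chosen to reproduce the per-1,000-word rates the study reports, not figures from the paper itself.

```python
import math

def per_thousand(count, total_words):
    """Normalize a raw frequency count to occurrences per 1,000 words."""
    return count / total_words * 1000

def log_likelihood(a, b, c, d):
    """Dunning's log-likelihood statistic (G2) for comparing how often a
    feature occurs in two corpora.
    a, b: raw counts of the feature in corpus 1 and corpus 2
    c, d: total word counts of corpus 1 and corpus 2"""
    e1 = c * (a + b) / (c + d)  # expected count in corpus 1
    e2 = d * (a + b) / (c + d)  # expected count in corpus 2
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

# Hypothetical corpus sizes and counts matching the reported rates:
student_words, ai_words = 100_000, 100_000
student_markers = 1_699   # ≈ 16.99 per 1,000 words
ai_markers = 540          # ≈ 5.40 per 1,000 words

print(per_thousand(student_markers, student_words))  # 16.99
g2 = log_likelihood(student_markers, ai_markers, student_words, ai_words)
print(g2)  # well above 15.13, the conventional cutoff for p < 0.0001
```

With these illustrative numbers the G² value comes out far above the usual significance thresholds, which is consistent with the study's report of a significant gap between the student and AI corpora.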
Key Findings (Results Summary)
Student essays used significantly more engagement markers overall (16.99 per 1,000 words) compared to ChatGPT essays (5.40 per 1,000 words).
Students used far more questions and personal asides than AI.
ChatGPT heavily relied on appealing to assumed shared knowledge (common beliefs, typical situations) but completely avoided using appeals to logical reasoning.
The AI produced no personal asides and very few direct questions.
AI texts showed less variation in their use of engagement markers compared to student texts.
Acknowledged Limitations
Focused only on interactional features in academic writing, which might be an area where AI is predictably weaker.
Undergraduate students may not represent peak human writing proficiency and might overuse certain features.
AI model's training data could have biases (e.g., skewed towards certain styles or topics) which limit its ability to fully replicate authentic academic writing nuances.
Funding & Publication Info
No conflicts of interest or external financial support were reported by the authors.
The study, "Does ChatGPT Write Like a Student? Engagement Markers in Argumentative Essays," was published in 2025 in the journal Written Communication by SAGE Publications.
Research authors: Feng (Kevin) Jiang (Beihang University, China) and Ken Hyland (University of East Anglia, UK).