Essay challenge: ChatGPT vs students
Study reveals who writes better (and it's not the AI)
- Date: April 30, 2025
- Source: University of East Anglia
- Summary: Researchers have been putting ChatGPT essays to the test against real students. A new study reveals that AI-generated essays don't yet live up to the efforts of real students. While the AI essays were found to be impressively coherent and grammatically sound, they fell short in one crucial area: they lacked a personal touch. It is hoped that the findings could help educators spot cheating in schools, colleges and universities worldwide by recognising machine-generated essays.
AI-generated essays don't yet live up to the efforts of real students, according to new research from the University of East Anglia (UK).
A new study published today compared the work of 145 real students with essays generated by ChatGPT.
While the AI essays were found to be impressively coherent and grammatically sound, they fell short in one crucial area – they lacked a personal touch.
As the line between human and machine writing continues to blur, the study underlines the importance of fostering critical literacy and ethical awareness in the digital age.
It is hoped that the findings could help educators spot cheating in schools, colleges and universities worldwide by recognising machine-generated essays.
Prof Ken Hyland, from UEA’s School of Education and Lifelong Learning, said: “Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments.
“The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don’t yet have tools to reliably detect AI-created texts.
“In response to these concerns, we wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers.”
The research team analysed 145 essays written by real university students and another 145 generated by ChatGPT.
“We were particularly interested in looking at what we called ‘engagement markers’ like questions and personal commentary,” said Prof Hyland.
“We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive.
“They were full of rhetorical questions, personal asides, and direct appeals to the reader – all techniques that enhance clarity, build connection, and strengthen an argument.
“The ChatGPT essays, on the other hand, while linguistically fluent, were more impersonal. The AI essays mimicked academic writing conventions, but they were unable to inject the text with a personal touch or to demonstrate a clear stance.
“They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and offered no strong perspective on a topic.
“This reflects the nature of its training data and statistical learning methods, which prioritise coherence over conversational nuance,” he added.
Despite its shortcomings, the study does not dismiss the role of AI in the classroom.
Instead, the researchers say that tools like ChatGPT should be used as teaching aids rather than shortcuts.
“When students come to school, college or university, we’re not just teaching them how to write, we’re teaching them how to think – and that’s something no algorithm can replicate,” added Prof Hyland.
This study was led by UEA in collaboration with Prof Kevin Jiang of Jilin University, China.
‘Does ChatGPT write like a student? Engagement markers in argumentative essays’ is published in the journal Written Communication.
ENDS
Story Source:
Materials provided by University of East Anglia. Note: Content may be edited for style and length.