Research from the University of East Anglia (UEA), in collaboration with Jilin University, has revealed significant differences in writing style between essays produced by students and those generated by AI, specifically ChatGPT. The study assessed 145 student essays alongside an equal number generated by the AI model, aiming to identify stylistic cues that can help determine whether written work is authentic.
The findings suggest that while essays drafted by ChatGPT exhibit a smooth linguistic flow, they ultimately lack the personal engagement that characterises authentic student writing. According to Professor Ken Hyland of UEA’s School of Education and Lifelong Learning, this engagement gap gives educators a potential means of distinguishing genuine student work from AI-generated content. Hyland expressed concern about the broader implications of AI tools, stating, “Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments.”
The researchers focused on what they termed “engagement markers”: personal commentary, rhetorical questions, and direct appeals to the reader. They found that real student essays deployed a rich variety of these techniques, making them more interactive and persuasive. The ChatGPT essays, by contrast, were coherent but impersonal and took no clear stance on the topics discussed. Hyland commented, “The AI essays... were unable to inject text with a personal touch or to demonstrate a clear stance,” leaving them feeling flat and less engaging.
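To make the notion of engagement markers concrete, the short Python sketch below counts a handful of surface features, such as questions, reader pronouns and personal asides, in a passage of text. It is purely illustrative: the marker lists are hypothetical and far simpler than the engagement framework the researchers actually applied.

```python
import re

# Purely illustrative (hypothetical) marker lists; the study's taxonomy,
# based on Hyland's engagement framework, is far more extensive.
READER_PRONOUNS = {"you", "your", "we", "us", "our"}
PERSONAL_ASIDES = ["i think", "i believe", "in my view", "personally"]
DIRECTIVES = ["note that", "consider", "imagine", "remember that"]


def count_engagement_markers(text: str) -> dict:
    """Count a few simple engagement markers in a passage of text."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    counts = {
        # question marks stand in (roughly) for rhetorical questions
        "questions": lowered.count("?"),
        "reader_pronouns": sum(w in READER_PRONOUNS for w in words),
        "personal_asides": sum(lowered.count(p) for p in PERSONAL_ASIDES),
        "directives": sum(lowered.count(d) for d in DIRECTIVES),
    }
    counts["total"] = sum(counts.values())
    return counts


if __name__ == "__main__":
    sample = ("Have you ever wondered why we trust machines so readily? "
              "I think the answer says more about us than about them. "
              "Consider how often you accept a search result without question.")
    print(count_engagement_markers(sample))
```

A real analysis would normalise such counts by essay length and draw on a much richer taxonomy, but even this crude tally illustrates the kind of interactive, reader-facing language the study found to be sparse in AI-generated essays.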
While the study illustrates the limitations of AI in replicating human writing nuances, it does not categorically dismiss the use of AI tools in education. The researchers propose that, when employed transparently, AI can serve as a valuable educational resource. Hyland elaborated on this view, suggesting that “when students come to school, college or university, we're not just teaching them how to write, we're teaching them how to think—and that's something no algorithm can replicate.”
To mitigate the risks of academic dishonesty, the study encourages educators to implement process-based tasks requiring drafts and reflections—steps that AI cannot provide. Additionally, teaching students to recognise engagement markers will enhance their writing skills while helping them discern the nature of the text they encounter, whether generated by machines or humans.
The study arrives at a time when detection software is striving to adapt to the rapidly evolving landscape of generative AI. Current commercial tools often struggle with texts that blend human and machine writing styles. Although stylistic indicators presently provide a helpful advantage, these may soon be inadequate as AI continues to evolve and incorporate more conversational elements.
Maintaining the integrity of coursework as a reflection of independent thought is critical. The research underscores the importance of preserving authentic student voices in academic writing. By identifying the areas where AI-generated text falls short, educators can further develop strategies to foster original expression and critical thinking among students.
The study has been published in the journal Written Communication and contributes to the growing discussion of digital literacy in education, as students increasingly encounter AI-generated content on various platforms long before entering academic environments.
Source: Noah Wire Services