The use of artificial intelligence (AI) among undergraduate students in the UK has risen sharply, with a recent report finding that more than 88% of students have used generative AI tools in their studies. That figure is up from just over half (53%) a year earlier, according to the Higher Education Policy Institute (HEPI). The survey, conducted by Savanta, polled 1,041 full-time undergraduate students in December.
Josh Freeman, policy manager at HEPI, remarked on the rapid change in student behaviour, stating: “It is almost unheard of to see changes in behaviour as large as this in just 12 months.” He emphasised that universities need to adapt quickly, suggesting that every assessment should be scrutinised to determine whether it can be effortlessly completed using AI, and called for “bold retraining initiatives for staff in the power and potential of generative AI”.
Students reported using AI for a variety of tasks, most commonly explaining concepts (58%) and summarising articles (48%). A further 41% said they used AI to brainstorm research ideas, while 39% found it useful for organising their thoughts. The main motivations were saving time, cited by 51% of respondents, and improving the quality of work, cited by 50%.
The report also examined how AI-generated work is perceived in academia. Around 18% of students admitted to including AI-generated and edited text in their assessments, and acceptance varied markedly by discipline: 45% of students on science, engineering or medicine-related courses believed AI-generated content could achieve a good grade, compared with only 29% of humanities students.
Despite the rising usage and acceptance of AI, many students remained confused about university policies on the technology. Although 80% said their institution's policy was clear and 76% believed their institution could detect AI use in assessments, a sense of ambiguity persisted. One student noted: “It’s still all very vague and up in the air if/when it can be used and why,” highlighting the mixed messages students receive from faculty about AI use.
The HEPI report pointed out "persistent digital divides" in AI competency, with male students and those from more affluent backgrounds being more frequent users. Nearly half of the surveyed students noted prior usage of AI tools during their school years.
In her foreword to the HEPI report, Janice Kay, director of Higher Futures, said it was “a positive sign overall” that students are learning to harness AI, but warned that the findings signal challenges ahead that will need institutional attention.
As technologies like ChatGPT and Google Bard become integral to educational practices, questions surrounding ethical usage and assessment integrity arise. Institutions are encouraged to reassess their evaluation methodologies to adapt to this rapidly evolving landscape.
Source: Noah Wire Services