University College London (UCL) is set to overhaul its law school assessments so that more than half of them are resistant to artificial intelligence (AI) assistance. According to the law school, the initiative aims to uphold the integrity and trustworthiness of the degrees it awards in the face of rapidly evolving AI tools that threaten to undermine traditional methods of evaluation.

In a detailed paper published by UCL, academic leaders stress the imperative of creating “secure assessments”: evaluations in which AI cannot substitute for the skills or knowledge being measured. The distinction is critical in legal education, where competencies such as critical thinking and ethical judgment are paramount. The paper argues that assessments must reflect the faculty’s core educational mission: to offer transformative learning experiences that produce highly skilled, internationally respected graduates.

UCL already has university-wide regulations that prohibit using AI to create or modify assessed content unless explicitly permitted for educational reasons. The law school’s move to actively design AI-resistant assessments goes further, in part by reverting to pre-pandemic practice, in which in-person examinations were more prevalent than remotely submitted coursework. The shift responds both to the proliferation of AI tools capable of passing professional competency tests, such as the Watson Glaser critical-thinking test and contract law exams, and to the flood of superficially fluent AI-generated content, which the law school dismisses as “AI slop”: text that lacks the depth necessary for genuine academic inquiry.

UCL’s approach reflects a broader concern within academia about the impact of AI technologies. Institutions worldwide are grappling with how to integrate AI into their educational frameworks while maintaining standards of rigour and integrity; Victoria University of Wellington, for example, recently reintroduced handwritten exams in direct response to the challenges posed by AI. UCL’s paper argues that educational institutions must not merely react to the marketing agendas of technology firms but should assertively steer their own technological integration to fulfil their educational missions.

In the legal sector, AI integration is already making waves. The Solicitors Regulation Authority’s recent approval of Garfield.Law, an entirely AI-driven law firm, illustrates regulators’ acceptance of AI’s growing role, and judges now receive updated guidance on the use of AI tools. This active engagement with technology contrasts with the caution urged in educational settings.

Amid this shifting landscape, UCL is not dismissing the practical benefits AI tools can offer students. The university has produced a framework outlining when AI can be used responsibly in assessments, dividing them into three categories: assessments where AI tools are entirely prohibited, those where AI can assist but not replace human effort, and those where AI plays a fundamental role. Such guidance fosters accountability and transparency; the university insists that students openly acknowledge any AI contributions to their work.

In support of this initiative, UCL emphasizes skills that are crucial in the legal profession, such as the ability to think independently and to engage creatively with information. The law school’s position is that while AI can serve as a resource for research and drafting, it cannot replicate the nuanced and ethical judgment required in legal practice.

As legal educators adapt to the emerging realities of AI, they face the challenge of striking a balance: preparing students for a future in which legal professionals will undoubtedly use AI tools, while ensuring that the essence of legal education, the cultivation of rigorous intellectual capability, remains intact. With many universities revising their honour codes and academic integrity policies to accommodate these changes, UCL’s proactive measures could serve as a model for other institutions facing similar dilemmas.

Reflecting these trends, the conversation around AI in legal education is evolving, raising vital questions about how prepared future lawyers will be to practise in a landscape increasingly shaped by technology. As UCL charts its course to preserve the integrity of its assessments, the broader discourse on AI’s role in academic practice continues to unfold, demanding careful consideration and strategic foresight.


Source: Noah Wire Services