Last year’s suggestion that Scottish law schools are failing to engage with generative artificial intelligence has prompted a coordinated rebuttal from the Committee of Heads of Scottish Law Schools, which says the picture painted was partial and misleading. According to organisers of the Global Legal‑Tech Competition held in Edinburgh in February 2025, Scottish institutions have been active in hosting and participating in events that connect legal education with emerging technology. Industry surveys of legal firms in Scotland, meanwhile, show the wider profession is cautious but increasingly interested in trialling AI, underscoring a shifting, if measured, landscape.

The committee argues that many Scottish schools prefer to teach principles and ethical use rather than train students on individual commercial products that may not survive or become mainstream. That stance reflects concerns evident across the sector: a large proportion of Scottish law firms have not yet adopted AI tools, and most remain worried about bias and reliability, even as a minority are comfortable using AI to support decision‑making.

National assessment authorities have also moved to set clear expectations. The Scottish Qualifications Authority published guidance and a position statement for the 2025–26 academic session defining acceptable and unacceptable uses of generative AI in assessments and stressing that learners must still be able to demonstrate required knowledge and skills. These policy signals inform how universities design coursework and examinations.

Law schools describe a mixed approach: embedding critical engagement with AI across curricula, running practical exercises that require students to interrogate and critique outputs, and collaborating with practitioners for up‑to‑date workplace perspectives. Edinburgh’s event programme and guest lectures from practising firms were cited as examples of how students are exposed to both the potential and limits of legal tech.

Practical skills teaching has shifted partly in response to these concerns about academic integrity and professional readiness. Several institutions report increased use of assessments that prioritise oral presentations, portfolios and supervised practical tasks designed to evaluate individual reasoning and ethical judgement rather than reliance on automated outputs. At the same time, some universities are experimenting with chatbot services to provide routine student support.

Postgraduate and specialist programmes are expanding to meet demand for deeper technical and regulatory expertise. New and forthcoming courses, such as a postgraduate LLM in Law, Technology and Innovation, and other degrees combining law with computing subjects, indicate growing curricular investment in the intersection of law and AI. Universities also point to collaborative research projects on responsible AI involving ethicists and data scientists.

Regulation, environmental impact and professional standards remain central to academic debate. The SQA’s assessment guidance and broader institutional policies aim to preserve the integrity of certification while allowing considered engagement with AI; environmental scholars within law faculties have urged careful assessment of the carbon and resource costs of widespread model use. Those competing priorities shape a cautious pedagogy that privileges sustainability and ethics.

Taken together, the committee says, Scottish law schools are neither ignoring AI nor rushing to adopt every commercial tool. They portray a strategy oriented towards teaching legal reasoning, ethical use and regulatory understanding, while collaborating with the profession and national bodies to ensure students are prepared for practice as the technology evolves. Industry surveys and institutional programme developments suggest the profession and academia are moving forward, albeit at a deliberate pace.

Source: Noah Wire Services