Professor Fraser Sampson, a former UK Biometrics and Surveillance Camera Commissioner, has explored the implications of artificial intelligence (AI) in policing, particularly in relation to the use of discretionary powers. His insights, shared in a recent article on Biometric Update, centre on the concept of discretion as articulated by Lord Scarman in his 1981 report on the Brixton disorders.

Discretion, as defined by Scarman, encompasses the ability of police officers to tailor their actions to varying circumstances, such as in street searches. Because policing relies on discretionary decision-making, the integration of AI into this framework raises critical questions about the balance between efficiency and the responsible exercise of power.

Sampson notes that while AI has the potential to analyse data and detect patterns with remarkable speed, it also presents unique challenges when applied to law enforcement. He emphasises that AI systems could enhance operational efficiency, yet their involvement in decision-making processes demands scrutiny. The distinction between using AI as a decision-making tool and using it merely as a decision-support mechanism is vital; the former raises ethical questions about how much authority should be conferred on AI systems in high-stakes scenarios.

Maximising the return on public investment in AI is also a pressing concern. Sampson argues that if AI can enhance policing efforts, particularly in areas like child safety and the prevention of sexual exploitation, the question becomes how to manage the discretionary powers granted to these systems. He calls for a comprehensive framework to determine the limits of AI's role in policing, advocating a balanced approach that keeps human judgement at the forefront.

Sampson draws on fictional references, such as the film Demolition Man, to highlight concerns about over-reliance on AI in policing. In the film, AI fails to respond effectively to a complex human threat, underscoring the inherent unpredictability of real-world scenarios. He therefore argues that a practical strategy is needed to preserve human discretion alongside technological advances.

One potential benefit he identifies for AI in policing is its capacity to offer objective assessments regarding the proportionality and justification of proposed actions. Furthermore, while concerns about AI bias in enforcement remain prevalent, Sampson suggests that AI could potentially mitigate human biases, facilitating a more equitable approach to policing.

Accountability also emerges as a crucial theme. Sampson questions how citizens can voice grievances about AI-influenced decisions made by police, and whether AI technologies should be held accountable in cases of misuse. This points to a broader discourse on trust between law enforcement and communities, especially as AI becomes more deeply embedded in policing.

Reflecting on the findings of a CENTRIC survey concerning public attitudes towards police use of AI, Sampson indicates that there are significant concerns about the potential for the police to deflect responsibility onto AI systems in the event of failures. This aspect of public sentiment may reveal deeper anxieties regarding both technological interventions in policing and the effectiveness of police practices overall.

Sampson's analysis concludes by recognising the enormous potential that AI holds in transforming various aspects of policing, such as forensics, biometrics, and crime mapping. However, he stresses that the integration of AI into discretionary policing requires careful contemplation to ensure trust and accountability are upheld within communities.

By considering the substantial benefits alongside the emerging challenges, Sampson advocates for a dual approach that prioritises both technology's capabilities and the fundamental principles of human judgement and discretion.

Source: Noah Wire Services