This summer, stuck behind an unexpected roadblock in Marseille, I followed Waze rather than a friend’s local knowledge and ended up immobile at a construction site. It is a small, everyday irritation, but it is emblematic of a larger question about authority in modern life: when technology and human judgement diverge, whom do we trust? [1]

Two and a half centuries after Immanuel Kant urged his contemporaries to “Sapere aude!” (“Have courage to use your own understanding!”), we face a new potential guardian of the mind. Kant defined enlightenment as “man’s emergence from his self-imposed immaturity”: the condition of being unable to use one’s understanding without guidance from another. As the encyclopaedia entry on Kant’s essay notes, that “other” has historically been priests, monarchs and other claimants of external authority; today it risks being code. [2][1]

The rapid uptake of AI amplifies the risk that convenience will be mistaken for wisdom. A global survey cited in the lead article found widespread recent use of AI, and OpenAI reported that most prompts concern non-work topics, with writing among the most common uses. That shift, from tools that assist with specialist tasks to tools that intervene in personal reflection, choice and expression, raises the question of whether we are outsourcing parts of the reasoning process that historically helped form individual judgement. [1]

Empirical work gives reason for concern. A small study at the Massachusetts Institute of Technology used EEG to show reduced cognitive activity among essay writers who could rely on AI, with participants increasingly copying blocks of text over time. Separately, research reported by Live Science in April 2025 found that large language models can be overconfident and exhibit cognitive biases similar to those of humans, and other studies suggest LLMs often oversimplify or misrepresent scientific findings. These findings point to two dangers: that humans will under-exercise their reasoning, and that the outputs they accept as authoritative may be biased or misleading. [1][6][4]

Behavioural research adds another layer. A study led by Aalto University reported that regular AI use alters self-assessment, making users more likely to overestimate their abilities. In effect, the seduction of effortless answers can both blunt critical faculties and warp confidence, producing a populace more prone to accept machine-generated judgements and less able to interrogate them. [5]

The technical opacity of many AI systems compounds the problem. Leading AI researchers have warned that advanced models may develop internal reasoning processes that elude human understanding, complicating efforts to verify alignment with human values. If we cannot inspect the chain of inference, following an AI’s recommendation becomes less an exercise in reasoned judgement than an act of faith in a black box. Industry statements acknowledging model limitations do not wholly dispel this epistemic unease. [7]

None of this makes AI an enemy of progress. It can accelerate discovery, automate tedious work and augment human capabilities in ways that are profoundly beneficial. The challenge is to cultivate social and institutional habits that preserve the exercise of human reason: education that prioritises critical thinking, interfaces that make AI reasoning transparent and contestable, and cultural norms that treat machine suggestions as prompts for deliberation rather than substitutes for it. As the lead commentary argued, Kant’s enlightenment was not merely a quest for efficiency but for emancipation; the exercise of reason creates agents rather than dependents. [1]

The question before us is not whether to use AI but how to use it without surrendering the capacities that underpin liberal democracy. If we allow convenience to become a new orthodoxy, habitually deferring, in dubio pro machina, to the algorithm whenever doubt arises, we risk trading the messy labour of thought for a smoother but more passive form of subordination. Preserving the Enlightenment project in the age of AI will require deliberate practices that keep human judgement active, institutions that make machine reasoning accountable, and a public culture that prizes inquiry over easy reassurance. [1][2][6][7]

📌 Reference Map:

  • [1] (The Guardian) - Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 7, Paragraph 8
  • [2] (Wikipedia) - Paragraph 2, Paragraph 8
  • [4] (Live Science) - Paragraph 4
  • [5] (Live Science / Aalto University study) - Paragraph 5
  • [6] (Live Science) - Paragraph 4, Paragraph 8
  • [7] (Live Science) - Paragraph 6, Paragraph 8

Source: Noah Wire Services