Professors at several American universities are reviving oral examinations as a frontline response to the rapid spread of generative artificial intelligence in student work, arguing that face-to-face questioning reveals understanding that polished, AI-assisted submissions conceal. According to The Associated Press, instructors from Cornell to the University of Pennsylvania have begun requiring students to defend written work verbally to demonstrate genuine comprehension. [2]
Educators describe a recurring pattern: take-home essays and problem sets increasingly arrive flawless in form but thin on demonstrable reasoning, raising doubts about whether students did the thinking themselves. A survey cited by the New York Sun found widespread student use of AI tools, a finding that has intensified faculty interest in assessment methods that cannot be outsourced to a chatbot. [2][4]
At Cornell, biomedical engineering faculty have introduced 20-minute Socratic-style oral defences following submitted problem sets, reallocating grading labour from papers to conversations and using teaching assistants to handle larger classes. The move is presented not merely as an anti-cheating measure but as a way to restore habits of reflection and explanation that instructors say are eroding. [2]
Universities are experimenting across disciplines and formats. At the University of Pennsylvania, seminar instructors now combine written research with live questioning; at New York University, some courses use cold-calling, mandatory presentations and redesigned office hours to put students on the spot. These practices aim to make students articulate their own reasoning rather than rely on machine-produced prose. [2][4]
Rather than simply reviving a centuries-old format, some academics are incorporating AI into the oral examination itself. Faculty at NYU have trialled AI-driven speaking agents that conduct exams, probe students about group projects and adapt follow-up prompts based on responses, while faculty members grade with AI assistance. University of Auckland researchers argue that such interactive oral assessments are among the most authentic ways to measure knowledge in an era when written work can be generated automatically. [2][3]
Not all instructors embrace oral assessment without reservation. Critics warn that such exams can disadvantage students with social anxiety, communication disorders or language barriers, and some institutions are offering training programmes and flexible scheduling to reduce stress and ensure fairness. Proponents counter that one-on-one conversations can surface the strengths of quieter students who might otherwise disappear in large lectures. [2]
The debate also turns on scalability and faculty workload. Pilot studies during the pandemic and subsequent research projects have explored ways to scale oral testing, including shorter oral interactions, distributed assessment teams and AI-assisted interviews, in an effort to balance rigour with the administrative realities of large undergraduate cohorts. According to The Associated Press and academic commentators, early evidence suggests oral formats can be adapted at scale, though they require sustained institutional support. [2][3]
As colleges refine their response to generative AI, the shift towards spoken assessment underscores a larger pedagogical question: whether schools will merely police academic honesty or redesign learning to prioritise skills that resist automation, such as real-time reasoning, critical questioning and the capacity to explain one’s thinking. The outcome will shape not only how students are tested but what counts as a meaningful education in the AI age. [2][4]
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2]
- Paragraph 2: [2], [4]
- Paragraph 3: [2]
- Paragraph 4: [2], [4]
- Paragraph 5: [2], [3]
- Paragraph 6: [2]
- Paragraph 7: [2], [3]
- Paragraph 8: [2], [4]
Source: Noah Wire Services