Programming instructors are being forced into rapid, improvised course redesign as generative AI becomes part of everyday student workflows, from web search to code editors and word processors. In an article for O'Reilly Radar, Sam Lau of UC San Diego and his co-authors describe this as a new kind of teaching labour: educators are not simply setting rules for AI use, but trying to reshape assignments, assessments and classroom practice around tools they do not control. Their research, which they say will be presented at CHI 2026, is based on interviews with 13 undergraduate computing instructors and a survey of 169 faculty members.
The authors call the result "emergency pedagogical design", borrowing from the idea of emergency remote teaching during the pandemic. Their point is that this is not the same as carefully planned AI integration. Instructors are reacting to a fast-changing environment, often after courses have already been built, and they are doing so indirectly, through policies, assignments and course infrastructure rather than by changing the tools students actually use.
The study suggests that the biggest obstacle is not opposition to AI, but fragmentation. According to the survey, 81% of instructors said they were open or very open to using generative AI in teaching, yet only 28% said the same of their colleagues. That gap leaves many educators working alone. The result, the paper argues, is a patchwork of course-level policies that can look, from the student side, like a "wild west" of conflicting rules. The authors also found concern that unequal access to paid AI tools could deepen learning disparities.
Instructors are also struggling to assess what students can do unaided. Several interviewees said students could perform well on take-home work but falter when asked to demonstrate basic skills under supervision. One reported that roughly a third of a 450-student class scored zero on a short coding task, despite acceptable assignment grades. Others responded by shifting some credit to oral check-ins, written explanations or custom chatbot support, but those approaches created fresh problems around staffing and consistent marking.
Resources appear to be the decisive constraint. More than half of surveyed instructors said they lacked the resources to implement generative AI effectively, and 62% said they did not have enough time. The burden was heavier at minority-serving institutions, where faculty were more likely to report inadequate resources and heavier teaching loads. The paper argues that this matters because the most ambitious redesigns were concentrated among the most privileged instructors: those with lighter workloads, extra funding or large course teams. The authors say that if universities want AI-era teaching to be fair, they will need training, evidence and funding, not just enthusiasm.
Source: Noah Wire Services