Students increasingly rely on artificial intelligence (AI) tools such as ChatGPT to assist with their coursework, prompting educators to reconsider teaching methods and assessment formats. Melkior Ornik, an assistant professor in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, has conducted a study examining how well AI performs in an engineering course and exploring strategies for integrating it into education.

Over the past two years, concerns have grown among academics regarding students’ use of AI models to complete assignments. To assess whether students could leverage AI to pass an engineering course without prior knowledge, Ornik and his PhD student, Gokul Puthumanaillam, ran a pilot study in a third-year undergraduate course on the mathematics of autonomous systems. They evaluated the performance of the free version of OpenAI’s ChatGPT (GPT-4) on course assignments and exams.

Ornik shared their approach: “What we said is, 'Okay let's assume that indeed the students are, or at least some students are, trying to get an amazing grade or trying to get an A without any knowledge whatsoever. Could they do that?'”

The results, detailed in a preprint paper titled "The Lazy Student's Dream: ChatGPT Passing an Engineering Course on Its Own," indicated that ChatGPT performed well overall, earning roughly a low B. Its effectiveness varied with assignment type, however. On closed-form problems such as multiple-choice questions and straightforward calculations, the AI excelled, achieving near-perfect scores. Ornik remarked, “It got almost 100 percent on those sorts of questions.”

In contrast, performance dropped sharply on tasks demanding deeper critical thinking and open-ended problem-solving, such as design projects that called for explanation, exploration of solution methods, and visual presentation of data; on these, the AI earned roughly a D. Ornik explained, “Questions that were more like ‘hey, do something, try to think of how to solve this problem and then write about the possibilities for solving this problem and then show us some graphs that show whether your method works or doesn't,’ it was significantly worse there.”

These findings have implications for how educators might adapt their teaching. Ornik compared the arrival of AI in education to the introduction of calculators in classrooms decades ago. He outlined the challenge: “Before calculators, people would do these trigonometric functions... Then of course that kind of got out of fashion and people stopped teaching students how to use this tool because now a bigger beast came to town.” The question he poses is which aspects of education remain worth teaching in the AI era, and whether the focus should shift toward higher-level cognitive skills that are less amenable to AI solutions.

Discussions with colleagues in the University of Illinois’ College of Education highlighted parallels with fundamental skills taught to younger students, such as mental arithmetic and multiplication tables. Although digital tools can perform these tasks, practising the skills still contributes to brain development and future learning capacity. Ornik noted, “It is still good for them just in terms of their future learning capability and future cognitive capabilities to teach that.”

Regarding approaches to AI in education, Ornik identified three potential strategies. The first treats AI as an adversary, designing assessments that prevent its use through oral exams or AI-resistant assignments. The second views AI as a friend, encouraging students to use AI tools responsibly. The third approach, which Ornik favours, treats “AI as a fact”: an accepted reality that students will inevitably use in academic and professional contexts. This approach emphasises teaching critical thinking about AI’s outputs, ensuring students verify information rather than accepting it uncritically. “Students tend to over-trust computational tools and we should really be spending our time saying, 'hey you should use AI when it makes sense but you should also be sure that whatever it tells you is correct,'” Ornik explained.

Ornik also acknowledged uncertainties about AI’s sustainability and broader issues such as data privacy and copyright. He observed that the current AI landscape resembles the dot-com bubble era of around 2000, with ubiquitous AI branding extending even to unlikely products such as barbecue grills whose underlying technology may differ little from what has existed for decades.

Looking ahead, Ornik and colleagues are preparing to extend their research to multiple engineering courses, exploring how to adapt course content and assessments in an AI-prevalent environment. One aim is to create a “critical thinking module” to educate students about AI’s capabilities and limitations. This would include examples where ChatGPT has made significant errors relevant to their coursework. Another goal is to identify which course materials remain valuable and which may need adjustment or removal.

“Quite likely there will be courses that need to be approached in different ways and sometimes the material will be worth saving but we'll just change the assignments,” Ornik said. “And sometimes maybe the thinking is, 'hey, should we actually even be teaching this anymore?'”

The study and the ongoing efforts at the University of Illinois illustrate how higher education is adjusting as AI tools become a standard component of students’ academic experience.

Source: Noah Wire Services