Arizona State University’s new AI learning app has triggered unease among some faculty members who say their teaching materials were used without warning or consultation. The web app, Atomic, lets users pay $5 a month to generate personalised study modules from ASU course content, assembling readings, quizzes and video snippets into what the university says is a flexible learning product for people beyond its enrolled students.

Several professors said they only discovered the tool after their own lectures, slides and assignments had already been pulled into it. Chris Hanlon, a literature professor, said he was startled to see his likeness appear inside a module generated from his own material and described the result as badly distorted. He also found that a reference to literary critic Cleanth Brooks had been garbled, another sign that the system can misread and repurpose academic content in ways that may be misleading.

The controversy goes beyond embarrassment over mislabelled clips. Faculty members say it raises wider questions about ownership, consent and control at a university that increasingly promotes artificial intelligence as part of its educational mission. ASU’s intellectual property policy gives the Board of Regents ownership of most instructional material created by employees in the course of their work, while content placed on the university’s Canvas system can be redistributed under the platform’s terms. That legal backdrop leaves open a sensitive issue: whether professors understand how far their material can travel once it enters university systems.

Michael Ostling, a religious studies professor who attended a recent faculty question-and-answer session with president Michael Crow, said Crow described Atomic as an early experiment that was not yet ready for broad use. Ostling and others are worried that stripped-down course fragments could be detached from the context that makes classroom teaching responsible, especially in areas such as race, gender and sexuality. They also fear bad actors could use the system to manufacture misleading “evidence” about what professors teach, echoing earlier controversies over online syllabus platforms being used to target academics.

The launch also fits into ASU’s broader embrace of AI. In recent months, the university has expanded its collaboration with OpenAI and offered ChatGPT Edu to students, staff and faculty, presenting artificial intelligence as a driver of student success and research productivity. That enthusiasm has not erased concern among academics, however. Across higher education, faculty debates over AI have increasingly centred on academic integrity, data privacy and whether institutions are moving faster than their own safeguards.

ASU said only that a pilot began in April and that it is testing how existing digital content can be reused to reach learners outside degree programmes. But for the professors whose work is already being scraped into Atomic, the deeper issue is not whether AI can personalise education. It is who gets to decide how teaching is repackaged, and whether the people whose labour made that content possible have any meaningful say in the process.


Source: Noah Wire Services