The UK government is examining the intersection of copyright law and artificial intelligence (AI) training through a consultation launched in December 2024. The consultation seeks to address concerns about the rights of creators, including artists, writers, and composers, when their work is used to train AI models. However, policy experts have warned that barring major tech companies, such as OpenAI, Google, and Meta, from using copyrighted materials could degrade the quality of AI models and harm the broader economy.
The proposed system would allow AI developers to use online content for training unless rights holders explicitly opt out. The proposal has met significant pushback from organisations representing the creative sector, which argue that it shifts the burden onto creators to prevent their works from being used, rather than requiring AI developers to seek consent beforehand. Technology companies have also expressed reservations, contending that an opt-out model could make it difficult to determine which content they may lawfully use, impede commercial applications, and impose unnecessary transparency requirements.
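The consultation does not prescribe a technical mechanism for opting out, but robots.txt is one existing machine-readable convention that web crawlers already check. As an illustration only, the minimal Python sketch below tests whether a site's robots.txt reserves a page against a named crawler; the user-agent token, URLs, and function name are hypothetical placeholders chosen for this example, not anything specified in the government's proposal.

```python
# Illustrative sketch only: checks whether a site's robots.txt permits a
# named AI crawler to fetch a page. The UK consultation does not mandate
# this mechanism; robots.txt is simply one existing opt-out convention.
from urllib import robotparser

# "GPTBot" is OpenAI's published crawler token; its use here is a
# hypothetical stand-in for any AI training crawler.
AI_USER_AGENT = "GPTBot"

def may_fetch_for_training(page_url: str, robots_url: str) -> bool:
    """Return True if robots.txt does not reserve the page against the crawler."""
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(AI_USER_AGENT, page_url)

if __name__ == "__main__":
    # Placeholder domain for illustration.
    allowed = may_fetch_for_training(
        "https://example.com/article.html",
        "https://example.com/robots.txt",
    )
    print("No opt-out found, fetching permitted:", allowed)
```

Under an opt-out regime, the open question is whether such a signal would be legally binding on AI developers rather than merely advisory, which is part of what the consultation is weighing.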
During a recent webinar hosted by the Center for Data Innovation think tank, experts discussed the implications of the proposed rules. Benjamin White, founder of copyright reform advocacy group Knowledge Rights 21, said such restrictions could harm not only the creative sectors but also the scientific community. "The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well," White noted, emphasising that copyright protections influence investment across many fields. He added that existing copyright exceptions do not allow universities to share data effectively, or NHS trusts to circulate training data based on copyrighted materials.
Bertin Martens, a senior fellow at economic think tank Bruegel, highlighted a contradiction in the media industries' position. "They’re all using these models to increase their own productivity...by withholding their data for training, they reduce the quality," Martens argued, warning that starving models of robust data could produce more biased AI systems. He also noted that it is impractical for big AI companies to obtain licences from a wide array of small publishers, because the transaction costs would be prohibitive.
Julia Willemyns, co-founder of the tech policy research project UK Day One, underscored the impracticality of the proposed opt-out scheme, warning that less restrictive jurisdictions could continue to train AI models on the same content, leaving the UK lagging behind in technological advancement. "This slows down technology diffusion and has negative productivity effects," she said. She also pointed out that artists are unlikely to earn significant income from AI licensing deals, suggesting that the gains for creators would be marginal at best.
The discussion turned to a recent incident involving AI-generated art mimicking the style of Studio Ghibli, the renowned Japanese animation studio, which sparked debate over the boundaries of artistic appropriation in the AI space. Willemyns observed that the episode raised Studio Ghibli's visibility and drew more people to its films, suggesting the argument against AI-generated mimicry is weaker than it first appears. Martens echoed this sentiment, arguing that the competition created by such AI outputs could ultimately benefit popular original works.
The panel concluded that while AI models should not directly reproduce training data, training on publicly available content should remain permissible to foster innovation. The experts suggested that a clear framework allowing for text and data mining exemptions within UK copyright law could streamline the regulatory environment, avoiding the complications that might arise from a fragmented approach.
Source: Noah Wire Services