Politics

South Africa withdraws draft AI policy after discovery of fabricated sources

South African Minister of Communications and Digital Technologies Solly Malatsi withdrew the draft National Artificial Intelligence Policy following the discovery of at least six fictitious, AI-generated sources. The fabricated citations, described as AI hallucinations, were included without proper verification. Tyronne McCrindle of Article One highlighted the breach of public trust and the dangers of relying on digital technologies without human oversight. The incident underscores the need for public engagement and accurate information in policy-making regarding disruptive technologies.

Taylor Swift files trademarks for voice and likeness to combat AI misuse

TAS Rights Management filed three new trademark applications with the U.S. Patent & Trademark Office for Taylor Swift. The filings cover specific voice soundbites, including 'Hey, it's Taylor Swift' and 'Hey, it's Taylor,' and a well-known image of the artist holding a pink guitar. This legal move aims to protect her public image from unauthorized AI-generated content, following similar actions by Matthew McConaughey. Experts suggest this strategy establishes a protective infrastructure against future digital exploitation.

Zine creators resist AI influence in underground publishing

Self-published zine creators and publishers are expressing concern over the integration of artificial intelligence into their traditionally handmade art form. While some artists like Jesse Pimenta and Steve Simkins have experimented with AI tools for layout and coding, others, including Rachel Goldfinger and Maddie Marshall, are producing anti-AI zines to protest the technology. Publishers such as Ione Gamble of Polyester and MagCulture's Jeremy Leslie note a divide, with some rejecting AI-generated content entirely while others remain open to innovative AI-assisted works. The debate highlights tensions between efficiency and the grassroots, human-centric values of zine culture.

YouTube expands AI deepfake detection to Hollywood

YouTube has extended its likeness detection tool to Hollywood studios, talent agencies, and represented artists. The system identifies AI-generated videos replicating a person's face or identity without permission. Unlike Content ID, it scans for AI likenesses rather than copyrighted material. Participants verify identity to enroll, and flagged content undergoes a review process considering context before potential removal. This expansion follows earlier rollouts to creators and journalists, aiming to address concerns over impersonation and unauthorized commercial use as generative AI tools improve.

Google introduces AI content detection methods to protect search quality

Google has released a January 2026 report detailing its methods for detecting AI-generated content, utilizing watermarking and AI classifiers. The search engine aims to reduce 'Scaled Content Abuse' and low-value material to manage indexing costs and maintain user trust. Law firms are advised to avoid publishing unedited AI content to prevent traffic declines or deindexing, as Google prioritizes original, high-effort material that demonstrates expertise.

Spotify introduces AI credits in song info

Spotify has launched a beta feature displaying AI credits in song information panels, effective from April 16. Partnering with DistroKid, the platform uses the DDEX metadata standard to show voluntary AI disclosure flags alongside songwriter and producer credits. The system relies on creators to declare AI usage during upload and does not automatically detect AI content. The transparency initiative aims to inform listeners about AI involvement in tracks without attempting to police or remove content.

South Africa withdraws draft AI policy after fake sources scandal

South Africa has withdrawn its first draft national artificial intelligence policy following the discovery of fictitious sources in its reference list. Communications Minister Solly Malatsi attributed the error to unverified AI-generated citations. The policy, intended to guide AI development and address ethical risks across the continent, included proposals for new regulatory bodies and private sector incentives. The incident raises concerns about the unchecked use of AI in official government work.

Study finds 17.6% of new websites generated by AI

Researchers from Imperial College London, Stanford University, and the Internet Archive analysed websites published between 2022 and 2025 using the Wayback Machine. The study identified that 17.6% of sites launched in this period were fully generated by AI, while 35.3% received AI assistance. This surge correlates with the 2022 launch of ChatGPT. Although public concern regarding misinformation and language standardisation exists, the study found no statistically significant evidence of a general decline in information accuracy or writing uniformity.

Val Kilmer estate approves AI recreation for film As Deep as the Grave

The estate of actor Val Kilmer has granted approval for the use of artificial intelligence to recreate his likeness and voice in the film As Deep as the Grave. This project utilizes technology to simulate Kilmer's performance despite his physical limitations and death, marking a significant development in the industry's approach to synthetic performance. The case has ignited a broader debate regarding the ethics of digital resurrection, creative ownership, and the future of acting as an actor's identity becomes a licensable digital asset.

Raju Narisetti argues trust is critical infrastructure for the future of AI

Raju Narisetti, a veteran journalist and global media leader, discusses the future of AI regulation and its impact on open knowledge. Speaking on the 'Regulating AI' podcast, he warns that while AI lowers the cost of information creation, it also lowers the cost of misinformation, risking a world where plausibility triumphs over proof. Narisetti emphasises that trust must be treated as infrastructure rather than a byproduct. He advocates for multilingual design to address language equity, noting that 7,000 languages are spoken but only 10 dominate the internet. He argues that emerging economies, particularly India, must be co-creators in AI development rather than just data sources. Narisetti states that by 2030, 'truth with receipts' will be more valuable than commodity content, urging leaders to build systems where errors can be traced and corrections seen.

Stanford and Imperial College study finds one third of new websites are AI-generated

A joint study by Stanford University and Imperial College London reveals that approximately 35% of new websites created by mid-2025 were generated or assisted by artificial intelligence, up from near zero before late 2022. Researchers found AI-generated content exhibits 33% higher similarity and artificially positive sentiment compared to human text. While no significant rise in misinformation was detected, experts warn that reduced content diversity could negatively impact society, recommending verification procedures and search engine adjustments to mitigate risks.

Google employees urge CEO to halt Pentagon classified AI deal

Over 600 Google employees from AI and Cloud divisions, including DeepMind staff, signed a letter to CEO Sundar Pichai urging the company to reject a new agreement granting the US Department of Defense access to Gemini AI models for classified military work. The deal, an amendment to a $200 million contract, allows deployment on classified networks for mission planning and weapons targeting. Critics fear the loss of oversight could cause irreparable damage to Google's reputation and violate ethical boundaries regarding autonomous weaponry and surveillance, echoing the 2018 Project Maven revolt.
