Politics

Two officials suspended over botched draft AI policy

The Department of Communications and Digital Technologies suspended two unnamed officials following an investigation into the Draft National Artificial Intelligence Policy. Minister Solly Malatsi withdrew the document after News24 revealed it contained fictitious academic references. Director-General Nonkqubela Jordan-Dyani attributed the error to irresponsible AI tool usage. The department is conducting an internal review and plans consequence management for those found at fault.

Content creator outlines AI workflow and legal limits for 2026

A professional content creator with over 15 years of experience details their integrated AI workflow for 2026, emphasising human oversight in ideation, research, and editing. The article reviews tools including ChatGPT, Claude, Midjourney, and Suno, while addressing legal frameworks such as the EU AI Act and copyright restrictions on fully AI-generated works. The creator warns against publishing unedited AI content due to risks of hallucinations, generic tone, and lack of originality, advocating instead for AI as a productivity assistant rather than a replacement for human expertise.

AI-written books flood the market echoing Orwell's novel-writing machines

Author Laura Beers discusses the rise of AI-generated literature, drawing parallels to George Orwell's dystopian vision in '1984'. The article highlights legal settlements involving Anthropic and Grammarly for copyright infringement and identity misappropriation. Beers notes that thousands of books on Amazon are now written or polished by AI tools like Sudowrite and Squibler. While readers struggle to distinguish AI prose from human writing, Beers expresses skepticism that machines can produce true art without human experience, predicting a future of mass-produced fiction similar to the 'jam and bootlaces' of Orwell's Ministry of Truth.

DigiCert launches Content Trust Manager for AI media

DigiCert has launched Content Trust Manager, a managed service within its DigiCert ONE platform, to verify the origin and integrity of digital media against AI-generated content. The product uses cryptographic signing and the C2PA standard to attach tamper-evident credentials to images and video. Targeting large businesses, media groups, and public sector bodies, the tool supports API, browser, and on-premises deployment to demonstrate responsible AI practices and meet regulatory demands for content authenticity.
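The idea behind tamper-evident credentials can be sketched in a few lines: hash the media bytes, sign the hash, and later re-hash to detect edits. This is an illustrative sketch only, not the DigiCert API; real C2PA signing uses X.509 certificates and CBOR-encoded manifests rather than the HMAC stand-in used here, and all function names are hypothetical.

```python
# Sketch of tamper-evident content credentials: hash the media,
# sign the hash, and verify both later. HMAC stands in for the
# asymmetric signing that C2PA actually specifies.
import hashlib
import hmac

SIGNING_KEY = b"demo-secret-key"  # stand-in for a private signing key


def issue_credential(media: bytes) -> dict:
    """Attach a tamper-evident credential to a piece of media."""
    digest = hashlib.sha256(media).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}


def verify_credential(media: bytes, credential: dict) -> bool:
    """Re-hash the media and check that the signed digest still matches."""
    digest = hashlib.sha256(media).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == credential["sha256"] and hmac.compare_digest(
        expected, credential["signature"]
    )


original = b"original video bytes"
cred = issue_credential(original)
print(verify_credential(original, cred))               # True
print(verify_credential(b"edited video bytes", cred))  # False
```

Any edit to the media changes its hash, so the stored credential no longer verifies; that is the "tamper-evident" property the product description refers to.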

Rod Sims argues Australian news levy leaves too much bargaining power with platforms

Rod Sims, former chair of the Australian Competition and Consumer Commission, supports the government's news bargaining incentive (NBI) exposure draft but criticises its design. He argues the 25% cap on individual deals leaves excessive bargaining power with platforms like Google and Meta, potentially disadvantaging medium and small media businesses. Sims highlights delays in implementation and the need to extend the regime to generative AI companies. He urges the government to finalise the legislation by mid-year to prevent staff layoffs and protect journalism.

Courts may classify AI transcription tools as wiretapping devices under privacy laws

Legal experts warn that companies using AI transcription tools face significant liability under the California Invasion of Privacy Act (CIPA). Recent litigation, including cases against Otter.ai and customer service vendors, argues that AI systems acting as third parties to record and analyze conversations without explicit consent constitute wiretapping. Courts are determining whether these tools qualify as unauthorized third parties capable of reusing data for training. To mitigate risk, organizations must implement clear disclosure, obtain informed consent, and update vendor contracts to prohibit unauthorized data processing.

Supervisor Dom Zanger warns against submitting AI-generated letters as original community opinions

Supervisor Dom Zanger expresses concern regarding a trend in local media where letters to the editor appear to be fully generated by AI tools like ChatGPT and submitted as original personal thoughts. Zanger argues that this practice erodes authenticity, misleads readers, and lowers the value of civic discourse by replacing genuine community voices with synthetic outputs. He calls for transparency, urging contributors to disclose AI usage or write their own imperfect thoughts to maintain trust in public dialogue.

Elon Musk confirms xAI used OpenAI models to train Grok

Elon Musk testified in a California federal court that xAI used distillation techniques on OpenAI models to train its chatbot, Grok. This admission occurred during a lawsuit where Musk accuses OpenAI of abandoning its nonprofit mission. The revelation confirms suspicions that American AI labs routinely distill each other's work. The case involves OpenAI, CEO Sam Altman, and co-founder Greg Brockman, raising questions about intellectual property and industry competition.
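Distillation, the technique Musk described, trains a smaller "student" model to match a "teacher" model's output distribution rather than hard labels. The sketch below is a minimal pure-NumPy illustration of the standard distillation loss (KL divergence over temperature-softened softmax outputs); it is a generic textbook formulation, not xAI's or OpenAI's actual pipeline, and all variable names are hypothetical.

```python
# Minimal sketch of knowledge distillation: the student is penalised
# by the KL divergence between the teacher's and student's softened
# output distributions.
import numpy as np


def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Temperature-softened softmax over a 1-D logit vector."""
    z = logits / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()


def distillation_loss(teacher_logits, student_logits, temperature=2.0) -> float:
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(np.asarray(teacher_logits, float), temperature)  # soft labels
    q = softmax(np.asarray(student_logits, float), temperature)  # student output
    return float(np.sum(p * (np.log(p) - np.log(q))))


teacher = np.array([4.0, 1.0, 0.5])
matched = np.array([4.0, 1.0, 0.5])     # student agrees with teacher
mismatched = np.array([0.5, 1.0, 4.0])  # student disagrees

print(distillation_loss(teacher, matched))     # ~0.0 (distributions match)
print(distillation_loss(teacher, mismatched))  # larger (distributions differ)
```

Minimising this loss over many prompts transfers the teacher's behaviour to the student without access to the teacher's weights, which is why labs can distill from each other's deployed models.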