Politics

EDPB issues opinion on GDPR compliance for AI model training

The European Data Protection Board (EDPB) published an opinion outlining GDPR requirements for developing and using artificial intelligence models. The guidance addresses data minimization, transparency, and the criteria for determining whether an AI model is anonymous. It details the three-step test for establishing legitimate interest as a legal basis for processing personal data during training and deployment. Furthermore, the opinion explains how unlawful initial data processing at the development stage affects the lawfulness of subsequent processing, whether by the same controller or a third party, emphasizing the need for risk-reduction measures and proper documentation.

AI visibility trackers are breaking analytics and marketing strategies

Jan-Willem Bobbink warns that AI visibility trackers are causing misreporting and resource misallocation by triggering self-referential loops known as the 'ouroboros effect'. When tools fetch data to report visibility, they artificially inflate metrics, creating false positives that mislead budget decisions. The article advises treating log file data with skepticism and focusing on competitor-relative mentions rather than total fetches until cleaner tracking methods are developed.
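To make the advice concrete, here is a minimal sketch of the two mitigations, assuming hypothetical tracker user-agent substrings and pre-aggregated mention counts; neither the hint list nor the data shapes come from the article itself.

```python
# Sketch: discard log hits that look like AI visibility trackers, then
# report competitor-relative mention share rather than raw fetch totals.
# TRACKER_HINTS and the brand counts below are illustrative assumptions.
TRACKER_HINTS = ("visibility-tracker", "brand-monitor", "llm-rank")  # hypothetical

def is_tracker_fetch(user_agent: str) -> bool:
    """Heuristically flag fetches generated by visibility tools themselves."""
    ua = user_agent.lower()
    return any(hint in ua for hint in TRACKER_HINTS)

def relative_share(mentions: dict[str, int], brand: str) -> float:
    """Mention share against all tracked brands; inflation that hits every
    brand equally cancels out, unlike absolute fetch counts."""
    total = sum(mentions.values())
    return mentions.get(brand, 0) / total if total else 0.0

# Counts per brand after tracker fetches have been filtered out.
clean_counts = {"our-brand": 120, "rival-a": 300, "rival-b": 80}
print(f"our-brand share: {relative_share(clean_counts, 'our-brand'):.1%}")
```

The relative-share framing is the point: a self-referential loop that inflates every brand's fetch counts roughly equally leaves the ratio between them largely intact.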

United Nations warns AI in advertising risks fuelling information crisis

The United Nations, via the Department of Global Communications and the Conscious Advertising Network, warns that unchecked AI adoption in advertising could deepen a global information integrity crisis. With global ad spending exceeding $1 trillion, the brief highlights risks including accelerated disinformation, hate speech, and threats to independent journalism. UN Senior Adviser Charlotte Scaddan and Harriet Kingaby of the Conscious Advertising Network emphasise that without guardrails, AI risks undermining the digital environments brands depend on. The briefing calls for aligned governance frameworks, greater transparency in AI supply chains, and improved media buying visibility to protect information ecosystems and business returns.

Queer storytellers in Nigeria resist algorithmic erasure through independent archives

Pamela Adie discusses how queer filmmakers in Nigeria face censorship and police raids that keep their stories out of public datasets. This invisibility risks algorithmic erasure as AI systems train on incomplete data. Initiatives such as EhTv Network are building independent archives to preserve these narratives and ensure they are not lost to future knowledge systems.

Thales report reveals bad bots comprise 40% of global internet traffic

According to the Thales 2026 Bad Bot Report, automated traffic accounted for 53% of all internet activity in 2025, with malicious bad bots alone making up 40% of total traffic. The United States faced the highest share of attacks at 59%, followed by Australia, the United Kingdom, and France. Financial services was the most targeted industry, absorbing 24% of all bot attacks. AI-driven bot activity increased more than tenfold in 2025, with daily blocked requests rising from 2 million to 25 million. The report highlights that AI agents are blurring detection boundaries by mimicking legitimate human behavior, making it difficult to distinguish benign automation from malicious intent. Consequently, organizations are shifting from blocking bots outright to managing and verifying AI access through cryptographic headers and policy enforcement.
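As a rough illustration of what header-based verification could look like, the sketch below checks an Ed25519 signature against a registry of published agent keys. The key registry and header wiring are assumptions for illustration; production systems would follow a standard such as HTTP Message Signatures (RFC 9421) rather than this simplified scheme.

```python
# Sketch: verify that a request's signature header was produced by a
# registered AI agent's key. KNOWN_AGENT_KEYS is a hypothetical registry;
# real deployments would build on HTTP Message Signatures (RFC 9421).
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical registry mapping a key ID to an agent's published public key.
KNOWN_AGENT_KEYS: dict[str, Ed25519PublicKey] = {}

def verify_agent(key_id: str, signed_bytes: bytes, signature_b64: str) -> bool:
    """True only if the request was signed by a key we recognise."""
    public_key = KNOWN_AGENT_KEYS.get(key_id)
    if public_key is None:
        return False  # unknown agent: apply the normal bot policy instead
    try:
        public_key.verify(base64.b64decode(signature_b64), signed_bytes)
        return True
    except InvalidSignature:
        return False
```

This reflects the shift the report describes: rather than asking "is this a bot?", the server asks "which agent is this, and what is it allowed to do?" and enforces policy per verified identity.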

China launches months-long campaign against AI misuse

The Cyberspace Administration of China (CAC) has initiated the 2026 'Qinglang' campaign, a months-long enforcement operation targeting AI-enabled fraud, deepfakes, disinformation, and illegal applications violating privacy or intellectual property rights. Coordinated with the Ministry of Public Security, the campaign addresses issues such as voice-cloning scams, unfiled generative AI services, and non-compliant training data. This action occurs against a backdrop of expanded domestic regulations and follows a White House accusation of industrial-scale AI theft by Chinese entities. Violations may result in administrative penalties, service suspensions, or criminal referrals.

Italy's AGCM closes probes into DeepSeek, Mistral, and Nova AI over hallucination transparency

Italy's competition authority, the AGCM, has closed investigations into three AI providers (DeepSeek, Mistral AI, and Nova AI) after they accepted binding commitments to improve user warnings regarding AI hallucinations. The companies agreed to display prominent, contextual disclaimers within chat interfaces and provide Italian translations of disclosures. Failure to implement the commitments within the 120-day window could result in fines of up to approximately $11.6 million. The resolution sets a precedent for consumer-protection obligations in AI services.

AlgorithmWatch calls for ban on non-consensual sexualized deepfakes in AI Act

AlgorithmWatch recommends implementing a ban on AI systems capable of creating non-consensual sexualized deepfakes within the EU AI Act overhaul. The organisation argues that generative AI is increasingly used to commit sexualized violence against women and girls, causing severe psychological distress and silencing victims in public debates. While the Digital Services Act addresses content dissemination, AlgorithmWatch insists on stricter liability for AI companies and platforms, mandatory consent requests, and clearer definitions of consent to close legal gaps and hold perpetrators accountable.

South Africa withdraws AI policy after discovery of fabricated research references

South Africa's Department of Communications and Digital Technologies withdrew its Draft National Artificial Intelligence Policy 16 days after publication following the discovery of fabricated academic references. The errors, attributed to generative AI hallucination without proper human verification, compromised the document's integrity. Communications Minister Solly Malatsi acknowledged the failure of oversight. The incident highlights risks to epistemic and information integrity in public policy production, prompting calls for stricter accountability, transparency, and explainability in governmental use of AI.

Chinese regulators penalise ByteDance apps and website over AI labeling breaches

Chinese internet regulators have penalised the Jianying and Maoxiang apps and the Jimeng AI website, all operated by ByteDance, for failing to comply with rules requiring clear labeling of AI-generated and synthetic content. The Cyberspace Administration of China directed local offices to summon the entities, ordering corrections, issuing warnings, and penalising responsible personnel. The action underscores authorities' push to tighten oversight of emerging AI technology.

South Africa withdraws draft AI policy after discovery of fabricated sources

South African Minister of Communications and Digital Technologies Solly Malatsi withdrew the draft National Artificial Intelligence Policy following the discovery of at least six AI-generated, fictitious sources. The fabricated citations, described as AI hallucinations, were published without proper verification. Tyronne McCrindle of Article One highlighted the breach of public trust and the dangers of relying on digital technologies without human oversight. The incident underscores the need for public engagement and accurate information when making policy on disruptive technologies.
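One lightweight safeguard against this failure mode is machine-checking citations before publication. The sketch below queries the public Crossref API to flag DOIs that do not resolve; it assumes references carry DOIs, which many policy citations will not, so it complements rather than replaces human review. The DOI strings are placeholders.

```python
# Sketch: flag references whose DOIs Crossref cannot resolve.
# Only catches DOI-bearing citations; books, reports, and bare URLs
# still need manual verification.
import requests

def doi_resolves(doi: str) -> bool:
    """True if the public Crossref API knows this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

dois = ["10.1000/placeholder-real", "10.0000/clearly-fabricated"]  # placeholders
for doi in dois:
    if not doi_resolves(doi):
        print(f"Unresolvable DOI, verify by hand: {doi}")
```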
