Politics

Google signs Pentagon AI contract amid employee protests

On April 27, over 600 Google employees signed a letter opposing the company's AI work for the Pentagon. Despite previous commitments to restrict military use, Google confirmed signing a deal to provide classified AI services to the US Defence Department, alongside other tech firms. The decision marks a shift from Google's earlier stance and has led to internal dissent, increased moderation, and concerns about transparency and ethics, while aligning with rising US defence spending and global security collaborations.

Fake wildlife videos created with generative AI could cause long-term damage to nature

An article from The Bay Net warns that AI-generated fake animal videos are proliferating on social media, potentially causing long-term harm to wildlife conservation and public safety. The piece highlights issues such as misinformation regarding animal behaviour, risks to foragers using AI-generated mushroom guides, and the erosion of trust in natural wonders. It notes that while platforms like Facebook and Instagram require AI labelling, enforcement is inconsistent. The article provides a guide to identifying fakes through visual clues, video length, and source verification, recommending expert resources like iNaturalist and Merlin for authentic wildlife identification.

Chinese court rules AI cannot be sole reason for employee dismissal

The Hangzhou Intermediate People's Court in China ruled that companies cannot dismiss employees solely to replace them with artificial intelligence tools. The case involved a quality assurance supervisor, Zhou, whose contract was terminated after he refused a demotion to a lower-paying role intended to make way for an AI system. The court determined that replacing human workers with AI does not constitute a significant change in objective circumstances under Chinese labour law and that the offered alternative position involved unreasonable pay cuts. The decision establishes a legal precedent protecting employees from dismissal based exclusively on automation.

China inaugurates World Data Organisation to bridge data divide

China inaugurated the World Data Organisation in Beijing in March with a mission to bridge the data divide and unlock data value for the digital economy. This move reflects Beijing's strategy to address finite human-generated data for AI training by reorganising its data-sharing regime. The initiative aims to pool digitised data to feed specialised AI models for sectors like telemedicine and finance, countering global data exhaustion concerns.

Mexico enters first regulatory phase for artificial intelligence amid business integration

Mexico is entering its initial regulatory phase regarding artificial intelligence, characterized by accumulating ethical principles and technical standards rather than comprehensive legislation. As AI shifts from a support tool to an executor of critical tasks, businesses face pressure to justify concrete returns and manage legal risks related to data usage and content generation. The gap between market demands for value and the lack of clear legal strategies poses a risk to financial results and market confidence. Experts note that competitive advantage now depends on correct operational integration rather than mere adoption.

Sharebite data suggests AI adoption drives longer work hours and late-night orders

Sharebite data indicates that AI adoption correlates with increased work hours, evidenced by a doubling of Saturday client orders and a 57% rise in late-night orders. Sharebite CEO Dilip Rao attributes this to AI extending workdays rather than reducing them, supported by research from Harvard Business Review, UC Berkeley, and the National Bureau of Economic Research. The trend reflects a shift where AI complements human labor, requiring verification and integration time, amidst broader corporate pressure to adopt the technology.

Founder advises candidates to show human heart and grit to stand out in AI job market

Kristina Simmons, founder of Overwater Ventures and former partner at Andreessen Horowitz, advises job seekers to differentiate themselves in the AI era by demonstrating authentic human qualities. She recommends personal networking, using AI for research and creative tasks like designing pitch decks, and showcasing specific ideas for applying AI to the role. However, she warns against relying on technology for interviews, emphasising that passion, energy, and tone cannot be faked. The advice targets candidates navigating a market where AI is used for mass applications and resume tailoring.

Sam Altman asks GPT-5.5 to plan its own launch party

OpenAI CEO Sam Altman described asking GPT-5.5 to plan its own debut party during a fireside chat at Stripe Sessions. The model suggested holding the event on May 5, keeping speeches short, and having human creators deliver a toast. It also proposed a feedback loop for future model suggestions. Altman noted the request was strange but confirmed the plan. John Collison of Stripe shared a similar anecdote about an internal agent purchasing an HTTP design online. The interaction highlights emerging autonomous behaviors in advanced AI models.

Philosophy majors recruited by AI companies for ethics roles

AI companies such as Google DeepMind and Anthropic are hiring philosophy majors to shape machine behaviour and ensure alignment with human values. Roles include training chatbots for honesty and developing ethical governance frameworks. While salaries range from six figures to over $400,000, experts note the trend is still early and anecdotal, with most firms hiring fewer than 10 such specialists. Skeptics warn that commercial pressures may limit the actual influence of these ethicists.

South African schools and universities must adapt to artificial intelligence

Robyn Shepherd, an attorney at SchoemanLaw Inc, argues that South African educational institutions should not ban artificial intelligence but instead focus on developing digital literacy and implementing clear policies. The opinion piece highlights the need to equip students with skills to use AI responsibly, critically, and ethically to prepare them for the modern workforce. It also emphasises compliance with the South African Constitution and the Protection of Personal Information Act (POPIA) to protect student rights and data privacy while balancing innovation with protection.

China blocks Meta acquisition of AI startup Manus citing Singapore washing

China's National Development and Reform Commission has prohibited Meta's proposed US$2 billion acquisition of Manus, a Chinese AI startup, ordering the parties to unwind the transaction. Regulators stated the deal violated foreign investment security review laws because Manus failed to declare the takeover, despite relocating its headquarters to Singapore. Officials warned against 'Singapore washing,' where domestic firms use offshore structures to bypass oversight on technology, data, and talent originating in China. While blocking the deal, Beijing affirmed its support for domestic AI expansion and innovation.

Tech giants form Agentic AI Foundation to standardise protocols amid concerns over premature consensus

In December 2025, Anthropic, OpenAI, Google, Microsoft, and others joined the Linux Foundation to create the Agentic AI Foundation, consolidating competing protocols like MCP and AGENTS.md. While adoption surged with over 8 million MCP server downloads by April 2025, the article highlights risks of premature standardisation before reasoning capabilities mature. Rapid AI advancement, workforce disruption for junior developers, and technical debt accumulation are cited as critical challenges. The initiative aims to enable interoperability but faces scrutiny regarding whether current architectural assumptions will constrain future innovation.

Adobe legal chief calls for creator protection as policymakers and tech companies reframe copyright in the era of AI

Louise Pentland, Adobe's Chief Legal Officer, urged policymakers and tech companies to adopt a pragmatic approach to copyright in the age of AI rather than radical legal overhauls. Speaking at Adobe Summit 2026, Pentland highlighted the 2025 US Copyright Office decision granting protection to an AI-assisted image as a precedent. She advocated for maintaining human creativity through existing frameworks, clearer guidance, and technologies like Content Credentials to verify authenticity and protect creators from deepfakes, warning that failing to protect artists could undermine the data foundations of generative AI.

OpenAI rolls out advanced account security for ChatGPT users

OpenAI introduced an opt-in Advanced Account Security setting for ChatGPT on Thursday. The feature requires passkeys or physical security keys, removes email and SMS recovery options, and excludes enrolled accounts from model training by default. OpenAI partnered with Yubico to offer discounted security key bundles. The update aims to protect users handling sensitive tasks, such as journalists and researchers, against phishing and digital attacks. Users in the Trusted Access for Cyber program must enable the feature by June 1.

Experts warn humanity faces imminent extinction risk from superintelligent AI

AI safety experts and ControlAI warn that superintelligent AI poses an extinction risk to humanity within two to five years. Citing Anthropic's autonomous vulnerability-exploiting model, Mythos, the article argues governments are ignoring the accelerating threat. It calls for treating superintelligent AI as a national and global security risk, advocating for an international coalition to prohibit its development to prevent catastrophe.

OpenAI targets smartphone launch with AI-first device strategy

OpenAI is developing its own smartphone in collaboration with chipmakers Qualcomm and MediaTek, and manufacturing partner Luxshare. The device will feature an embedded AI operating system allowing direct interaction via AI agents rather than traditional apps. The project involves a vertically integrated approach similar to Apple's, potentially including custom chips and design input from Jony Ive. OpenAI aims to unveil the first consumer device in the second half of 2026, with mass production possibly starting in 2028. This move reflects a strategic shift towards hardware and a more profit-driven structure.

Spotify rolls out Verified badge to distinguish human artists from AI

Spotify has launched a new 'Verified by Spotify' badge to help listeners distinguish human musicians from AI-generated content. The green checkmark, appearing on profiles and in search results, signifies that an artist meets authenticity standards including sustained engagement and genuine presence. Profiles representing primarily AI music or AI-created personae are ineligible. The initiative addresses industry concerns over synthetic tracks, following reports that 44% of new uploads to Deezer are AI-generated. Additionally, Spotify is adding a new information section to all artist pages displaying career highlights and performance history.

Objection launches private accountability system for journalism

Objection, a self-funded entity backed by Peter Thiel and Balaji Srinivasan, has launched a service allowing individuals to file complaints about unfair reporting for $2,000. Investigations are conducted by a team including former National Security Agency operatives and submitted to an AI tribunal comprising models from OpenAI, Anthropic, Google, xAI, and Mistral. Journalists may defend their reporting, but verdicts are issued and published regardless. The service aims to act as a private accountability system.

Two South African officials investigated over fake AI references in national policy

Two senior Department of Communications officials, Dumisani Sondlo and Mlindi Mashologu, are under investigation for including AI-hallucinated references in the draft South Africa National Artificial Intelligence Policy. Communications Minister Solly Malatsi withdrew the policy on 26 April 2026 following an investigation by Article One and News24. The African National Congress demanded Malatsi appear before Parliament to explain the drafting process. On 30 April, the department confirmed two officials were preliminarily suspended pending the outcome of the inquiry into the credibility of the policy's evidence base.

ChatGPT enables new digital micro-economies and accelerates content creation

The article discusses how ChatGPT is transforming value generation by lowering entry barriers for digital activities such as editorial content, e-commerce, social media management, and video production. It highlights that while AI automates tasks like drafting and coding, human oversight remains essential for quality and strategy. The text identifies emerging opportunities in these sectors where AI acts as an accelerator rather than a replacement for human skills.
