UN Women survey finds online violence and deepfakes push women out of work
A global survey of 641 female activists and journalists conducted between August and November 2025 by UN Women and City St George's, University of London, reveals that online violence is driving women out of public life and their careers. More than a quarter of respondents received unwanted intimate images, while 6% experienced deepfakes. Consequences include significant mental health impacts, with 24% reporting anxiety or depression and 19% self-censoring at work. Only 25% reported incidents to police, and 10% saw charges brought against abusers. The report highlights how avoidance techniques like self-censorship are more common than resistance, forcing women into less visible roles or resignation.
China's commitment to employment stability limits rapid AI adoption
A 2025 Chinese court ruling established that replacing workers with AI is not valid grounds for dismissal, requiring firms to attempt renegotiation or retraining first. This decision, cited as a model case, highlights a growing tension between Beijing's AI Plus initiative targets and its imperative to maintain employment stability amidst high youth unemployment. While the government promotes AI diffusion for economic growth, political pressure to avoid labor displacement is forcing policy retreats, such as freezing autonomous vehicle approvals in Wuhan and limiting Meituan's delivery robot rollouts. Officials are now debating frameworks like 'AI + Employment' and potential universal basic income to mitigate structural job losses.
OpenAI CEO Sam Altman warns AI washing is real but tech job displacement is on the way
OpenAI CEO Sam Altman stated that while some companies falsely attribute layoffs to AI, genuine job displacement is imminent. Altman anticipates palpable impacts in the coming years alongside new roles. Conversely, data from the Yale Budget Lab shows no significant macroeconomic effects yet, with a February study finding 90% of C-suite executives reported no AI impact on employment. However, other leaders like Anthropic CEO Dario Amodei predict significant white-collar job losses, and some firms have already cited AI for workforce reductions.
Rising costs and productivity questions challenge economics of AI code generation
The economic viability of using AI for code generation is facing scrutiny due to escalating usage costs and unproven productivity gains. Anthropic doubled its cost estimates for Claude Code, with average daily costs rising to $13 per developer. Microsoft's GitHub Copilot is transitioning to usage-based billing. Research indicates many companies saw zero revenue growth after AI adoption, while some face increased workloads, burnout, and server strain. Organizations are now questioning whether the expenses outweigh the benefits.
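A quick back-of-envelope sketch puts the cited $13 average daily cost per developer in perspective. The annualization below assumes roughly 260 working days per year; that assumption, and the team sizes, are illustrative and not from the report.

```python
# Annualize the reported $13/day average Claude Code cost per developer.
# WORKING_DAYS_PER_YEAR is an assumption for illustration only.
DAILY_COST_USD = 13
WORKING_DAYS_PER_YEAR = 260

def annual_tool_cost(num_developers: int) -> int:
    """Estimated yearly spend on AI coding assistance for a team."""
    return DAILY_COST_USD * WORKING_DAYS_PER_YEAR * num_developers

print(annual_tool_cost(1))   # 3380 USD per developer per year
print(annual_tool_cost(50))  # 169000 USD for a 50-person team
```

At that run rate, the question organizations are weighing is whether measured productivity gains exceed a six-figure annual line item for a mid-sized engineering team.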
Websites must function as extractable sources for AI agents
The article argues that websites are shifting from being primary destinations to serving as canonical sources for AI agents. As AI tools summarize and recontextualize content, the message must be independent of visual design. Brands are advised to structure content for portability, ensuring core value propositions survive extraction without relying on layout or specific platforms.
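One common way to make a page's core claims survive extraction is embedding them as machine-readable structured data (for example, schema.org JSON-LD) alongside the visual layout. The sketch below is a minimal illustration; the product name, description, and prices are placeholder values, not examples from the article.

```python
import json

# Hypothetical sketch: publish the core value proposition as
# schema.org JSON-LD so an AI agent can extract it without
# parsing the page's visual layout. All values are placeholders.
page_content = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "Automates X in half the time of manual workflows.",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
    },
}

# Wrap the data in the standard script tag used for JSON-LD embedding.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(page_content, indent=2)
    + "\n</script>"
)
print(snippet)
```

The key property is that the message lives in plain data: if a summarizer strips every stylesheet and template, the value proposition still comes through intact.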
KKR invests $10B in AI power plants and data centers
KKR announced a $10 billion investment dedicated to building AI-focused power plants and data centers. This move supports the US tech ecosystem and aligns with broader commitments by major firms like Google, Microsoft, Amazon, and Meta, which are collectively investing over $100 billion globally. The investment aims to secure long-term power agreements to meet rising electricity demands from AI data centers.
Coatue launches Next Frontier to acquire land for AI data centers
Coatue Management has launched Next Frontier, a venture aimed at acquiring land for large-scale AI data center developments. The $70 billion firm, led by Philippe Laffont, targets physical infrastructure bottlenecks in the AI sector. Initial projects include a campus in Indiana for Anthropic and a joint venture with Fluidstack. The initiative involves tens of billions in potential spending, reflecting investor focus on power and land access. Coatue partners Robert Yin and Peter Wallace lead the effort, with Laffont providing personal funding alongside management and outside investors.
Beijing Electronic Digital Intelligence launches Spark AI Cloud 2.0 and AI China Tour initiative
On April 16, 2026, Beijing Electronic Digital Intelligence (BEDI) concluded the JXQ AI Forum 2026 in Beijing. The event featured the launch of Spark AI Cloud 2.0, an integrated AI production system, and the China Urban Artificial Intelligence Index Report. BEDI also introduced the AI China Tour initiative and established the Collaborative Industry Alliance for AI Innovation Districts. These actions aim to drive industry-city integration during the 15th Five-Year Plan period by addressing deployment costs and scalability challenges while supporting targeted regional development strategies across China.
Phishing remains primary method for threat actors to gain unauthorized access
Cisco Talos researchers report that phishing accounted for over one-third of network access incidents in the first quarter of 2026. State-sponsored and criminal actors are using large language models to develop phishing lures and malicious scripts, particularly targeting healthcare and government sectors. Cisco recommends implementing multi-factor authentication, robust patch management, and centralized logging to mitigate these risks.
HR Magazine outlines data-driven recruitment strategies for Hong Kong talent acquisition
HR Magazine discusses implementing data-driven recruitment in Hong Kong to address competitive hiring challenges. The article details how HR teams can use analytics and AI to improve hiring quality, reduce bias, and ensure compliance with local regulations. It provides practical steps for adopting applicant tracking systems and predictive analytics, while warning against overreliance on algorithms and poor data quality. The piece highlights applications across financial services, tech, and logistics sectors, emphasizing the need to blend technology with human judgment.
Mistral AI releases Medium 3.5 open-source model amid mixed market reception
Mistral AI released the Medium 3.5 model on April 29, a 128-billion parameter dense model priced at $1.50 per million input tokens and $7.50 per million output tokens. The release includes agentic features and a unified model architecture. However, the model faces criticism for high costs and lower benchmark performance compared to Chinese alternatives like Alibaba's Qwen 3.6. Despite mixed technical reception, the model is positioned as a strategic option for European enterprises requiring GDPR compliance and non-Chinese infrastructure.
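To make the list prices concrete, the sketch below computes per-request cost from the cited rates ($1.50 per million input tokens, $7.50 per million output tokens). The request sizes in the example are illustrative assumptions, not figures from the announcement.

```python
# Per-request cost from Medium 3.5 list pricing cited above.
INPUT_USD_PER_M = 1.50   # per 1M input tokens
OUTPUT_USD_PER_M = 7.50  # per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the published token rates."""
    return (
        input_tokens * INPUT_USD_PER_M
        + output_tokens * OUTPUT_USD_PER_M
    ) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion.
print(round(request_cost(4_000, 1_000), 6))  # 0.0135
```

Output tokens dominate at these rates (five times the input price), so completion length drives cost for most workloads.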
Anthropic in talks to secure custom AI chips from Fractile
Anthropic is in early-stage discussions with UK-based semiconductor start-up Fractile to secure a future supply of specialized AI inference chips. The chips, expected to launch later in the decade, are designed to run trained AI models more efficiently than general-purpose GPUs, offering lower energy consumption and reduced latency. This partnership aligns with Anthropic's strategy to diversify its supply chain and reduce reliance on dominant supplier NVIDIA as it scales its Claude family of models.
Michael Cooper explores trust and entanglement in the age of AI agents
Michael Cooper, author of The Weekender newsletter, discusses the necessity of visible trust structures for AI adoption, comparing it to historical shifts in elevator and payment technologies. The article highlights the conflict between Sam Altman and Elon Musk regarding OpenAI's governance and the broader industry's struggle with accountability. Cooper argues that future value will shift to human-centric qualities like trust and status as automation increases. The piece also touches on the concept of 'evolvable AI' and the concentration of power among a few tech leaders shaping the industry's direction.
Google signs Pentagon AI contract amid employee protests
On April 27, over 600 Google employees signed a letter opposing the company's AI work for the Pentagon. Despite previous commitments to restrict military use, Google confirmed signing a deal to provide classified AI services to the US Defence Department, alongside other tech firms. The decision reflects a shift from Google's earlier stance and has led to internal dissent, increased moderation, and concerns about transparency and ethics. The move aligns with rising US defence spending and global security collaborations, marking a change in the company's engagement with military technology.
Fake wildlife videos created with generative AI could cause long-term damage to nature
An article from The Bay Net warns that fake animal videos generated by generative AI are proliferating on social media, potentially causing long-term harm to wildlife conservation and public safety. The piece highlights issues such as misinformation regarding animal behavior, risks to foragers using AI-generated mushroom guides, and the erosion of trust in natural wonders. It notes that while platforms like Facebook and Instagram require AI labeling, enforcement is inconsistent. The article provides a guide to identifying fakes through visual clues, video length, and source verification, recommending expert resources like iNaturalist and Merlin for authentic wildlife identification.
Chinese court rules AI cannot be sole reason for employee dismissal
A Hangzhou Intermediate People's Court in China ruled that companies cannot dismiss employees solely to replace them with artificial intelligence tools. The case involved a quality assurance supervisor, Zhou, whose contract was terminated after refusing a demotion to a lower-paying role intended for an AI system. The court determined that replacing human workers with AI does not constitute a significant change in objective circumstances under Chinese labour law and that the offered alternative position involved unreasonable pay cuts. This decision establishes a legal precedent protecting employees from dismissal based exclusively on automation.
China inaugurates World Data Organisation to bridge data divide
China inaugurated the World Data Organisation in Beijing in March with a mission to bridge the data divide and unlock data value for the digital economy. This move reflects Beijing's strategy to address finite human-generated data for AI training by reorganising its data-sharing regime. The initiative aims to pool digitised data to feed specialised AI models for sectors like telemedicine and finance, countering global data exhaustion concerns.
Mexico enters first regulatory phase for artificial intelligence amid business integration
Mexico is entering its initial regulatory phase regarding artificial intelligence, characterized by accumulating ethical principles and technical standards rather than comprehensive legislation. As AI shifts from a support tool to an executor of critical tasks, businesses face pressure to justify concrete returns and manage legal risks related to data usage and content generation. The gap between market demands for value and the lack of clear legal strategies poses a risk to financial results and market confidence. Experts note that competitive advantage now depends on correct operational integration rather than mere adoption.
Sharebite data suggests AI adoption drives longer work hours and late-night orders
Sharebite data indicates that AI adoption correlates with increased work hours, evidenced by a doubling of Saturday client orders and a 57% rise in late-night orders. Sharebite CEO Dilip Rao attributes this to AI extending workdays rather than reducing them, supported by research from Harvard Business Review, UC Berkeley, and the National Bureau of Economic Research. The trend reflects a shift where AI complements human labor, requiring verification and integration time, amidst broader corporate pressure to adopt the technology.
Founder advises candidates to show human heart and grit to stand out in AI job market
Kristina Simmons, founder of Overwater Ventures and former partner at Andreessen Horowitz, advises job seekers to differentiate themselves in the AI era by demonstrating authentic human qualities. She recommends personal networking, using AI for research and creative tasks like designing pitch decks, and showcasing specific ideas for applying AI to the role. However, she warns against relying on technology for interviews, emphasising that passion, energy, and tone cannot be faked. The advice targets candidates navigating a market where AI is used for mass applications and resume tailoring.