China penalizes 98,000 social media accounts over unlabelled sources
The National Internet Information Office of China has taken action against approximately 98,000 social media accounts for failing to label information sources and misleading the public. Punishments include content deletion, posting bans, interaction restrictions, and permanent bans. The affected accounts covered international conflicts, public policy, and rural life. The authority now mandates that future content, including AI-generated and fictional material, clearly identify its sources to ensure transparency.
Technologist Mike Pepi Proposes Taxing AI Slop to Fund Cultural Institutions
Technologist Mike Pepi proposes a one percent annual tax on companies producing or hosting generative AI content to address the proliferation of AI slop. The revenue would fund a publicly controlled grant system for cultural institutions, artists, and researchers whose work provided training data. Pepi argues this approach would restore balance in favor of human creativity and is more practical than regulatory pauses or bans, aiming to prevent the destruction of cognitive labor while avoiding punitive measures that might provoke industry resistance.
McClatchy journalists refuse to sign AI-generated articles
McClatchy journalists across 14 US states have collectively refused to put their bylines on articles generated by the company's internal AI tool, Content Scaling Agent. The tool rewrites reporters' original drafts for different audiences, prompting writers to insist the output be labelled 'AI-assisted' rather than attributed to them. More than 65 union members from The Miami Herald and The Bradenton Herald formally protested, citing contract violations regarding major technological changes and concerns that the practice damages community trust. The conflict highlights tensions between the publisher's goal of increasing output and journalists' demands for transparency and proper attribution.
AI tools enable merchants to build custom ecommerce applications via vibe coding
The article explains how vibe coding allows merchants to create custom software for repetitive ecommerce tasks like price monitoring. Using ChatGPT and Replit, the author built a price intelligence tool in 18 minutes. This approach offers an alternative to manual work, SaaS subscriptions, or hiring developers for non-critical automation needs.
Alphabet shares rise while Meta falls amid divergent AI earnings results
Alphabet and Meta reported earnings alongside Amazon and Microsoft, revealing a split in AI investment returns. Alphabet's Google Cloud unit beat projections with $20 billion in sales, driving a 10 percent share gain. Meta, by contrast, raised its capital expenditure target to $145 billion and saw shares drop 8.6 percent on slower consumer AI adoption and its lack of a cloud business. Amazon and Microsoft also reported cloud growth, though Microsoft shares fell on modest Copilot uptake. The tech sector faces scrutiny over whether massive infrastructure spending yields tangible results.
Jay Goldberg argues parents should govern social media and AI access instead of government bans
Jay Goldberg, Canadian affairs manager at the Consumer Choice Center, criticizes Manitoba Premier Wab Kinew's proposal to ban social media and artificial intelligence for teenagers. Goldberg contends that active parenting, rather than government intervention, is the appropriate solution. He cites evidence from Australia's youth social media ban, noting that many teenagers circumvent restrictions using VPNs and that the law has severed millions of social connections. Goldberg further argues that such bans lack sufficient evidence to improve mental health and delay youth preparation for the digital age.
UN Women survey finds online violence and deepfakes push women out of work
A global survey of 641 female activists and journalists conducted between August and November 2025 by UN Women and City St George's, University of London, reveals that online violence is driving women out of public life and their careers. More than a quarter of respondents received unwanted intimate images, while 6% experienced deepfakes. Consequences include significant mental health impacts, with 24% reporting anxiety or depression and 19% self-censoring at work. Only 25% reported incidents to police, and 10% saw charges brought against abusers. The report highlights how avoidance techniques like self-censorship are more common than resistance, forcing women into less visible roles or resignation.
China's commitment to employment stability limits rapid AI adoption
A 2025 Chinese court ruling established that replacing workers with AI is not valid grounds for dismissal, requiring firms to attempt renegotiation or retraining first. This decision, cited as a model case, highlights a growing tension between Beijing's AI Plus initiative targets and its imperative to maintain employment stability amidst high youth unemployment. While the government promotes AI diffusion for economic growth, political pressure to avoid labor displacement is forcing policy retreats, such as freezing autonomous vehicle approvals in Wuhan and limiting Meituan's delivery robot rollouts. Officials are now debating frameworks like 'AI + Employment' and potential universal basic income to mitigate structural job losses.
OpenAI CEO Sam Altman warns that while 'AI washing' is real, tech job displacement is on the way
OpenAI CEO Sam Altman stated that while some companies falsely attribute layoffs to AI, genuine job displacement is imminent. Altman anticipates palpable impacts in the coming years alongside new roles. Conversely, data from the Yale Budget Lab shows no significant macroeconomic effects yet, with a February study finding 90% of C-suite executives reported no AI impact on employment. However, other leaders like Anthropic CEO Dario Amodei predict significant white-collar job losses, and some firms have already cited AI for workforce reductions.
Rising costs and productivity questions challenge economics of AI code generation
The economic viability of using AI for code generation is facing scrutiny due to escalating usage costs and unproven productivity gains. Anthropic doubled its cost estimates for Claude Code, with average daily costs rising to $13 per developer. Microsoft's GitHub Copilot is transitioning to usage-based billing. Research indicates many companies saw zero revenue growth after AI adoption, while some face increased workloads, burnout, and server strain. Organizations are now questioning whether the expenses outweigh the benefits.
Websites must function as extractable sources for AI agents
The article argues that websites are shifting from being primary destinations to serving as canonical sources for AI agents. As AI tools summarize and recontextualize content, the message must be independent of visual design. Brands are advised to structure content for portability, ensuring core value propositions survive extraction without relying on layout or specific platforms.
KKR invests $10B in AI power plants and data centers
KKR announced a $10 billion investment dedicated to building AI-focused power plants and data centers. This move supports the US tech ecosystem and aligns with broader commitments by major firms like Google, Microsoft, Amazon, and Meta, which are collectively investing over $100 billion globally. The investment aims to secure long-term power agreements to meet rising electricity demands from AI data centers.
Coatue launches Next Frontier to acquire land for AI data centers
Coatue Management has launched Next Frontier, a venture aimed at acquiring land for large-scale AI data center developments. The $70 billion firm, led by Philippe Laffont, targets physical infrastructure bottlenecks in the AI sector. Initial projects include a campus in Indiana for Anthropic and a joint venture with Fluidstack. The initiative involves tens of billions in potential spending, reflecting investor focus on power and land access. Coatue partners Robert Yin and Peter Wallace lead the effort, with Laffont providing personal funding alongside management and outside investors.
Beijing Electronic Digital Intelligence launches Spark AI Cloud 2.0 and AI China Tour initiative
On April 16, 2026, Beijing Electronic Digital Intelligence (BEDI) concluded the JXQ AI Forum 2026 in Beijing. The event featured the launch of Spark AI Cloud 2.0, an integrated AI production system, and the China Urban Artificial Intelligence Index Report. BEDI also introduced the AI China Tour initiative and established the Collaborative Industry Alliance for AI Innovation Districts. These actions aim to drive industry-city integration during the 15th Five-Year Plan period by addressing deployment costs and scalability challenges while supporting targeted regional development strategies across China.
Phishing remains primary method for threat actors to gain unauthorized access
Cisco Talos researchers report that phishing accounted for over one-third of network access incidents in the first quarter of 2026. State-sponsored and criminal actors are using large language models to develop phishing lures and malicious scripts, particularly targeting the healthcare and government sectors. Cisco recommends implementing multi-factor authentication, robust patch management, and centralized logging to mitigate these risks.
HR Magazine outlines data-driven recruitment strategies for Hong Kong talent acquisition
HR Magazine discusses implementing data-driven recruitment in Hong Kong to address competitive hiring challenges. The article details how HR teams can use analytics and AI to improve hiring quality, reduce bias, and ensure compliance with local regulations. It provides practical steps for adopting applicant tracking systems and predictive analytics, while warning against overreliance on algorithms and poor data quality. The piece highlights applications across financial services, tech, and logistics sectors, emphasizing the need to blend technology with human judgment.
Mistral AI releases Medium 3.5 open-source model amid mixed market reception
Mistral AI released the Medium 3.5 model on April 29, a 128-billion-parameter dense model priced at $1.50 per million input tokens and $7.50 per million output tokens. The release includes agentic features and a unified model architecture. However, the model faces criticism for high costs and lower benchmark performance compared with Chinese alternatives such as Alibaba's Qwen 3.6. Despite the mixed technical reception, the model is positioned as a strategic option for European enterprises requiring GDPR compliance and non-Chinese infrastructure.
Anthropic in talks to secure custom AI chips from Fractile
Anthropic is in early-stage discussions with UK-based semiconductor start-up Fractile to secure a future supply of specialized AI inference chips. The chips, expected to launch later in the decade, are designed to run trained AI models more efficiently than general-purpose GPUs, offering lower energy consumption and reduced latency. This partnership aligns with Anthropic's strategy to diversify its supply chain and reduce reliance on dominant supplier NVIDIA as it scales its Claude family of models.
Michael Cooper explores trust and entanglement in the age of AI agents
Michael Cooper, author of The Weekender newsletter, discusses the necessity of visible trust structures for AI adoption, comparing it to historical shifts in elevator and payment technologies. The article highlights the conflict between Sam Altman and Elon Musk regarding OpenAI's governance and the broader industry's struggle with accountability. Cooper argues that future value will shift to human-centric qualities like trust and status as automation increases. The piece also touches on the concept of 'evolvable AI' and the concentration of power among a few tech leaders shaping the industry's direction.
Google signs Pentagon AI contract amid employee protests
On April 27, over 600 Google employees signed a letter opposing the company's AI work for the Pentagon. Despite previous commitments to restrict military use, Google confirmed signing a deal to provide classified AI services to the US Defence Department, alongside other tech firms. The decision reflects a shift from Google's earlier stance and has led to internal dissent, increased moderation, and concerns about transparency and ethics. The move aligns with rising US defence spending and global security collaborations, marking a change in the company's engagement with military technology.