
Phishing remains primary method for threat actors to gain unauthorized access

Cisco Talos researchers report that phishing accounted for over one-third of network access incidents in the first quarter of 2026. State-sponsored and criminal actors are using large language models to develop phishing lures and malicious scripts, particularly targeting the health care and government sectors. Cisco recommends implementing multi-factor authentication, robust patch management, and centralized logging to mitigate these risks.
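The multi-factor authentication Cisco recommends typically rests on one-time password schemes such as TOTP. As an illustration (not drawn from the Talos report), a minimal RFC 6238 time-based OTP verifier can be sketched using only Python's standard library:

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset from last nibble
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with counter = floor(unix_time / step)."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

The values below match the published RFC 4226/6238 test vectors (shared secret `12345678901234567890`); a production deployment would also need secure secret provisioning, rate limiting, and a tolerance window for clock drift.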

HR Magazine outlines data-driven recruitment strategies for Hong Kong talent acquisition

HR Magazine discusses implementing data-driven recruitment in Hong Kong to address competitive hiring challenges. The article details how HR teams can use analytics and AI to improve hiring quality, reduce bias, and ensure compliance with local regulations. It provides practical steps for adopting applicant tracking systems and predictive analytics, while warning against overreliance on algorithms and poor data quality. The piece highlights applications across financial services, tech, and logistics sectors, emphasizing the need to blend technology with human judgment.

Mistral AI releases Medium 3.5 open-source model amid mixed market reception

Mistral AI released the Medium 3.5 model on April 29, a 128-billion parameter dense model priced at $1.50 per million input tokens and $7.50 per million output tokens. The release includes agentic features and a unified model architecture. However, the model faces criticism for high costs and lower benchmark performance compared to Chinese alternatives like Alibaba's Qwen 3.6. Despite mixed technical reception, the model is positioned as a strategic option for European enterprises requiring GDPR compliance and non-Chinese infrastructure.

Anthropic in talks to secure custom AI chips from Fractile

Anthropic is in early-stage discussions with UK-based semiconductor start-up Fractile to secure a future supply of specialized AI inference chips. The chips, expected to launch later in the decade, are designed to run trained AI models more efficiently than general-purpose GPUs, offering lower energy consumption and reduced latency. This partnership aligns with Anthropic's strategy to diversify its supply chain and reduce reliance on dominant supplier NVIDIA as it scales its Claude family of models.

Michael Cooper explores trust and entanglement in the age of AI agents

Michael Cooper, author of The Weekender newsletter, discusses the necessity of visible trust structures for AI adoption, comparing it to historical shifts in elevator and payment technologies. The article highlights the conflict between Sam Altman and Elon Musk regarding OpenAI's governance and the broader industry's struggle with accountability. Cooper argues that future value will shift to human-centric qualities like trust and status as automation increases. The piece also touches on the concept of 'evolvable AI' and the concentration of power among a few tech leaders shaping the industry's direction.

Google signs Pentagon AI contract amid employee protests

On April 27, over 600 Google employees signed a letter opposing the company's AI work for the Pentagon. Despite previous commitments to restrict military use, Google confirmed signing a deal to provide classified AI services to the US Defence Department, alongside other tech firms. The decision reflects a shift from Google's earlier stance and has led to internal dissent, increased moderation, and concerns about transparency and ethics. The move aligns with rising US defence spending and global security collaborations, marking a change in the company's engagement with military technology.

Fake wildlife videos created with generative AI could cause long-term damage to nature

An article from The Bay Net warns that AI-generated fake animal videos are proliferating on social media, potentially causing long-term harm to wildlife conservation and public safety. The piece highlights issues such as misinformation regarding animal behavior, risks to foragers using AI-generated mushroom guides, and the erosion of trust in natural wonders. It notes that while platforms like Facebook and Instagram require AI labeling, enforcement is inconsistent. The article provides a guide to identifying fakes through visual clues, video length, and source verification, recommending expert resources like iNaturalist and Merlin for authentic wildlife identification.

Chinese court rules AI cannot be sole reason for employee dismissal

A Hangzhou Intermediate People's Court in China ruled that companies cannot dismiss employees solely to replace them with artificial intelligence tools. The case involved a quality assurance supervisor, Zhou, whose contract was terminated after refusing a demotion to a lower-paying role intended for an AI system. The court determined that replacing human workers with AI does not constitute a significant change in objective circumstances under Chinese labour law and that the offered alternative position involved unreasonable pay cuts. This decision establishes a legal precedent protecting employees from dismissal based exclusively on automation.

China inaugurates World Data Organisation to bridge data divide

China inaugurated the World Data Organisation in Beijing in March with a mission to bridge the data divide and unlock data value for the digital economy. This move reflects Beijing's strategy to address finite human-generated data for AI training by reorganising its data-sharing regime. The initiative aims to pool digitised data to feed specialised AI models for sectors like telemedicine and finance, countering global data exhaustion concerns.

Mexico enters first regulatory phase for artificial intelligence amid business integration

Mexico is entering its initial regulatory phase regarding artificial intelligence, characterized by accumulating ethical principles and technical standards rather than comprehensive legislation. As AI shifts from a support tool to an executor of critical tasks, businesses face pressure to justify concrete returns and manage legal risks related to data usage and content generation. The gap between market demands for value and the lack of clear legal strategies poses a risk to financial results and market confidence. Experts note that competitive advantage now depends on correct operational integration rather than mere adoption.

Sharebite data suggests AI adoption drives longer work hours and late-night orders

Sharebite data indicates that AI adoption correlates with increased work hours, evidenced by a doubling of Saturday client orders and a 57% rise in late-night orders. Sharebite CEO Dilip Rao attributes this to AI extending workdays rather than reducing them, supported by research from Harvard Business Review, UC Berkeley, and the National Bureau of Economic Research. The trend reflects a shift where AI complements human labor, requiring verification and integration time, amidst broader corporate pressure to adopt the technology.

Founder advises candidates to show human heart and grit to stand out in AI job market

Kristina Simmons, founder of Overwater Ventures and former partner at Andreessen Horowitz, advises job seekers to differentiate themselves in the AI era by demonstrating authentic human qualities. She recommends personal networking, using AI for research and creative tasks like designing pitch decks, and showcasing specific ideas for applying AI to the role. However, she warns against relying on technology for interviews, emphasising that passion, energy, and tone cannot be faked. The advice targets candidates navigating a market where AI is used for mass applications and resume tailoring.

Sam Altman asks GPT-5.5 to plan its own launch party

OpenAI CEO Sam Altman described asking GPT-5.5 to plan its own debut party during a fireside chat at Stripe Sessions. The model suggested holding the event on May 5, keeping speeches short, and having human creators deliver a toast. It also proposed a feedback loop for future model suggestions. Altman noted the request was strange but confirmed the plan. John Collison of Stripe shared a similar anecdote about an internal agent purchasing an HTTP design online. The interaction highlights emerging autonomous behaviors in advanced AI models.

Philosophy majors recruited by AI companies for ethics roles

AI companies such as Google DeepMind and Anthropic are hiring philosophy majors to shape machine behaviour and ensure alignment with human values. Roles include training chatbots for honesty and developing ethical governance frameworks. While salaries range from six figures to over $400,000, experts note the trend is still early and anecdotal, with most firms hiring fewer than 10 such specialists. Skeptics warn that commercial pressures may limit the actual influence of these ethicists.

South African schools and universities must adapt to artificial intelligence

Robyn Shepherd, an attorney at SchoemanLaw Inc, argues that South African educational institutions should not ban artificial intelligence but instead focus on developing digital literacy and implementing clear policies. The opinion piece highlights the need to equip students with skills to use AI responsibly, critically, and ethically to prepare them for the modern workforce. It also emphasises compliance with the South African Constitution and the Protection of Personal Information Act (POPIA) to protect student rights and data privacy while balancing innovation with protection.

China blocks Meta acquisition of AI startup Manus citing Singapore washing

China's National Development and Reform Commission has prohibited Meta's proposed US$2 billion acquisition of Manus, a Chinese AI startup, ordering the parties to unwind the transaction. Regulators stated the deal violated foreign investment security review laws because Manus failed to declare the takeover, despite relocating its headquarters to Singapore. Officials warned against 'Singapore washing,' where domestic firms use offshore structures to bypass oversight on technology, data, and talent originating in China. While blocking the deal, Beijing affirmed its support for domestic AI expansion and innovation.

Tech giants form Agentic AI Foundation to standardise protocols amid concerns over premature consensus

In December 2025, Anthropic, OpenAI, Google, Microsoft, and others joined the Linux Foundation to create the Agentic AI Foundation, consolidating competing protocols like MCP and AGENTS.md. While adoption surged with over 8 million MCP server downloads by April 2025, the article highlights risks of premature standardisation before reasoning capabilities mature. Rapid AI advancement, workforce disruption for junior developers, and technical debt accumulation are cited as critical challenges. The initiative aims to enable interoperability but faces scrutiny regarding whether current architectural assumptions will constrain future innovation.

Adobe legal chief calls for creator protection as policymakers and tech companies reframe copyright in the era of AI

Louise Pentland, Adobe's Chief Legal Officer, urged policymakers and tech companies to adopt a pragmatic approach to copyright in the age of AI rather than radical legal overhauls. Speaking at Adobe Summit 2026, Pentland highlighted the 2025 US Copyright Office decision granting protection to an AI-assisted image as a precedent. She advocated for maintaining human creativity through existing frameworks, clearer guidance, and technologies like Content Credentials to verify authenticity and protect creators from deepfakes, warning that failing to protect artists could undermine the data foundations of generative AI.

OpenAI rolls out advanced account security for ChatGPT users

OpenAI introduced an opt-in Advanced Account Security setting for ChatGPT on Thursday. The feature requires passkeys or physical security keys, removes email and SMS recovery options, and excludes enrolled accounts from model training by default. OpenAI partnered with Yubico to offer discounted security key bundles. The update aims to protect users handling sensitive tasks, such as journalists and researchers, against phishing and digital attacks. Users in the Trusted Access for Cyber program must enable the feature by June 1.

Experts warn humanity faces imminent extinction risk from superintelligent AI

AI safety experts and ControlAI warn that superintelligent AI poses an extinction risk to humanity within two to five years. Citing Anthropic's autonomous vulnerability-exploiting model, Mythos, the article argues governments are ignoring the accelerating threat. It calls for treating superintelligent AI as a national and global security risk, advocating for an international coalition to prohibit its development to prevent catastrophe.
