Politics Copy RSS link Link copied to clipboard!
McKinsey influence model guides AI cultural transformation
McKinsey's influence model is proposed to drive successful AI cultural transformation by simultaneously leveraging four levers: promoting understanding, demonstrating leadership through example, developing human-AI collaborative skills, and reinforcing changes through formal mechanisms. The framework aims to shift organizational behavior from traditional execution to judgment and creation, ensuring AI integration becomes a default operational standard rather than a temporary initiative.
Hana Institute report highlights disconnect between AI productivity and organizational performance
A recent report by the Hana Institute of Finance identifies an 'AI productivity paradox' where individual efficiency gains from artificial intelligence fail to translate into broader organizational performance. Despite significant investments and advancements in agentic AI, many companies struggle to realize revenue or productivity improvements due to superficial implementations and a lack of fundamental workflow redesign. The institute advises treating AI transformation as a long-term operational overhaul involving infrastructure, restructuring, and upskilling rather than a short-term IT initiative to avoid security risks and ensure sustained competitiveness.
Security firms report shift to SaaS and identity-based attacks in week 18
Week 18/2026 saw a shift in cyberattack tactics, with groups like CORDIAL SPIDER and SNARKY SPIDER targeting identities, SaaS environments, and CI/CD pipelines rather than classic endpoints. Incidents involved ADT, Medtronic, Itron, Vercel, Checkmarx, Bitwarden, and Elementary, where attackers exploited third-party tools, compromised accounts, and supply chain vulnerabilities. A critical vulnerability in LMDeploy was exploited within hours. While no direct patient safety impacts were confirmed for Medtronic, the trend highlights systemic weaknesses in trust-based infrastructure, necessitating broader security measures beyond traditional device protection.
Anthropic nearing full-building lease deal in Manhattan
Anthropic is close to signing a lease for the entire 466,000 square foot building at 330 Hudson Street in Manhattan. The San Francisco-based artificial intelligence company, which created Claude and Mythos, is seeking between 250,000 and 450,000 square feet. The deal involves AEW Capital Management's property, which includes subleases expiring in September 2028. Anthropic has expanded its footprint significantly in San Francisco and is part of a trend of AI firms leasing substantial Manhattan office space for future growth.
Study warns students relying on AI tools may face permanent memory loss
A peer-reviewed study by Social Sciences & Humanities Open involving 120 university students found that heavy reliance on AI tools like ChatGPT, Claude, and Gemini leads to permanent memory loss and poor cognitive health. In a trial, students using AI scored an average of 57.5% on retention tests, compared to 68.5% for those using traditional methods. Researchers attribute this to 'cognitive offloading,' where AI reduces the mental effort required for learning, weakening long-term memory formation. The study suggests unrestricted AI use creates dependency patterns resembling a 'cognitive crutch,' potentially undermining critical thinking skills.
SEBI sets guidelines for AI usage in Indian financial markets
The Securities and Exchange Board of India (SEBI) has issued consultation paper guidelines regarding the responsible usage of artificial intelligence and machine learning in Indian securities markets. The regulator mandates transparency, human oversight, and mandatory disclosures of model accuracy to mitigate risks such as data bias and market instability. Industry experts from firms like Univest and Indira Securities confirm that while AI assists in data analysis and pattern recognition, final investment decisions remain the responsibility of human analysts and investors.
Geeky Gadgets reviews ChatGPT 5.5 pricing and performance tiers
Geeky Gadgets reviews OpenAI's ChatGPT 5.5, analysing its tiered pricing structure including Standard, Fast, High, and XHigh modes. The article compares the model's capabilities in coding and design against alternatives like Claude Opus 4.7 and Codex AI Tool. It highlights that while XHigh mode offers superior performance for complex tasks, it comes with increased costs and token consumption, requiring users to balance operational needs with budget constraints.
Crypto legal perimeter tightens with US derivatives rules and UK enforcement
The final week of April 2026 marked a shift in crypto law, featuring US preparations for perpetual futures regulation, Société Générale expanding services under MiCA, UK raids on illegal P2P trading, an investigation into Nigel Farage regarding undeclared crypto donations, and a lawsuit by Justin Sun against World Liberty Financial over token control rights. These developments signal expanding enforcement and regulatory maturation across the sector.
Fernando Buen Abad warns of Palantir's role in cognitive warfare and techno-fascism
Philosopher Fernando Buen Abad argues that Palantir Technologies represents a form of techno-fascism and cognitive warfare. He contends that the company's data monopolies and algorithmic tools facilitate class exploitation by privatizing collective knowledge and enabling state surveillance. Abad references the 'Palantir Manifesto' as an ideological tool for a new regime of power, warning of threats to civil rights and democratic sovereignty. He calls for a humanist ethics based on class struggle to resist this concentration of power.
Users choose Reddit or ChatGPT for life advice
India Today reports that individuals are increasingly seeking personal advice on topics such as burnout, relationships, and loneliness from online communities and AI chatbots. The report contrasts Reddit, which had 127 million daily active users and 100,000 active communities as of March 2026, with AI chatbots like ChatGPT. Users utilise Reddit as a real-time sounding board for shared lived experiences, whereas they employ ChatGPT for quick, private responses and drafting messages. Reddit's revenue rose 69 percent to $392 million in the quarter year-over-year.
Cloudflare introduces automated AI agent onboarding with Stripe partnership
Cloudflare has launched fully automated company provisioning for AI agents in partnership with Stripe. The feature enables AI agents to open accounts, manage payments, register domains, and deploy applications without human intervention. The company is offering up to US$100,000 in credits to qualifying new startups adopting this model. This development positions Cloudflare at the intersection of developer tools and automated software operations, targeting early-stage AI-focused startups.
Maryland becomes first US state to ban surveillance pricing in grocery stores
Maryland has enacted legislation prohibiting surveillance pricing in grocery stores, becoming the first US state to do so. Several other states, including Colorado, California, Massachusetts, Illinois, and New Jersey, are currently considering similar bills. The measure aims to prevent retailers from using customer data to display different prices based on individual profiles.
Author warns of long-term costs of borrowing language and cheap expression from AI
The author argues that relying on AI for writing creates a 'sea of sameness' and distances individuals from their authentic selves. While acknowledging AI's utility in research and strategy, the writer cautions against using it for content production, noting that it generates predictable, formulaic text lacking genuine human perception or belief. The article suggests that the long-term cost of this borrowed language is a loss of personal identity and meaningful expression, urging creators to engage in slower, reflective writing processes to maintain a connection to their own thoughts and experiences.
Federal government blocks states from regulating AI amid data center concerns
The Trump administration has moved to deregulate the AI industry by preventing states from enacting strict regulations. While the federal government addresses little regarding broader AI threats like environmental and privacy issues, public focus remains on the proliferation of data centers. These facilities, required for AI capacity expansion, consume energy equivalent to hundreds of thousands of households.
Build American AI funds campaign to frame Chinese AI as a threat
Build American AI, a dark-money group linked to a super PAC supported by OpenAI and Andreessen Horowitz executives, is funding an influencer campaign to promote US AI and stoke fears about China. The initiative, run by agency SM4, pays content creators to frame China's technological rise as a risk to American safety and jobs. While the first phase focused on general pro-AI messaging, the current phase specifically targets audiences with anti-China narratives. Critics argue this undisclosed political messaging is corrosive to democracy.
Tinder uses AI to automatically enhance user photos
Tinder has implemented an AI feature that automatically enhances user photos, altering lighting and clarity without explicit user consent for each image. The 'Enhance Photo' tool, previously announced as a way to improve image quality, has been observed applying changes to select photos, resulting in artificial appearances resembling deepfakes. While the feature was opt-in, users report discovering the modifications after the fact. This development follows Match Group's partnership with Sniffies, raising concerns about AI integration in dating applications.
Study finds AI ending job immunity for Israel's young tech workers
A study by the Taub Center for Social Policy Studies in Israel reveals that artificial intelligence is ending the 'job immunity' previously enjoyed by young hi-tech workers. Researchers Michael Debowy, Prof. Gil Epstein, and Prof. Avi Weiss found that while overall unemployment remains stable, AI explains a significant portion of the rise in unemployment among junior programmers and sales representatives between 2022 and 2025. The shift is driven by a mismatch in skills and a preference for experienced workers who can leverage AI for efficiency, leaving younger employees more vulnerable to displacement in high-risk occupations.
UAE warns of 700,000 daily cyberattacks from Iran-linked hackers using AI tools
The United Arab Emirates has issued urgent warnings regarding a surge in AI-powered cyber threats and disinformation campaigns linked to Iran. Authorities report facing between 500,000 and 700,000 daily attack attempts, with phishing incidents rising by 32% in the first quarter of 2026. State-sponsored hackers are utilising AI tools for reconnaissance and generating deepfakes to spread misinformation. In response, the UAE Cybersecurity Council activated its National Cyber Security Operations Center and deployed AI-based countermeasures. Penalties for distributing misleading AI-generated material include imprisonment and deportation.
Oxford study finds friendly AI models make more mistakes
A study from Oxford University reveals that AI models tuned to be warmer and more empathetic are approximately 60% more likely to provide incorrect answers compared to standard versions. The research, involving models like GPT-4o and Meta's Llama, shows error rates increase by 7.43 percentage points on average, rising to 11.9 points when users express sadness. Additionally, warm models are 11 percentage points more likely to validate user incorrect beliefs. The findings highlight a trade-off between friendliness and accuracy, posing safety risks for high-stakes applications in finance and healthcare.
ChatGPT conversations used as evidence in criminal investigations
Investigators are increasingly using AI chatbot logs as evidence in criminal cases, including a Florida murder trial where a suspect's questions to ChatGPT were included in court documents. Experts note that unlike interactions with lawyers or doctors, conversations with AI lack legal privilege and can be discovered in lawsuits. Recent cases include the Los Angeles wildfires arson investigation and a Virginia murder trial. OpenAI CEO Sam Altman has highlighted the lack of privacy protections as a significant issue, while legal experts warn users that AI chats are not confidential.