‘Tis the season for 2026 predictions, a ritual many dread, but Dr Lewis Z. Liu’s piece laying out “12 AI assumptions for 2026” deserves attention because it is rooted less in idle prophecy and more in a year of building a company, advising policymakers and conversations with more than 300 executives. His framing shifts the debate from sensationalist forecasts to practical assumptions that founders and managers can organise around. [1]

Central to Liu’s thesis is the convergence of context and privacy: the argument that truly useful AI requires rich contextual signals, while those signals can only be shared if granular privacy controls are in place. That proposition helps explain why organisational proprietary data, long siloed and underused, could become a genuine competitive moat if paired with rigorous governance. Industry observers note the same dynamic: unlocking unstructured enterprise data will be pivotal for automation and insight, provided privacy and control scale with access. [1]

The rise, wobble and likely maturation of AI coding agents is another throughline. Liu warns of a trough of disillusionment after early exuberance, citing precipitous drops in activity among some platforms and the familiar pitfall that AI-generated code can accumulate unmanageable technical debt: “Vibe coding allows two engineers to develop the technical debt of 50 engineers.” Yet the market is consolidating around agentic coding tools from major players. Microsoft has moved to embed multiple coding agents into GitHub, signalling a more neutral, plural approach, while startups and incumbents alike are improving guardrails and agentic capabilities to reduce hallucinations and boost reliability. This evolution supports Liu’s view that coding agents will not disappear but will require better integration and human oversight to reach their promise. [1][3][6]

That agentic evolution is not confined to software development. Industry reporting shows a rapid push by financial institutions into agentic AIs that can plan, learn and act autonomously. British banks are already piloting customer-facing agents for budgeting and investing in collaboration with regulators, a development that exposes new systemic and governance risks even as it promises scaled personalisation. Regulators are flagging threats that flow from speed, autonomy and interaction effects between agents, and expect accountability to be enforced under existing regimes. These tensions underscore Liu’s point that organisational adoption will reward traditional industries that pair thoughtful governance with practical AI use-cases. [2][6]

Anthropic’s recent model upgrades and rival entrants sharpen that competitive and technical backdrop. The company’s Opus 4.5 release extended Claude’s reasoning, memory and agentic capacities, enabling more complex code generation and autonomous tasking: exactly the capabilities Liu anticipates will power both promise and peril. At the same time, new, leaner coding models from other entrants emphasise speed and cost-effectiveness, reinforcing Liu’s claim that cheaper, capable models could blunt the competitive edge of elite chips for many production workloads. The practical upshot is a market where capability, cost and governance interact to reshape product economics and deployment choices. [4][5]

Those shifting economics are visible in start-up performance too. Liu argues 2026 will privilege recurring revenue with healthy unit economics over the “vibe revenue” of earlier years. The point is timely: as agentic tools move from R&D to customer-facing services, firms will be forced to demonstrate durable margins and measurable outcomes, or face a market correction. Investors and operators alike are already separating durable subscription businesses from repackaged services with weak margins, raising the bar for sustainable AI commercialisation. [1]

Geopolitics and capital flows also shape Liu’s assumptions. He predicts a surge of Western investment into “physical AI”, meaning robotics and automated manufacturing, as part of broader industrial policy. That mirrors public moves by high-profile investors and firms to bring AI into the real world at scale, and explains why some Western players are redoubling efforts to match China’s factory automation and robotics deployments. At the same time, Liu expects Chinese models to remain influential among Western builders; venture capital sentiment and pragmatic engineering choices mean Chinese models will continue to figure in many stacks, complicating both procurement and policy debates. [1]

Finally, Liu outlines broader societal currents: growing scepticism towards Silicon Valley elites, Europe’s potential emergence as a “third pole” focused on trust and human-centric AI, and the moral case for building AI that amplifies human originality rather than replacing it. His admonition that “AI should not be ‘done to humans’; it must be developed for humans” echoes regulatory and industry conversations about accountability, explainability and the distributional effects of automation. Those are not merely rhetorical points; they frame the decisions boards, CTOs and policymakers must make as agentic and autonomous systems enter everyday services. [1][2]

Taken together, Liu’s dozen assumptions form a pragmatic checklist for 2026: solve privacy while enabling context, temper rhetoric about imminent human replacement with investments in human originality, force the economics of AI businesses into healthier territory, and manage the dual technical and regulatory risks of agentic systems as they move into production. As model capabilities accelerate, the quality of governance and product economics will determine which firms convert potential into durable advantage. [1]

📌 Reference Map:
  • [1] (CityAM) - Paragraphs 1, 2, 3, 6, 7, 8, 9
  • [2] (Reuters) - Paragraphs 4, 8
  • [3] (Reuters) - Paragraph 3
  • [4] (Reuters) - Paragraph 5
  • [5] (Reuters) - Paragraph 5
  • [6] (CIO) - Paragraphs 3, 4

Source: Noah Wire Services