Artificial intelligence (AI) has transitioned from an aspirational concept to an integral part of modern business operations. According to data from McKinsey and various industry reports, 78% of global organisations now employ AI in at least one business function, demonstrating its widespread adoption across sectors such as telecommunications, finance, and retail. This rapid integration is also evident among smaller businesses, with 89% of small US companies utilising AI for routine tasks. Despite this expansion, many organisations remain cautious in moving beyond pilot projects, with 68% reporting that less than 30% of their AI experiments have progressed into full-scale operational deployment.

After three decades dedicated to policy-making, Ana Paula Vescovi’s recent academic pursuit at the Wharton School delves into management and leadership amidst AI’s rise, focusing on the intersection of technology, strategy, and workplace psychology. Vescovi underscores the lack of consistent metrics for evaluating AI’s return on investment, particularly with generative AI, revealing a disconnect between the race for efficiency and the understanding of its broader impacts. This gap highlights a critical organisational choice: whether to use AI to replace human workers for quick gains or to augment human capabilities for enduring innovation and trust.

The broader economic history reminds us, as scholars Daron Acemoglu and Simon Johnson have noted, that technological progress is not inherently equitable or beneficial to all. Technological advancement can either concentrate power and exacerbate inequality or distribute opportunities and foster well-being and growth. This dual potential is mirrored in today’s AI deployment, which redefines work by automating tasks that once showcased human talent, possibly eroding worker autonomy, competence, and the social fabric of the workplace.

Work psychology identifies three pillars essential for employee well-being and productivity: learning, autonomy, and belonging. Research cited by Vescovi, including studies by Professor Stefano Puntoni, indicates that AI’s current implementations often reduce empathy and altruism among workers, and that consumers tend to value products more when human involvement is apparent. Hence, the technology itself is less decisive than the design of human-AI interactions, which must translate organisational culture into ethical standards and measurable outcomes.

Generative AI also offers a transformative potential for innovation. Christian Terwiesch of Wharton describes “innovation tournaments” enabled by AI, where broader participation in ideation and problem-solving is encouraged, with humans acting as curators rather than mere executors. This shift combines machine speed with human judgment, creating new competitive advantages for companies that can effectively balance these forces.

The future success of AI-driven innovation depends not merely on technology but on the cultural frameworks that surround it. Companies that foster structured experimentation alongside creative freedom can generate not only products but also meaningful progress. Vescovi suggests that AI’s promise lies in restoring human time and energy for reflection and reinvention, provided that fair transition policies support those impacted and continuous learning is prioritised.

The ultimate trajectory of AI’s societal impact hinges on economic incentives, public policy, and market valuation of human-centric investments. If systems focus exclusively on short-term productivity, AI use is more likely to replace workers and concentrate wealth. Conversely, valuing human development and knowledge dissemination could make AI a catalyst for shared prosperity. Thus, collaboration between companies and regulators is essential to align technical advancements with social and economic progress.

In conclusion, AI embodies the potential for either widespread abundance or deepening inequality. The determining factor will not be the sophistication of algorithms themselves but the collective human choices, ethical frameworks, and incentives constructed around them.


Source: Noah Wire Services