As the financial sector contends with a rapidly evolving landscape of cybercrime, the rising adoption of artificial intelligence (AI) is shaping new strategies for fraud prevention. According to the Federal Trade Commission, consumers lost more than $12.5 billion to fraud in 2024, a stark 25% increase over the previous year, underscoring the growing urgency to strengthen security frameworks amid increasingly sophisticated scams. Investment scams, imposter scams, and online shopping fraud accounted for a significant share of these losses, revealing the urgent need for multi-layered and intelligent fraud defences.

In this context, financial institutions are turning to AI technologies, including generative and agentic AI systems, not only to detect but also to proactively prevent such fraudulent activities. However, the deployment of AI introduces complex challenges around data privacy, ethical use, and responsible governance. The World Economic Forum's 2025 white paper highlights these intricacies, emphasising the crucial balance between leveraging AI to enhance operational security and protecting the privacy and trust of consumers. Establishing clear frameworks for responsible AI adoption is essential to ensure both effectiveness in fraud prevention and adherence to ethical standards.

JoAnn Stonier, a Fellow of Data and AI at Mastercard and an expert in privacy and data governance, provides valuable insights into how financial institutions can navigate this balance. Her experience leading Mastercard's enterprise-wide data governance and analytics underscores the value of adopting mature, deterministic AI systems over chasing the hype of emerging technologies. Mastercard’s 17 years of using data and analytics to monitor global transactions demonstrate how AI can significantly improve real-time fraud detection by analysing spending patterns rather than individual identities.

Stonier explains that responsible AI in fraud prevention focuses on minimal data use, such as transaction date, time, location, and purchase amount, enabling pattern recognition without compromising individual privacy. This approach, known as data minimisation, ensures that AI systems detect suspicious behaviour without collecting more personal information than necessary, maintaining trust and compliance with privacy regulations. Such advancements mean fewer false positives, improving customer experience by recognising legitimate spending sprees across multiple locations, for instance, without unnecessary interruption.
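To make the data-minimisation idea concrete, the following is a minimal, hypothetical sketch (not Mastercard's actual system) of how a fraud check might operate on only the fields named above: hour of transaction, coarse location, and amount, with no identifying information about the cardholder. The `Transaction` record, the z-score threshold, and the region check are all illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical minimal transaction record: no name, account number, or other
# identifying fields -- only the attributes needed for pattern recognition.
@dataclass
class Transaction:
    hour: int        # hour of day (0-23)
    region: str      # coarse location, e.g. a country or city code
    amount: float    # purchase amount

def is_suspicious(history: list[Transaction], candidate: Transaction,
                  z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the card's
    recent spending pattern AND that occurs in a never-before-seen region."""
    if len(history) < 5:
        return False  # too little history to judge a pattern
    amounts = [t.amount for t in history]
    mu, sigma = mean(amounts), stdev(amounts)
    unusual_amount = sigma > 0 and abs(candidate.amount - mu) / sigma > z_threshold
    unseen_region = candidate.region not in {t.region for t in history}
    return unusual_amount and unseen_region

# Illustrative history of small purchases in one region.
history = [Transaction(12, "US-NY", a) for a in (20, 35, 18, 42, 27, 31)]
print(is_suspicious(history, Transaction(3, "XX", 5000.0)))   # large amount, new region -> True
print(is_suspicious(history, Transaction(13, "US-NY", 30.0))) # fits the pattern -> False
```

Note how requiring both signals (unusual amount and unseen region) mirrors the article's point about reducing false positives: a legitimate spending spree in a familiar region is not interrupted.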

Beyond foundational AI, emerging “agent-ish” AI systems, precursors to fully autonomous AI, promise future enhancements in fraud detection capabilities. However, Stonier stresses that human oversight remains essential as these systems increase in complexity. Effective AI deployment demands a "team-sport" governance approach that involves collaboration among product, innovation, and risk teams, all aligned by a clear purpose and iterative evaluation processes. This collective responsibility ensures a balanced management of risks, from model bias to data security, while continuously adapting to new challenges.

The broader ecosystem echoes this call for structured and balanced AI integration. At the 2025 World Economic Forum in Davos, NTT DATA CEO Abhijit Dubey advocated for international standards in AI regulation to address risks linked to intellectual property, energy use, and misinformation, suggesting that global coherence is key to responsibly harnessing the technology's potential. Meanwhile, the FBI’s 2025 data indicating a 33% increase in global cybercrime losses to over $16 billion reveals that the fight against fraud is far from over and that technology alone cannot suffice without human governance and cooperation.

Regional data, such as that from Texas and Georgia, reflects these national trends, showing significant year-over-year increases in fraud losses and varied scam tactics targeting different demographics. These findings reinforce the necessity for financial institutions to adopt robust, adaptable AI systems capable of responding to diverse and evolving threats, always considering ethical implications and operational transparency.

In summary, as financial organisations face escalating fraud threats, the integration of responsible AI that prioritises mature technologies, data minimisation, and collaborative governance offers a promising pathway. JoAnn Stonier's experience at Mastercard illustrates that the future of fraud prevention depends not just on advanced algorithms, but on maintaining human oversight, fostering a culture of trust, and ensuring AI innovations are ethically aligned and practically effective in safeguarding customers and the broader financial ecosystem.

Source: Noah Wire Services