Artificial intelligence is sharpening the tools used by fraudsters, making long-standing confidence tricks far more persuasive and costly for victims, according to U.S. law-enforcement officials and cybersecurity researchers. The U.S. Postal Inspection Service, which investigates crimes that exploit the postal system, says a growing share of the scams its inspectors pursue is now aided by AI-generated photos, voice clones and fabricated videos that lend false credibility to fake identities. Postal Inspector Eric Shen said, "The scams have stayed the same, but with AI, we are starting to see it become more realistic."
Industry and government data indicate the financial toll is already large and rising as criminals scale these operations. According to reporting by Axios, Americans lost more than $1.16 billion to romance scams in 2025, while broader analyses show billions of dollars are siphoned annually through investment fraud, phishing and other schemes that use AI to automate and personalise attacks. Security firms note that in the vast majority of AI-enhanced scams, the technology is used to create convincing content that lowers victims' defences.
Scammers are combining traditional social-engineering tactics with generative models to produce tailored interactions that mimic trusted people or institutions. The Postal Inspection Service and analysts explain how AI can generate polished profiles, stage realistic video calls, reproduce a loved one’s voice and craft phishing messages that emulate legitimate firms, all designed to extract money or sensitive information. "That's what the scammers want. They want your wallet. They want your bank account. They want to take all the money out of your account," Shen said.
Romance fraud remains a high-profile example of this trend. The FBI and reporting by Axios warn that approaches such as celebrity impersonation, prolonged "pig butchering" campaigns that cultivate trust before requesting large transfers, and tragedy or "worker abroad" narratives are increasingly bolstered by deepfakes and automated chat tools that sustain long-running deceptions. Victims frequently report accelerated intimacy and requests for secretive payments as warning signs.
Investment and cryptocurrency schemes also benefit from AI’s ability to create convincing façades. Security commentary from Norton and analysis by F-Secure describe how scammers spin up realistic-looking websites, email templates and investor communications that mimic legitimate financial services, while AI-driven targeting identifies people most likely to respond. Experts say offers that promise unusually high, risk-free returns or pressure recipients to act immediately should be treated with scepticism.
Practical precautions urged by law enforcement and cybersecurity providers emphasise verification and restraint. The Postal Inspection Service advises ignoring unsolicited messages that demand urgent payment, deleting suspicious offers and confirming contacts through independent, trusted channels. Norton recommends avoiding clicks on dubious links and conducting research only on official sites, while F-Secure highlights the role of AI in content generation and urges heightened vigilance around unexpected voice or video communications.
Authorities are asking the public to report AI-assisted scams so investigators can track evolving tactics and disrupt fraud rings. The Postal Inspection Service and consumer protection agencies provide online portals for complaints and guidance, and cybersecurity firms suggest organisations and individuals adopt stronger authentication, digital literacy training and layered defences to reduce exposure to AI-enabled deception.
Source: Noah Wire Services