The age of AI is here. Some embrace the change; others fear it. Perhaps surprisingly, as AI tools become more prevalent, most people believe they are equipped to spot AI-generated scams. Yet new research reveals a worrying trend: as people grow more familiar with AI, they become more likely to fall for these scams.
The research found that fear of AI-generated scams decreased by 18% year-over-year, with only 61% of people now expressing worry that someone would use AI to defraud them. Over the same period, the number of people who admitted to being successfully duped by these scams increased by 62% overall.
So, what is the reality?
A Proliferation of Scams
Traditional scam attempts rely on mass, generic messages in the hope of catching a few victims: a message from the “lottery” claiming the recipient has won a prize, or a fake business offering employment. In exchange for the victim’s bank account details, the messages promise a payout. Of course, the money never arrives; instead, the victim loses their own.
With AI, scam attempts are becoming more personalized and specific. A phishing email may no longer be riddled with grammatical errors or sent from an obviously spoofed account. AI also puts more tools at scammers’ disposal.
For example, voice cloning allows scammers to replicate the voice of a friend or family member from just a three-second audio clip. In fact, we’re starting to see more people swindled out of money because they believe a ransom message is coming from a family member, when it’s actually a scammer using a cloned voice.
The Trust Breakdown
This trend harms both businesses and consumers. If a scammer were to gain access to a customer’s account information, they could drain an account of loyalty points or make purchases using a stolen payment method. The consumer would need to go through the hassle of reporting the fraud, while the business would ultimately need to refund those purchases (which can lead to significant losses).
There’s also a long-term impact to this trend: AI-generated scams erode trust in brands and platforms. Imagine a customer receiving an email claiming to be from Amazon or Coinbase support, warning that an unauthorized user is trying to access their account and urging them to call support immediately to fix the issue. Without obvious red flags, they may not question its legitimacy until it’s too late.
A customer who falls for a convincing deepfake scam doesn’t just suffer a financial loss; their confidence in the brand is lastingly damaged. They either become hyper-cautious or take their business elsewhere, leading to further revenue loss and a damaged reputation.
The reality is that everyone pays the price when scams become more convincing, and if companies fail to take steps to establish trust, they wind up in a vicious cycle.
What's Fueling the Confidence Gap?
To address this confidence gap, it’s important to understand why the divide exists in the first place. Digital natives have spent years developing an intuitive sense for spotting "obvious" scams — the poorly written emails or suspicious pop-ups offering a free iPod. This exposure creates a dangerous blind spot: when AI-generated scams perfectly mimic legitimate communication, that same intuition fails.
Consider how the brain processes a typical workday. You're juggling emails, Slack messages, and phone calls, relying on split-second pattern recognition to separate signal from noise. A message from "your bank" looks right, feels familiar, and arrives at a plausible time.
The problem compounds when scammers use AI to perfectly replicate not just logos and language, but entire communication ecosystems. They're not just copying Amazon's email template; they're replicating the timing, context, and behavioral patterns that make legitimate messages feel authentic. When a deepfake voice call sounds exactly like a colleague asking for a quick favor, a pattern-matching brain tends to confirm that interaction as normal.
This explains why the most digitally fluent users are paradoxically the most vulnerable. They've trained themselves to navigate digital environments quickly and confidently. But AI-powered scams exploit that very confidence.
The generations most confident in detecting an AI-generated scam are the ones most likely to get duped: 30% of Gen Z have been successfully phished, compared to just 12% of Baby Boomers. Regardless of your age or your feelings about AI, it is imperative that we all learn to recognize the potential for financial harm and take the steps necessary to protect ourselves.