Your CFO joins a video call with three members of the executive team. The CEO is there. The General Counsel is there. The conversation is normal—quarterly figures, a pending acquisition, and a wire transfer that needs to go out before close of business. Everyone looks right. Everyone sounds right. The CFO authorizes the transfer.

Except none of those people were on the call. Every face was synthetic. Every voice was cloned. And the money is gone.

This isn’t science fiction. It already happened—and the price tag was $25.6 million.


This Is Now the #1 Threat Keeping CEOs Awake

According to the World Economic Forum’s Global Cybersecurity Outlook 2026, cyber-enabled fraud has overtaken ransomware as the top concern for chief executive officers worldwide. That’s a seismic shift. For years, ransomware dominated every boardroom conversation. Now, the thing executives fear most is being impersonated by their own digital likeness.

The numbers back it up. Seventy-three percent of global leaders surveyed reported that they or someone in their professional network was directly affected by cyber-enabled fraud in 2025. In North America, that figure hit 79 percent. And AI is the accelerant—87 percent of respondents identified AI-related vulnerabilities as the fastest-growing cyber risk of the past year.

We’ve crossed a threshold. The threat isn’t just that someone might send a phishing email pretending to be your CEO. The threat is that someone can become your CEO—on camera, on the phone, in real time—and no one in the room can tell the difference.


Real Attacks. Real Losses. Real Executives.

This isn’t theoretical. Here’s what’s already happened:

The $25.6 Million Video Call (Arup, 2024)

A finance employee at UK engineering giant Arup joined a video conference with what appeared to be the company's CFO and other senior leaders. Every participant was a deepfake. Over the course of the call, the employee was directed to execute 15 separate wire transfers totaling $25.6 million across five bank accounts. By the time the fraud was discovered, the money was gone.

The $499,000 Zoom Call (Singapore, 2025)

A finance director at a multinational firm in Singapore authorized a transfer after a Zoom call with company leadership. The attackers had proactively suggested the video call, a move that created false confidence. Every executive on the screen was AI-generated. The company lost nearly half a million dollars in a single session.

The Italian Defence Minister Scheme (2025)

Criminals used AI-generated audio to impersonate Italy's Defence Minister in calls to corporate executives, claiming they needed emergency funds to help free journalists held abroad. At least one executive wired €1 million to a Hong Kong account before the scheme was uncovered.

These aren’t outliers. CEO fraud now targets an estimated 400 companies per day. AI scam activity surged over 1,200 percent in 2025, far outpacing the growth of traditional fraud.


Why This Is Different from Every Other Threat

Business Email Compromise (BEC) has cost organizations billions for years. But until recently, it had a natural ceiling: a well-trained employee could spot a suspicious email. The grammar was off. The domain was wrong. The request didn’t feel right.

AI has removed that ceiling.

Deepfake voice cloning requires as little as three seconds of audio to generate a convincing replica of someone’s voice. Executives who speak on earnings calls, podcasts, webinars, and YouTube interviews are handing attackers exactly what they need. Combine that voice with a deepfake video generated from publicly available photos and footage, and you have an impersonation that can fool even close colleagues.

This is no longer BEC. This is multimodal impersonation—email, voice, and video combined into a single attack that exploits the one thing your security stack can’t easily filter: human trust.

And the attackers know where the money moves. They study your org chart. They know who reports to whom. They know when your CEO travels, when your CFO is in meetings, and when your finance team processes large transactions. The call comes at exactly the right time, from exactly the right person, asking for exactly the kind of action that wouldn’t normally raise a flag.


What the C-Suite Needs to Do—Now

This threat requires executive-level action, not just IT-level tooling. Here’s what we recommend to every leadership team we work with:

1. Establish Out-of-Band Verification for All High-Value Transactions

No wire transfer, vendor change, or financial authorization should be completed based solely on a video call, phone call, or email—no matter who appears to be making the request. Implement a mandatory callback protocol using a pre-established, trusted phone number. If the CEO calls and asks for a transfer, your finance team hangs up and calls the CEO back on a known number. Every time. No exceptions.

2. Reduce Your Executive Digital Footprint

Audit how much high-quality audio and video of your leadership team exists publicly. Earnings calls, keynote presentations, podcast appearances, and social media videos are all raw material for voice cloning and deepfake generation. You don’t have to go silent, but you should understand the tradeoff and make deliberate decisions about what stays public.

3. Run Deepfake Tabletop Exercises

Most organizations run phishing simulations. Almost none run deepfake simulations. Conduct tabletop exercises where a synthetic voice or video of your CEO requests an urgent wire transfer. Measure how your team responds. Do they follow the callback protocol? Do they escalate? Or do they comply because it looked and sounded like the boss?

4. Implement Multi-Party Authorization

No single individual should be able to authorize a high-value transaction. Require dual or multi-party approval for any wire transfer, vendor payment, or account change above a defined threshold. This single control would likely have stopped every case described in this article, because each one hinged on a lone employee complying with a convincing request.
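The threshold rule above is simple enough to express as a hard check rather than a policy memo. The following sketch is illustrative; the threshold amount and role names are assumptions, and a real system would also verify each approver's identity independently:

```python
# Illustrative sketch of a multi-party authorization check.
# Threshold and approver names are assumptions for the example.

APPROVAL_THRESHOLD = 50_000  # above this, two distinct approvers required

def is_authorized(amount: float, approvers: set[str]) -> bool:
    """Require at least two distinct approvers above the threshold;
    a single approver suffices below it."""
    required = 2 if amount > APPROVAL_THRESHOLD else 1
    return len(approvers) >= required

assert is_authorized(10_000, {"analyst"}) is True
assert is_authorized(500_000, {"cfo"}) is False            # one voice, even the
assert is_authorized(500_000, {"cfo", "controller"}) is True  # CFO's, is not enough
```

Using a set of approvers, rather than a count, means the same person approving twice still fails the check, which is exactly the property that defeats a single convincing deepfake.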

5. Update Your Incident Response Plan

Your IR plan likely covers ransomware, data breaches, and phishing. Does it cover executive impersonation? Define what happens when an employee suspects they’re on a call with a deepfake. Who do they contact? How do payments get frozen? How do you preserve evidence? If you don’t have answers to these questions, you have a gap.

6. Brief Your Board

Boards are increasingly being held personally accountable for cybersecurity failures. Gartner’s 2026 cybersecurity trends report highlights regulatory volatility and executive liability as top-tier concerns. Your board needs to understand that deepfake fraud is not a technology problem—it’s a governance problem. If your organization doesn’t have controls in place and a loss occurs, the question won’t be “how did the attacker get in?” It will be “why didn’t leadership act?”


The Uncomfortable Truth

The technology to impersonate any executive, in real time, with high fidelity, is now widely accessible. It doesn’t require a nation-state budget. It doesn’t require deep technical expertise. Off-the-shelf AI tools can clone a voice in seconds and generate a convincing face in minutes.

The organizations that survive this era won’t be the ones with the most advanced detection tools. They’ll be the ones that built verification into their culture—where no one, regardless of title, can bypass the process. Where “trust but verify” isn’t a poster on the wall but a protocol that runs every time money moves.

At Secure Roots, we help leadership teams build exactly that kind of resilience. From executive threat briefings to tabletop exercises to incident response planning, we work with organizations to close the gaps that AI-powered fraud is designed to exploit.


The Bottom Line

If your organization hasn’t stress-tested its ability to detect a deepfake impersonation of your own CEO, you’re already behind. The attackers have the tools. The question is whether your people have the training and your processes have the controls to stop them.

Don’t wait for a $25 million wake-up call.

Talk to Secure Roots →