AI deepfakes and crypto social engineering in 2026: when the “CEO” on Zoom is not real

Synthetic media has lowered the cost of targeted fraud, and treasury teams and high-net-worth individuals are prime targets. Technical defenses still start with approval policy and hardware verification, not with AI detectors alone. Baseline reading: our phishing guide and multisig response.

Attack patterns we are seeing

Short-loop voice clones trick assistants into “confirming” wire details. Video-call deepfakes impersonate executives authorizing urgent stablecoin sends from hot wallets. Romance scammers use synthetic video to deepen trust before moving victims to fake exchanges, an evolution of the themes covered in our romance scam crypto guide.

Verification rituals that scale

Out-of-band authentication should use channels established before the incident: callback numbers from the corporate directory, not numbers read aloud on the same suspicious call. For large transfers, enforce multi-person approval with time delays and require at least one approver to be physically present or use hardware keys that were not plugged in during the suspicious session.

Why on-chain hygiene still dominates outcomes

Even perfect deepfake theater ends in a transaction. Malicious approvals, spoofed domains, and drainer contracts are the same primitives as in 2022; only the social layer has become cheaper to fake. Train teams to read transaction calldata previews and to reject “install this quick meeting plugin” requests on finance machines.
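Reading a calldata preview is mostly about recognizing a handful of dangerous function selectors. A minimal sketch, using the real 4-byte selectors for the ERC-20 `approve` and `increaseAllowance` functions; everything else (the helper name, how calldata reaches it) is an assumption for illustration:

```python
# Flag risky ERC-20 calldata before signing.
# 0x095ea7b3 and 0x39509351 are the genuine 4-byte selectors for
# approve(address,uint256) and increaseAllowance(address,uint256).
RISKY_SELECTORS = {
    "0x095ea7b3": "approve(address,uint256)",
    "0x39509351": "increaseAllowance(address,uint256)",
}
UNLIMITED = (1 << 256) - 1  # the common "infinite approval" amount

def inspect_calldata(data: str) -> str:
    data = data.lower().removeprefix("0x")
    selector = "0x" + data[:8]
    if selector not in RISKY_SELECTORS:
        return "no known risky selector"
    # ABI layout: 8 hex chars of selector, then 64-char (32-byte) words.
    spender = "0x" + data[8:72][-40:]       # arg 1: address, right-aligned
    amount = int(data[72:136] or "0", 16)   # arg 2: uint256
    label = "UNLIMITED" if amount == UNLIMITED else str(amount)
    return f"{RISKY_SELECTORS[selector]} to {spender}, amount {label}"
```

An unlimited `approve` to an address nobody on the team recognizes is the signature move of a drainer contract, regardless of how convincing the call was.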

After a deepfake-led loss

Preserve recordings only where lawfully permitted in your jurisdiction; hash the files and keep chain-of-custody notes. Capture wallet addresses and transaction hashes immediately for tracing. Report through the channels in our reporting guide.

Individuals protecting family

Agree on a family “safe word” for money emergencies, and teach elders that video can lie. Combine this with habits from our prevention checklist. If you are already a victim, the first 24 hours playbook still applies.
