
Reality Defender
Stopping deepfakes before they become a problem
The Financial Industry Regulatory Authority (FINRA) is a self-regulatory organization that exists to make and enforce rules for brokerage and other financial services firms, including measures to deter fraud. In late 2023, FINRA imposed a $1.1 million fine on SoFi Securities for failures in its mostly automated process for verifying customer identities that allegedly enabled $8.6 million in unauthorized transfers (SoFi consented to the settlement, while neither admitting nor denying the allegations). What makes this especially scary is that the tools fraudsters have at their disposal are evolving at an alarming rate, according to Ben Colman, co-founder and CEO of Reality Defender, the leading developer of technology to detect AI content and prevent AI-based fraud. As he argued in a recent post, the rise of cheap and capable generative AI tools is only making these attacks more relentless, more sophisticated, and more numerous: “If basic identity theft can cause such damage, imagine the havoc that sophisticated deepfake impersonations could wreak.”
It was with this sense of urgency that Reality Defender recently submitted formal comments to FINRA in response to a request for input as the regulator modernized industry rules. We recently caught up with Ben to ask him about what the era of “deepfakes” means for financial fraud and what hope the industry has for detecting and preventing it.
DCVC: Tell us about your response to FINRA.
Colman: FINRA’s regulatory modernization initiative presented a critical opportunity to address a fundamental security gap that existing regulations simply weren’t designed to handle. The SoFi Securities case perfectly illustrates how today’s automated identity verification systems are vulnerable to sophisticated attacks.
What made this request particularly timely is that deepfake technology transforms the same basic vulnerabilities allegedly exploited in the SoFi case into exponentially more dangerous threats. We saw an opportunity to help FINRA get ahead of this curve rather than react to it after major losses occur. The regulatory framework needs updating to explicitly recognize real-time deepfake detection as not just reasonable, but necessary for compliance.
DCVC: How big is the threat of AI-enabled deepfakes for financial services?
Colman: The threat is already substantial and growing rapidly. According to Deloitte, generative-AI-enabled fraud losses in the US financial sector could reach approximately $25 billion by 2027. FINRA itself acknowledges in its 2025 Annual Regulatory Oversight Report that fraudsters are increasingly using “deepfake media to impersonate well-known finance personalities” and creating “synthetic IDs” for fraudulent brokerage accounts.
The statistics are telling: one in five FINRA members reports difficulty identifying customers despite implementing global Know-Your-Customer protocols, while 42 percent of fraud occurs during customer onboarding — precisely where deepfake technology proves most effective.
If we as governments and enterprises don’t begin countering this threat now, we’re looking at a future where every customer interaction, every executive communication, and every trusted contact verification becomes a potential attack vector. The exponential improvement in AI technology means that what costs thousands of dollars and requires technical expertise today will be available for hundreds of dollars with point-and-click simplicity tomorrow.
DCVC: How optimistic are you about digital authentication and deepfake detection becoming standard?
Colman: I’m cautiously optimistic, particularly given the regulatory momentum we’re seeing. FINRA’s proactive approach to modernizing rules signals that regulators understand they need to get ahead of this threat.
The key is creating regulatory clarity that encourages adoption rather than hinders it. In our submission, we recommended that FINRA establish a regulatory safe harbor for good-faith implementation of deepfake detection systems and explicitly recognize these technologies as reasonable fraud prevention measures.
What gives me confidence is that the financial services industry has historically been quick to adopt security technologies when the business case is clear and the regulatory framework supports it. The question isn’t whether this will happen, but how quickly we can create the standards and guidance that enable widespread, responsible deployment.