Deepfakes in Cybersecurity: Tools, Stats, and News (as of Nov 2024)
Security-related deepfake tools, attacks, stats, research, and media coverage.
Hi everyone - kicking off what I hope will become a recurring series within my newsletter.
As I have been working on DeepTrust, I have found myself obsessively sourcing and aggregating deepfake tools, headlines, stats, research, and media coverage that impact cybersecurity.
Since I have found this information to be tremendously helpful, I thought I’d begin sharing it on a regular basis. Here’s collection 1. Enjoy.
New Deepfake Tools:
New deepfake face-swapping software was made publicly available on GitHub. A video demo from a LinkedIn post is available here.
A new startup, Pickle, posted a public demo of the high-quality deepfake video-conferencing tooling it is selling to remote employees. You can check it out here.
Anam.ai is building and selling deepfake avatars and spokespeople. Demo here.
Captions’ Echo AI product lets you generate deepfake videos from audio clips. Demo here.
AKOOL released a beta for real-time webcam deepfakes. Demo here.
Great demo on LinkedIn of DeepFaceLive, a real-time deepfake video tool.
Synthesia’s hyper-realistic deepfake avatars will soon have full bodies.
Zoom announced that it will let AI avatars attend meetings for you.
High Profile Deepfake Attacks:
Senator Ben Cardin was targeted with a deepfake Zoom call.
A Ferrari exec was targeted by a deepfake impersonation of the company’s CEO.
KnowBe4 hired a fake employee who passed four video interviews during the screening process.
Exabeam caught a job candidate using deepfake technology during a video interview for an open position on its security team.
Arup lost $25M after a finance employee was instructed to wire money by a live deepfake of the company’s CFO on a Zoom call.
The CEO of Wiz revealed that the company was targeted with deepfake voice-phishing attacks.
Dozens of US companies accidentally hired North Korean spies as remote workers.
Deepfake Stats and Research:
From the recent Team8 CISO Village Survey:
Phishing attacks and deepfake-enhanced fraud pose the greatest AI-powered threats to organizations.
75% of respondents said phishing attacks and 56% said deepfake-enhanced fraud through voice or video were their biggest AI concerns.
22.5% list deepfake detection as the largest security problem facing their organization for which no solution is currently available.
Financial services CISOs ranked deepfake detection as the #4 pain point they are facing.
CSO Online published an article discussing a Deloitte study on how deepfakes are being used to target businesses. Notably, the article highlights the increasing use of deepfakes in social engineering attacks over voice and video calls.
"Deepfaked voice calls are becoming more common, but deepfaked videoconferencing is happening as well, says Mike Weil, digital forensics leader and managing director at Deloitte Financial Advisory Services. When an employee hears a CFO’s voice, or sees a CEO on a video call, most are wired to follow instructions, without questioning the request, he notes."
A recent Morgan Stanley newsletter cited social engineering and deepfakes as the #1 and #3 ways hackers are abusing AI.
A recent report from Google on how AI has been misused over the last year provided data highlighting that deepfake voice and video attacks have already become a severe issue.
Impersonation (pretending to be someone else) is by far the top type of AI misuse.
87% of these impersonations involved audio or video, with audio the more common of the two: audio accounted for 50% of all misuse cases and video for 37.5%.
76% of enterprises lack sufficient voice and messaging fraud protection post-ChatGPT.
61% of enterprises suffer significant losses to mobile fraud, with smishing (SMS phishing) and vishing (voice phishing) being the most prevalent and costly.
According to research from Medius:
Half of finance professionals (53%) have experienced attempted deepfake scamming attacks, with 43% admitting they have fallen victim to an attack and 85% saying deepfake technology poses an existential crisis to the business’ financial security. When asked, the vast majority of professionals (87%) admitted that they would make a payment if they were “called” by their CEO or CFO to do so. This is concerning as more than half (57%) of financial professionals can independently make financial transactions without additional approval
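That last stat suggests an obvious mitigation: require a second, independent approver for any payment request that arrives over a spoofable channel like a voice or video call. Below is a minimal sketch of such a dual-control gate; the channel names, threshold, and PaymentRequest class are hypothetical illustrations, not anything from the Medius research.

```python
# Hypothetical dual-control gate: payment requests that arrive over
# voice/video channels cannot be executed by a single employee.
from dataclasses import dataclass, field

HIGH_RISK_CHANNELS = {"voice_call", "video_call"}  # assumption: channel taxonomy

@dataclass
class PaymentRequest:
    amount: float
    channel: str                          # how the request arrived
    approvals: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        self.approvals.add(employee_id)

    def can_execute(self) -> bool:
        # Requests arriving over a spoofable channel need two distinct approvers.
        required = 2 if self.channel in HIGH_RISK_CHANNELS else 1
        return len(self.approvals) >= required

# Usage: a "CEO call" asking for a wire cannot be fulfilled unilaterally.
req = PaymentRequest(amount=250_000, channel="video_call")
req.approve("alice")
assert not req.can_execute()   # blocked: one approval is not enough
req.approve("bob")
assert req.can_execute()       # a second, independent approver unblocks it
```

The point of the design is simply that a convincing deepfake of one executive can no longer move money on its own; it would have to fool two people through separate approvals.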
From the Egress Report:
What stood out to the researchers were some of the new technologies available to threat actors. Egress said that analysis of phishing kits offered for sale on the dark web found that 75% advertised some sort of AI capability, and 82% offered deepfake creation features.
“One of the most troubling findings is the rapid commoditization of AI in phishing toolkits, which is putting advanced threats into the hands of less sophisticated cybercriminals,” explained Egress senior VP of threat intelligence Jack Chapman.
“However, the report highlights one enduring reality: modern phishing threats are increasingly driven by impersonation tactics, which have become the backbone of many advanced and targeted attacks against organizations.”
From the CRN Report:
Audio deepfake attacks against businesses have rapidly become commonplace in recent years.
Audio deepfake attacks are now so common “to the point where it’s even impacting myself as a cybersecurity researcher,” one researcher told CRN. “The prevalence is massive.”
Experts say that AI-generated impersonation has become a widespread problem since simple-to-use voice cloning became available more than a year ago.
On the business side, according to a report released this week by identity verification vendor Regula, nearly half of surveyed businesses say they’ve been targeted with audio deepfake attacks over the past two years.
From the IronScales Report:
Concerns around deepfakes are increasingly widespread and rapidly intensifying:
Over 94% of IT professionals express some level of concern about the threat deepfakes currently pose to their organizations.
When asked about the threat deepfakes will pose in the near future, the percentage of “very concerned” respondents rose sharply, to a staggering 74%.
Deepfake defense is quickly climbing toward the top of organizations’ priorities:
Over 43% of IT professionals say deepfake defense will rank as their organizations’ top security priority in the next 12-18 months.
An additional 48% acknowledge that it will be an important part of their security operations.
74% expect deepfake threats to worsen significantly, making them a key issue for organizations in the near future.
64% believe the volume of deepfake-enabled attacks will increase in the next 12-18 months, surpassing ransomware and account takeovers.
75% of respondents reported that their organizations had experienced at least one deepfake-related incident within the last 12 months.
30% were targeted with live video deepfakes.
23% were targeted with live audio deepfakes.
Nearly three-quarters (73%) of respondents say their organizations will invest in deepfake protection within the next 12 months.
One interesting point: despite the unmistakable air of concern around deepfakes today, and the fact that over 67% of respondents have already begun providing cybersecurity training on identifying deepfakes, only 41% of respondents are very confident in their organizations’ ability to defend against deepfake threats. That gap is cause for concern.
Accenture announced a huge new focus on deepfake security and education.
47% of organizations have dealt with deepfake attacks.
Per iProov: “Almost three quarters (73%) of organizations are implementing solutions to address the deepfake threat but confidence is low with the study identifying an overriding concern that not enough is being done by organizations to combat them. More than two-thirds (62%) worry their organization isn’t taking the threat of deepfakes seriously enough.”
Regula’s survey data shows “a significant rise in the prevalence of video deepfakes, with a 20% increase in companies reporting incidents compared to 2022. After aligning the 2024 survey data with the 2022 cohort for a direct comparison, it reveals that 49% of companies experienced both audio and video deepfakes, up from 37% and 29%, respectively, in 2022. However, the unadjusted 2024 survey—which includes a larger sample size and new regions such as Singapore, in place of countries like Australia and Turkey—indicates that 50% of companies were affected by both types of deepfakes.”
The New York Department of Financial Services issued a notice highlighting that “AI-enabled social engineering presents one of the most significant threats to the financial services sector,” and that the “lower barrier to entry for threat actors, in conjunction with AI-enabled deployment speed, has the potential to increase the number and severity of cyberattacks.”
The Microsoft Digital Defense Report 2024 stated “As deepfakes become more common in the business environment, organizations will have to implement countermeasures, such as requiring additional verification for transactions”. They also stated that “In the coming year, we anticipate the biggest rises in automated fraud and election interference, CSAM and NCII production, and the use of XPIA and deepfake impersonation as cyberattack and fraud channels.”
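To make Microsoft’s “additional verification” recommendation concrete, here is a minimal sketch of out-of-band callback verification: the employee confirms the request through contact details from an internal directory rather than trusting the voice or face on the call. The KNOWN_CONTACTS directory and confirm_via_known_channel helper are hypothetical stand-ins, not any specific product or API.

```python
# Hypothetical out-of-band verification: never act on a voice/video request
# alone; confirm it via contact details from an internal directory rather
# than anything supplied on the call itself.
KNOWN_CONTACTS = {"cfo": "+1-555-0100"}  # assumption: a vetted internal directory

def confirm_via_known_channel(role: str, request_summary: str) -> bool:
    """Call back the number on file and have the real person confirm.
    Stubbed here; in practice this is a human step or a separate system."""
    number = KNOWN_CONTACTS.get(role)
    if number is None:
        return False  # unknown role: fail closed
    print(f"Calling {number} to confirm: {request_summary}")
    return False  # stub: treat every request as unconfirmed until verified

def handle_wire_request(claimed_role: str, summary: str) -> None:
    # A deepfake can control the live call, but not the callback channel.
    if confirm_via_known_channel(claimed_role, summary):
        print("Verified out of band; route through normal payment approvals.")
    else:
        print("Unverified request; hold the payment and escalate to security.")

handle_wire_request("cfo", "Urgent wire to a new vendor account")
```

The key design choice is failing closed: a request stays blocked until it is confirmed over a channel the attacker does not control.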
In their CISO survey, Team8 “found that AI-powered phishing attacks and the rise of deepfakes had emerged as top concerns, just a year after many in the cohort had hoped generative AI would be nothing more than a passing fad”.
FS-ISAC published a report discussing how deepfakes are impacting the financial sector. They found that 60% of executives say their firms have no protocols regarding deepfake risks, and that losses from deepfakes and other AI-generated fraud are expected to reach tens of billions of dollars in the next few years.
A recent Microsoft report highlighted some additional key data around deepfakes and social engineering attacks.
In a recent study funded by the National Science Foundation investigating the vulnerability of different groups to deepfake videos, general adult participants correctly identified a deepfake video as inauthentic only 46% of the time.
The Federal Trade Commission (FTC) currently ranks imposter scams as the most reported type of fraud, with losses increasing since 2019. In 2023 alone, these scams resulted in $2.7 billion in losses, with a median loss of $1,000 per scam.
A 2023 survey found that 37% of organizations globally have experienced some form of voice deepfake fraud attempt. This trend is particularly concerning since AI has the potential to enable more accurate and misleading imposter scams.
Only 11% of respondents to a different poll believed they could accurately identify AI content, and the recent coverage of altered images of public figures has further heightened concerns about the impact of synthetic content on trust in the information ecosystem.
Deepfake Security Media Coverage:
Rachel Tobac provided a great demonstration showing how deepfakes can be used to attack organizations.
Here is a great demo of a fully AI-generated avatar joining a live Zoom meeting and interacting with a team.
Greylock published “Deepfakes and the New Era of Social Engineering”
Evan Ratliff concluded a podcast series, Shell Game, where he created a voice clone of himself and used it for different social experiments.
A BBC reporter created an AI avatar of himself and had it join work meetings. Video here.
Great talk at the 2024 BSidesSF cybersecurity conference highlighting how deepfakes and generative AI are compromising voice biometrics: BSidesSF 2024 - Your voice confirms my identity (Ethan McKee-Harris)
Great talk from RSA last year about how AI can be leveraged in social engineering: CatPhish Automation - The Emerging Use of Artificial Intelligence in Social Engineering
TechCrunch published an article discussing the dangers of voice cloning, social engineering, and voice phishing.
I hope you enjoyed this blog! Please subscribe and share with your friends. New subscribers help assure me that people care and let me know that I should keep putting these out.
A little about me…
I'm a co-founder at DeepTrust, where we help security teams defend employees from social engineering, voice phishing, and deepfakes across voice and video communication channels. Seamlessly integrating with VoIP services like Zoom, Microsoft Teams, Google Meet, RingCentral, and others, DeepTrust works in real time to verify audio sources, detect deepfakes, and alert both users and security teams to suspicious requests.
If you’re interested in learning more, or have any additional questions, I’d love to chat! Feel free to shoot me an email at noah@deeptrust.ai.