We Need to Stop Fighting Deepfakes...
and instead focus on solving the real problems that deepfakes make worse
Over the last six months or so, I have become increasingly convinced of a spicy take: we need to stop trying to fight deepfakes.
I’m sure this comes as a surprise, considering we can all generally agree that deepfakes are bad.
Every day, it seems, there is a new headline about something terrible someone is doing with a deepfake.
But that’s exactly my point - doing WITH a deepfake. Deepfakes are a tool.
You see "deepfake" is a vague term that encompasses all kinds of ways generative AI tools can be abused.
It’s a red herring, a distraction from what’s actually important.
It allows AI tool creators to avoid responsibility for how their tools can be misused by drawing a distinction between “good AI” and “bad AI”.
Because the term is so vague, when we attempt to take action to fight (and regulate) deepfakes, we quickly realize it’s almost impossible, because a deepfake isn’t actually any one singular thing - despite what the name implies.
Instead, deepfakes are just malicious outputs from the same assortment of AI tools and technology that we are all so collectively enamored with right now.
Like pretty much everything in life, generative AI is not black and white - it’s grey. Grey is complicated. Grey requires nuance and critical thinking (gross).
Ok Noah, so if deepfakes aren’t the problem, what is?
I’m glad you asked.
We (the tech community) are building incredible new tools that allow realistic synthetic replication of previously uniquely human traits at a scale never before imaginable, and we have made these tools accessible to literally anyone.
In making these tools openly accessible, we either somehow forgot that bad people exist and assumed the tools wouldn’t be abused, or we just didn’t care - which is much more likely.
If we are being honest, the standard playbook in tech is to create things and let other people figure out the consequences.
Regardless, once these tools started being abused (duh), a narrative caught on that the malicious outputs of these tools (deepfakes) were the underlying problem that needed solving.
The problem is that we focused on the wrong problem.
While the ability to generate malicious AI content is A problem, it’s not THE problem.
While the ability to identify malicious AI content is A solution, it’s not THE solution.
By focusing on the wrong problem, we have all collectively wasted a lot of time, energy, and money on solutions that aren’t actually solving anything.
Bad.
“Solving” deepfakes from first principles.
The cat is out of the bag. Generative AI is here, it won’t ever go away, it’s getting better quickly, and it will continue to do so.
It’s also a fact of life that there will always be bad people who do bad things and they will use whatever available tools are most helpful in achieving their goals.
So where do we go from here?
Deepfakes are distinct from other outputs of generative AI because of intent.
It’s not the fact that something is AI-generated that makes it bad; it’s WHAT that AI-generated thing is being used for and WHY it was created that makes it bad.
Badness is derived from the intent, objectives, and usage of the AI, not from the AI itself.
AI allows me to replicate a voice in real-time.
I can use this capability for live language translation.
I can also use this capability for voice phishing, fraud, and scams.
It’s not the AI voice that’s the underlying problem in this instance, it’s voice phishing, fraud, and scams - AI is just lowering the barrier to entry, increasing the effectiveness of attacks, and allowing those attacks to be automated and scaled.
This concept is important. It allows us to identify the underlying problems we need to address and how we should go about solving them.
From a solutions perspective, in order to “solve” deepfakes, we need to focus on solving each of the underlying problems that deepfakes make worse (like voice phishing, fraud, and scams) and design solutions that address those problems even when AI is used.
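To make that concrete, here is a minimal sketch in Python of what this could look like for voice-driven payment fraud. Everything in it (the names, the thresholds, the risk categories) is hypothetical and illustrative, not a real product or API. The point is that a deepfake detector is just one signal, while the control that actually stops the fraud - out-of-band callback verification for high-risk requests - holds even when detection fails.

```python
from dataclasses import dataclass

# Hypothetical sketch: a fraud control that works whether or not the
# caller's voice is synthetic. Deepfake detection is one input signal,
# not the whole solution.

@dataclass
class CallRequest:
    caller_claimed_identity: str
    requested_action: str    # e.g. "wire_transfer"
    amount_usd: float
    deepfake_score: float    # 0.0-1.0 from some detector (may be wrong!)

# Actions that are high risk no matter who (or what) is asking.
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "payroll_change"}

def handle_request(req: CallRequest) -> str:
    # Signal 1: detector output raises suspicion but is never trusted alone.
    suspicious = req.deepfake_score > 0.7

    # Signal 2: the request itself is inherently high risk, regardless of
    # whether the voice is real. This targets the underlying problem
    # (fraud), so the control holds even if detection misses a deepfake.
    high_risk = (req.requested_action in HIGH_RISK_ACTIONS
                 or req.amount_usd > 10_000)

    if high_risk or suspicious:
        # Out-of-band verification: call back on a number pulled from the
        # employee directory, never one supplied during the call itself.
        return f"HOLD: verify {req.caller_claimed_identity} via known callback number"
    return "ALLOW: proceed with standard logging"

# A convincing voice with a low deepfake score still gets held,
# because the request itself is high risk.
print(handle_request(CallRequest("CFO", "wire_transfer", 250_000.0, 0.1)))
```

The design choice worth noticing: removing the detector entirely would weaken this system but not break it, because the control is anchored to the underlying problem (unverified high-risk requests), not to the technology used in the attack.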
If we want to take it a step further and disincentivize the malicious use of AI, lawmakers can step in - not necessarily to regulate AI developers (a separate discussion), but to severely penalize using AI for criminal purposes.
There will always be new technology and with it our problems will continue to change and evolve.
The path forward is to stay focused on what the real underlying problems plaguing our society are, understand how new technology impacts these problems, and adjust our priorities, strategies, and solutions accordingly.
We must stay focused on solving real problems and accounting for technology, not solving for technology and ignoring real problems.
I hope you enjoyed this blog! Please subscribe and share with your friends. New subscribers help assure me that people care and let me know that I should keep putting these out.
A little about me…
I'm a co-founder at DeepTrust where we help security and fraud teams protect employees from live voice and video threats with intelligent call security across VoIP platforms.
DeepTrust protects against both conventional and deepfake threats by using live conversational telemetry, layered threat detection, and org-specific knowledge bases to deliver real-time, personalized security coaching to employees and risk-remediation guidance to security teams.
If you’re interested in learning more, or have any additional questions, I’d love to chat! Feel free to shoot me an email at noah@deeptrust.ai.