The (New) Social Dilemma
With technology, you don’t have to overwhelm people’s strengths. You just have to overwhelm their weaknesses.
Those of you who read my blog on a regular basis know that there are three authors and newsletters I return to over and over again, and whose ideas have had considerable impact on my thinking.
Kyle Harrison, in particular, published a long (but incredible) newsletter that I think should become required reading for every founder going forward. In it, he discusses the critical importance of building a clear and compelling narrative around your company, and working to cultivate a passionate community of believers and supporters.
Inspired by this, I decided it was a good time to revisit the narrative around my own startup, DeepTrust, and why we are doing what we are doing. A lot has changed since my first attempt at documenting our reason for existence, and an update is long overdue. I know my last newsletter covered our journey so far as a company, but I think it’s critical to be clear on WHY we exist.
With that, I’d like to tell you a story.
Back in 2020, I watched The Social Dilemma on Netflix and it shook me. Not because anything in it was a surprise, but because it effectively articulated so many of the concerns I’ve had growing up as part of the first generation with access to social media from a young age.
There was one quote in particular from the show that stood out to me, and it continues to haunt me:
“With technology, you don’t have to overwhelm people’s strengths. You just have to overwhelm their weaknesses. This is overpowering human nature. And this is checkmate on humanity.” - Tristan Harris (Former design ethicist at Google)
In the show, the quote refers to the addictive features intentionally built into social media platforms, but I think it applies much more broadly, and it rings especially true when it comes to generative AI.
The last couple of years have seen incredible advances in the quality, application, and adoption of generative AI. While this has unlocked amazing new capabilities for humanity, it comes with a much darker new reality as well.
I believe that rapid (and continuing) improvements in generative AI mean we can no longer rely on our own senses to discern what is real from what is fake online.
Specifically, going back to Tristan’s quote, I believe that AI has now surpassed our weaknesses and is quickly approaching our strengths.
The result of this is that we have entered a new era of digital communication. An era where we can no longer trust a familiar voice. An era where we can no longer believe what we are seeing or hearing in real-time.
A level of doubt and uncertainty has been introduced into all digital communication and media. Things we once took for granted as real can now be questioned. Anyone can claim something is AI generated, and people have to legitimately consider that this might be the case.
This is a dramatic shift with far reaching implications across society.
So how do we solve this?
I believe it starts with helping people trust what they hear again.
Generative audio is progressing the fastest, and humans have never before had to question a familiar voice.
It used to be that when we heard someone’s voice over the phone or on a call, we could confidently conclude that it was that person. No longer.
Companies like ElevenLabs, Typecast, Speechify, HeyGen, Respeecher, and WellSaid have all built incredible platforms that enable the cheap, quick recreation of voices at a level of quality never before imagined. These platforms have gotten so good that an everyday person cannot reliably distinguish AI-generated speech from real speech when they are listening for it, let alone when they aren’t.
Outside of this, we have never had to think about protecting our own voices, or even consider where they might appear. From social media, to YouTube, to voicemail inbox messages… our voices are extremely accessible. In fact, a 2023 McAfee report found that 53% of adults share their voice online at least once a week, with 49% doing so over 10 times per week. When only 3 seconds of audio is needed to create a voice clone, anyone’s likeness is up for grabs.
The end result is that generative audio presents the highest risk of misuse in the near term. We are already seeing increasingly frequent headlines about social engineering and phishing attacks that use deepfake voices to target both individuals and businesses.
While social engineering and voice phishing (vishing) have always existed, attackers have now been handed powerful tools to launch these attacks more effectively, more cheaply, and at greater scale.
Let’s revisit the first half of the quote I referenced before:
“With technology, you don’t have to overwhelm people’s strengths. You just have to overwhelm their weaknesses.” - Tristan Harris (Former design ethicist at Google)
For businesses, and especially remote or distributed businesses, this dramatically increases the need for call security across voice and video communication channels, two channels that historically haven’t required as much security because humans could reliably be counted on to identify potential imposters.
The reality is that this has changed, and the majority of businesses outside of banking and financial services have not taken meaningful action to secure their voice calls, let alone their video calling platforms. Even companies in banking and financial services are quickly discovering that these communication channels are often woefully underprotected compared to the rest of their organizations.
This is where we aim to help at DeepTrust.
We are building a call security platform to help protect employees across voice and video communication channels. DeepTrust doesn’t just identify deepfakes; it takes a layered approach that allows security teams to confidently protect their employees from all forms of social engineering, vishing, and fraud.
So what does this look like in practice?
It means that all of an organization’s VoIP calling platforms are protected with audio watermarking that ensures callers are speaking from their verified company devices, deepfake detection that identifies synthetic voices, and context analysis that alerts call participants to attempted social engineering or manipulation - all in real time.
If an employee joins a call using an unauthorized device, it’s flagged. If a deepfake is used on a call, it’s flagged. If someone attempts to manipulate an employee into taking a compromising action or giving up sensitive information on a call, both the employee and security team are alerted in real-time.
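To make this layered approach concrete, here is a minimal sketch in Python of how such a real-time pipeline might be structured. To be clear, this is purely illustrative: the class, function names, and thresholds (CallSecurityPipeline, verify_watermark, deepfake_score, analyze_context) are hypothetical stand-ins with stubbed logic, not DeepTrust’s actual implementation.

```python
# Illustrative sketch of a layered call-security pipeline.
# All names, signals, and thresholds are hypothetical; a real system
# would wrap trained models and platform-specific watermark checks.
from dataclasses import dataclass, field
from enum import Enum


class AlertType(Enum):
    UNVERIFIED_DEVICE = "unverified_device"
    SYNTHETIC_VOICE = "synthetic_voice"
    SOCIAL_ENGINEERING = "social_engineering"


@dataclass
class Alert:
    alert_type: AlertType
    speaker: str
    detail: str


@dataclass
class CallSecurityPipeline:
    """Runs each audio chunk through three independent layers:
    watermark verification, deepfake detection, and context analysis."""
    deepfake_threshold: float = 0.8
    alerts: list = field(default_factory=list)

    def verify_watermark(self, chunk: bytes) -> bool:
        # Layer 1 (stub): check for the watermark embedded by the
        # caller's verified company device.
        return chunk.startswith(b"WM:")

    def deepfake_score(self, chunk: bytes) -> float:
        # Layer 2 (stub): a real system would run a trained
        # synthetic-speech classifier over the audio features.
        return 0.95 if b"synthetic" in chunk else 0.05

    def analyze_context(self, transcript: str) -> bool:
        # Layer 3 (stub): flag manipulation patterns in the live
        # transcript, e.g. urgent requests for credentials or payments.
        risky = ("wire transfer", "gift cards", "password", "mfa code")
        return any(phrase in transcript.lower() for phrase in risky)

    def process_chunk(self, speaker: str, chunk: bytes, transcript: str):
        # Each layer contributes its own alert; none gates the others.
        if not self.verify_watermark(chunk):
            self.alerts.append(Alert(AlertType.UNVERIFIED_DEVICE, speaker,
                                     "no device watermark detected"))
        score = self.deepfake_score(chunk)
        if score >= self.deepfake_threshold:
            self.alerts.append(Alert(AlertType.SYNTHETIC_VOICE, speaker,
                                     f"deepfake score {score:.2f}"))
        if self.analyze_context(transcript):
            self.alerts.append(Alert(AlertType.SOCIAL_ENGINEERING, speaker,
                                     "manipulation pattern in transcript"))
        return self.alerts


if __name__ == "__main__":
    pipeline = CallSecurityPipeline()
    # Simulated chunk: no watermark, synthetic voice, risky request.
    alerts = pipeline.process_chunk(
        speaker="caller-1",
        chunk=b"synthetic-audio-bytes",
        transcript="This is the CFO. I need you to send a wire transfer now.",
    )
    for alert in alerts:
        print(alert.alert_type.value, "->", alert.detail)
```

The design point mirrored here is that the layers are independent signals: a caller can fail device verification without being a deepfake, and the security team sees each flag separately rather than a single pass/fail verdict.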
Our goal at DeepTrust is to ensure that businesses can confidently protect their employees from all types of voice-based social engineering, and that attackers can’t use generative AI to exploit the human element of security.
I’ll close by noting that this is just the beginning at DeepTrust. Call security is critical, and is our primary focus at this time, but generative AI has introduced a problem much larger than just call security.
Long term, we are setting out at DeepTrust to protect human authenticity by building the trust layer for the internet and enabling full provenance for digital content and communication.
If you’re interested in learning more, or partnering with us to turn this vision into a reality, I’d love to connect!
I hope you enjoyed this blog! Please subscribe and share with your friends. New subscribers help assure me that people care and let me know that I should keep putting these out.
A little about me…
I'm a co-founder at DeepTrust, where we help security teams defend employees from social engineering, vishing, and deepfakes across voice and video communication channels with active call security. Seamlessly integrating with VoIP services like Zoom, Microsoft Teams, Google Meet, RingCentral, and others, DeepTrust works in real time to verify audio sources, detect deepfakes, and alert both users and security teams to suspicious requests.
If you’re interested in learning more, or have additional questions about deepfakes, I’d love to chat! Feel free to shoot me an email at noah@deeptrust.ai.