Jan. 11, 2025

1017: THE SCARIEST EPISODE EVER! These are the REALITIES of the IMMEDIATE risks of AI w/ Aman Ibrahim

Ever wondered how artificial intelligence could shape the future of security? Join us for an insightful episode with Aman Ibrahim, the innovative co-founder of DeepTrust, as we unravel the pressing challenges and incredible potential AI brings to the world of cybersecurity. Aman, inspired by his Eritrean roots and his family's dedication to community and technology, shares his mission to safeguard human authenticity against modern threats like voice phishing, deep fakes, and social engineering. With a rich background in machine learning and experience in both healthcare and tech startups, Aman's journey is both inspiring and informative.

Our conversation ventures deep into the risks of misinformation and the devastating impact of AI-generated scams. As we explore the ease with which voices and likenesses can be manipulated, we highlight the real-world consequences that have already cost Americans billions. Aman's personal stories reveal the urgency of these issues, underscoring how even the most cautious individuals can fall prey to these high-tech scams. The discussion extends to the misuse of deepfake technology in marketing, prompting a reflection on the challenges consumers face in distinguishing real endorsements from fabricated ones.

We also tackle the critical issue of cybersecurity in corporate communication, with platforms like Zoom becoming hotbeds for sophisticated cybercriminal activities. Aman illustrates how AI-driven attacks are evolving, emphasizing the need for proactive defense measures. Discover the innovative concept of co-pilot agents, designed to empower employees in navigating risky situations by providing real-time guidance and ensuring sensitive information is handled with care. Don't miss this eye-opening conversation with Aman Ibrahim, as we navigate the intricate landscape of AI security together.

ABOUT AMAN

Aman Ibrahim is a co-founder at DeepTrust, where he focuses on leveraging AI to help security teams defend against voice phishing, deepfakes, and social engineering attacks across voice and video communication channels. Prior to DeepTrust, Aman worked as an ML engineer at Cruise, where he scaled model and data distribution systems, and he also built models in healthcare research labs to address challenges in nutrition, palliative care, and disease detection. His work at DeepTrust is driven by the mission to protect human authenticity in an age of rapidly advancing AI technology.

LINKS & RESOURCES

Chapters

00:00 - Protecting Against AI Threats

04:04 - Building Trust Amid Misinformation

17:46 - Cybersecurity Threats in Corporate Communication

24:06 - Empowering Security Through Co-Piloting

36:15 - Guest Appreciation and Engagement

Transcript

00:00:00.160 --> 00:00:01.183
Hey, what is up?

00:00:01.183 --> 00:00:04.431
Welcome to this episode of the Wantrepreneur to Entrepreneur podcast.

00:00:04.431 --> 00:00:06.825
As always, I'm your host, Brian LoFermento.

00:00:06.825 --> 00:00:09.676
I'll tell you what: new year, new problems.

00:00:09.676 --> 00:00:17.173
We are entering an entirely new world that is going to and is presenting so many incredible possibilities.

00:00:17.173 --> 00:00:30.454
But with those possibilities come some threats, some dangers, some considerations, some things we need to think about, and that's why today, we've gone out and found an incredible entrepreneur who's bringing a very important solution.

00:00:30.454 --> 00:00:37.090
That's what I'm going to kick things off by saying: a very important solution to the planet that truly is addressing societal problems.

00:00:37.090 --> 00:00:43.307
This is someone who is actively part of the solution of things that we're all going to face this year and beyond.

00:00:43.328 --> 00:00:44.530
So let me tell you about today's guest.

00:00:44.530 --> 00:00:46.234
His name is Aman Ibrahim.

00:00:46.234 --> 00:00:59.747
Aman is a co-founder at DeepTrust, where he focuses on leveraging AI to help security teams defend against voice phishing, deep fakes and social engineering attacks across voice and video communication channels.

00:00:59.747 --> 00:01:02.225
If you're thinking to yourself, I'm immune to that.

00:01:02.225 --> 00:01:03.046
You're not.

00:01:03.046 --> 00:01:08.560
I'm not, Aman's not, nobody is, and Aman and I are going to get really serious about talking about that in today's episode.

00:01:08.921 --> 00:01:24.170
Prior to DeepTrust, Aman worked as an ML engineer at Cruise, where he scaled model and data distribution systems, and he also built models in healthcare research labs to address challenges in nutrition, palliative care and disease detection.

00:01:27.703 --> 00:01:33.153
His work at DeepTrust is driven by the mission to protect human authenticity in an age of rapidly advancing AI technology.

00:01:33.153 --> 00:01:35.144
It's big stuff we're talking about today.

00:01:35.144 --> 00:01:38.471
I'm personally so excited to hear all of his thoughts.

00:01:38.471 --> 00:01:41.123
We were talking off air and I said let's stop, let's hit record.

00:01:41.123 --> 00:01:44.290
So let's dive straight into my interview with Aman Ibrahim.

00:01:44.290 --> 00:01:55.525
All right, Aman, it's so hard for me to not just jump straight into it with you, but first things first.

00:01:55.566 --> 00:01:56.066
Welcome to the show.

00:01:56.066 --> 00:01:56.487
Thank you so much.

00:01:56.487 --> 00:01:58.370
I've never been welcomed so warmly.

00:01:58.370 --> 00:01:59.659
I for a second thought

00:01:59.659 --> 00:02:02.603
I was an audience member and I was like excited to see who's coming on.

00:02:02.603 --> 00:02:05.768
I was like, oh wait, it's me. Never been introduced like that.

00:02:05.768 --> 00:02:06.808
I appreciate that a lot.

00:02:06.929 --> 00:02:07.450
I love that.

00:02:07.450 --> 00:02:10.032
No, honestly, I mean, you know, I said that to you off the air.

00:02:10.032 --> 00:02:20.705
As a podcaster, this is important stuff for us, but then you and I obviously think about the world at large, far outside of just our industries, and it's big stuff we're talking about today.

00:02:20.705 --> 00:02:23.590
So before we get there, Aman, I'm gonna put you on the spot, and then we're jumping straight into the fun stuff.

00:02:23.590 --> 00:02:24.211
Who the heck is Aman?

00:02:24.211 --> 00:02:26.514
How did you even start doing all these things?

00:02:26.514 --> 00:02:27.724
Take us beyond the bio.

00:02:28.780 --> 00:02:30.326
Yeah, no, that's a great question.

00:02:30.326 --> 00:02:51.453
Of course, as you said, my name is Aman Ibrahim, my background is in machine learning engineering, and what got me into this problem space and just solving problems like this is I've always had this desire to solve very difficult problems that have an opportunity to help people, you know, over the long term.

00:02:51.453 --> 00:03:06.551
And where that came from, and I like to say this comes from like a very personal side of myself, my father and my mom came to this country from a country in East Africa, it's called Eritrea, and my father was fortunate enough to come here to study computer science.

00:03:06.551 --> 00:03:10.106
He worked at, like, IBM and Cisco, so as I was growing up I was exposed to that.

00:03:10.106 --> 00:03:21.427
My mother she's always been very community driven, so for me it was just by its nature that I had that love to solve technical problems while also finding ways to serve people.

00:03:21.427 --> 00:03:24.151
It even began when I was like in kindergarten.

00:03:24.151 --> 00:03:36.792
My dad would contribute to the refugee community by building computers for them, and when I was in kindergarten, six, seven, eight years old, I also built computers alongside him to help people that way.

00:03:36.792 --> 00:03:46.484
And, as you mentioned before, I spent some time in like health tech research, but even built in healthcare startups. Started

00:03:46.484 --> 00:04:04.645
two other startups of my own, have over 40 different side projects that I built that were serving students, teachers, athletes, doctors, patients, and even when I joined Cruise, the very motivation to be a machine learning engineer there was this:

00:04:04.664 --> 00:04:07.030
Yes, solving the cutting edge of technology in that space was amazing.

00:04:07.030 --> 00:04:09.554
But then what are the returns that you get out of that when you solve that problem?

00:04:09.554 --> 00:04:21.283
You're not only talking about the opportunity to make transportation accessible to people, albeit the news that came out of Cruise recently, but the ambition was making transportation accessible to people.

00:04:21.283 --> 00:04:35.548
Removing that concept of traffic is possible with self-driving cars, but then, most importantly, is the fact that, besides any other disease, right after, what takes people's lives the most is car accidents, and removing that.

00:04:35.990 --> 00:04:40.029
For me, if I could contribute to that just even a little bit, that was incredibly motivating.

00:04:40.029 --> 00:04:53.475
So when I finally took myself into the industry, instead of working at some FAANG company, obviously like the Facebooks, Googles and things like that, I was very much more driven to come to an opportunity like that.

00:04:53.475 --> 00:05:06.401
So naturally, when the opportunity and problem space of DeepTrust arose, which we'll get into, it was just very natural to who I am and why to solve that problem, you know, helping people trust their own eyes and ears again.

00:05:06.401 --> 00:05:08.447
So yeah, that's basically it.

00:05:08.447 --> 00:05:09.771
That's how I came into the space.

00:05:10.319 --> 00:05:12.444
Yeah, Aman, I love that overview.

00:05:12.444 --> 00:05:19.052
I appreciate so many parts of what you just shared with us, and now it's clear to me why we were able to click so quickly today.

00:05:19.052 --> 00:05:20.740
We're both from immigrant families.

00:05:20.740 --> 00:05:24.670
As the son of an immigrant mom, I feel like it's factored into the way that we see the world.

00:05:24.670 --> 00:05:25.952
We see endless possibilities.

00:05:26.293 --> 00:05:44.713
Our families came to this country in the pursuit of freedom, in the pursuit of possibilities, in the pursuit of you and I having more doors open to us than what they had themselves, and I think that that so deeply is integrated into the way that we see the world, which makes perfect sense to me that you're part of very big solutions, Aman.

00:05:44.713 --> 00:05:52.326
So let's talk about that solution that you wanna be a part of here and that you're actively working to solve, because this is big stuff and I said it at the top of this episode.

00:05:52.326 --> 00:06:04.249
A lot of people will probably think, ah, this is either so far into the future that I don't have to worry about it, or they're probably thinking, I'm immune to it. I'm an agency owner, what the heck is all this stuff going to do for me?

00:06:04.249 --> 00:06:05.932
Or I'm a web developer, Aman.

00:06:05.932 --> 00:06:09.175
Paint the picture of why the heck this is a cause that you care so much about.

00:06:10.100 --> 00:06:12.105
Yeah, absolutely so.

00:06:12.105 --> 00:06:30.591
The problem space that we're focusing on today is the fact that with only three seconds of audio, a single profile picture, you can steal someone's likeness very easily, and if you've been on social media enough, there's been enough content where we saw that generative content used for entertainment, funny things as well.

00:06:30.591 --> 00:06:34.750
But for every new technology, there's also the misuse of that very technology.

00:06:34.750 --> 00:06:38.028
So, for our perspective, AI isn't inherently bad.

00:06:38.028 --> 00:06:41.809
It's just there are people who have certain intentions.

00:06:46.262 --> 00:06:50.410
So with the very little that you need, in terms of both skillset and sample content, you can easily mimic anyone.

00:06:50.410 --> 00:06:54.365
And we're talking about this is not again, like you've said and that was a great point.

00:06:54.365 --> 00:06:57.526
I thank you for bringing it up. This is not an emerging threat.

00:06:57.526 --> 00:06:58.588
It's already here.

00:06:59.151 --> 00:07:07.665
There is already a report put out by the FTC where regular, everyday Americans have already lost over $3.3 billion from imitation scams.

00:07:07.665 --> 00:07:18.874
Those are scams that are either someone imitating someone you recognize like personally, or imitating a celebrity telling you oh yeah, there's this government check.

00:07:18.874 --> 00:07:23.009
And these are very things that me and my co-founder experienced ourselves.

00:07:23.009 --> 00:07:37.891
Part of why my co-founder is very motivated to solve this problem was one day his grandfather was sitting in the room, got a phone call saying hey, it's me, Noah (Noah's his name). They stole his voice, his very voice, and asked him to send him money and these sort of things.

00:07:37.891 --> 00:07:48.235
And then another time for myself, my mom was sitting in a room where she was watching what she thought was an advertisement from Oprah someone she trusts, telling her hey, there's a government check that you need to sign up for.

00:07:48.235 --> 00:07:51.269
Just put in your social security number here and you'll get it.

00:07:53.201 --> 00:08:08.312
So this is people taking this new technology that you can again be able to mimic voices and likeness however you please, with very little skill set and using it for very malicious intent and this goes from misinformation to imitations, scamming and phishing.

00:08:08.312 --> 00:08:12.069
So just general, like another example, use cases.

00:08:12.069 --> 00:08:23.543
If I can steal your voice in three seconds, imagine if you're in a high-profile, high-stakes court case where someone submits evidence of a recording of you saying this and that.

00:08:23.543 --> 00:08:30.153
So there's, there's a huge general problem of how to even authenticate what I'm seeing and hearing.

00:08:30.153 --> 00:08:34.303
My own biology is fooling me. So yeah. Yeah, Aman.

00:08:34.403 --> 00:08:37.807
These are big, important things and it's funny we could talk about it in the business context.

00:08:37.807 --> 00:08:41.490
Dude, you and I actually didn't talk about this yet, but I see it every time.

00:08:41.490 --> 00:08:50.051
I log on to Instagram now, I see some marketers, who obviously are not the most ethical marketers in the world, using Joe Rogan clips.

00:08:50.051 --> 00:08:59.111
Because, Joe Rogan, we could find hours, hundreds, if not thousands of hours of his voice out there, and they're having him endorse products that they're advertising.

00:08:59.111 --> 00:09:01.427
And so us, as the consumers, we can't tell.

00:09:01.427 --> 00:09:09.493
And Joe Rogan has a large and loyal audience and they're probably sitting there thinking, oh, he endorses this supplement or this, whatever it is.

00:09:09.801 --> 00:09:16.530
People are mimicking that, and you made a very important point to me before we hit record today is that we've always relied on.

00:09:16.530 --> 00:09:18.014
Well, I can tell a scam.

00:09:18.014 --> 00:09:23.690
Ok, if I get an email from a Nigerian prince who's promising $50 million, I know that that's fake.

00:09:23.690 --> 00:09:32.182
But if I can see something with my own eyes, if I can see it, then I can believe it.

00:09:32.182 --> 00:09:34.851
And so, seeing Joe Rogan, it's a video of him, it's his voice, it's him talking about this, but it's not real.

00:09:34.851 --> 00:09:41.951
Talk about how this has become the first time in history that what we see is not what we might be able to believe anymore.

00:09:42.940 --> 00:09:47.904
Yeah, exactly, this comes very much down to the fact that trust is a very sensitive thing.

00:09:47.904 --> 00:10:01.360
When you say you have a Nigerian prince reaching out to you, you've never met this person, there's no context between you and them engaging with one another, so it takes a lot; there's a huge barrier for, say, that Nigerian prince to get what he wants out of you.

00:10:01.360 --> 00:10:21.587
But when you bring familiarity into something, whether it's you know, your grandson or your mom or your dad or your sibling, or, say, a trusted figure, someone who you've taken their advice from before, whether it might be a Joe Rogan to your doctor or whatever it may be, you quickly like, lower your guard and now anything can happen.

00:10:21.587 --> 00:10:27.076
And the sort of perspective that we put into place is the question.

00:10:27.076 --> 00:10:30.203
This is what I'll state.

00:10:30.345 --> 00:10:34.923
There's this concept called the liar's paradox and, for your viewers, take a moment to Google the liar's paradox.

00:10:34.923 --> 00:10:40.802
It's a concept in psychology where they have a straw man figure and I'm paraphrasing here.

00:10:40.802 --> 00:10:44.730
But basically, anytime this person speaks, it's always a lie.

00:10:44.730 --> 00:10:47.583
It's just this made up figure and whatever history.

00:10:47.583 --> 00:10:55.942
And there's a moment that this liar says I am a liar and that becomes a paradox in the sense yes, they are a liar, but in that moment they're telling the truth.

00:10:55.942 --> 00:11:00.120
So what we actually perceive the problem space actually to be is generative

00:11:00.120 --> 00:11:17.620
AI actually causes a liar's paradox in and of itself, where, if you see enough content, right, whether you're seeing or hearing something that looks and sounds real but then ends up being generated or fake, the question no longer becomes what is fake, but what is actually even real at that point.

00:11:17.620 --> 00:11:24.325
So that is, that is essentially the problem that we're really looking at from this misuse of this technology.

00:11:24.345 --> 00:11:31.323
So, yeah, I'm going to ask you a big, broad question, then, and it's inevitably going to lead into us talking about DeepTrust and the work that you're doing.

00:11:31.323 --> 00:11:39.851
But how do we in today's world here we are in 2025, how the heck do we even begin to tell what's real and what's fake?

00:11:41.000 --> 00:11:50.094
Yeah, I mean, listen, at this stage, of course, it's very much a lot of theories and nothing is truly proven until you actually literally execute upon it.

00:11:50.094 --> 00:12:01.508
But for us, our perspective is this: you should bring in concepts that are already recognizable, you know, and let me take it to like very basic forms of trust.

00:12:01.508 --> 00:12:12.421
You know, I told you like what lowers your guard when you find something that's familiar and you trust the source of it, and that's the concept that we're trying to bring into what we do today.

00:12:12.421 --> 00:12:15.671
In the long term, how do you build essentially the trust layer for the internet?

00:12:15.671 --> 00:12:23.164
And I want to bring the ability for us to be able to put provenance into content: where is something coming from?

00:12:23.164 --> 00:12:31.832
And if you can establish a standard that does that, like we already have done for you know, uh, end-to-end encryption and things like this.

00:12:31.832 --> 00:12:36.139
This is how you can eventually trust where things come from, and this is not a new concept whatsoever.

00:12:36.139 --> 00:12:53.383
When you, when you, for example, receive like an academic paper and, uh, someone just hands it to you, you don't just say, oh yeah, trust it. No, you have references, and then those references have references, and then you eventually have essentially a chain of where that knowledge came from, and this is where we have, say, core and that sort of knowledge.

00:12:53.903 --> 00:12:57.293
Content has that demand for that sort of authenticity.

00:12:57.293 --> 00:13:04.576
Now you need to have some sort of way to validate where something is coming from and there's finally a demand to build that sort of standard.

00:13:04.576 --> 00:13:34.335
So, at any point content is created, generated or modified, our perspective is, in the long term, we need to have some sort of standard set in place where you put a signature into the content that doesn't disturb the human experience of it, but then that very cryptographic, immutable key that's signed into the content that you can't edit, you can't remove, you can't manipulate, can not only tell you hey, this came from this source, but it came from this chain of sources, and then it's up to you to decide whether you trust that chain of sources.

00:13:34.335 --> 00:13:42.335
A very easy example that I would love to see happen in the future is, let's say, a video clip is produced and ends up on your Twitter feed.

00:13:42.335 --> 00:13:46.323
You can now look at it and you can say, okay, someone clipped this on their iPhone.

00:13:46.323 --> 00:13:51.677
Or actually it starts from, say, a Sony movie that was produced, you know, and that was signed by Sony.

00:13:51.677 --> 00:14:04.969
Someone you know edited it on Photoshop, so then it got signed again individually, and then someone trimmed it on their iPhone, signed again, and then it finally ends up on your feed.

00:14:04.969 --> 00:14:07.779
So now you have a true source or true chain of where things came from, and then that's how you truly trust where things come from.

00:14:07.799 --> 00:14:13.745
Now, until we achieve that standard, that's like obviously a big, ambitious goal to again build what I'm saying is the trust layer for the internet.

00:14:13.745 --> 00:14:19.955
There are steps along the way and we'll get into those details and it's part of what we're building today as part of our product.

00:14:19.955 --> 00:14:30.792
We're not jumping straight into, obviously, the dream goal here, but in order to get to the dream goal, there is a journey, both a technical one and a one that's very focused on solving people's real current problems.

00:14:30.792 --> 00:14:38.812
So, yeah, that's what we truly believe is, like, the golden rule or the golden path to solve this problem altogether.

00:14:38.832 --> 00:14:47.176
Yeah, I love how big you're thinking about this, Aman, because, truth be told, it's people like you, it's innovators, it's pioneers, who have that grand vision.

00:14:47.176 --> 00:15:03.879
It's the only way we're going to get there is, first, dream big and then let's bring it back to the actual actionable building blocks, which I love the work that you guys are doing at DeepTrust, because I just think about all of my businesses and businesses, large and small, across the entire United States, across the world.

00:15:03.879 --> 00:15:07.240
We're digital, more digital than ever before. Since the pandemic,

00:15:07.240 --> 00:15:08.889
so many more people are working remotely.

00:15:08.889 --> 00:15:12.946
We live on Zoom, we live on Slack, we live on Google Meet, we live on Microsoft Teams.

00:15:12.946 --> 00:15:15.852
With that in mind, where have you identified?

00:15:15.852 --> 00:15:20.634
What is that gap, the immediate and actionable gap that DeepTrust is plugging?

00:15:21.644 --> 00:15:23.994
Yeah, so you've already gotten the hint there.

00:15:23.994 --> 00:15:29.898
We've become a very much digital driven society where we don't have to do things, for example, in person.

00:15:29.898 --> 00:15:33.936
Like, believe it or not those who are listening, I'm not in the same room as Brian.

00:15:33.936 --> 00:15:38.773
I'm probably a couple thousand miles away, you know, and that's the very reality that we live in.

00:15:42.350 --> 00:15:54.410
We are post-COVID, where we've, you know, gotten established to this distributed and remote workforce, and then, on top of that too, we're now post-gen AI, where the very likeness of people can be manipulated and you basically have, like, a man-in-the-middle attack.

00:15:54.410 --> 00:16:02.278
When you're trying to communicate to someone, you think you're speaking to Brian or you're speaking to Aman, but it's someone in the middle that's impersonating that very individual.

00:16:02.904 --> 00:16:04.510
And the problem we're focusing on today.

00:16:04.510 --> 00:16:14.868
Just to get to what we're doing today, we're helping security teams at enterprises, especially those that are in the regulated and sensitive data space.

00:16:14.868 --> 00:16:24.096
We're helping those security teams protect their employees and organizations against social engineering, whether or not deepfakes are involved on the voice and video communication channel.

00:16:24.096 --> 00:16:31.730
So that's what we're doing today and, uh, we, this came from like a year of just like sitting down, talking to people, truly understanding.

00:16:31.730 --> 00:16:33.317
Where are those problems?

00:16:33.317 --> 00:16:42.307
Today we saw so many opportunities, like we defined 30 different customer profiles, some of which included intellectual property.

00:16:42.307 --> 00:16:54.846
So, in the IP space, people who make content and their entire revenue stream, or their likeness, is their business, their bread and butter: how do you protect their likeness in the wild?

00:16:54.946 --> 00:17:05.290
So, someone like yourself, and obviously we can talk about the Drakes and the Weeknds, as we remember a year and a half ago, and then there's even, again, I briefly mentioned it, there's the digital forensic space.

00:17:05.290 --> 00:17:07.011
How do you you're in the court of law?

00:17:07.011 --> 00:17:08.371
The stakes are incredibly high.

00:17:08.371 --> 00:17:20.858
How do you actually understand that the evidence that you're putting in front of such an important decision is bona fide, it's genuine or it's manipulated, and the list goes on and on, such as, again, misinformation, trust and safety.

00:17:20.858 --> 00:17:27.830
It's again the very core of how we operate as a society.

00:17:27.830 --> 00:17:31.045
We again, biologically, are very dependent on our eyes and ears, but now, as a society, we're very digitally dependent as well too.

00:17:31.045 --> 00:17:36.046
So there's so much of our foundations, of our day-to-day life, that's going to be shaken by this very thing.

00:17:36.046 --> 00:17:42.820
And today, again, we're focusing on where the pull and demand is the highest, whereas again in these enterprise security spaces.

00:17:43.041 --> 00:17:45.854
So yeah, yeah, Aman, I love the way you tackle that.

00:17:45.854 --> 00:17:49.969
I think about one of my favorite entrepreneurs in the world is Dante Jackson.

00:17:49.969 --> 00:17:58.113
He's a cybersecurity expert based out of Georgia and Dante and I when we talk, I love how much he thinks like a hacker.

00:17:58.113 --> 00:18:12.112
He thinks like the people who are looking to do wrong, and so, having this conversation with you today, I'm picturing the gap that your business plugs and I'm thinking well, who the heck would want to join a corporate Zoom call and what damage could they do there?

00:18:12.112 --> 00:18:14.290
And then I immediately jumped straight to, man,

00:18:14.330 --> 00:18:28.833
if I could impersonate a company CFO and I could penetrate within a finance department meeting and authorize them to write the entrepreneur to entrepreneur podcast a massive check. Well, that's coming from the CFO, so of course, people are going to listen to that.

00:18:28.833 --> 00:18:33.154
Paint that picture for us, because I would imagine that you've thought about this at a way deeper level than I did.

00:18:33.154 --> 00:18:35.413
I'm just going to write a check to this podcast.

00:18:35.413 --> 00:18:45.055
But what are those real life threats and concerns that probably even enterprise level employees aren't even thinking about when they enter a Zoom meeting?

00:18:46.057 --> 00:18:47.547
Yeah, I don't even have to paint a picture.

00:18:47.547 --> 00:18:50.153
Those malicious Mozarts are already out there.

00:18:50.153 --> 00:19:09.566
There's already been multiple cases, especially in the financial services space, for that particular situation, where again you have all types of businesses dependent on this type of communication channel, from a tiny startup to the literal Department of Defense, where again you can impersonate anyone, and obviously your first thought was like hey, a CFO.

00:19:09.566 --> 00:19:24.316
But we've seen incidences where it was an equal-level colleague reaching out to their IT help desk, and the reality was the colleague was around the corner, and through that they were able to get credentials of the business and then thus steal customer data.

00:19:24.316 --> 00:19:33.833
And then we've even seen huge events, such as the incident that happened in Hong Kong where an accountant was actually following their very training.

00:19:33.833 --> 00:19:39.589
They received an email from the CFO saying hey, we have this really important deal happening.

00:19:39.589 --> 00:19:40.510
You need to act quick.

00:19:40.510 --> 00:19:43.096
And naturally the person was a skeptic.

00:19:43.096 --> 00:19:52.027
And again, this person's not some sort of idiot, this is a trained accountant that works for a multinational firm all the way in Hong Kong.

00:19:52.027 --> 00:19:57.910
And they're like I'm going to follow my training, jump on a Zoom call first to get the situation.

00:19:57.910 --> 00:19:59.032
They jumped on the call.

00:19:59.032 --> 00:20:09.679
They not only saw the CFO, but then they recognized and saw people from their immediate, uh, group in Hong Kong, and because of that they're like, oh, clearly, this, why would this be?

00:20:09.679 --> 00:20:16.876
Uh, you know, made up or ingenuous? So they wired the money away, and that was a $25 million scam.

00:20:16.876 --> 00:20:21.313
So this is this is not even a oh, what could happen?

00:20:21.313 --> 00:20:22.015
Sort of situation.

00:20:22.075 --> 00:20:37.111
There's been multiple repeated attacks that we've already seen, both in the public domain and private, where we've gotten to learn those incidences by talking to chief security officers directly, CISOs, and this is just every single cybersecurity report that we see.

00:20:37.111 --> 00:20:59.199
They clearly not only see this as the fastest growing attack vector, but there is very much no reason for a malicious actor to not utilize this technology over the next year, and I almost want to say in some ways I say this sometimes this is a perfect time to be a villain, in the sense that it's not even the fact that you can create individual, incredibly convincing attacks, but you can now automate it.

00:20:59.199 --> 00:21:09.704
You can have, say, a language model sitting behind it responding to a person in real time and you can just send that call to a thousand people. And you're talking about, email

00:21:09.704 --> 00:21:12.010
phishing campaigns have like a 5% success rate.

00:21:12.010 --> 00:21:23.652
We've not only seen internally success rates above 50% to 70%, but we've already seen reports of people not even using exact voice clones.

00:21:23.652 --> 00:21:35.909
They're using like a general voice and a ChatGPT language model behind it and they were able to do bank transfer scams at like a 50% success rate and it only cost them about like five to 10 cents.

00:21:35.909 --> 00:21:40.191
So I mean, if you're not a good person, why wouldn't you do this?

00:21:40.191 --> 00:21:43.931
And that's actually something we've briefly mentioned in your question.

00:21:44.721 --> 00:21:46.226
There's a cybersecurity CEO.

00:21:46.226 --> 00:21:49.684
You say he thinks like a malicious actor and even what we do at DeepTrust.

00:21:49.684 --> 00:21:57.153
We've built this AI bot called Terrify, terrify.ai, and what it is is an AI conversational bot.

00:21:57.153 --> 00:21:59.942
It speaks to you, it's very friendly, it's trying to be your friend.

00:21:59.942 --> 00:22:00.903
It's like hey, how are you?

00:22:00.903 --> 00:22:02.184
What's your name?

00:22:02.665 --> 00:22:07.894
And then within 15 seconds, it not only memorizes things about you, but starts mimicking your style of speech.

00:22:07.894 --> 00:22:10.586
So I don't know if you're from the Valley or from the South.

00:22:10.586 --> 00:22:12.692
You got a particular twang to your voice.

00:22:12.692 --> 00:22:18.662
It'll start repeating that back to you, but then within 15 seconds, it not only does that, it has your entire voice and it's speaking back to you in your own voice.

00:22:18.662 --> 00:22:30.048
So we showcase that just to help people understand through the experience, because it's one thing for me to be on a podcast or like an interview or whatever to tell you oh my God, the danger is here, it does this or that.

00:22:30.048 --> 00:22:33.971
It's another thing to literally experience it yourself within 10 to 15 seconds.

00:22:33.971 --> 00:22:39.453
So this is what we're talking about in terms of the threat landscape.

00:22:39.453 --> 00:22:41.075
It's not a, oh, here,

00:22:41.075 --> 00:22:43.196
maybe. It's already coming and growing.

00:22:43.836 --> 00:22:44.297
Whoa.

00:22:44.297 --> 00:22:45.018
Come on.

00:22:45.018 --> 00:22:51.442
First things first.

00:22:51.442 --> 00:23:04.311
I want to say I'm grateful that you are one of the good guys who's pioneering these innovations, because you can so effortlessly show us and obviously this is just the tip of the iceberg of your expertise here in a short podcast interview, and so I can only imagine the depths to which you are well-versed in all of these things.

00:23:04.311 --> 00:23:09.546
And when I say all of these things, I want to call it out for listeners is that we're not talking hypotheticals.

00:23:09.546 --> 00:23:17.660
In today's episode, Aman is coming with real-life business examples, real-life case studies, societal ones. This transcends business.

00:23:17.660 --> 00:23:26.284
And so we are here, if you tune into this episode thinking, oh, this is going to be a fun year ahead in AI: we're already at a point where these things are happening.

00:23:26.324 --> 00:23:40.942
Yeah, so, Aman, I want to ask you this, because when you talk about DeepTrust, and even more than just you talking about DeepTrust, it's so clear to me that you have that long-term vision as well as the actionable short-term solution. What I love is part of your messaging.

00:23:40.942 --> 00:23:46.842
It seems to me like you guys view DeepTrust as an agent, as someone who's on your side.

00:23:46.842 --> 00:23:58.080
I mean, when I saw your messaging a security co-pilot for employees, it is someone, or, in this case, something, that is of service to the employees, to the organization that it's deployed in.

00:23:58.080 --> 00:24:06.748
Talk to us about that form factor and that delivery mechanism that you've built at DeepTrust to make it a reality for enterprises. Yeah, absolutely.

00:24:06.827 --> 00:24:25.305
The reason we want a co-pilot is, listen, every company that has a security team or an IT department, they enforce new processes and training upon you, and the reality is, if you're just, you know, say, another engineer or salesperson or marketer, you just do them for the sake of doing them, if you even do them.

00:24:25.305 --> 00:24:32.404
And the reality is, the true risk for a business is the human risk in the sense that, hey, listen, I'm a human being.

00:24:32.404 --> 00:24:45.711
I can't memorize it and remember every single piece and aspect of this particular guideline or this policy or that one, and when I engage into a risky situation with the business, I'm not going to naturally be as well equipped.

00:24:45.711 --> 00:24:55.489
So let people stay good with what they're good at and empower them, and let them not have to be as worried about the security risks and have a co-pilot that's there to assist you.

00:24:55.489 --> 00:25:14.701
So, like our very product, it comes in the form factor today as like an agent that joins your calls, and eventually we want to, we can eventually move off of being like a physical presence in the call, but regardless, it's there to analyze what's being said, who's saying it in the context of the business, and then if you need any assistance, it'll give you just-in-time training.

00:25:15.102 --> 00:25:20.201
So, for example, it could be obviously like a malicious attacker where we have doubts in their identity.

00:25:20.201 --> 00:25:23.729
They might be pushing or adding urgency to their requests.

00:25:23.729 --> 00:25:28.211
We can then become a source basically saying, hey, slow down a little bit.

00:25:28.211 --> 00:25:29.276
We have doubt in their identity.

00:25:29.276 --> 00:25:33.167
According to your training, just ask these questions before you move forward.

00:25:33.167 --> 00:25:43.428
And what that also does is it empowers the person because, for example, if you're that accountant in Hong Kong and you have that CFO talking to you, that power dynamic is very hard to refuse.

00:25:43.428 --> 00:25:49.789
But imagine the enablement that has when you're like, hey, the agent is requiring me to ask you these questions.

00:25:49.789 --> 00:25:51.020
It's not me, right?

00:25:51.020 --> 00:26:08.086
And then, even when you're talking about something that's more mundane and might even be naive, where you might have two employees who are not malicious at all but they may be mishandling sensitive information, you can just have the agent in there say, hey, just remember, by the way, this and that you're meant to do this and this.

00:26:08.519 --> 00:26:15.008
For example, like, Aman is an engineer, and I'm exaggerating here, he's coming in asking for a wire.

00:26:15.008 --> 00:26:16.385
He could be like, hey, slow down.

00:26:16.385 --> 00:26:23.564
He doesn't have the access to this action. Loop in this person or that person to actually engage in this opportunity.

00:26:23.564 --> 00:26:31.317
And what this does, again, is remove the responsibility, the heavy burden of hey, you're the last line of security for our entire business.

00:26:31.317 --> 00:26:38.965
Make sure you do the right thing and you actually have something that's basically the most educated security employee in the call every time with you.

00:26:38.965 --> 00:26:45.561
So, again, allowing people to just stay good with what they're good at and not having them to necessarily put all that pressure on.

00:26:45.561 --> 00:26:51.226
And I think one other thing to throw in there that I meant to throw in there was, when it comes to breaches, 90% of them, you know.

00:26:51.226 --> 00:27:13.491
So this is, this is again meant to empower the people to not necessarily have to, you know, become the sole reason why you know your company falls apart.

00:27:13.491 --> 00:27:16.970
Oh yeah, that's basically the value that we look to put.

00:27:17.420 --> 00:27:18.643
Yeah, I love that, Aman.

00:27:18.643 --> 00:27:19.644
I love the way you think about it.

00:27:19.644 --> 00:27:23.011
I love the way that you guys have built out DeepTrust, love the way you articulate it.

00:27:23.011 --> 00:27:30.414
It's fascinating to me because here we are, we're talking about some high-level tech stuff and obviously you've been a large language model engineer.

00:27:30.414 --> 00:27:33.005
You've worked in so many different capacities within this field.

00:27:33.005 --> 00:27:50.915
But it's fascinating to me and it's just a huge kudos to you and the entire team behind the scenes is that you're also thinking about a company's internal controls, the fact that you're thinking about the hierarchy, and how am I going to tell the CFO, no, let that onus fall on the technology that's supporting us.

00:27:50.915 --> 00:27:53.092
So I think it's absolutely brilliant the work that you're doing.

00:27:53.313 --> 00:27:58.090
I want to put you on the spot here because you are so well versed in all of these things and you're totally not prefaced here.

00:27:58.090 --> 00:28:09.517
So it's a bit of an unfair question, but I know that listeners will get a lot of benefit from this, and that is what's some of your advice for entrepreneurs, or even maybe extrapolate just to societally, just to individuals these days.

00:28:09.517 --> 00:28:10.584
I'll share with you.

00:28:10.584 --> 00:28:15.486
One of the more brilliant innovations I've seen is I saw a fellow entrepreneur who's a very public figure.

00:28:15.486 --> 00:28:25.842
He has a page on his website that says these are my only verified accounts and he lists from his website unless you get an email from this email address, it's not me.

00:28:25.842 --> 00:28:31.805
Unless you see someone on TikTok and Instagram and Facebook, unless it's these three profiles, it is not me.

00:28:31.805 --> 00:28:34.711
So I think that's a really low tech solution there.

00:28:34.711 --> 00:28:41.480
What are some of your best pieces of advice and strategies that we can protect ourselves, whether it's low tech or high tech?

00:28:42.463 --> 00:28:52.827
Yeah, yeah, yeah, the first layer of defense, and maybe what I should actually say, the people who are the most exposed are the people who are least informed.

00:28:52.827 --> 00:28:55.336
They're not aware of this problem actually even existing.

00:28:55.336 --> 00:28:57.241
They're the ones who are the most exposed.

00:28:57.241 --> 00:28:59.606
So, always, always, always.

00:28:59.606 --> 00:29:04.511
Step one. Obviously, I wish I could say, hey, just buy a DeepTrust agent.

00:29:04.632 --> 00:29:11.093
No, step one is actually get yourself informed, understand the stakes that we're talking about, be aware of the fact.

00:29:11.093 --> 00:29:18.461
Hey, listen, as an average American, I'm likely to have already uploaded enough of my likeness onto the internet.

00:29:18.461 --> 00:29:22.619
So it's very possible someone can steal my likeness and I should be aware of that.

00:29:22.619 --> 00:29:24.243
Now, two, what can you do?

00:29:24.243 --> 00:29:32.585
Obviously, the next step is you know, make sure the people around you are also informed, especially those who are close to you, your loved ones and you know they're not perfect solutions.

00:29:32.664 --> 00:29:46.053
But make sure you and your family have, you know, private passcodes or secret phrases in case they're in a situation where it seems like someone's putting pressure and urgency and it sounds like someone I recognize.

00:29:46.113 --> 00:30:03.181
Then you can at least have some sort of true proof point to be like okay, listen, this is how I know who you are or not, and in fact, that's how a lot of these there's a couple of stories that have actually happened, like this, where the CEO of Ferrari thought he was talking to someone else or, excuse me, another executive thought he was talking to the CEO of Ferrari.

00:30:03.181 --> 00:30:21.253
And the only reason why the person didn't fall for their asks was, right towards the end of the call, the guy remembered, oh, I recommended a book to the Ferrari CEO, and he proceeded to just ask him that personal question.

00:30:21.253 --> 00:30:24.321
So part of the safety that we have today is making sure your personal relationships are aware.

00:30:24.321 --> 00:30:34.770
Don't be taken advantage of, um, by having maybe these one-on-one passphrases or things like that, so it can help you in the moment, and then, uh, and then that's, I think.

00:30:34.830 --> 00:30:47.130
I think that's the main thing that you can really do today. Yeah, really well said. I think it's important advice, Aman. Truth be told, as a podcaster, I tell that to everybody that I know personally, I tell it to my fellow podcasters.

00:30:47.130 --> 00:30:49.881
So this is important for all of us to confront today.

00:30:49.881 --> 00:31:02.906
You've actually increased my urgency about this, because when you talk about even just uploading pictures and you talk about three seconds of audio, probably every single person I know on their own Facebook is already exposed.

00:31:02.906 --> 00:31:04.932
So, Aman, huge kudos to you.

00:31:04.932 --> 00:31:11.049
I want to reiterate again I'm so glad that you're one of the good ones and that you are a positive force in this industry.

00:31:11.500 --> 00:31:12.724
I want to ask you this question.

00:31:12.724 --> 00:31:13.367
It's super broad.

00:31:13.367 --> 00:31:16.548
I ask it at the end of every interview and it's entrepreneur to entrepreneur.

00:31:16.548 --> 00:31:21.847
But, of course, you also have your hat on with regards to the work that you do, and that is what's your one best piece of advice.

00:31:21.847 --> 00:31:31.215
You not only are a subject matter expert doing the very important work that you're doing with DeepTrust, but you're also a fellow entrepreneur growing a business serving your clients, being part of a solution.

00:31:31.215 --> 00:31:37.461
What's that one thing that you've picked up along the way that you want to share with our audience at all different stages of their business growth journey?

00:31:38.503 --> 00:31:41.327
Yeah, first I want to preface by saying like I'm not an expert.

00:31:41.327 --> 00:31:54.152
I'm always still learning, but for me personally, what I've found to be the driving force, or the driving thing that I keep in mind day to day, is make sure you're solving actual people's problems.

00:31:54.152 --> 00:31:58.371
And what I mean by that is especially us engineering founders.

00:31:58.371 --> 00:32:03.949
We love to just build cool things, and I put myself in that situation enough times to know the pain.

00:32:03.949 --> 00:32:05.192
So I'm not coming.

00:32:05.192 --> 00:32:09.670
I'm not coming with this advice in the sense of like yeah, I'm holier than thou.

00:32:09.670 --> 00:32:12.125
No, I've, I've gone through what it looks like to.

00:32:12.125 --> 00:32:15.665
You know, build something cool but then not have anyone who actually wants it.

00:32:15.866 --> 00:32:19.103
And the way you figure out what people want is you sit down and you talk to them.

00:32:19.103 --> 00:32:24.874
There isn't like really a better way to figure out like what problems people need, uh, to have solved.

00:32:24.874 --> 00:32:28.849
And there are good problems to solve and there's like bad problems to solve.

00:32:28.849 --> 00:32:41.462
And what I mean by that is, for example, you know you could, people could have problems in their life that weren't necessarily something that they would, I guess, say, pay for to go away. Like, bluntly speaking, you know, my hands are a little cold.

00:32:41.482 --> 00:32:45.832
You know it'd be really cool if something could automatically put gloves in my hand, right, but I'm not going to.

00:32:45.832 --> 00:32:50.770
I'm not going to pay for a robot that, you know, does that for me the moment I think about it, you know, at least not right now.

00:32:50.770 --> 00:32:55.211
So that's a problem that I have, but it isn't one that I'm motivated to actually get solved.

00:32:55.211 --> 00:32:59.794
So talk to people and really figure out what is, what is your day-to-day,

00:32:59.794 --> 00:33:12.625
you know, pains that you have, that you would love to go away, and I think a really good book to reference is The Mom Test, and this is something I love to always recommend to very technical founders and entrepreneurs.

00:33:12.625 --> 00:33:17.301
At least take a look at that book and understand like, who am I serving?

00:33:17.301 --> 00:33:24.222
You know what problems do they have and you know how do I make their day in life a little bit better, step by step.

00:33:24.844 --> 00:33:29.489
Yes, gosh, Aman, so much good advice and valuable insights from you here today.

00:33:29.489 --> 00:33:31.330
You've successfully scared us.

00:33:31.330 --> 00:33:32.853
You've successfully impressed us.

00:33:32.853 --> 00:33:39.240
You've successfully over-delivered in all the ways, Aman, and I'm so impressed with the work that you're doing.

00:33:39.240 --> 00:33:42.444
I want to toss it your way to drop those links so that listeners know where to go.

00:33:42.444 --> 00:33:46.969
But before I do so, I want to share with listeners so many of the real things that Aman shared with us today.

00:33:47.269 --> 00:33:48.270
Go to his website.

00:33:48.270 --> 00:33:50.712
He's going to tell you his website in just a second. Spoiler alert:

00:33:50.712 --> 00:33:51.554
It's already in the show notes.

00:33:51.554 --> 00:33:52.414
You can click right on through.

00:33:52.414 --> 00:33:59.102
But, with that said, when you go to his website, so much of Aman's messaging is not just to scare people.

00:33:59.102 --> 00:34:01.364
It is purely to showcase.

00:34:01.364 --> 00:34:02.644
This is the reality.

00:34:02.644 --> 00:34:07.809
Whether you want to walk around with your eyes closed or not, this is a real current problem, not a future problem.

00:34:07.809 --> 00:34:11.893
You'll see, what really struck me the first time I went to DeepTrust's website is there's a video.

00:34:11.893 --> 00:34:14.456
They call it voice theft in action.

00:34:14.456 --> 00:34:17.987
You're going to see this stuff that Aman's been talking about.

00:34:17.987 --> 00:34:24.672
Hearing it's one thing, seeing it is another, but actually experiencing it is obviously a whole different level of it.

00:34:24.672 --> 00:34:30.788
So the brand that they're building behind the scenes at DeepTrust is right in line with the importance of the work that they're doing.

00:34:30.788 --> 00:34:34.523
So, Aman, that's just me tooting your horn a little bit, but man, take it from here.

00:34:34.523 --> 00:34:35.887
Drop those links on listeners.

00:34:35.887 --> 00:34:36.951
Where should they go from?

00:34:36.990 --> 00:34:37.172
there.

00:34:37.172 --> 00:34:50.349
I think if there's one thing, one place you should go, just go to our website, deeptrust.ai, deeptrust.ai, and then from there there's two things I'd check out. One, our blog.

00:34:50.349 --> 00:35:00.427
I know sometimes some of us might just want to watch content, but I'm telling you, my co-founder puts a lot of interesting information every single week on not only the problem space, but how you should go about.

00:35:00.427 --> 00:35:12.119
Again, the way you solve this problem first is keeping yourself informed, and then, if you want to try something a little cool and scary, Brian did mention the video, click the video, check it out, but the website link is there.

00:35:12.119 --> 00:35:21.123
It's called Terrify, terrify.ai, and just take 10 to 20 seconds to have a conversation with an AI bot that'll steal your voice.

00:35:21.123 --> 00:35:26.634
It could be fun, but at least it's very educational.

00:35:26.634 --> 00:35:28.083
It's definitely an educational experience.

00:35:28.083 --> 00:35:34.686
Voices are deleted once you're done using it, of course, and I guess you could have a conversation with yourself technically.

00:35:34.686 --> 00:35:36.840
But yeah, that's what I would say.

00:35:36.840 --> 00:35:37.523
That's what I'd say.

00:35:37.983 --> 00:35:39.829
Aman, in over a thousand episodes,

00:35:39.829 --> 00:35:44.322
it's the first time this has been the call to action: hey, listeners, go get your voices stolen.

00:35:44.322 --> 00:35:50.927
But, Aman, it fits perfectly honestly, because there's no other way to experience it other than experiencing it.

00:35:50.927 --> 00:35:56.490
So, listeners, all those links are down below in the show notes, no matter where it is that you're tuning into today's episode.

00:35:56.490 --> 00:36:04.376
Aman, I personally am so grateful for the work that you're doing and also the fact that you came on here and you've shared so transparently and in a really important way.

00:36:04.376 --> 00:36:08.300
You've opened our eyes to the realities of the current situation.

00:36:08.300 --> 00:36:15.733
We've got a fun and exciting year ahead of us, but with that comes a lot of responsibility to do things right and make sure that we're protected in all the ways.

00:36:15.733 --> 00:36:20.632
So, Aman, on behalf of myself and all the listeners worldwide, thanks so much for coming on the show today.

00:36:21.679 --> 00:36:22.041
Absolutely.

00:36:22.041 --> 00:36:23.306
And can I say one thing as well too?

00:36:23.306 --> 00:36:26.208
Thank you for having me, Brian, and also thank you for the work you're doing.

00:36:26.208 --> 00:36:30.335
Like I'm going to come out of this episode and this recording feeling a little bit more energized.

00:36:30.335 --> 00:36:38.865
You know, and being a founder and entrepreneur, you sometimes need as much of the energy you can get, and this was honestly a very enjoyable conversation.

00:36:38.865 --> 00:36:40.048
Thank you for the opportunity.

00:36:40.389 --> 00:37:00.737
Thanks, Aman. Thanks, Aman. Thanks to our amazing guests.

00:37:00.737 --> 00:37:09.487
There's a reason why we are ad-free and have produced so many incredible episodes five days a week for you, and it's because our guests step up to the plate.

00:37:09.487 --> 00:37:13.146
These are not sponsored episodes, these are not infomercials.

00:37:13.146 --> 00:37:16.630
Our guests help us cover the costs of our productions.

00:37:16.630 --> 00:37:27.586
They so deeply believe in the power of getting their message out in front of you, awesome wantrepreneurs and entrepreneurs, that they contribute to help us make these productions possible.

00:37:27.586 --> 00:37:36.088
So thank you to not only today's guests, but all of our guests in general, and I just want to invite you to check out our website because you can send us a voicemail there.

00:37:36.088 --> 00:37:37.425
We also have live chat.

00:37:37.425 --> 00:37:41.286
If you want to interact directly with me, go to thewantrepreneurshow.com.

00:37:41.286 --> 00:37:43.422
Initiate a live chat.

00:37:44.706 --> 00:37:53.465
It's for real me, and I'm excited because I'll see you, as always, every Monday, Wednesday, Friday, Saturday and Sunday here on the Wantrepreneur to Entrepreneur podcast.