WEBVTT
00:00:00.160 --> 00:00:01.183
Hey, what is up?
00:00:01.183 --> 00:00:04.431
Welcome to this episode of the Wantrepreneur to Entrepreneur podcast.
00:00:04.431 --> 00:00:06.825
As always, I'm your host, Brian LoFermento.
00:00:06.825 --> 00:00:09.676
I'll tell you what new year, new problems.
00:00:09.676 --> 00:00:17.173
We are entering an entirely new world that is going to and is presenting so many incredible possibilities.
00:00:17.173 --> 00:00:30.454
But with those possibilities come some threats, some dangers, some considerations, some things we need to think about, and that's why today, we've gone out and found an incredible entrepreneur who's bringing a very important solution.
00:00:30.454 --> 00:00:37.090
That's what I'm going to kick things off by saying: a very important solution to the planet, one that truly is addressing societal problems.
00:00:37.090 --> 00:00:43.307
This is someone who is actively part of the solution of things that we're all going to face this year and beyond.
00:00:43.328 --> 00:00:44.530
So let me tell you about today's guest.
00:00:44.530 --> 00:00:46.234
His name is Aman Ibrahim.
00:00:46.234 --> 00:00:59.747
Aman is a co-founder at DeepTrust, where he focuses on leveraging AI to help security teams defend against voice phishing, deep fakes and social engineering attacks across voice and video communication channels.
00:00:59.747 --> 00:01:02.225
If you're thinking to yourself, I'm immune to that.
00:01:02.225 --> 00:01:03.046
You're not.
00:01:03.046 --> 00:01:08.560
I'm not, Aman's not, nobody is, and Aman and I are going to get really serious about talking about that in today's episode.
00:01:08.921 --> 00:01:24.170
Prior to DeepTrust, Aman worked as an engineer at Cruise, where he scaled model and data distribution systems, and he also built models in healthcare research labs to address challenges in nutrition, palliative care and disease detection.
00:01:27.703 --> 00:01:33.153
His work at DeepTrust is driven by the mission to protect human authenticity in an age of rapidly advancing AI technology.
00:01:33.153 --> 00:01:35.144
It's big stuff we're talking about today.
00:01:35.144 --> 00:01:38.471
I'm personally so excited to hear all of his thoughts.
00:01:38.471 --> 00:01:41.123
We were talking off air and I said let's stop, let's hit record.
00:01:41.123 --> 00:01:44.290
So let's dive straight into my interview with Aman Ibrahim.
00:01:44.290 --> 00:01:55.525
All right, Aman, it's so hard for me to not just jump straight into it with you, but first things first.
00:01:55.566 --> 00:01:56.066
Welcome to the show.
00:01:56.066 --> 00:01:56.487
Thank you so much.
00:01:56.487 --> 00:01:58.370
I've never been welcomed so warmly.
00:01:58.370 --> 00:01:59.659
For a second there, I thought
00:01:59.659 --> 00:02:02.603
I was an audience member and I was like excited to see who's coming on.
00:02:02.603 --> 00:02:05.768
I was like, oh wait, it's me. Never been introduced like that.
00:02:05.768 --> 00:02:06.808
I appreciate that a lot.
00:02:06.929 --> 00:02:07.450
I love that.
00:02:07.450 --> 00:02:10.032
No, honestly, I mean, you know, I said that to you off the air.
00:02:10.032 --> 00:02:20.705
As a podcaster, this is important stuff for us, but then you and I obviously think about the world at large, far outside of just our industries, and it's big stuff we're talking about today.
00:02:20.705 --> 00:02:23.590
So before we get there, Aman, I'm gonna put you on the spot, and then we're jumping straight into the fun stuff.
00:02:23.590 --> 00:02:24.211
Who the heck is Aman?
00:02:24.211 --> 00:02:26.514
How did you even start doing all these things?
00:02:26.514 --> 00:02:27.724
Take us beyond the bio.
00:02:28.780 --> 00:02:30.326
Yeah, no, that's a great question.
00:02:30.326 --> 00:02:51.453
Of course, as you said, my name is Aman Ibrahim, my background is in machine learning, engineering, and what got me into this problem space and just solving problems like this is I've always had this desire to solve very difficult problems that have an opportunity to help people you know over the long term.
00:02:51.453 --> 00:03:06.551
And where that came from, and I like to say this comes from a very personal side of myself: my father and my mom came to this country from a country in East Africa called Eritrea, and my father was fortunate enough to come here to study computer science.
00:03:06.551 --> 00:03:10.106
He worked at, like, IBM and Cisco, so as I was growing up I was exposed to that.
00:03:10.106 --> 00:03:21.427
My mother she's always been very community driven, so for me it was just by its nature that I had that love to solve technical problems while also finding ways to serve people.
00:03:21.427 --> 00:03:24.151
It even began when I was like in kindergarten.
00:03:24.151 --> 00:03:36.792
My dad would contribute to the refugee community by building computers for them, and when I was in kindergarten, six, seven, eight years old, I also built computers alongside him to help people that way.
00:03:36.792 --> 00:03:46.484
And, as you mentioned before, I spent some time in health tech research, and even built in healthcare startups. I started
00:03:46.484 --> 00:04:04.645
two other startups of my own, and I have over 40 different side projects that I built, serving students, teachers, athletes, doctors, patients. And even when I joined Cruise, the very motivation to be a machine learning engineer there was this.
00:04:04.664 --> 00:04:07.030
Yes, solving the cutting edge of technology in that space was amazing.
00:04:07.030 --> 00:04:09.554
But then what are the returns that you get out of that when you solve that problem?
00:04:09.554 --> 00:04:21.283
You're not only talking about the opportunity to make transportation accessible to people, albeit the news that came out of Cruise recently, but the ambition was making transportation as accessible to people as possible.
00:04:21.283 --> 00:04:35.548
Removing that concept of traffic is possible with self-driving cars, but then, most importantly, there's the fact that, besides disease, what kills people the most is car accidents, and removing that.
00:04:35.990 --> 00:04:40.029
For me, if I could contribute to that just even a little bit, that was incredibly motivating.
00:04:40.029 --> 00:04:53.475
So when I finally took myself into the industry, instead of working at some FANG company, obviously like the Facebooks, Googles and things like that, I was much more driven to come to an opportunity like that.
00:04:53.475 --> 00:05:06.401
So naturally, when the opportunity and problem space of DeepTrust arose, which we'll get into, it was just very natural to who I am and why I'd solve that problem, you know, helping people trust their own eyes and ears again.
00:05:06.401 --> 00:05:08.447
So yeah, that's basically it.
00:05:08.447 --> 00:05:09.771
That's how I came into the space.
00:05:10.319 --> 00:05:12.444
Yeah, Aman, I love that overview.
00:05:12.444 --> 00:05:19.052
I appreciate so many parts of what you just shared with us, and now it's clear to me why we were able to click so quickly today.
00:05:19.052 --> 00:05:20.740
We're both from immigrant families.
00:05:20.740 --> 00:05:24.670
As the son of an immigrant mom, I feel like it's factored into the way that we see the world.
00:05:24.670 --> 00:05:25.952
We see endless possibilities.
00:05:26.293 --> 00:05:44.713
Our families came to this country in the pursuit of freedom, in the pursuit of possibilities, in the pursuit of you and I having more doors open to us than what they had themselves, and I think that that is so deeply integrated into the way that we see the world, which is why it makes perfect sense to me that you're part of very big solutions, Aman.
00:05:44.713 --> 00:05:52.326
So let's talk about that solution that you wanna be a part of here and that you're actively working to solve, because this is big stuff and I said it at the top of this episode.
00:05:52.326 --> 00:06:04.249
A lot of people will probably think, ah, this is either so far into the future that I don't have to worry about it, or they're probably thinking, I'm immune to it. I'm an agency owner, what the heck is all this stuff going to do for me?
00:06:04.249 --> 00:06:05.932
Or, I'm a web developer. Aman.
00:06:05.932 --> 00:06:09.175
Paint the picture of why the heck this is a cause that you care so much about.
00:06:10.100 --> 00:06:12.105
Yeah, absolutely so.
00:06:12.105 --> 00:06:30.591
The problem space that we're focusing on today is the fact that with only three seconds of audio, or a single profile picture, you can steal someone's likeness very easily, and if you've been on social media enough, there's been enough content where we've seen that generative content used for entertainment and funny things as well.
00:06:30.591 --> 00:06:34.750
But for every new technology, there's also the misuse of that very technology.
00:06:34.750 --> 00:06:38.028
So, from our perspective, AI isn't inherently bad.
00:06:38.028 --> 00:06:41.809
It's just there are people who have certain intentions.
00:06:46.262 --> 00:06:50.410
So with the very little that you need, in terms of both skillset and sample content, you can easily mimic anyone.
00:06:50.410 --> 00:06:54.365
And we're talking about this is not again, like you've said and that was a great point.
00:06:54.365 --> 00:06:57.526
I thank you for bringing it up. This is not an emerging threat.
00:06:57.526 --> 00:06:58.588
It's already here.
00:06:59.151 --> 00:07:07.665
There is already a report put out by the FTC where regular, everyday Americans have already lost over $3.3 billion from imitation scams.
00:07:07.665 --> 00:07:18.874
Those are scams that are either someone imitating someone you recognize like personally, or imitating a celebrity telling you oh yeah, there's this government check.
00:07:18.874 --> 00:07:23.009
And these are very things that me and my co-founder experienced ourselves.
00:07:23.009 --> 00:07:37.891
Part of why my co-founder is very motivated to solve this problem was one day his grandfather was sitting in the room and got a phone call saying, hey, it's me, Noah. Noah's his name. They stole his voice, his very voice, and asked him to send money and these sorts of things.
00:07:37.891 --> 00:07:48.235
And then another time for myself, my mom was sitting in a room where she was watching what she thought was an advertisement from Oprah someone she trusts, telling her hey, there's a government check that you need to sign up for.
00:07:48.235 --> 00:07:51.269
Just put in your social security number here and you'll get it.
00:07:53.201 --> 00:08:08.312
So this is people taking this new technology, where you can, again, mimic voices and likeness however you please with very little skill set, and using it for very malicious intent, and this goes from misinformation to imitation, scamming and phishing.
00:08:08.312 --> 00:08:12.069
So just, in general, like, another example of use cases.
00:08:12.069 --> 00:08:23.543
If I can steal your voice in three seconds, imagine if you're in a high-profile, high-stakes court case where someone submits evidence of a recording of you saying this and that.
00:08:23.543 --> 00:08:30.153
So there's a huge general problem of how to even authenticate what I'm seeing and hearing.
00:08:30.153 --> 00:08:34.303
My own biology is fooling me. So yeah.
00:08:34.403 --> 00:08:37.807
Yeah, Aman, these are big, important things, and it's funny, we could talk about it in the business context.
00:08:37.807 --> 00:08:41.490
Dude, you and I actually didn't talk about this yet, but I see it every time
00:08:41.490 --> 00:08:50.051
I log on to Instagram now. I see some marketers, who obviously are not the most ethical marketers in the world, using Joe Rogan clips.
00:08:50.051 --> 00:08:59.111
Because with Joe Rogan, we could find hours, hundreds, if not thousands of hours of his voice out there, and they're having him endorse products that they're advertising.
00:08:59.111 --> 00:09:01.427
And so us, as the consumers, we can't tell.
00:09:01.427 --> 00:09:09.493
And Joe Rogan has a large and loyal audience and they're probably sitting there thinking, oh, he endorses this supplement or this, whatever it is.
00:09:09.801 --> 00:09:16.530
People are mimicking that, and you made a very important point to me before we hit record today, which is that we've always relied on,
00:09:16.530 --> 00:09:18.014
Well, I can tell a scam.
00:09:18.014 --> 00:09:23.690
Ok, if I get an email from a Nigerian prince who's promising $50 million, I know that that's fake.
00:09:23.690 --> 00:09:32.182
But if I can see something with my own eyes, if I can see it, then I can believe it.
00:09:32.182 --> 00:09:34.851
And so, seeing Joe Rogan, it's a video of him, it's his voice, it's him talking about this, but it's not real.
00:09:34.851 --> 00:09:41.951
Talk about how this has become the first time in history that what we see is not what we might be able to believe anymore.
00:09:42.940 --> 00:09:47.904
Yeah, exactly, this comes very down to the fact that trust is a very sensitive thing.
00:09:47.904 --> 00:10:01.360
When you say you have a Nigerian prince reaching out to you, you've never met this person, there's no context between you and them engaging with one another, so it takes a lot, for there's a huge barrier for, say, that Nigerian prince to get what he wants out of you.
00:10:01.360 --> 00:10:21.587
But when you bring familiarity into something, whether it's you know, your grandson or your mom or your dad or your sibling, or, say, a trusted figure, someone who you've taken their advice from before, whether it might be a Joe Rogan to your doctor or whatever it may be, you quickly like, lower your guard and now anything can happen.
00:10:21.587 --> 00:10:27.076
And the sort of perspective that we put into place is the question.
00:10:27.076 --> 00:10:30.203
This is what I'll state.
00:10:30.345 --> 00:10:34.923
There's this concept called the liar's paradox and, for your viewers, take a moment to Google the liar's paradox.
00:10:34.923 --> 00:10:40.802
It's a concept in psychology where they have a straw man figure and I'm paraphrasing here.
00:10:40.802 --> 00:10:44.730
But basically, anytime this person speaks, it's always a lie.
00:10:44.730 --> 00:10:47.583
It's just this made up figure and whatever history.
00:10:47.583 --> 00:10:55.942
And there's a moment that this liar says I am a liar and that becomes a paradox in the sense yes, they are a liar, but in that moment they're telling the truth.
00:10:55.942 --> 00:11:00.120
So what we actually perceive the problem space to be is that generative AI
00:11:00.120 --> 00:11:17.620
causes a liar's paradox in and of itself, where, if you see enough content, right, whether you're seeing or hearing something that looks and sounds real but then ends up being generated or fake, the question no longer becomes what is fake, but what is actually even real at that point.
00:11:17.620 --> 00:11:24.325
So that is, that is essentially the problem that we're really looking at from this misuse of this technology.
00:11:24.345 --> 00:11:31.323
So, yeah, I'm going to ask you a big, broad question, then, and it's inevitably going to lead into us talking about DeepTrust and the work that you're doing.
00:11:31.323 --> 00:11:39.851
But how do we in today's world here we are in 2025, how the heck do we even begin to tell what's real and what's fake?
00:11:41.000 --> 00:11:50.094
Yeah, I mean, listen, at this stage, of course, it's very much a lot of theories and nothing is truly proven until you actually literally execute upon it.
00:11:50.094 --> 00:12:01.508
But for us, our perspective is this: you should bring in concepts that are already recognizable, you know, and let me take it to, like, very basic forms of trust.
00:12:01.508 --> 00:12:12.421
You know, I told you like what lowers your guard when you find something that's familiar and you trust the source of it, and that's the concept that we're trying to bring into what we do today.
00:12:12.421 --> 00:12:15.671
In the long term, how do you build essentially the trust layer for the internet?
00:12:15.671 --> 00:12:23.164
And I want to bring the ability for us to be able to put provenance into content: where is something coming from?
00:12:23.164 --> 00:12:31.832
And if you can establish a standard that does that, like we already have done for you know, uh, end-to-end encryption and things like this.
00:12:31.832 --> 00:12:36.139
This is how you can eventually trust where things come from, and this is not a new concept whatsoever.
00:12:36.139 --> 00:12:53.383
When you, for example, receive an academic paper and someone just hands it to you, you don't just say, oh yeah, I trust it. No, you have references, and then those references have references, and then you eventually have essentially a chain of where that knowledge came from, and this is where we have, say, core and that sort of knowledge.
00:12:53.903 --> 00:12:57.293
Content has that demand for that sort of authenticity.
00:12:57.293 --> 00:13:04.576
Now you need to have some sort of way to validate where something is coming from and there's finally a demand to build that sort of standard.
00:13:04.576 --> 00:13:34.335
So, at any point content is created, generated or modified, our perspective is, in the long term, we need to have some sort of standard set in place where you put a signature into the content that doesn't disturb the human experience of it, but then that very cryptographic, immutable key that's signed into the content that you can't edit, you can't remove, you can't manipulate, can not only tell you hey, this came from this source, but it came from this chain of sources, and then it's up to you to decide whether you trust that chain of sources.
00:13:34.335 --> 00:13:42.335
A very easy example that I would love to see happen in the future is, let's say, a video clip is produced and ends up on your Twitter feed.
00:13:42.335 --> 00:13:46.323
You can now look at it and you can say, okay, someone clipped this on their iPhone.
00:13:46.323 --> 00:13:51.677
Or actually it starts from, say, a Sony movie that was produced, you know, and that was signed by Sony.
00:13:51.677 --> 00:14:04.969
Someone you know edited it on Photoshop, so then it got signed again individually, and then someone trimmed it on their iPhone, signed again, and then it finally ends up on your feed.
00:14:04.969 --> 00:14:07.779
So now you have a true source or true chain of where things came from, and then that's how you truly trust where things come from.
00:14:07.799 --> 00:14:13.745
Now, until we achieve that standard, that's like obviously a big, ambitious goal to again build what I'm saying is the trust layer for the internet.
00:14:13.745 --> 00:14:19.955
There are steps along the way and we'll get into those details and it's part of what we're building today as part of our product.
00:14:19.955 --> 00:14:30.792
We're not jumping straight into, obviously, the dream goal here, but in order to get to the dream goal, there is a journey, both a technical one and a one that's very focused on solving people's real current problems.
00:14:30.792 --> 00:14:38.812
So, yeah, that's what we truly believe is, like, the golden rule or the golden path to solve this problem altogether.
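To make the chain-of-sources idea above concrete, here is a minimal illustrative sketch in Python of how signed provenance could work: each actor that creates or edits content appends a signed manifest entry linking back to the previous one, and a viewer verifies the whole chain. This is not DeepTrust's product or any particular standard; the Ed25519 keys, field names, and JSON manifest format are assumptions for the example, loosely in the spirit of efforts like C2PA.

```python
# Illustrative sketch only: a toy content-provenance chain. Not DeepTrust's
# actual product or format; all field names and keys are invented here.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


def add_provenance(content: bytes, actor: str, action: str,
                   key: Ed25519PrivateKey, chain: list) -> list:
    """Append a signed manifest entry describing who did what to the content."""
    entry = {
        "actor": actor,                                   # e.g. "Sony Pictures"
        "action": action,                                  # e.g. "produced", "trimmed"
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_hash": chain[-1]["entry_hash"] if chain else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = key.sign(payload).hex()            # actor signs this step
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    return chain + [entry]


def verify_chain(content: bytes, chain: list, public_keys: dict) -> bool:
    """Check the final hash matches the content and every link is signed and ordered."""
    if not chain or chain[-1]["content_hash"] != hashlib.sha256(content).hexdigest():
        return False
    prev = None
    for entry in chain:
        if entry["prev_hash"] != prev:                      # chain must be unbroken
            return False
        payload = json.dumps(
            {k: entry[k] for k in ("actor", "action", "content_hash", "prev_hash")},
            sort_keys=True,
        ).encode()
        pub = public_keys.get(entry["actor"])
        if pub is None:
            return False
        try:
            pub.verify(bytes.fromhex(entry["signature"]), payload)
        except InvalidSignature:
            return False
        prev = entry["entry_hash"]
    return True


# Example: produced -> edited -> trimmed, then verified on the viewer's side.
sony_key, editor_key, phone_key = (Ed25519PrivateKey.generate() for _ in range(3))
public_keys = {"Sony Pictures": sony_key.public_key(),
               "Photoshop user": editor_key.public_key(),
               "iPhone clipper": phone_key.public_key()}

clip = b"original movie frames"
chain = add_provenance(clip, "Sony Pictures", "produced", sony_key, [])
clip = clip + b" color-graded"
chain = add_provenance(clip, "Photoshop user", "edited", editor_key, chain)
clip = clip[:10]
chain = add_provenance(clip, "iPhone clipper", "trimmed", phone_key, chain)

print(verify_chain(clip, chain, public_keys))        # True: intact chain of sources
print(verify_chain(b"tampered", chain, public_keys))  # False: content no longer matches
```

In a real standard the manifest would be embedded in the media itself and the public keys anchored to a certificate chain rather than a local dictionary, but the shape is the same: a signature at every creation or edit, and a verifiable chain the viewer can choose to trust or not.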
00:14:38.832 --> 00:14:47.176
Yeah, I love how big you're thinking about this, aman, because, truth be told, it's people like you, it's innovators, it's pioneers, who have that grand vision.
00:14:47.176 --> 00:15:03.879
The only way we're going to get there is, first, dream big, and then bring it back to the actual actionable building blocks, which is why I love the work that you guys are doing at DeepTrust, because I just think about all of my businesses, and businesses large and small across the entire United States, across the world.
00:15:03.879 --> 00:15:07.240
We're digital, more digital than ever before since the pandemic.
00:15:07.240 --> 00:15:08.889
So many more people are working remotely.
00:15:08.889 --> 00:15:12.946
We live on Zoom, we live on Slack, we live on Google Meet, we live on Microsoft Teams.
00:15:12.946 --> 00:15:15.852
With that in mind, where have you identified?
00:15:15.852 --> 00:15:20.634
What is that gap, the immediate and actionable gap, that DeepTrust is plugging?
00:15:21.644 --> 00:15:23.994
Yeah, so you've already gotten the hint there.
00:15:23.994 --> 00:15:29.898
We've become a very much digital driven society where we don't have to do things, for example, in person.
00:15:29.898 --> 00:15:33.936
Like, believe it or not, for those who are listening, I'm not in the same room as Brian.
00:15:33.936 --> 00:15:38.773
I'm probably a couple thousand miles away, you know, and that's the very reality that we live in.
00:15:42.350 --> 00:15:54.410
We are post-COVID, where we've, you know, gotten established to this distributed and remote workforce, and then, on top of that too, we're now post-gen AI, where the very likeness of people can be manipulated and you basically have, like, a man-in-the-middle attack.
00:15:54.410 --> 00:16:02.278
When you're trying to communicate to someone, you think you're speaking to Brian or you're speaking to Aman, but it's someone in the middle that's impersonating that very individual.
00:16:02.904 --> 00:16:04.510
And the problem we're focusing on today.
00:16:04.510 --> 00:16:14.868
Just to get to what we're doing today, we're helping security teams at enterprises, especially those that are in the regulated and sensitive data space.
00:16:14.868 --> 00:16:24.096
We're helping those security teams protect their employees and organizations against social engineering, whether or not deepfakes are involved on the voice and video communication channel.
00:16:24.096 --> 00:16:31.730
So that's what we're doing today, and, uh, this came from, like, a year of just sitting down, talking to people, truly understanding:
00:16:31.730 --> 00:16:33.317
Where are those problems?
00:16:33.317 --> 00:16:42.307
Today we saw so many opportunities, like we defined 30 different customer profiles, some of which included intellectual property.
00:16:42.307 --> 00:16:54.846
So, in the IP space, people who make content, and their entire revenue stream, or their likeness, is their business, their bread and butter: how do you protect their likeness in the wild?
00:16:54.946 --> 00:17:05.290
So, someone like yourself, and obviously we can talk about the Drakes and the Weeknds, as we remember a year and a half ago, and then there's even, again, I briefly mentioned it, the digital forensics space.
00:17:05.290 --> 00:17:07.011
How do you you're in the court of law?
00:17:07.011 --> 00:17:08.371
The stakes are incredibly high.
00:17:08.371 --> 00:17:20.858
How do you actually understand that the evidence that you're putting in front of such an important decision is bona fide, it's genuine, or it's manipulated? And the list goes on and on, such as, again, misinformation, trust and safety.
00:17:20.858 --> 00:17:27.830
It's again the very core of how we operate as a society.
00:17:27.830 --> 00:17:31.045
We again, biologically, are very dependent on our eyes and ears, but now, as a society, we're very digitally dependent as well too.
00:17:31.045 --> 00:17:36.046
So there's so much of our foundations, of our day-to-day life, that's going to be shaken by this very thing.
00:17:36.046 --> 00:17:42.820
And today, again, we're focusing on where the pull and demand is the highest, which is, again, in these enterprise security spaces.
00:17:43.041 --> 00:17:45.854
So yeah. Yeah, Aman, I love the way you tackle that.
00:17:45.854 --> 00:17:49.969
I think about one of my favorite entrepreneurs in the world is Dante Jackson.
00:17:49.969 --> 00:17:58.113
He's a cybersecurity expert based out of Georgia and Dante and I when we talk, I love how much he thinks like a hacker.
00:17:58.113 --> 00:18:12.112
He thinks like the people who are looking to do wrong, and so, having this conversation with you today, I'm picturing the gap that your business plugs and I'm thinking well, who the heck would want to join a corporate Zoom call and what damage could they do there?
00:18:12.112 --> 00:18:14.290
And then I immediately jumped straight to, man,
00:18:14.330 --> 00:18:28.833
if I could impersonate a company CFO and I could penetrate a finance department meeting and authorize them to write the Wantrepreneur to Entrepreneur podcast a massive check. Well, that's coming from the CFO, so of course people are going to listen to that.
00:18:28.833 --> 00:18:33.154
Paint that picture for us, because I would imagine that you've thought about this at a way deeper level than I did.
00:18:33.154 --> 00:18:35.413
I'm just going to write a check to this podcast.
00:18:35.413 --> 00:18:45.055
But what are those real life threats and concerns that probably even enterprise level employees aren't even thinking about when they enter a Zoom meeting?
00:18:46.057 --> 00:18:47.547
Yeah, I don't even have to paint a picture.
00:18:47.547 --> 00:18:50.153
Those malicious Mozarts are already out there.
00:18:50.153 --> 00:19:09.566
There's already been multiple cases, especially in the financial services space, for that particular situation, where again you have all types of businesses dependent on this type of communication channel, whether you're a tiny startup or the literal Department of Defense, where again you can impersonate anyone, and obviously your first thought was like, hey, a CFO.
00:19:09.566 --> 00:19:24.316
But we've seen incidents where it was an equal-level colleague reaching out to their IT help desk, and the reality was the colleague was around the corner, and through that they were able to get credentials of the business and thus steal customer data.
00:19:24.316 --> 00:19:33.833
And then we've even seen huge events, such as the incident that happened in Hong Kong where an accountant was actually following their very training.
00:19:33.833 --> 00:19:39.589
They received an email from the CFO saying hey, we have this really important deal happening.
00:19:39.589 --> 00:19:40.510
You need to act quick.
00:19:40.510 --> 00:19:43.096
And naturally the person was a skeptic.
00:19:43.096 --> 00:19:52.027
And again, this person's not some sort of idiot, this is a trained accountant that works for a multinational firm all the way in Hong Kong.
00:19:52.027 --> 00:19:57.910
And they're like I'm going to follow my training, jump on a Zoom call first to get the situation.
00:19:57.910 --> 00:19:59.032
They jumped on the call.
00:19:59.032 --> 00:20:09.679
They not only saw the CFO, but then they recognized and saw people from their immediate group in Hong Kong, and because of that they're like, oh, clearly, why would this be,
00:20:09.679 --> 00:20:16.876
Uh, you know, made up or ingenuous, so they wire the money away and that was a 25 million dollar scam.
00:20:16.876 --> 00:20:21.313
So this is not even an, oh, what could happen,
00:20:21.313 --> 00:20:22.015
Sort of situation.
00:20:22.075 --> 00:20:37.111
There's been multiple repeated attacks that we've already seen, both in the public domain and private, where we've gotten to learn about those incidents by talking to chief security officers directly, CISOs, and this is in just about every single cybersecurity report that we see.
00:20:37.111 --> 00:20:59.199
They clearly not only see this as the fastest growing attack vector, but there is very much no reason for a malicious actor to not utilize this technology over the next year, and I almost want to say in some ways I say this sometimes this is a perfect time to be a villain, in the sense that it's not even the fact that you can create individual, incredibly convincing attacks, but you can now automate it.
00:20:59.199 --> 00:21:09.704
You can have, say, a language model sitting behind it responding to a person in real time, and you can just send that call to a thousand people. And you're talking about email
00:21:09.704 --> 00:21:12.010
phishing campaigns have, like, a 5% success rate.
00:21:12.010 --> 00:21:23.652
We've not only seen internally success rates above 50% to 70%, but we've already seen reports of people not even using exact voice clones.
00:21:23.652 --> 00:21:35.909
They're using, like, a general voice and a ChatGPT language model behind it, and they were able to do bank transfer scams at, like, a 50% success rate, and it only cost them about five to 10 cents.
00:21:35.909 --> 00:21:40.191
So I mean, if you're not a good person, why wouldn't you do this?
00:21:40.191 --> 00:21:43.931
And that's actually something we've briefly mentioned in your question.
00:21:44.721 --> 00:21:46.226
There's a cybersecurity CEO.
00:21:46.226 --> 00:21:49.684
You said he thinks like a malicious actor, and it's even what we do at DeepTrust.
00:21:49.684 --> 00:21:57.153
We've built this AI bot called Terrify, Terrify.ai, and what it is, is an AI conversational bot.
00:21:57.153 --> 00:21:59.942
It speaks to you, it's very friendly, it's trying to be your friend.
00:21:59.942 --> 00:22:00.903
It's like hey, how are you?
00:22:00.903 --> 00:22:02.184
What's your name?
00:22:02.665 --> 00:22:07.894
And then within 15 seconds, it not only memorizes things about you, but starts mimicking your style of speech.
00:22:07.894 --> 00:22:10.586
So I don't know if you're from the Valley or from the South.
00:22:10.586 --> 00:22:12.692
You got a particular twang to your voice.
00:22:12.692 --> 00:22:18.662
It'll start repeating that back to you, but then within 15 seconds, it not only does that, it has your entire voice and it's speaking back to you in your own voice.
00:22:18.662 --> 00:22:30.048
So we showcase that just to help people understand through the experience, because it's one thing for me to be on a podcast or like an interview or whatever to tell you oh my God, the danger is here, it does this or that.
00:22:30.048 --> 00:22:33.971
It's another thing to literally experience it yourself within 10 to 15 seconds.
00:22:33.971 --> 00:22:39.453
So this is what we're talking about in terms of the threat landscape.
00:22:39.453 --> 00:22:41.075
It's not an, oh, here.
00:22:41.075 --> 00:22:43.196
Maybe it's already coming and growing.
00:22:43.836 --> 00:22:44.297
Whoa.
00:22:44.297 --> 00:22:45.018
Come on.
00:22:45.018 --> 00:22:51.442
First things first.
00:22:51.442 --> 00:23:04.311
I want to say I'm grateful that you are one of the good guys who's pioneering these innovations, because you can so effortlessly show us and obviously this is just the tip of the iceberg of your expertise here in a short podcast interview, and so I can only imagine the depths to which you are well-versed in all of these things.
00:23:04.311 --> 00:23:09.546
And when I say all of these things, I want to call it out for listeners is that we're not talking hypotheticals.
00:23:09.546 --> 00:23:17.660
In today's episode, Aman is coming with real-life business examples, real-life case studies, societal ones; this transcends business.
00:23:17.660 --> 00:23:26.284
And so if you tuned into this episode thinking, oh, this is going to be a fun year ahead in AI, we're already at a point where these things are happening.
00:23:26.324 --> 00:23:40.942
Yeah. So, Aman, I want to ask you this, because when you talk about DeepTrust, and even more than just you talking about DeepTrust, it's so clear to me that you have that long-term vision as well as the actionable short-term solution. I love this part of your messaging.
00:23:40.942 --> 00:23:46.842
It seems to me like you guys view DeepTrust as an agent, as someone who's on your side.
00:23:46.842 --> 00:23:58.080
I mean, when I saw your messaging a security co-pilot for employees, it is someone, or, in this case, something, that is of service to the employees, to the organization that it's deployed in.
00:23:58.080 --> 00:24:06.748
Talk to us about that form factor and that delivery mechanism that you've built at DeepTrust to make it a reality for enterprises. Yeah, absolutely.
00:24:06.827 --> 00:24:25.305
The reason we want a co-pilot is, listen, every company that has a security team or an IT department, they enforce new processes and training upon you, and the reality is, if you're just, you know, say, another engineer or salesperson or marketer, you just do them for the sake of doing them, if you even do them.
00:24:25.305 --> 00:24:32.404
And the reality is, the true risk for a business is the human risk in the sense that, hey, listen, I'm a human being.
00:24:32.404 --> 00:24:45.711
I can't memorize and remember every single piece and aspect of this particular guideline or this policy or that one, and when I engage in a risky situation with the business, I'm not going to naturally be as well equipped.
00:24:45.711 --> 00:24:55.489
So let people stay good with what they're good at and empower them, and let them not have to be as worried about the security risks and have a co-pilot that's there to assist you.
00:24:55.489 --> 00:25:14.701
So, like, our very product, it comes in the form factor today as, like, an agent that joins your calls, and we can eventually move off of being, like, a physical presence in the call, but regardless, it's there to analyze what's being said and who's saying it in the context of the business, and then, if you need any assistance, it'll give you just-in-time training.
00:25:15.102 --> 00:25:20.201
So, for example, it could be obviously like a malicious attacker where we have doubts in their identity.
00:25:20.201 --> 00:25:23.729
They might be pushing or adding urgency to their requests.
00:25:23.729 --> 00:25:28.211
We can then become a source basically saying, hey, slow down a little bit.
00:25:28.211 --> 00:25:29.276
We have doubt in their identity.
00:25:29.276 --> 00:25:33.167
According to your training, just ask these questions before you move forward.
00:25:33.167 --> 00:25:43.428
And what that also does is it empowers the person because, for example, if you're that accountant in Hong Kong and you have that CFO talking to you, that power dynamic is very hard to refuse.
00:25:43.428 --> 00:25:49.789
But imagine the enablement that has when you're like, hey, the agent is requiring me to ask you these questions.
00:25:49.789 --> 00:25:51.020
It's not me, right?
00:25:51.020 --> 00:26:08.086
And then, even when you're talking about something that's more mundane and might even be naive, where you might have two employees who are not malicious at all but they may be mishandling sensitive information, you can just have the agent in there say, hey, just remember, by the way, this and that you're meant to do this and this.
00:26:08.519 --> 00:26:15.008
For example, like Aman is an engineer and I'm exaggerating here he's coming in asking for a wire.
00:26:15.008 --> 00:26:16.385
He could be like, hey, slow down.
00:26:16.385 --> 00:26:23.564
He doesn't have the access to this action. Loop in this person or that person to actually engage in this opportunity.
00:26:23.564 --> 00:26:31.317
And what this does, again, is remove the responsibility, the heavy burden of hey, you're the last line of security for our entire business.
00:26:31.317 --> 00:26:38.965
Make sure you do the right thing. And instead, you actually have something that's basically the most educated security employee in the call with you every time.
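As a rough illustration of the just-in-time co-pilot behavior Aman describes, here is a small Python sketch that reviews a call utterance and surfaces guidance when a sensitive request comes with urgency, an unverified identity, or missing authorization. It is purely hypothetical and not DeepTrust's implementation: the urgency cues, permission table, identity-confidence score, and all names are invented for the example.

```python
# Hypothetical sketch of a "security co-pilot" rule, not DeepTrust's product:
# watch what is said on a call and surface just-in-time prompts to the listener.
from dataclasses import dataclass

URGENCY_CUES = ("right now", "immediately", "before end of day", "urgent")
SENSITIVE_REQUESTS = {"wire transfer": "finance.approve_wire",
                      "password reset": "it.reset_credentials"}

# Invented per-employee permissions; a real system would pull these from
# the company's identity provider.
PERMISSIONS = {"cfo@example.com": {"finance.approve_wire"},
               "helpdesk@example.com": {"it.reset_credentials"}}


@dataclass
class Utterance:
    speaker: str                 # claimed identity on the call
    text: str
    identity_confidence: float   # e.g. output of a voice/deepfake check, 0..1


def review(utterance: Utterance) -> list[str]:
    """Return any just-in-time prompts the co-pilot should show the listener."""
    prompts = []
    text = utterance.text.lower()
    urgent = any(cue in text for cue in URGENCY_CUES)
    for phrase, required_permission in SENSITIVE_REQUESTS.items():
        if phrase not in text:
            continue
        if utterance.identity_confidence < 0.8:
            prompts.append(
                f"Identity of {utterance.speaker} is unverified. "
                "Per your training, ask a verification question before acting.")
        if required_permission not in PERMISSIONS.get(utterance.speaker, set()):
            prompts.append(
                f"{utterance.speaker} does not have access for '{phrase}'. "
                "Loop in an authorized approver before proceeding.")
        if urgent:
            prompts.append("Urgency is a common social-engineering signal. Slow down.")
    return prompts


# Example: an engineer pressing for an immediate wire transfer.
print(review(Utterance("engineer@example.com",
                       "I need this wire transfer approved right now",
                       identity_confidence=0.95)))
```

A production co-pilot would draw permissions from the identity provider and the confidence score from voice and deepfake analysis of the call itself, but the shape of the check, identity plus authorization plus urgency, mirrors the guidance Aman outlines above.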