Hey Nathan, hey John.
Let me give John the right to speak.
Okay, can you hear me? All right, yes, we can hear you John.
All right, let's give it a few more minutes for people to join. Okay, let me try to put some music on in the meantime.
Say hi to Nathan, everybody.
Actually, now you're better.
It's all good now, Stephanie.
Hey, how are you guys? Can you hear me?
Yeah. Hi, Nathan. How are you?
Doing good. It's good to hear your voice again.
Oh, good. Thanks for joining us. I appreciate it.
I could talk to Nathan all day. I told John this.
Okay, well, do you want to jump into it or do you want to wait a little bit?
Yeah, let's jump into it.
I'd also like to thank Nathan for joining this space. I think the first time Nathan joined was a few weeks ago, talking about AI. So it's really cool to have you here this evening, Nathan.
It might be interesting to give a short introduction about yourself and about Dodgeball, the company you're a software architect for. To be honest, when I saw Dodgeball, I thought about the movie with Vince Vaughn from a while back. Obviously that's not a thing here. If you could give a small introduction about yourself, Nathan, that would be great.
Yeah, Google also agrees with you about Dodgeball.
So you have to search for Dodgeball and fraud in order to find us.
So I can give a really quick introduction to myself.
I actually, I studied government and psychology when I was in college.
I always thought that I wanted to be a lawyer, actually.
But my first job was actually at an aerospace company, where I started doing data analysis and automation, and quickly worked my way into full stack development. Then I got deeper into software: I worked as an engineering manager at a business-to-business software-as-a-service company called Printforia.
I've had a consulting company for a long time,
and then I also started working at Dodgeball
Dodgeball is a software-as-a-service that provides anti-fraud orchestration capabilities to companies of all sizes. There are a billion tools out there that can combat specific parts of fraud, but it's really hard to integrate them all together and get them working cohesively. And even if you have done it, it's really hard to make changes to that system once you've done it. So our focus is on making that really seamless for folks.
That sounds like an almost impossible problem, actually, to me.
It's a very hard problem, but I think that it takes two different skill sets.
One is you have to actually understand what is happening out there in the world,
and then you have to be able to architect a system that's very flexible,
but also not impossible to use
for someone who's not an engineer.
And I'm really actually super excited
about where it is right now.
I think it's gone through a few iterations
and it seems like we're really starting
to be able to help people.
We had our first five-star review for our Shopify plugin the other day. With that, you don't even have to be an engineer; you can just plug right into our systems.
Why is it that one size does not fit all?
Yeah, so I would say that a lot of companies,
when they think of solving fraud,
they think of integrating with one solution.
What I would call the first baby step into fighting fraud: if you use Stripe, which pretty much everyone does when they're doing payments online, Stripe has this thing called Stripe Radar, and you can turn it on and use it to start detecting, you know, potential negative signals from their machine learning analysis.
And then what people tend to find is that, you know, Stripe Radar is really good at what it does,
but it doesn't have the whole picture of like everything that's happened for this customer.
Like, did this customer try to log in four times and fail?
Did they try logging into like eight different accounts right before they submitted this payment?
Did they do a bunch of different things on their computer?
Well, as soon as you want to start looking at that stuff,
you can't just use Stripe Radar.
You have to do something else.
And so then people add their second one.
And before you end up getting too far along,
people will have like five or six different systems
that they're trying to manage, and that's super hard to do.
So I think Dodgeball really speaks to folks who are already dealing with that trouble or people who want to
avoid ever having to run into that trouble. But then why doesn't one combination of solutions work? Why don't you provide the best combination of everything for, you know, a given industry? Are there different needs, or different costs? Why do you give customers the choice?
Yeah, I think it's because how you adapt to fighting fraud is so individual per customer, per company, per enterprise. The biggest thing is that people don't want to do this big shift and then, all of a sudden, everything's broken and their hair's on fire.
They want to be able to like, just watch for a little while and then like test the rules that
they already have and make sure that nothing dramatic changes in terms of like, Visa is not
mad at them because they're getting a bunch of chargebacks or whatever it may be. So a lot of
times people will, you know, build out exactly what they already have, just so that they can make sure like,
I'm not crazy, everything's working. And then they
can A-B test two different services who both claim to be the greatest service that's ever
existed. And they can see, hey, when I went down path A, it was very, very effective. And when I
went down path B, you know, I was getting a bunch of false positives.
And that's all stuff that we make really easy for folks to set up.
But, you know, no one really knows what the exact solution is.
In fact, I would argue that there is not just one solution, because the types of fraud that people encounter are so vast.
You know, one of the examples that I always like to give is that there's authorized and unauthorized fraud.
So like you can commit fraud without ever logging into an application.
And that'd be like your classic example of like a romance scam or a sweepstakes scam.
Like, you just won a billion dollars, I just need you to go and send me some money. That person hasn't had to authenticate; they just tricked someone into doing something. But then you have your more traditional authorized fraud, which would potentially be something like identity fraud: hacking into someone's account, whether it's through a fraud-as-a-service system that's selling credentials or whatever it may be. And each of those really applies to different industries. Some people don't care about the unauthorized fraud; they're like, our users are smarter than that. Some people have their own solutions for it. Really, we want to be as flexible as we can, meet them where they are, and then be able to help them constantly iterate to a better solution.
The image I have in mind here is some poor guy in his garage trying to build a boat. He's about two-thirds finished with it, and he's messed it up completely. Then he calls an actual engineer, who says, okay, let me see if I can rescue you.
And honestly, we have conversations like that.
And I tell you, man, people are so happy when they're like, oh my gosh, like this thing
that you clearly have thought about for a long time, like I was just running into it
and I was about to have a panic attack.
So it's really cool to be able to help people with that stuff.
But you got to sort of pick them up wherever they happen to wander into the swamp.
You've got to say, okay, here you are now.
What's the best path given where you've ended up?
And that, you know, you can solve that
by just being like consultants,
but our goal is not to be consultants.
We really want to make it so that the tool can guide people. And so we're constantly listening to feedback and trying to do that
iteration so that it's easy for people to discover the correct things to do. And then, you know,
ideally get to the point where, you know, they can do it completely on their own.
Can you give me a couple of examples of, let's say, the types of customers, types of verticals that require very distinct solutions in this respect?
Yeah, so, you know, any enterprise customer is going to have very, very rigid requirements when it comes to what's happening. Usually a lot of legal, and a lot of times compliance is a huge issue.
And, you know, on the other side of things, a lot of small companies, the big anti-fraud companies won't even talk to them. If you're working at a small Shopify store, for example, a lot of times the best-in-class anti-fraud solutions don't have time to work with you for something that may not even make a dent in their bottom line. And so we definitely have to come to each of those different groups and speak their language.
And one example of a specific solution that's really kind of out there: let's imagine that you're a company that's involved in calling, whether it's a company that provides calling services, or maybe a call center, or even one of these new AI calling things. There are a lot of regulations about who can call individuals, especially country by country. In the US, for example, there's the Do Not Call list.
And you have to meet those regulatory requirements.
And so if your customers are abusing the do not call list,
you could get in trouble.
And so a type of fraud would be: someone signs up for an account, they abuse it until they get caught, and that happens so often that your company gets in trouble. The legal requirements there are really something you want to enable anyone in that type of industry to be able to solve.
I see. So what are the kind of classes of solutions you use?
Clearly, machine learning is something that's central to what you're doing.
Are these systems that you develop yourself and then add on top of what the other solutions people have already purchased?
Well, John, it's also open to zero-trust solutions, not just AI, but yes.
So, yeah, I was actually talking with Stephanie earlier, and I feel like the concept of zero trust is really core to security in general, but also to how Dodgeball responds to things. And so
one of the examples of zero trust is: okay, someone's in your application, and they now want to make a withdrawal. Well, I can't just believe that they have the right to do that because they're in this application. I need to potentially do some sort of additional check at that point, assuming that maybe their authentication was hacked, maybe they're in here illegitimately, maybe their behavior is weird. I never want to just believe that they have the right to make this withdrawal. And so we enable people to run custom workflows at any point of risk, so that for any individual point of risk within their application, they can have a new computation of trust, or maybe even just take readings. And so really, the kind of classes of
solutions that we provide, we do have our own services. There's things like, you know,
fingerprinting, where we can provide a solution, but we also have partners that we work with. For example, FingerprintJS is a really great fingerprinting service that would help
you figure out, has this device been used before?
And so you can, if you use Dodgeball, you can either just use our default fingerprinting
or you can use a provider's fingerprinting and we let you hook into those.
And that's just one example of a type of integration that we would provide.
So it sounds almost like you're putting interrupts inside people's established code base, to stop and do a check and then continue with the algorithm.
Is that really true? That's amazing.
Yeah, yeah. And that's exactly true.
And it actually can interact with their front end, too.
So we have two SDKs, well, two classes of SDKs.
We have client SDKs and backend SDKs.
And basically, just by virtue of loading in a client SDK, so like a JavaScript SDK on the web, we will figure out all of the components, say there's a fingerprinting service that they want to use for some given application, load them in for them, set them up for them so they never even have to deal with installing them, use them correctly, and then use those in the backend to run some sort of workflow analysis.
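As a rough sketch of that interrupt-style checkpoint (the names, signal fields, and score weights below are illustrative assumptions, not Dodgeball's actual API):

```javascript
// Backend: before a risky action (say, a withdrawal), pause and evaluate
// whatever signals the client SDK collected, then decide how to proceed.
// All names and thresholds here are made up for illustration.
function riskCheck(event, signals) {
  let score = 0;
  if (signals.failedLogins >= 4) score += 40;          // repeated login failures
  if (signals.accountsTriedRecently >= 8) score += 40; // credential-stuffing pattern
  if (signals.newDevice) score += 20;                  // fingerprint never seen before

  if (score >= 80) return { verdict: "BLOCK" };
  if (score >= 40) return { verdict: "CHALLENGE" }; // e.g. step up to extra verification
  return { verdict: "ALLOW" };
}

// The application "interrupts" its own flow at the point of risk:
const result = riskCheck("WITHDRAWAL", {
  failedLogins: 4,
  accountsTriedRecently: 0,
  newDevice: true,
});
console.log(result.verdict); // "CHALLENGE": suspicious, but not enough to block
```

The shape is what matters: the application pauses at each point of risk, recomputes trust from current signals, and only then continues, challenges, or blocks.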
So I try to obscure my fingerprint.
I try to send false data because I just don't want,
I don't want to be fingerprinted. I don't want you, you know, I'm not logged in and I've got a
VPN. So you hopefully can't tell what my IP address is. But if you can uniquely identify
my computer by whatever updates I have in my system, then I've sort of failed completely
regardless. So I at least try to obscure it. I don't know how many people do that, but
for me at least, I guess that means I'm blank, that you see me and you don't know if you should
trust me or not because you haven't seen me before. My fingerprinting obfuscation is working.
Yeah, and every company will handle that differently.
But we consider it very important to let people make a decision off the fact that there appears to be some kind of obfuscation happening.
So, like, you could potentially have, for example, an obfuscation score: there's this person, they've logged in as this customer ID, and every single time the fingerprint looks completely different. And so we would basically say, okay, it's very likely, maybe 90 out of 100, that this customer is trying to obscure their fingerprint. And that's totally legitimate. Like, there's no reason
that you wouldn't allow that. But maybe if someone's using an obscured fingerprint, and they've tried to
create like eight different accounts in the last five minutes, we might want to block that ninth
account, for example, and also like not necessarily trust those other seven.
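That policy, obfuscation alone is fine but obfuscation plus signup velocity is not, could be sketched like this (names and thresholds are illustrative, not Dodgeball's real logic):

```javascript
// How many accounts were created with this fingerprint inside the window?
function recentAccountCount(events, fingerprint, nowMs, windowMs) {
  return events.filter(
    (e) => e.fingerprint === fingerprint && nowMs - e.timestamp <= windowMs
  ).length;
}

// Obfuscation by itself is legitimate; combined with high velocity it is not.
function decideSignup(events, signup) {
  const FIVE_MINUTES = 5 * 60 * 1000;
  const priorAccounts = recentAccountCount(
    events, signup.fingerprint, signup.timestamp, FIVE_MINUTES
  );
  if (signup.obfuscationScore >= 90 && priorAccounts >= 8) {
    return "BLOCK"; // the ninth account in five minutes from an obfuscated device
  }
  return "ALLOW";
}
```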
And I guess you would determine it's the same person by the fact it's coming from the same IP address? Or, if I'm coming in from various VPNs, how would you know that I'm doing repeated actions?
Yeah, so we would run some sort of fingerprinting on your browser when you first start doing stuff.
And assuming that you don't open a new browser every time you want to make an account,
like if you just make, say, seven accounts in a row in the same browser,
then that fingerprint, even though it is obfuscated, would still be associated with each of those actions.
And again, it's only as good as it is, but it's still useful.
And that's where you can get really crazy when you start doing machine learning analysis, because anything that is valuable there, it will surface for you.
Yeah, that's right. Well, so, you know, I teach ICT economics to undergraduates. And one thing that becomes really clear is that even they, and they're smart people, pretty savvy, a lot of them in computer science as well, really don't understand much about how systems work: how authentication works, how fraud works, how identification works. And they don't really care about privacy, in the sense that they're not willing to do anything about it. So people are sort of stumbling along. So probably these things work about 90% of the time.
A company can sort of make more intelligent decisions about people who are obfuscating and working for their privacy, because it's not going to impact their bottom line as much if they get it wrong. A company could be very pro-privacy and say, okay, we're just going to look at the non-fingerprint stuff for this class of customers who are obfuscating it. And it's really important to enable that, and also to make what's happening clear. For example, my brother: every time he tries to order on Etsy, he has to get me to do it for him, because somehow they don't trust him. And he thinks it's because he used a VPN to order one time, and now he can't ever order with them again. And so I have to make his orders on Etsy for him.
And I would imagine that Etsy probably doesn't want that to be the case.
And I would bet that it's probably pretty hard for them to, A, know that that's happening before they start hearing about it; B, figure out what actually is happening for this user and what went wrong, why he can't order; and, C, see how to change that.
But it's probably not important because it's probably a very tiny fraction of people.
I mean, if you lose that 1%, like me,
probably it's not worth taking a lot of trouble.
I mean, I think people...
I mean, that depends on the elasticity you have, right?
So I think that that's not a generalization that you can make.
It's a question of numbers. You know, how many people are weird like me?
You're probably a desirable person. I can vouch for that.
But I just think it's interesting that you can learn so much. And I'm very sympathetic to the idea that, you know,
some help is better than none.
But I think, I'd like John to speak a little bit
about the discussion we've been having
about authentication and identity.
And let's see what you think about it.
So one of the things that seems to me like it should be a fundamental security rule is that you should, as much as you can, ignore a push. Anything that's being pushed at you is necessarily suspect, because it's very hard to verify where it really came from. There are lots of ways to be fooled unless you're very savvy. Phishing emails are the obvious example.
It's being pushed at you, it looks genuine,
and unless you really know what a subdomain is
or what a URL is really supposed to look like,
then you're likely to be drawn in.
So if you can design a security apparatus that's focused on pulls, from sources that you know you can trust because they're designed that way, you're not just taking something random off the street, but only pulling from something that is known, then that seems to me to be a much better foundation. And that's where we think that blockchain, especially ours, fits in. You have credentials and requests and signatures, you know, attestations of what was asked, what was done, what permissions were granted and when they were exercised, and so forth.
Then, you know, you come knocking at my door and say, you owe me $5. And I say, really? Okay, well, then I go to the blockchain and I see whether there's actually some evidence there that says not only do I owe somebody $5, but that it's you, and you're not just randomly taking money from me.
I don't know, what is it that you would like me
I think the authentication part being tied to
private key challenges is really the key.
Yeah, so it's a problem. I'll tell you a personal story. My father recently, with age, went insane, and I went and collected him. And I was completely locked out of both his Gmail identity and his Microsoft account, because he'd lost his credentials and had, of course, tried to log in endless times. And then, of course, the fraud detection thought not only that he was probably a bot, but also that the account was under attack, and so it was locked. And as far as I could determine, there was absolutely no way that I could convince either Microsoft or Google who my dad was.
With Google, if he had arranged to have a backup email or something like that, then possibly there would have been a recovery route, but he didn't.
And I suspect that a lot of people don't.
And if he'd made his recovery disk with Microsoft, he could probably have managed to get back in. And of course, I don't think most people do.
So you end up with a lot of these false negatives.
So if you try to lock yourself into a security arrangement like that, you're going to lose at least some people.
And it's devastating for them.
Fortunately, my dad didn't have much going on, but it could have been devastating.
Yeah. I mean, getting back into his financial arrangements took a lot of effort.
Fortunately, he had not tied it all into his Gmail identity.
But if he did, then it would have been an absolute catastrophe.
I don't know how I would have managed it.
But anyway, I guess what I'm saying is I think attention should also be given to these false negatives as well as the false positives. False positives can be disastrous because you lose all your funds to a fraudster, but false negatives, I mean, this would have been a month of my time even if I'd managed to reestablish him.
Well, just to be clear, we're talking about potentially losing Gmail as a second factor,
yes, as one of your multi-factor authentications, right?
And if you can't access it, then you can't authenticate yourself in all the ways they're asking.
Well, sometimes it's even the first factor; you say, you know, log in with Gmail.
And the HB warning, yeah.
You've never really established credentials.
You're just using them as the proxy.
So, I mean, don't do that.
That's a terrible idea because of that, but-
Well, let's get back on track so that we're more organized. I feel like I didn't ask a great question.
Well, so your question was, I think,
how do you tie your identity to a key?
How do you tie the challenge to a key?
Yeah, I mean, that's known.
But, yeah, so I guess the question is, what really is a practical zero-trust method of authentication
that hopefully gets rid of both false positives
and false negatives to the greatest extent?
If you make it costly for people, you lose them. Often I won't establish accounts in places because they want my phone number, or they want another email address, or they're insisting I go through a captcha or two-factor authentication, and it's just not worth it. Your content, whatever it is, is not that interesting to me. So you're going to lose me as a viewer or a customer.
So our thought is this. The way that you use public-private keys to establish identities is well understood. Just to review: I can generate a public-private key pair on an iPhone through a FIDO application, or with a YubiKey, or with some sort of Microsoft key-ring system; there are lots of ways those can be established, although they're not as widely used as they probably should be. You establish the pair and keep the private key in a secure place.
And then once you've done that, you have the ability to give the public key to anybody. So I can give it to Vanderbilt.
I say from now on, when I log in, this is me.
And all I have to do is have access to that private key.
I don't need anything else.
You can just meet me at the gate.
And if I can answer a private key challenge,
then you know that it's me.
And it seems to me that if we can just get that infrastructure established, and it's there already, it's just a question of deploying it, then we can get rid of really all the rest of it: the two-factor authentication, the fingerprinting, the machine learning to see if this is a credible way to come in. It's really just: is this private key bound to whatever entity I'm supposed to be dealing with?
Yeah, I think I was actually looking at your guys' website, and there was the example about the teacher. That sounds like a bit of what we're talking about.
So I actually did have one question about this, because I come from Web2, and I'm trying to conceptualize how this protocol could potentially be implemented. And I think it turns out it's a little bit complicated, which is probably why you guys are doing it. But I want to talk through it, just so I can see if I'm understanding how it works correctly.
So I think a lot of people, at least, are familiar with the concept of a JSON Web Token, which is a very common way that you can prove that you have passed a correct password. It kind of serves as your key: the server doesn't have to actually hold your password, but they can still validate that you have logged into this website. And so it sounds to me like when you get your token back, you have a payload inside of it. And it sounded to me like
the example you're talking about
is that payload contains a bunch of stuff,
but it also contains something else in it
that I would only be able to decrypt if I had my own private key. And say that I decrypt it; I'm like, OK, I now know that this secret payload within my JSON Web Token is something that I need to then send back to the server, and then they know it's actually, like, literally me. Is that kind of the process we're talking about, or am I misunderstanding?
John, why don't you take...
Why don't you take an example of an enterprise trying to authenticate a worker?
Yeah, well, let me answer this question first.
Yeah, I mean, that's essentially right, except I don't think you necessarily need to go through it that way. As I understand that sort of security token approach, and I could be wrong, please correct me:
I authenticate, I get a basically
an authorization cookie that lives in my browser.
And then every time I request a new page,
I am interrogated for that cookie. And if it exists, then the destination site believes that I've previously authenticated, and so, therefore, I can look at the next page of my bank statement or something like that. But it lives on my browser. And I thought that the attack surface there was,
if somebody were to grab that cookie off my system,
they could put it on their system.
And then they would also be apparently authenticated
because there's no encryption going on after it's deposited.
So then that's the replay attack example that you were talking about. And basically what you're saying is, by moving it outside of that flow and still having that additional challenge within the payload, if you will, you're able to solve both those problems. Is that the way you're talking about it?
Yeah, I mean, like I say, you could do it through cookie delivery if you wanted to.
We sort of envision the simpler process where I log in.
And so the simple version is I've just given you a public key.
You have, at some previous point of trusted exchange, decided that you believe I am, in fact, your customer.
And at that point, you accept the public key for me as the proxy for my identity.
Now, how that's done? Lots of different ways. And that is, of course, a point where bad things could happen. If I could defraud you into believing I am somebody else... so you have to be very careful, you know, about accepting that proxy. But okay, let's suppose we've gotten by that. Then every time I ask for something,
all you need to do is encrypt anything at all, just any arbitrary challenge like,
who are you and why does your dog have a nose?
Just random texts and send that encrypted
with my public key, send it to me any way you like,
could be in a cookie, could just be sent to me as a packet. And then when I come back and I say, well, give me the following page, I include that decrypted one-time password, in effect, that you've given me. And every time I make a new request, I just give you that one-time password. But that one-time password does not appear any place where it could be captured.
Yeah, yeah, that makes sense. And I think it's really interesting to think about this from the anti-fraud perspective as well, because if you are able to truly, verifiably trust who your customer is, I think you're able to solve a lot of questions around fraud. And I think it's interesting to put this into perspective from the Dodgeball side, because the way we tend to discuss things is:
everything that you think you know,
you only know to some degree, if that makes sense.
Like, you know, customer is authenticated.
Well, we only know this to the degree that we believe
that the customer is in fact authenticated, right?
So customer is who they say they are.
You know, we can have a certain confidence of that.
We can have a certain confidence in any value that we may assign to any entity. And I think that if you could truly establish any individual piece of data, but especially a great example like "this customer is who they say they are, is authorized, and is the one on the other end of this darn computer," then that's a really good stepping stone into solving a lot of really complicated stuff. Once you're able to get past that sort of identity-takeover problem, then you can get into a lot of other solutions.
And I think another part of it that's interesting: it sounded to me, when I was reading some stuff off of your website, that you could use your token to verifiably determine that some central organization allowed someone to exist, right? Like your example with the teacher: the teacher is creating this opportunity for someone to be a worker. And I think that's kind of interesting too. It's kind of like, in Web2, you have the concept of an invite. And that's an interesting concept as well, because I guess in the business world, generally you would invite people upon their request. So even if I can guarantee that someone is the person who responded to the invite, how would an entity, whatever they may be, determine whether or not they actually want to issue the invite in the first place?
Well, so, right. The simplest method of authentication is what I described: I've given you a public key, that's my proxy, and I have the private key. Now I can always authenticate, every single time we have an interaction, and you know it's always the same entity, because I'm the only one that has that private key. Now, if you want to extend that, this is where the blockchain really comes in, and where the idea of a pull as opposed to a push comes in. So let's imagine that I'm at Vanderbilt, and let's suppose that the registrar has met with the students as they are enrolling.
And at that moment, the student has a YubiKey and gives the registrar a public key.
And from now on, that's your student identity, and I, as a professor, can rely on it. So somebody knocks on my door and says, I'm a student in your class. Well, I can look at the blockchain, and there's an NFT deployed by the registrar saying: this is student XYZ, here's the public key, and it's endorsed by the registrar. I can then send a challenge based on the public key that's in the blockchain, and the student will be able to respond, demonstrating that he actually is the person on the other end of that transaction.
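The challenge-response flow described here can be sketched as a toy Schnorr identification protocol. The group parameters below are deliberately tiny and purely illustrative; a real deployment would use a large standardized group or hardware-backed Ed25519 keys, as with a YubiKey.

```python
import secrets

# Toy Schnorr identification: the student proves knowledge of the
# private key behind the public key the registrar published, without
# revealing it. Parameters are illustrative, not production-sized.
p, q, g = 2039, 1019, 4        # g generates the order-q subgroup mod p

def keygen():
    x = secrets.randbelow(q - 1) + 1       # student's private key
    return x, pow(g, x, p)                 # (private key, public key)

def commit():
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)                 # keep r secret, send t

def respond(x, r, c):
    return (r + c * x) % q                 # response to the challenge

def verify(y, t, c, s):
    # Check g^s == t * y^c (mod p), which holds iff s = r + c*x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()                 # y is what the registrar's NFT attests to
r, t = commit()                 # student opens the interaction
c = secrets.randbelow(q)        # professor sends a random challenge
s = respond(x, r, c)
print(verify(y, t, c, s))       # True: same entity as the enrolled student
```

Because the challenge is fresh each time, a recorded response cannot be replayed, which is the property the professor relies on.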
You can then extend that: you're a member of the Lions Club, you have a pilot's license, you're employed by Vanderbilt. If we have these roots of trust saying this public key actually has these sets of credentials and qualifications and characteristics, that can be used as a basis for deciding not only whether I want to invite you, because you can prove what you really are, but also for making sure that the invitation actually does go to you instead of somebody capturing it.
Definitely. That workflow is definitely useful within Dodgeball, for example, where you can invite other members of your organization. And I can imagine that workflow being very useful: this person has their private key, maybe their company ID or whatever it may be, and they have to validate their company ID in order to receive the invite. That's definitely helpful.
The problem is when you do want to leave something open to the public. People will often create what we call synthetic identities, for example, and this is often used for things like referral abuse; the example I gave a while back was Robinhood. And with AI especially, it's becoming easier and easier to generate these reasonable-sounding people who totally don't exist. Is your vision for the world something where a person would have a personhood ID?
Right, let's talk about that a little bit. I think that's interesting. Actually, Stephanie and I were just talking about this this morning. Stephanie brought up the idea of platforms and economies of scale and the desire to get bigger and bigger, because the notion is that you have a network externality, and everyone is trying to capture that network externality. But the counter is kind of what you're suggesting: when the people joining your network are actually people, then there probably is a positive benefit. If I'm on X or Facebook and real people are on X and Facebook, then that's who I want to communicate with. But if 90% of them are bots, those bots are actually negative to me. They don't enhance my experience; in fact, they're bad for me.
So with the advent of AI and more sophisticated sorts of fraud, the problem is you get parasitical identities attaching to these platforms, and the parasites don't generate a positive network externality. It's also very costly to fight: I think Stephanie mentioned that Amazon was spending something like $1.2 billion on AI to detect these kinds of fraud. That's really expensive, and it's just an effort to keep parasitical and fraudulent identities out. But it's only going to get cheaper and cheaper to make a parasite, and at some point, even spending a couple of cents to stop a parasite that's almost free to create is not going to be economically feasible.
I think that's very, very interesting. I have some friends who are really involved in DC, and I worked in DC for a few years. One of the interesting parts is that no one who values privacy wants a centralized government saying, "this is a real person, vouched for by the government." I feel like a lot of times we're moving towards this with the amount of ID verification that's happening out there in Web 2. But for people who do value their privacy, the idea of a private, actual personhood identification or proof is very compelling, because you don't have to share all your data to still know that someone is in fact a person who has been verified with certain levels of certainty. That's very, very interesting. So, you know, the problem is...
Yeah. I mean, this really is very fascinating. I think this is actually one of the golden technologies, if you could ever figure it out. If I walk into a personhood store and say, look, I'm a biological human, here I am, give me a personhood public key, they can definitely tell that I am a person. Okay, well, if I can walk into stores all day long, there's a cost, because I have to walk in and that takes some time. So it's not completely free, like it might be with an AI, but it still might be very cheap and easy. So there has to be some kind of limit on how many personhoods I can have. Or it might be that it costs me $500 to get authenticated as a person. Or maybe we even have a free market for personhoods, where what I've done is put up a bond: I've gone to a really expensive personhood authenticator and just burned $10,000, and that act of destroying value shows that I can't cheaply mass-produce personhoods. So it's a much more trusted personhood.
Oh my gosh, it's proof-of-work of personhood.
Yeah, that's exactly right. Or, you know, there are money-burning games in economics that we study. But yeah, it's only important that you burn the money; it's not important that anybody receive $10,000 for verifying your personhood.
Just has to be a part of it.
I'm thinking about the use case you guys gave earlier, of the teacher verifying the job. And I almost wonder if you could add a third layer to that. One layer is: I want to know whether this person is a person. But then you have to verify the issuer of that proof, and the issuer is the government or something to that effect. And you never actually have to see the detail: there's the proof, and then there's another level, and under that you have to prove that it was originally issued by the government, or something crazy like that. I could imagine some world where that would make sense.
Right now we have no standards for what a virtual person is, but we could have many standards. It could be that I buy a $2 throwaway person identity because I just want to play around on a gaming platform without identifying myself, and maybe $2 is enough that it's not worthwhile for an AI to buy, because there's not enough value in it. So there's an entry price: if I have a $2 identity, maybe I can do certain things.
Yeah, I think that's very interesting.
We have a thing we call our fraud topology, and I'm looking at it right now, and I can see there are places where a solution that has those sorts of proofs would be very helpful. Some examples are authorized fraud, identity fraud, and potentially even things like ID verification, stopping synthetic identity, application fraud, things like that. And then I'm curious how you would think about this, because we spend a lot of our time thinking about a kind of fraud use case called policy abuse. It's when you have a legitimate person, and this legitimate person could be a great customer for a while. Then all of a sudden they see a TikTok video that's like: hey, if you go to Walmart and return something that's under $5, they don't ask you to ship it back, and you can do this eight times before you get in trouble. So do it seven times and you'll get $35.
Is this something that is kind of out of scope for how you think about things? Because I can see a world where both of these approaches have value.
Right, so I guess I'd say two things about that. One is that, as a mechanism, the mechanism is flawed: every rational customer would take advantage of it. What you're counting on is a behavioral prohibition that causes most people not to take advantage of it. So your type one errors are small, and your type two errors would be big if you didn't let people do it. It's worth it if you have a certain type of society; if everybody were a ruthless game theorist, it would be a terrible mechanism.
So, you know, we were talking about zero trust. Either you can get a genuinely good mechanism, where even when people behave ruthlessly it's still optimal, or you're making this compromise based on how people really behave.
So what you're basically saying is: if there's a possibility for policy abuse, then the policy is wrong. It's not enough to be able to figure out if someone is going to abuse a policy.
I'd say it's an empirically sound policy, perhaps, but not a theoretically sound policy. So it's broken theoretically. But if, in fact, people don't use certain optimal strategies, then it's okay, and you should just accept that as a cost of doing business.
What's really interesting, though, is that I think I agree
with you in theory about the empirical and theoretical solutions
but I think you can actually have a theory behind your empirically superior approach. There's a bit of word salad there, but I think that makes sense.
Actually, yeah.
So what I would say is that these enterprises, these really big companies, know what they're doing, and they know how to make a marginal dollar. They can write a policy that's going to make the vast majority of customers much happier, because I don't want to have to worry about whether I'm meeting some complicated policy when I'm returning something for five bucks and don't want to ship it back. Well, that is not game-theoretically ideal, but it is making them the most money.
But then on top of that, when they do get that problem, what's the lowest-cost way for them to stop the people who are going to abuse that policy from ever having the opportunity to abuse it? Sometimes I think solutions that are meant to stop policy abuse, whether you plug one in or write your own machine learning model or whatever it may be, can be really, really good, especially if they have the correct instrumentation.
Yeah. So before we had this possibility, this is why we had policies: we sort of induced what was probably the best guess on average, and we laid it out as a set of rules, because rules are easy to follow and you don't have to do calculus.
But if you can make ad hoc policy, like, you're coming in drunk and you want me to give you a hamburger for which you'll pay me on Tuesday, I'm going to say no, because my AI will tell me that those guys don't actually come back. But if you're a regular customer and you ask for a loan, then I'll probably give it to you, because my AI says that's a good idea. So the policy of no credit is not the right policy if you have clever AI.
And I think that is the sort of transformation we're seeing. In the past, the obfuscation, the lack of transparency about what these policies actually were, was the protection. You'd have to work at Walmart and have read the work instructions to know, okay, less than $5, approved. Well, in the age of the internet, everybody knows everything, at all times, at scale. People figure this stuff out, and within 10 minutes the whole world can know and start abusing it.
And that's why a lot of times companies who deal with this kind of stuff say: we don't have a fraud problem, we don't have a fraud problem... we are losing tons of money, help us now. That's a very real situation happening all the time in the world now. That obscurity is no longer protection; that's what is really happening.
Yeah, well, I agree. And so this is perhaps pushing the limits of AI. Coming back to the notion of a parasite: you are able to adapt and make these less policy-driven, more empirically driven decisions, but it's expensive to do that for a lot of cases. And if 90% of your customers are parasitical artificial beings, then you may as well close down. It's like you're in a bad neighborhood: what you're paying for security is more than you can possibly make in profits. And this is why you come back to zero trust. If you've got to let somebody in, look at them, and ask, are you a human, and that costs you $5, well, then you're restricted to certain kinds of businesses, and it depends on the fraction of humans to non-humans. But if people can prove it to you all the time, if they can only pass the door after they've proven that they're of a non-parasitical class, well, then you're back in a good neighborhood.
Yeah, and we actually have two terms we like to use a lot for this: assertions and policies. Within some given point of risk, you can make an assertion, and then you can have a policy that you apply to it.
So, for example, let's imagine we can assert that this person is, in fact, a robot: a bad-neighborhood type of situation. We don't have to spend money running our machine learning algorithm on them. We can just immediately cancel, or do some default behavior, or do something cheaper to analyze their behavior and decide off of that. But assuming we cannot make that assertion, assuming the assertion is false, then, in the case of policy abuse, we should probably run the machine learning analysis. And you can even have something like a special policy that you run when a customer's predicted lifetime value to your company is above a certain amount, or when their predicted lifetime value is very low.
That way you're able to act on it. Like, we work with companies that work with celebrities, for example, and if you can assert that someone is a celebrity, you do not want to be blocking their transaction right before they're about to go and promote you.
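The assertion-then-policy flow just described can be sketched roughly as follows. The event fields, action names, and thresholds are hypothetical, not Dodgeball's actual API.

```python
from dataclasses import dataclass

# Hypothetical event shape; a real system would carry far more signal.
@dataclass
class Event:
    user_id: str
    asserted_bot: bool       # cheap, high-confidence assertion
    predicted_ltv: float     # predicted lifetime value in dollars

def run_policy_abuse_model(event: Event) -> str:
    # Stand-in for the expensive ML analysis; threshold is illustrative.
    return "review" if event.predicted_ltv < 10 else "allow"

def decide(event: Event) -> str:
    # Assertion first: never pay for ML on a known bot.
    if event.asserted_bot:
        return "block"
    # Special policy for very-high-value customers (e.g. celebrities):
    # don't risk blocking them right before a promotion.
    if event.predicted_ltv > 100_000:
        return "allow"
    # Otherwise fall through to the costly policy-abuse analysis.
    return run_policy_abuse_model(event)

print(decide(Event("u1", asserted_bot=True, predicted_ltv=0.0)))   # block
```

The point of ordering it this way is cost: assertions are cheap and decisive, so the expensive analysis only runs on the traffic where it can change the outcome.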
Actually, that's very interesting. So let me say two things here. One is that a CAPTCHA is one of the ways you put up a gate so that hopefully you have a human, and you keep out at least some of the parasites. But this idea of having a key is sort of an advance on that, provided that those keys are granted by entities that you trust. And so that's the idea of having this public infrastructure: it's not just that your company has met me and decided that my key is, in fact, a key that's valuable. You can leverage other companies' granting of keys, because no passwords are involved; there are no company credentials being shared. If I see that Walmart has verified you as a customer, then Costco can take Walmart's word, assuming that Walmart has done a good job in the past, and use your Walmart public key to verify that yes, you are a human.
Yeah, that's very interesting as well.
So if we have those attestations, the key is that the attestation has to be someplace where we can all see the NFT: we can see who signed it, we can see the public key, and then we can judge whether that's a good credential. We pull it up, we decide it's a good credential, and now we're in business; we can do what we want to do. The other thing was your idea of celebrity, because we thought about this a while ago.
One of the things you can get with a blockchain is portable clout. If I have documentation that I have a certain number of Instagram followers, or that I've been to some number of Taylor Swift concerts, or that I've bought this much merch or made this many tweets, whatever it might be, these things are easy to document as attestations. And then they're portable across platforms. I can go to Twitter and say, on Instagram they think I'm a genius, because I have all of these followers, and I can actually verify them. But they also work for endorsements and for other businesses: you can tell which Instagram influencer really is one, and then treat them differently, because they have a different value to you.
Yeah, and I think this actually brings me back to a conversation Stephanie and I had earlier, which is: I feel like there are a lot of people who have spent a lot of time getting really, really good at these walled gardens of non-distributed technology and the ways you can solve things there. And it's really easy to solve things that way. You don't have to integrate with a blockchain to be able to, for yourself, have some...
Yes, you do. Yes, you do.
Yeah. Well, realistically, though, a lot of people just aren't going to. But the point where it becomes undeniably valuable is when you want distribution across networks without trust. I am Instagram, and I have no business connection with X. If I ever want to be able to use this distributed information, and I don't trust X (maybe they're going to change their API on me), I'm not going to spend three months integrating with their system just so I can work with their engineers. But if there's a distributed, predictable pattern where I can get value, that's where I think people need to be thinking about how to leverage this and turn it into value for the companies who are doing things in a Web 2 way. Because really, if you can glue all these places together, that's a tremendous value. And as soon as you're providing value, I think the acceleration can really happen.
And that is a thing we know how to do. It's the old question of: you say you're big in Japan, but are you really? Here it's not too hard. There's the brute-force way to do it, where in a decentralized way I can record evidence of anything that I do. And there's an easier way: Elon Musk, or whoever owns Instagram, I don't know, has a service that says you've got this many followers, or this many of something else. An attestation signed by the platform, if the platform is trustworthy, creates portability. And it makes the platform more valuable too.
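A platform-signed attestation like that can be sketched with a toy Schnorr signature made non-interactive via the Fiat-Shamir transform. The group parameters and the claim format below are illustrative only; a real platform would use a standard scheme such as Ed25519.

```python
import hashlib
import secrets

# Toy Schnorr signature over an attestation such as "this public key
# has 1200 followers". Parameters and claim format are illustrative.
p, q, g = 2039, 1019, 4              # g generates the order-q subgroup mod p

def h(t: int, msg: bytes) -> int:
    # Fiat-Shamir challenge: hash the commitment together with the message.
    return int.from_bytes(hashlib.sha256(str(t).encode() + msg).digest(), "big") % q

def sign(x: int, msg: bytes):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)
    c = h(t, msg)
    return c, (r + c * x) % q        # signature is the pair (c, s)

def verify(y: int, msg: bytes, sig) -> bool:
    c, s = sig
    t = pow(g, s, p) * pow(y, q - c, p) % p   # recompute g^r as g^s * y^(-c)
    return h(t, msg) == c

x = secrets.randbelow(q - 1) + 1     # platform's private signing key
y = pow(g, x, p)                     # platform's published key (root of trust)
claim = b"pubkey=abc123 platform.followers=1200"
sig = sign(x, claim)
print(verify(y, claim, sig))         # True: anyone holding y can check it
```

That is what makes the clout portable: verification needs only the platform's published key and the signed claim, not an API integration with the platform.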
Yeah, and one of the big things I always spend time preaching about is vocabulary. Standardized vocabulary is very, very powerful, especially when you're fighting fraud, but really everywhere. I like to think of vocabulary in entity and attribute terms. What's potentially very interesting is a world where there's a standardized vocabulary of things that can be pulled from a distributed data source, for example platform.name or platform.followers. Then all of a sudden X doesn't have to come up with their own vocabulary. They don't want to do that; Meta doesn't want to do that; no one wants to do that. But it suddenly becomes much lower cost for them to publish this information in a way that anyone can use very quickly, because everyone is already set up to understand and digest that vocabulary.
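In practice, the shared-vocabulary idea could look something like a per-platform field mapping onto one entity.attribute namespace. The vocabulary terms and the platforms' native field names below are made up for illustration.

```python
# Map each platform's native field names onto one shared vocabulary
# (entity.attribute style), so consumers only have to learn it once.
# Both the vocabulary and the native field names are hypothetical.
VOCAB_MAP = {
    "x":    {"handle": "platform.name", "followers_count": "platform.followers"},
    "meta": {"username": "platform.name", "follower_total": "platform.followers"},
}

def normalize(platform: str, record: dict) -> dict:
    # Keep only fields the vocabulary knows, renamed to the shared terms.
    mapping = VOCAB_MAP[platform]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

print(normalize("x", {"handle": "@nathan", "followers_count": 1200}))
# {'platform.name': '@nathan', 'platform.followers': 1200}
```

A consumer written against `platform.followers` then works unchanged no matter which platform published the record.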
Yeah, so I agree. As a math guy: if you don't define your terms, it doesn't mean very much. You have to have good definitions. But on the other hand, as an AI guy, a machine learning guy, you'd actually be able to figure out what this diverse vocabulary might be. I might say followers, I might say engagement, I might say this or that, but the AI could probably take any set of standardized documents and make them cross-interpretable.
Yeah, and it's actually really, really good at that. That's something we use all the time at Dodgeball. We also have our own vocabulary, and part of the value of Dodgeball is that if you push things into our system in our vocabulary, we're able to call anything with that information.
The example I always give is: if you use Dodgeball, you don't have to know that Twilio requires you to have a plus one, then a space, then the rest of the phone number, which isn't necessarily standard E.164 formatting, even though they say it is in their docs. You don't have to worry about that; we'll take care of it, and it's just going to work. Send us the phone number in standard E.164 and we'll make it work for them, and we'll make it work for someone else who only wants a 10-digit phone number and only serves the United States. That is some real value, and AI is definitely transforming how that works as well.
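A minimal sketch of that normalization layer, assuming US numbers only. The provider output styles here are illustrative, not Twilio's or anyone's documented behavior.

```python
import re

# Accept a messy US phone number once, then emit whatever shape each
# downstream provider expects.
def to_e164_us(raw: str) -> str:
    digits = re.sub(r"\D", "", raw)               # strip everything non-digit
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                       # drop the country code
    if len(digits) != 10:
        raise ValueError(f"not a 10-digit US number: {raw!r}")
    return "+1" + digits                          # standard E.164

def for_provider(e164: str, style: str) -> str:
    if style == "spaced":        # e.g. a provider wanting "+1 6155551234"
        return e164[:2] + " " + e164[2:]
    if style == "ten_digit":     # a US-only provider wanting bare digits
        return e164[2:]
    return e164                  # default: standard E.164

print(for_provider(to_e164_us("(615) 555-1234"), "ten_digit"))  # 6155551234
```

The caller sends one canonical form; the quirks of each integration stay behind `for_provider`, which is exactly the value being described.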
There are two elements there. One, what you're describing is essentially data cleaning.
Yeah, pipelines.
Yeah, that's really important, and it's very time-consuming for humans to do. But the other is this: suppose I have a whole bunch of evidence that might be on a blockchain, might be private attestations, maybe something I could even pull from existing platforms. Well, AI is exactly the right tool to interpret it. If I wanted to do a background check on you and ask, well, who is this guy, it would take forever. But if AI can do that as a matter of course and give me a summary credential, that's a great deal: there's too much data out there for a human to use, but humans want to know what the data says.
Yeah, and we're seeing tremendous power from that. For example, we have a system
where people are able to respond to alerts, and it'll create an alert summary. The reactions we're getting from customers who use it generally involve some sort of open-mouthed "what? That would have taken me so long to figure out, and it's right." It's pretty amazing.
Yeah, that is amazing. I just had a very simple example:
I'm looking for a new password store, so I went to one, and they offer security checks. It turns out that an email I use, one I thought was pretty obscure, is connected to my phone number. So somebody at some point leaked it, lost it, had a data leak, and now that's attached to my name, my phone number, and my actual address forever.
Yeah, it just takes one time, and that's what's really crazy about it. Well, this has been really great.
Yeah, it's cybersecurity; it's absolutely key. Something's got to change, because it's going to be easier and cheaper to defraud people, as you suggest, with authenticated fraud. That'll become very cheap. If you don't
have a way to back up these non-authenticated pushes, then people are going to start either losing a lot of money or credentials, or they'll just stop participating, because it's just too risky. It's just too bad a neighborhood: everybody is lying to me, so I can't believe anything. So we really do have to solve these authentication and security issues, or else the system is going to come down.
Yeah, it's extremely important. There are real lives and real companies and real businesses at stake. That's why I'm in this industry, and I think it's just so important to do it right now.
It was wonderful talking to you.
Thanks a lot, Nathan, for your insight and for sharing your experiences, and you as well, John. And John, let me teach you something for once: the owner of Instagram is Mark Zuckerberg.
Instagram? I could have sworn...
I want to thank everyone as well. I saw Rod joining, and Bitcoin Gordon Freeman, and everyone else: thanks a lot for joining in. Thanks again, Nathan; hope to see you again at the next Spaces, as a speaker or as a listener. Everyone is always more than welcome to join.
And I wish you guys a great
afternoon, great evening,
and talk to you next week.
Okay. Thanks a lot, everybody. Awesome. Thank you.
Bye, everyone.