Thank you. Okay, GM, GM, we're going to play some music as people join in and then we'll get started in a few minutes. I'm going to go to the next video. so I'm going to go ahead and do it. so
GMGM, welcome everybody. Welcome speakers. We are here at the fourth episode of Laz Talks and we're talking about today building privacy and trust in Web3.ai.
We'll introduce the speakers very shortly. We've got a bunch of questions we'll chat about. We've got a few community questions and we will get stuck in to this topic.
But first, a little bit of housekeeping. We're very privileged to have our guests with us today. Thank you all for joining and making the time for this. privacy and trust in Web3.ai. And I think we can get started. So we'll start just with
speaker introductions. Then we'll go through some questions, then community questions.
And then there's a thing at the end for any of you people on the TaskOn campaign where
I give you the secret code, but that only comes right at the very end.
but that only comes right at the very end.
Alrighty, so let's begin.
Would everyone just smash that emoji button, please,
if you can hear my voice clearly?
Okay, I can see some emojis.
I see some regulars in the audience.
I see you active on Twitter.
Peter Joe, Little Wizard. you're everywhere with Little Wizard. Good to see you.
Yeah, by the way, I haven't introduced myself actually. My name's Liam and I, I'm the intern
over at Metis and I also help here with Lazy Eye Content Toon. Rosita, I see you there, welcome.
Rosita, I see you there. Welcome. Jaroslav. Alrighty, let's get started with the speakers.
So we'll start with Daniel. Welcome, Daniel. I'm very familiar with you. You're the head of
marketing over at Laz.ai. Would you like to give a very brief introduction about yourself,
a little bit about Laz.ai, and how are you doing today, sir?
Yeah, of course. Thank you for having, I guess, me, but all of us.
I think we have a great panel here today.
I'm the head of marketing at LAS.AI.
I am also the CCO at Metis.
At LAS.AI, we're building a playable solution
that really does look at AI blockchain
from A to Z in the whole pipeline
of what would be AI in blockchain.
And to do that, we're building a lot of playability.
And by that, we mean interactability,
how you can interact with the system,
with the ecosystem, with basically our chain.
And we're looking to build momentum
all the way up to our mainnet.
And in doing so, bring everybody on board along
for the ride thank you great stuff thank you for joining now i think we have brian novel if i'm
pronouncing it correct perhaps it's brian novell i think speaking from the lagrange twitter account
welcome sir how are you hey i'm doing well and And hi, everybody. Yes. So I'm Brian Novell, head of business development
at Lagrange Labs. And Lagrange is a ZK company, really a cryptography company. And our flagship
product these days, among others, is our DeepProve, which is a ZKML ecosystem that cryptographically verifies AI inference correctness.
So we really add two things on top of AI, which are safety and privacy. And while Web3 is a core
and very important vertical for us, we also permeate well beyond crypto into many other
industries and many other sectors. So a lot of exciting things going on
here, which I'm sure we'll get into, and glad to be here on the panel.
Thank you very much for joining us. Fantastic, perfect guest for today's topic about building
privacy in Web3.ai. All righty, let's introduce Ria, the head of ecosystem at AthenaX. How are you, Ria?
Right. Thank you, Liam. It's first time for me to chat with you guys. Thank you for inviting me
to come here. So I'm head of the ecosystem at Athena. Quick about Athena, we are open data
layer for builders, for researchers and everyone that's using the data.
And we use IPFS, so everything that matters,
it stays accessible, verifiable and owned by the people who can create it.
We have like four products, TechX to build,
ScalaX to publish, AnalyticsX to analyze and LaunchX to scale.
So if I tap into more on the AI side,
all our products, actually we integrate our ai models at the storage level that means everyone can ping ipfs contents while running the inferences
so then we also use we run privacy preserving computations for sensitive sensitive data as well
and then everything that we do all the things that we are ready for the product
and everything, it will result in permanent searchable
and AI enhanced knowledge layer.
And I would like to talk about more later on with you guys
and thank you for having me.
And I think we have our final fourth speaker,
Gary Liu on the stage. Gary Liu is a
co-founder and CEO of Terminal 3. Welcome, Gary. How's it going? Hey, it's good to be here. Thanks
for having me. Yeah, I'm Gary. I'm the CEO of Terminal 3. We are an Asia-headquartered
startup that is building data privacy and security infrastructure. So we sit at the
intersection of decentralized storage for user data so that it remains self-sovereign and secure
and also privacy enhancing technologies like zero knowledge and trusted execution environments that
are used to process that data and provide access to it primarily as cryptographic verifiable credentials, but also via APIs and SDKs to applications.
We power native identity layers for fully scaled blockchains.
We also work on regulatory compliant identity credentials
for a bunch of different digital asset use cases,
which include now things like regulated stable coins
all over the world and real world
assets. For artificial intelligence in particular, we're now starting to power AI agent identity
credentials and permissions credentials so that agent to agent interactions are actually trusted.
So glad to be here, excited for the conversation. Fantastic stuff. Thank you for joining us, Gary. All righty, we can get into
the questions section. I'm going to throw a pinned tweet into the Jumbotron, and speakers,
you are all very welcome to do the same thing. So you can just click a tweet, you click the arrow on the tweet and you can click Add to Spaces.
So I've put one in there and speakers, you're welcome to do that.
You have the authority as speakers.
Alrighty, let's go into it.
I've got two questions first to kick off with and then just general questions to start the
Anyone can answer them, multiple or one person.
And then we'll go into some more focus questions afterwards.
So let's start off. The topic is building privacy and trust in Web3.ai.
So why are we here? Why is this even a topic?
Why is privacy important when building AI systems in Web3?
Who wants to have a crack at it?
Since everyone's quiet, I guess I'll kick things off.
Got to be honest, if we're in crypto, I think the natural tendency,
just the average user persona of the crypto space is we care about privacy.
We care about it a lot more than pretty much the rest of the world, I think.
So if you look at, for example, like other social media platforms
or whatever account you have at whatever services you use,
even Amazon, whatever it may be,
they really do a ton of data harvesting.
But for crypto, if we started doing that, the whole industry would flip out.
But for crypto, if we started doing that,
the whole industry would flip out, right?
So that in itself, I think, is just from the really basic thing of it.
If we're tailoring our services to our customers, then, of course, we need to care about privacy because they care about privacy.
But taking it a step further, because we're always looking to, OK, what's coming up next in AI because it is such an infant, nascent industry.
And it's really kind of hitting its growth strides.
We really have to look at what it could develop into.
And if we don't look at privacy and don't really care about it,
then there could be a bit of a runaway kind of thing
where it's now looking into a lot of the things
that we maybe don't want it to
look into. We're already kind of feeling a bit weird about, for example, like personalized ads
where they take, you know, one drunk Amazon purchase, you know, three Thursdays ago, and it
gives me an ad for something that I don't want to see, right? But imagine if an AI is able to do that, and it's
digging through my history of whatever I did online or even offline. And now it's speaking to
me as if it knows that that happened, right? Even just on a really basic kind of, I guess, on an LLM
usage type of deal. So just that. And then on top of that, you train ai you know it could potentially um look into
basically what you've been doing and then it could tailor its response not just in that it you know
gives that uncanny feeling of it knows right but also if it changes the way it behaves and it for
example tries to cater to my biases or tries to oppose my biases on purpose because, you know, it's
being used for this other thing.
That's also a pretty scary prospect.
So in my eyes, it's those two things that's really driving the privacy focus in the AI
space, sorry, in the AI blockchain space.
Yeah, let me jump in and add to what Daniel just said.
We're already seeing this.
This is not a, oh, it may happen. It is an inevitability unless we start to shift course
and start to care about this as a complete ecosystem.
The amount of information that we put into artificial intelligence agents
and chatbots at this point that feed into these
LRMs is extraordinary. At this point, we're creating a trillion new tokens of information
per year for these training algorithms just by our inputs alone. And the thing is, our inputs
are significantly more invasive than anything that the internet has been able to
collect from us in the past. Of course, what we browse on the internet, our conversations to
friends and whatnot is already invasive enough. But what we're telling ChatGPT is our deepest,
darkest thoughts and things that we won't even tell our loved ones. That's what we're putting in to these models. And so beyond the fact that these models can train on these and start to mimic like
if they know us, there's the fact that once AI models, and especially when the agents start to
grow in scale, when agents become part of our daily life, once they have this information,
and by the way, 75% of agents being built today retain everything that we have put into them as part of their core memory.
Once we give it to agents, we lose all control of it.
What does losing control look like?
Well, number one, these agents, once they really start to know us, they can start to imitate us.
And it's not just because they want to imitate us.
They're not necessarily insidious that way. It's the fact that we're giving them start to imitate us. And it's not just because they want to imitate us, they're not
necessarily insidious that way. It's the fact that we're giving them permission to imitate us. We're
literally asking agents to make automated decisions and complete transactions on our behalf.
And at some point, they're going to lose those boundaries on when they're supposed to stop,
they could lose those boundaries on where they're supposed to stop, and they could truly imitate us to anyone.
And then secondarily, it's the fact that in the future, the future of artificial intelligence agents, and this is not a dotted line, this is a straight line within the next five years,
we're going to start having true autonomous robotics in our homes.
And so it's not just what we're writing into these or submitting in written form,
but it's what they're hearing just by being among us and living with us, existing in our homes.
So if we do not care about the foundation of data privacy and data security, if we do not find a
better way of actually passporting and securing data before artificial intelligence agents get to it, before they start trading it with one another, we are going to be left in a situation in a few years that we cannot actually claw back from.
Totally agree, Gary. And I can definitely, I'm one piece of evidence to your point that we tell AI, chatty, but things, everything.
My chatty butty knows me better than my wife does, better than my children do.
I just tell it everything and get feedback on everything.
So, yes, privacy would be certainly an upgrade in my life.
I saw a hand up from Brian. I don't know if you still want to upgrade in my life.
I saw a hand up from Brian.
I don't know if you still wanna chime in, sir?
And very, very good answers from both Daniel and Gary.
I think there is certainly a inherent tension
between the ethos of Web3 and the openness
and transparent nature of it
And similar to how the internet was unlocked
by adding security and privacy,
enabling things like online banking and payment rails
to really take shape and be used pervasively,
the same thing kind of needs to happen in Web3
So, I think Gary did a very good job
also of talking about protecting user data
and all the potential issues that can happen there.
There's also privacy issues with the models
that are being used in the IP.
So AI models themselves are valuable intellectual property.
And without privacy guarantees with respect to the models, competitors could extract
or reverse engineer weights from on-chain inference. And zero-knowledge and privacy-preserving
techniques allow models to run verifiably without revealing those internal structures and natures.
And speaking of verification, which of course is a big part of what Lagrange does,
verification, which of course is a big part of what Lagrange does.
Web3 inherently demands verifiability.
You know, we say don't trust verify and proving correct AI execution,
such as Lagrange's deproved lets anyone confirm that the result is authentic.
But the privacy ensures you can verify correctness without revealing
either the private input data or the full models.
So I think it kind of balances on both sides there.
And I think privacy is also very, very important with respect to AI systems on Web3 when it comes to compliance and regulation.
You know, there's an increasingly expectation from regulators about proving privacy and correctness.
And in those areas, privacy layers help Web3 apps meet compliance obligations without undermining decentralization.
really unlocks a lot of activity among users when privacy is put in place in
ways that they can actually have confidence in using these things. So
mainstream users are not going to adopt AI powered web 3 if their data is
exposed or could be exposed. So privacy actually enables new economic models
where users can monetize their data contributions to AI training and other areas while keeping the raw data secret.
So ultimately, this strengthens incentives for participation in decentralized AI networks.
Well said, well said. I agree. I think you've all answered the next question pretty well, actually. Ria, did you have your hand up there? Or were you just giving a nice emoji?
I will build like to the audience,
think of the current AI system.
It's kind of like renting apartment.
And then you put all your stuff there.
You dump all the things you said that your GPT
maybe knows you more than your wife and kids whatsoever.
And then the landlord owns the building
and can change the rules anytime,
which means they can literally,
they're collecting your data.
They can just get all the information
and then they will also can change API
But then in Web3 AI, the privacy is kind of like owning your own house, like your own
So you have your own control.
I'm not sure if you guys like check the news.
I kind of have like a curious question for you guys.
Like I think the GPT was, there was a news about talking all the conversation you talk
in a lawsuit if you are like on a trial whatsoever. Have you guys heard about it? Like they can just
pull out your data and then to use it in like a lawsuit case, no matter you don't know who
actually used your account to ask questions whatsoever. Have you guys checked about that?
Yeah, so basically, I think the idea was that it's similar to looking through your internet search history or, you know, just your internet history in general.
So if you get called in or if you get arrested for a crime, then they can seize your computer and dig through its history, see what's on it.
So in a similar way, they can access your, for example,
ChatGPT account, because most of us just log automatically into ChatGPT, right?
Whether that's through an app or whether it's through the web browser or whether
if you're using Gemini even, doesn't matter. If it's just there and it's accessible,
then I think they can just, you know, log into it, see all your history, read through it.
asked ChatGPT how to make something very, very illegal,
and OpenAI has put kind of controls against this,
but you can find ways around it.
So for example, you ask it,
or at least one of the early hacks was you ask it, if you were to teach me it, not how do you do this, right?
So if you were to teach me, or if you were to, what was it, find evidence of someone doing this, what are the signs that you would look for, that kind of stuff, right?
that you would look for, that kind of stuff, right?
So you ask it in a roundabout way, and it'll tell you.
So all of those things can be used against you
because it's just on your computer, right?
Well, history is not actually on your computer,
but it's accessible through your computer.
Yeah, I quote tweeted an article about that recently.
I mean, how many of us would go to jail?
Because of what we're told is that AI, I think I might be at some risk.
I'm going to have to use that tip down to just mention.
I'm going to have to talk about things in more roundabout ways.
Anyway, OK, I think we've answered the second question here,
which is about trust issues
with big tech AI. So let's dive into the focused questions. Next, let's start with Daniel again
and ask, so with LAS AI and the DAT innovation, which is data anchoring tokens, how can that support a privacy-first AI landscape?
So DAT, just to give a quick overview, it's a new token standard.
You can think of it similar to an NFT in that it's unique and it signifies ownership,
but it also has a time vector baked into it so that you can track its evolution.
It has all of these other kind of factors built into it.
So for example, what category of AI asset is it?
It encapsulates an AI asset.
And the important part and most relevant to this conversation is that you can also set things like privacy rights through the IDAO that surrounds the DAT.
So the IDAO is the individual centric DAO that looks at how to manage that DAT.
So, for example, if you have a data set and you clean that data set one way or another, but you can also look at the privacy rights
surrounding that data set.
You can set usage rights to it as well.
And on top of that, just for the DAT itself, just on a really basic principle,
it by nature has verified computing built into it.
And along the way, you can also use your knowledge proofs, right?
Where you basically are, what is it?
You're not showing the thing,
but you're verifying the thing. So it's a fantastic way to keep things simple. And going
back to what Brian said, this, I think, is a great revolution for the crypto space when considering
the dichotomy between transparency that we all hold dear to our hearts, but also privacy, which we
also hold dear to our hearts. And the nature of it, it's just completely opposing, right?
How can you be transparent, but also secretive? It doesn't make sense. But with zero knowledge
proofs, that absolutely can happen. And with that technology that, you know,
especially ZKEM is building and will be using,
it really does allow for private,
but verified data sets, models, inferences, what have you.
Excuse me, I have a cold.
Cool, cool. Makes total sense.
You mentioned this earlier about Lagrange's DeepProve,
how it verifies AI inferences with ZK proofs.
Would you like to go into a bit more detail about how exactly that works?
Yeah, sure. I'd be happy to. So DeepProve is a system designed to bring zero-knowledge proofs
into the world of AI inference. And ultimately, the problems, or at least one of them, is that
AI operates in a black box. So it's hard to tell, for example, if the correct model is being used,
So it's hard to tell, for example, if the correct model is being used, the weights of the model, how the model was trained.
If someone changes the model or alters the weights, it could also create bias.
So ultimately, there's very little, if any, guarantee of safety and privacy.
So AI models produce outputs, inferences, as we know, but there's usually no way for an external party to trust those outputs other than perhaps directly rerunning the model. And of course, for large
models, that's computationally expensive or often infeasible, and it may involve private
data that can't be shared. And in many cases, regulators, dApps, enterprises, and other
users may very well want to guarantee that the
inference came from a specific model, that the model executed correctly, and that the private
input data remains private and hidden. So DeepProof's approach is DeepProof provides
cryptographic guarantees of AI execution using zero-knowledge proofs.
And DeepProof is really predicated on four principles.
The first of which is privacy.
So input data is never revealed,
only the proof of correct inference.
which it ensures that AI models were not tampered with,
misspecified or run on alternative datasets.
The third is reproducibility, in that outputs can be re-verified without needing the raw consumer
and input data. And the fourth is auditability, where the proofs create a permanent compliance
trail that stands up in areas of enforcement, policy, and even judicial review.
So ultimately, DeepProof uses zero-knowledge proofs to prove that AI made the correct decision
without revealing how it did it or what data was used. So in plain terms, an AI model can make a
decision or a prediction like classifying an image or analyzing data and this happens off chain and while this is happening the system turns the ai's process
into that math puzzle that can later be verified so the actual ai model in the input data are kept
private but the system creates a mathematical proof that shows the ai follows it rule followed
its rules correctly so the result is you get a proof that says yes AI follows it rule followed its rules correctly.
So the result is you get a proof that says, yes, this output came from this specific model using this specific input or the alternative.
And then anyone like a company, a smart contract, a regulator, they can come in
and check this proof super quickly in just a few milliseconds.
mathematic proof be a hundred percent sure that the ai did what it claimed to do without needing
access to that underlying data or the model so what are the advantages of this of course it's
privacy preserving in that the inputs and the full model weights don't need to be disclosed.
Also, it's scalable in that checking the proof is a far cheaper alternative than rerunning the model, even if that's possible.
And any stakeholder, especially when we get into those areas of legality, regulations, that side of the coin,
and either on or off chain, can trust that the AI output is authentic without
having blind faith. And lastly, it's composable. So the proofs can be integrated into things like
smart contracts or cross-system workflows. And that has an infinite number of use cases in in Web3, DeFi, also compliance, gaming, enterprise AI.
So ultimately, D-Proof turns AI inferences
into provable computation.
And that output is not just trust us,
but it's backed by a cryptographic ZK proof
that can be independently verified.
Cool, cool. Love it. Thank you for Cool, cool.
Thank you for that, Brian.
Also, Brian, anyone on the stage, feel free to throw something in the Jumbotron.
Any announcement or explainer that could value people, you can throw up there.
And let's check the time.
Technically halfway through.
We might run over a bit, though.
All right, let's ask Gary a question.
So I noticed, Gary, that the pinned tweet of Terminal 3 talks about the average data breach costing around $4.5 million.
So I suppose you would argue there's a very clear business case for web-free privacy in addition to the
more philosophical case. Would you like to elaborate on that? Yeah, sure. And I want to be
clear. That number is, if you take all data breaches that happen on an annual basis,
then that's the average cost of the companies that are breached. And that is in a web two world, and certainly a world before
AI agents run free with corporate data and with consumer data. So I imagine that that number is
going to explode as AI agents continue to scale the deployment of artificial intelligence,
especially at the corporate level grows. So yes, there is a very clear commercial case
for using these new technologies
that we are lucky enough to be building with,
whether it's decentralized tech or privacy enhancing tech
to actually properly secure data
in a way that we've never been able to do before.
and I didn't mean to scare everyone on the call
these issues are about to get significantly worse.
I love what Brian was saying, and actually, Brian,
I'm going to have to ping you afterwards and learn more about what that entire inference model actually looks like.
But I love a community like this because what Terminal 3 is doing is very complementary to what Brian just explained that they were doing.
For us, what we're trying to solve for are three major problems with the deployment of AI as agents.
The first is trust and identity. The second is permissions.
And then the third is actually the execution layer and the use of private data and execution,
sensitive private data and execution.
Let me talk through each of these very, very quickly.
On the trust piece, it's very easy.
Brian also mentioned it earlier.
which is we need to know that the AI agents we're talking to
are actually the ones we're supposed to be talking to.
Agents are already showing that they're getting very,
very good at pretending to be either another agent, which they're not, or actually a human
being. And it's becoming harder and harder to tell if you're actually speaking to the right
counterparty. And so we actually need a hierarchical ID system that is fully verifiable
to know not only what the underlying model is, so you can trust the inference, but the actual AI agent that you're talking to,
the operator that is deploying the AI agent and making sure that it is a correct authorized operator.
And then, of course, as a user, you need to have your own identity in this entire ecosystem.
So DIDs are a great tool to serve that purpose.
And that's our first solution.
The second is permissions.
And again, even if it is the right agent,
we don't always know if this agent has the correct permissions
to perform the transaction or make the decision
that they're trying to do right now.
Remember, permissions are programmable.
They're supposed to be programmable in a decentralized world. And so AI agents and their permissions are programmable. They're supposed to be programmable in a
decentralized world. And so AI agents and their permissions are not static. Once they receive
the permission, it doesn't mean they have permissions forever. The permissions could
be changing every single second. And so you need a tool, you need some kind of asset that allows
you to prove verifiably the permission is correct at the moment of transaction or the moment of decision and we also have a new technology uh well actually not a new technology
but a newly deployed technology called verify verifiable credentials especially zk credentials
that are a perfect uh tool to provide for programmable commission permissions to ai agents
and then finally on the execution, please, the same way
that Brian was talking about making sure that data is private going into the model, and then you get
the ZK proof out with the result, there are certain things that are sensitive transactional details
that we should never hand to an AI agent, even if they're the ones who end up making the transaction.
For instance, payment information. If you're a consumer and you need an agent to book a hotel or plane ticket for you, you should not be
giving that AI agent your passport number and your credit card number, even though those two pieces
of information are critical to actually book your ticket. For enterprise, it's the same thing. There
are a bunch of data for enterprise-to-enterprise or B2B transactions that you should never hand to an autonomous agent that can eventually do nefarious things with it or trade it without permissions.
You actually need a decentralized trustless layer that can passport this data and then be called to give that data directly to a transaction platform.
So AI agents can make decisions for you.
They can go to a transaction platform and say, here's the booking that we need. And then transaction platforms can separately,
independently call for the necessary sensitive information to complete transactions.
Those three things together makes up what Terminal 3 calls agent auth, and it is our data privacy and
security layer that helps AI agents be able to perform automated tasks securely.
So I think there's a massive commercial case for deploying this technology as natively as you deploy AI agents.
And yeah, that's what we hope to serve, how we hope to serve the community.
Thanks very much, Gary. Good, detailed insight.
All right, let's have a question for Ria, which is about, so Ria, you're from Athena X. I know, I think it was Nabiha, one of our
team members, is one of the judges on your HackX Hackathon. So good luck with that. And
when you ensure data permanence and accessibility
across your ecosystem, you're using HackX,
storing hackathon submissions,
ScholarX, managing research papers,
AnalyticX, processing massive datasets.
How do you prevent data loss
while maintaining decentralization?
Yeah, I had such a nice chat with Natalia, right?
When we were talking during the open ceremony,
It's really rare to see females in this industry as well.
We were the rare females that's talking on the show.
But anyway, back to your questions.
So yeah, about the data permanence,
it's actually really fundamental to our visions at AthenaX
because we're building the world,
the first truly decentralized AI-driven knowledge infrastructure.
If I would talk a little bit about the four core products
So right now, the hackathon,
it's launching our first product, HackX. launching. So right now the hackathon is launching
our first product, HackX. It's the first open source hackathon platform, then where all the
projects, teams, information, demos, everything, all the data whatsoever that's going to live on
IPFS. And then for the scholar access to publish, where all the publishing researches data or the thing with peer
reviews or everything that recording according to the researchers and and the
data it will be stores also permanently on chain and then another one is
analytics is to analyze so all the Twitter performances all the data
analyzing the tools gonna track in all the metrics and it's going to also preserve permanently on chain.
And the last one is LaunchX.
It will be also transparent.
It's a transparent accelerator where all the application, milestone, feedback, whatsoever.
The data is not, you cannot manipulate the data.
Everything is also audibility to YPFS as well.
So this is all the four core products that we're doing so
hackaxe is launching as well then uh for answer your question actually um how to um maintaining
the decentralization while just ensure the data permanence right then um basically we have like
uh integrated our ai model uh within the storage level it's uh we integrated our AI model within the storage level.
We integrated the AI model at the storage level, which means, for example, we have a
multi-layer storage architecture.
So the first one is the hot layer, IPFS.
So all the active data starts there.
Either HackX, the hackathon, or research paper, whatsoever, everything that I mentioned above.
The IPFS gives us the content addressing and peer-to-peer distribution.
And then all the data information have a CID, you can pin it, and then it's permanent.
Even if Athena, whatsoever, we're not running, but the access, the data is still there.
Users can get it. It's permanent on chain.
So another one is the persistent layer
so we built our own paying service that maintains redundant copies of course
geographically distributed nodes so every piece of the content gets pinned at least five nodes
with the automatic health checks and re-ping if the nodes drop and it just like keep it running
it's gonna maintain the decentralization and permanence.
And then the last one is archive layer, the Filecoin.
We use it to have the critical data gets back up to Filecoin for long term storage with
the cryptographic proofs.
So we store all the Scala X papers, winning Hack X projects or historical analytics X data
I mean, overall ensure like so the AI models
we integrated in the storage level that
will help the discovery and speed.
In general, I'll just keep it short.
So everything that we run,
we leverage the AI model to help to keep
the privacy and also the data to be permanent,
decentralized. Thank you, Liam.
That was a clap for you, Ria, also for Daniel, Gary and our friend Brian Novel.
So we're about halfway through the questions. Let me give a few shout outs to some people in the audience and then we'll go through
another question for the speakers each and a few community questions and then wrap up let's give
some love to han in the audience han is one of our forum guild leaders uh shout out to norbert
norbert i think norbert's been with metis for four or five years since day one, helping out with the moderation. Big shout out Norbert.
Gustavo. Who else have I not mentioned yet? Some familiar faces. Ricardo.
OX Nano. Yep, good to see you all. Okay, let's go on to another question for Mr. Daniel.
Daniel, tell us about, the thing is here, Web3.ai privacy can sound heavy. Like, I think we all care about it, we mentioned already, like we all care about it we mentioned already like we all care about it a lot and like the kind of crypto cypherpunk spirit cares about it however when we're talking to a
audience uh how can ai agents like lesbubus for example make this topic fun and engaging so users
actually care well it's um you're right it is a quite a heavy topic and exactly like gary said
um it can't be quite scary once you start kind of uh game theorying out all the different
possibilities and what's going to happen to the industry uh but like you said um as long as we
kind of keep within the guidelines and we make sure that what we're doing is you know pushing things in the right direction it can be fantastically fun right or
at least very engaging so for example if you look at last boo boo um it's i would say it is a very
good uh demonstration of the capabilities of the dat so one of the benefits of dat DAT. So one of the benefits of DAT, it's a unique thing in the industry,
is that it can track dynamic data sets
rather than having a very static data set.
Whether that's a data set or an AI model,
it doesn't really matter.
It can evolve over time, right?
So for us, Las Boo Boo is, like I said,
trying to get the community engaged
and take part in the internal economic flywheel
is what we're calling it.
Trying to get the community to take part in that
all the way up to mainnet
so that everybody can actually drive value,
capture value along the way
just having a big airdrop at the end.
So for us, the Laz boo-boo encapsulates the benefits of the DAT and that it evolves with
you over time. So you can think of it like a companion agent where, you know, you talk to it,
you go on adventures with it. You might be able to do some other things in the future,
but at the end of the day, it can evolve with you.
You could teach it things.
So it's almost like one of those, remember Tamagotchis?
I used to love those things.
But it's like a Tamagotchi.
We've heard about this pitch before.
It's like an AI Tamagotchi.
And by doing that, you know, and of course, it also has different rarities.
And depending on the rarities and depending on
the rarities um it gives different amount points in the uh ecosystem so we have last ai points
last pad points all these different ways for users to get engaged and that i think um is what
at least right now is kind of the immediate okay this is a fun way to get engaged right because
once we're talking about actual like composability of ai agents and these you know is kind of the immediate, okay, this is a fun way to get engaged, right? Because
once we're talking about actual like composability of AI agents and these, you know, massive kind of
overarching solutions that uses, you know, let's say task-specific models chained together to create
a whole solution, that's kind of quite a bit in the future and it's going to take a while for
some of it to really kind of play out. But at least in the near term, this is, of course, a great way to get involved.
It's an easy way to understand what the value of AI plus blockchain can be.
And, of course, with it, you know, we also ensure privacy.
Things like your chat history with the agent is kept off-chain and private so that, you know, whatever you ask it isn't really public and on-chain, right?
Because that'd be pretty scary too.
Yeah, it makes total sense.
It's almost like a Trojan horse, right?
You get people on-chain with something a bit silly and a bit fun, then they start learning these great tools that are great for freedom and privacy and everything else.
All righty, let's ask Brian next.
Another question for Brian about, we've spoken a lot about the technology and different users.
Who do you think will be the first real users of verifiable AI?
Web3 people, enterprises, governments, or another group? Yeah, it's a great question. And really,
the first real adopters of verifiable AI will likely emerge where the pain of blind trust is
the highest, the demand for privacy is the loudest, and the need for verification is also immediate, right?
So broadly speaking, one way that I tend to conceptualize
and identify use cases is bifurcated along the lines
of internal and external in terms of who are the beneficiaries
So on the internal side, when a company, for example,
might be using AI and the verifiable AI for its own
purposes, this would be for its own security in many cases. So, for example, protecting against
a rogue internal actor or another example could be the military that might be using AI to maneuver
drones in certain ways, wanting to verify, of course, that that was not
tampered with at any point in the process. And then on the external side, this is where it could
be a company, a project, a government wanting to protect its users, its consumers, and or its
constituents. And this also breaks down in a few ways on the external side, because
it could just be one of those entities providing benefits to those groups and users
to do the right thing. It also could be required by law or rule or regulation. And I think we're
going to probably see an increasing amount of that in the future. And finally, on that point, I think that there's also,
I know there is a growing narrative around actually getting ahead of those regulations and
getting ahead of doing the right thing and wanting to be a first mover and actually seeing this as a
great marketing tool. A lot of the conversations that I have, folks say, we want to position ourselves with Lagrange as a leader in the space of getting ahead of the requirement or the growing demand of verifying AI.
Because quite frankly, a lot of end users out there who might not be very familiar with AI, they don't really even know about the concept of verifiable AI and the potential pitfalls of
giving up their user data and others. So a lot of the counterparties that I speak with,
we talk about positioning the use of Lagrange and DeepRoof as an incredible marketing tool to be a
first mover in that growing narrative. So to the kind of order I see things going, I think AI verifiably becomes
much more important in areas where the stakes are higher. And this means generally areas like health,
safety, defense, and certainly high stakes financial activity, both within and outside of
Web3. And in terms of adoption, I think it ultimately comes down to economic
motivation. So I think the first wave will likely be Web3 users and developers and projects,
as we've seen, because we already have that culture of, again, don't trust, but verify.
And we have the developers who are already comfortable with proofs, gas costs, and on-chain verification, things like that.
And there is certainly plenty of use cases.
So, you know, dApps proving fraud detection, risk scoring, or recommending logic without revealing low inputs.
Also, things like on-chain games ensuring that AI opponents play fairly.
We have DeFi protocols proving compliance
or credit risk checks without leaking sensitive data.
And ultimately it fits the Web3 ethos
and plugs right into smart contracts
where verification costs matters.
I think, you know, in other ways
we are seeing a lot of adoption and interest
And I think that'll kind of be
the second giant wave of this movement, especially in regulated sectors. So banks, insurers,
healthcare firms, they all face regulatory scrutiny about the explainability, fairness,
and compliance of AI. And again, I think this will likely increase more and more as time goes on and AI becomes
So here we have use cases like, for example, proving AI-based AML and KYC checks having
Things like auditable credit scoring without leaking client data.
We have healthcare AI verifications.
So, you know, for example, we work with a major US
hospital that wants verifiability for diagnostic models, and we're able to provide that. So I think
enterprise will likely be the second wave. And of course, right behind them and in parallel with
them are the governments and the regulators. And there they must trust but verify AI and critical systems. So like I mentioned,
things that are of the highest stakes like military defense, things like elections,
things like immigration policy. Here, proofs let them demand guarantees without the source code
disclosure and the underlying data being revealed. So ultimately, again, where the stakes are high, this becomes an absolute necessity sooner than
later. And then I suppose the fourth wave, and perhaps if we think of them this way,
the indirect beneficiaries of this will be everybody, literally everybody is, and consumers,
users of these platforms and projects, people who are buying
goods and using services from companies, and everybody who are constituents of government
and other related entities.
So the consumers won't be generating proofs themselves, but they will benefit when platforms
And this could be everything from a social app proving
moderation rules were applied fairly, a ride-sharing AI company proving that pricing was not
discriminatory, could be a fintech app proving that an AI loan and the decisions around it were
unbiased, and so on and so forth. So again, ultimately, verifiable AI matters to everybody, but the overall
importance of it matters the most where the stakes are highest in those areas of safety,
health, and financial. And ultimately, we're going to see even more adoption as time goes on,
as the power of AI increases, the need for verifiable AI will increase as well.
Brian, thank you very much.
I'm glad to hear you think that we are early.
We're the Web3 users and we'll be the first ones.
But I think I also agree with,
and you see this across the board in crypto institutions,
enterprises are now getting more involved, particularly in America, particularly
in the Middle East. And yeah, that makes sense for it also to be, for them to get very much
involved in the high risk, the high stakes industries like health, safety, finance, etc.
Very cool. Moving on to Gary. So we're all building Web3 AI privacy tools. How do you think we can build tools that are the most user friendly?
I feel like that's not as much of a problem as building Web3 tools that are user-friendly.
I think we have a massive user experience problem that still exists in Web3, primarily because the paradigm is so different that the underlying developer toolkits, tool sets that are available for all Web2 developers
And so we are reinventing tools that in the Web2 world
have existed for well over a decade
as we are building our user applications.
And then on top of that, of course,
we have the fragmentation of settlement layers,
whether it's layer ones or layer twos
where applications are built.
And so for users, especially for the mass market consumer,
they don't know the difference.
And it just means it's just complicated trying to figure out
exactly where their asset is.
If you're using one wallet, which chain is it on?
If I'm trying to bring an asset from the chain to another,
how on earth do I do that?
So when it comes to user experience problems,
has it. I think with AI, much less, I mean, AI tools that are available to consumers these days,
whether it be your chat GPTs or the ones that we use in our work life, like the AI note takers and
summary tools and research tools and stuff like that ux is uh is pretty good
but i think that where we are um we are failing is uh because these these tools are are so um
they're they are so easy to use actually and they're so intuitive that we haven't actually taught consumers
about these quandaries that we've been talking about tonight about data privacy and security
at all. There's been very, very little conversation around it and I think part of the problem is that
large AI companies have done such a great job with their marketing, presenting themselves as a social utility, as a tool that democratizes information access, service access to the rest of the world.
were five, six, seven, eight years ago, and not the massively influential, impactful for-profit
company that is changing the world and changing the world from a very capitalist point of view.
And so we've kind of tricked ourselves, we've lulled ourselves into the sense as a mass market,
a consumer marketplace, that these tools are for our good and therefore I can inherently trust it and that
is a problem so I really like what Brian said I think that he is right that when it comes to
the first few ways of adoption the first wave is going to be a small group of people that already
understand the foundations of necessary privacy in the new digital world and of course web3 natives
are that and then the second are enterprises because they have so much to lose when they get it wrong.
I know, William, that didn't directly answer your question, but hopefully in a roundabout way,
I did. I got there. Yeah, it makes total sense. Yeah, it is true. The actual user experience
for AI tools is amazing and that's why
i just tell it everything because it's just it's so easy to do that um even though i'd rather not
tell everything it just makes it so easy for me um yeah and it's probably the blockchain part that is
that has more of the the ux challenges um cool makes And also, it's totally true about, you know, you have
the OpenAI founder, you know, talking very softly and very politely. And yeah, it's just
great marketing that they position themselves as this very non-profit for good, good marketing.
Alrighty, final question for Ria. Then we can move into a couple of questions from the community and wrap things up.
Ria, a final question for you about the intersection of Web3 AI privacy tools and compliance.
It's already been mentioned a little bit already by, I think, all the speakers, but how do your AI models access this distributed
data efficiently and what's your take on the compliance intersection?
I'm going to make this quick.
So Ben, basically I want to introduce a new knowledge point that I actually learned today
as well because I'm not a tech person, but it's called the FHE, fully homomorphic encryption.
So basically it's a technology, it's an advanced encryption method that allows computations to be performed directly on encrypted data without needing to decrypt it first.
needing to decrypt it first. So then this technology ensures the data remains confidential
even while being processed by an untrusted party. So like a cloud provider making a powerful
tool for us to use. So then basically back to the question, this data share, this knowledge
share is quite crucial because this is quite important for the data privacy, the AI and how we can make it efficiency and everything.
So then the short answer is like, I think the compliance definitely will be easier by building the privacy into how we compute and then by keeping the permanent record of what happens.
compute and then by keeping the permanent record of what happens.
And from our side, Athena's side, basically we run privacy preserving computations using
the knowledge point that I just introduced, the fully homomorphic encryption for sensitive data.
So the models can learn without exposing raw inputs.
And then basically everything is content addressed and permanent and you can point to exactly what
was used either papers or code or data set or when like the time time span whatsoever all that and
that makes the audits the audits a matter of checking stable references and just not like
chasing for screenshots for example and then the result of the accountability uh
without over sharing so this using this tag you can actually show um what you did while keeping
the private data private like the data private in for example and then lastly i just want to
point that out like uh in general this is quite crucial for us like i really appreciate all the
speakers here sharing all the values.
It's kind of nerdy, the conversation that we talk about,
all about AI, all about this and that.
Yeah, so our vision here, we wanted to build towards the future knowledge
as the infrastructure, the data layer.
So every piece of data in Athena
that becomes part of the permanent and searchable
and AI enhanced knowledge layer.
And imagine the researchers assessing the hackathon code
from five years ago or like the research paper later on,
the professor who just posted anything,
And the professor, he owns the data
and he can definitely have everything to
do about it just using our product and then the ai models training on a decentralized data set
without privacy concerns and then the knowledge graphs that connect the ideas across disciplines
all without a single point of failure and then for for us, we're aiming by 2026,
we aim to store 100 TB of plus across our network,
processing 1 million plus AI queries daily
and become the default infrastructure
for the decentralized AI applications.
Thank you for your time, Liam, and thank you for everyone.
Ria, do you mind just repeating?
What was the clever phrase you used earlier with the abbreviation?
The PH, what was it again?
It's Fully Homomorphic Encryption.
It's a new, it's a technology term.
Fully Homomorphic Encryption.
I honestly, I Googled it.
I actually Googled how to pronounce it. Homomorphic Encryption, F-H-E. I honestly, I Googled it. I actually Googled how to pronounce it.
A fully homomorphic encryption.
I'm going to use that in a tweet and pretend I'm smart.
We can, thank you very much, Ria.
We can take a couple of community questions. We're past the hour mark, so we'll run through them relatively quickly. And anyone can answer these, multiple people. I'll just throw them out there. I think the people who submitted the questions are going to be in for a little bit of a prize.
a prize. So first question here, I like this one. Privacy, scalability and AI are expensive
to run. Who's really paying for it? The users in high gas or the projects through venture
funding? And if it's the users, what makes this different from Web2 subscription models
we claim to be escaping? What do we think? That's not an easy one who wants to take on that question
i can try to start um i don't have the full answer so hopefully smarter people like the
other speakers can dive in um i think the enterprises have to bear some of the load
we are starting to see i mean we know that large companies around the world now, because of regulations, are paying an arm and a leg on an annual basis for security and to make sure that they're handling data in compliant private ways.
And that ought to be the cost of business for these large companies that have the responsibility of all of this data that we've handed to them and that they frankly monetize.
And in the AI world, it's no different.
They absolutely should be on the hook for this, which is why policies and regulations do matter.
And of course, we don't want policies and regulations that slow down the innovation and the deployment of artificial intelligence.
But we want the informed policies and regulations that allow for the speed of innovation while protecting the end consumer. That does not mean that the cost is passed to us, either as gas fees, if we're talking
about decentralized AI and also verifiable AI, like what Lagrange is doing, or fees that are
passed to us in subscription form, because now we're handling our own data privacy when big
companies are still profiting from our own data.
So enterprises, I think we should start with the cost of this.
We should start with enterprises.
Good answer. Daniel, you want to jump in, sir?
Yeah. Gary, I think you hit the nail on the head.
But yeah, if you look it, you know, hit the nail on the head. Sorry. But yeah, it's, you know, if you look at, for example,
and he talked about, for example,
privacy regulations around the world, right?
So if you look at GDPR compliance, for example,
I think I read a stat somewhere that said that
it was like the Fortune 500 companies or something.
Put together, they spend 10 billion 7 billion around
there just to comply to gdpr compliance right regulations so if you look at it that way yes
there's a lot of money and that is probably somehow passed on to the consumer whether it's
it's probably not a dollar for dollar kind of you know pass through but it's still you know
eats into the profits of these companies and they try to maximize profits.
So then the question is, all right, well, is it wasted money?
And again, to reiterate for all of us here, privacy is probably a significant concern or at least a lot more significant than a lot of the other retail spaces.
or at least a lot more significant than a lot of the other retail spaces.
So if you look at it that way, yes, there is going to be some kind of cost,
and it is going to be borne by a lot of these institutions and even regulatory bodies.
A lot of them, they even give incentives to first adopt kind of new experimental regulations.
of new experimental regulations doesn't happen too often but it does but one way or another it
It doesn't happen too often, but it does.
does have to be a cost of basically business doing business it has to be a cost that's kind of born
by society to make the new products new features and new life improvement kind of products, you know, as with all of internet, basically.
And yeah, that's just on the compliance side, right?
And then if you look at the actual technology, that's kind of ensuring all of these things.
You know, Ray had a great point about FHE.
That is a relatively new tech, yes, but it's also increasingly being used in conjunction with zkps
zero knowledge proofs just because it goes completely you know fairly hand in hand and
it's a match made in heaven and zero knowledge proofs initially were thought to be completely
cost inefficient and we thought that it didn't make sense at all because it's just going to be so computationally expensive.
we're seeing the cost being driven down quite a bit
through improvements in the algorithms,
in the way we handle the systems,
And then if you match that with things like FHE
or new technologies that might come out,
there's going to be some cost
of implementing it, but I would much rather have it than not. That's like saying, you know,
my car has a seatbelt. Yes, the seatbelt adds cost to the car, and it's being passed on to me
for sure, but I'm not going to go without the seatbelt, right? So as much as, you know, hearing
that it might be passed on to the consumer, as much as that might scare people or is not really, it doesn't make me happy to hear that as well.
But the cost is going to be minimal.
And if businesses see that the cost just doesn't make sense, they'll find a different way to make it happen if it's an important enough thing.
Cool, makes sense. I want to bring in Brian now. Just before I do, before I forget, very quickly: some of you community members are doing the TaskOn campaign and you want that attendance code, which is laztalk4. So lowercase laztalk, l-a-z-t-a-l-k, and the number four, the digit four: laztalk4.
This is being recorded, so people can listen back to that and get that.
Brian, sir, please jump in.
So on the issue of cost, the reality is that there will be a cost, right? To add an additional service and enhance things, there very likely will be a cost, and there will always be some cost that needs to be borne by some party.
And generally speaking, that could be borne by the company or the project or passed on to the
users and the consumers. So in many cases, we do see companies and projects who are burning runway, which is subsidized by VCs and other investors.
Alternatively, these costs could be passed along to the consumers and customers.
So that's kind of the bad news. But in terms of the good news, like Daniel mentioned, the costs are coming down, in some cases dramatically. You know, we've seen at Lagrange, for example, costs that even a year or two ago, for a ZK proof in certain situations, might have been around a dollar; now we're talking about one hundredth of one cent, and it continues to be optimized as we innovate and go forward. So I like to think that over the course of time we collectively will never fully eliminate costs, but they will be of a much less material nature over time.
I think another very interesting point here is that in Web 2, in that scenario of passing the cost onto the customer, that's really just the end of the story.
And it's really done behind the scenes as part of the business model and pricing structures.
But the beauty of Web3 and in well-designed, decentralized AI systems, the users of protocols, for example, may still pay.
But in a very well-designed system, the users can own their own data.
They can leave, perhaps, with their own agent memory preferences. The
users can actually earn upside if their contributions are reused in certain scenarios.
And ultimately, through various mechanisms, users can govern the protocol if they do participate
meaningfully. So I think one of the beautiful aspects of Web3, as we all know, is that the users may still pay and the costs may still be passed on to them, but they're not just customers.
They're actually stakeholders.
And finally, on this point, ultimately, again, there will be a cost.
But I think when it comes to privacy and security, whatever that cost is, it's worth it.
I think that is a key point: even a decent-sized incremental cost would, in many scenarios, ultimately be worth it, because at the end of the day, in a lot of the negative scenarios that could play out, the cost that needs to be paid is far less than the cost of a security breach or the other potential pitfalls and major issues that can happen. So again, that's on the cost side. I think there are a lot of opportunities and
a lot of ways that things will be getting better over time. Great stuff, all speakers. I'll just ask one more question, and then I think we can wrap it
up. And then I'll give speakers one final chance to just give us some closing words,
closing thoughts, and tell us about your projects and what people listening can do. Listeners, please follow all the speakers here.
Okay, one more question from the community, and then we can have final words.
And so this question is, this is a challenging one, but I like challenging questions.
So when AI enters the discussion, control usually shifts back to devs and protocols.
How do we make sure AI doesn't just centralise power again under the guise of community support?
A somewhat cynical question, but people might be thinking it.
So who wants to take this one on? Jump in, Daniel, sir.
So since I am Laz.ai, well, part of Laz.ai, and this is a Laz.ai-hosted event, I'm filling in the silences; I'm speaking because no one else is speaking. But on the actual question: it's vote with your wallet, right?
It really is that simple.
If it seems like these projects are not doing things for the community, people will stop supporting them. And once that happens and you see the pullback and the feedback from the community and all these things, some enterprising individual is going to see that and do their own thing while focusing on the community, or a community gets together and starts something. So at that point, you can just take your wallets and all move over to the new thing, right? At the end of the day, that's basically what's going to need to happen in pretty much any type of decentralization discussion, or new type of governance structure, or whatever it may be.
It really is just about, okay, are you using it?
That's the best you can do.
And if something else is coming up, then great.
And also during that whole journey, please voice
your opinion if you can, because that's the only way to actually get the industry to change if it
can even happen at all, right? Because otherwise, as a business under the traditional business model, obviously they want as much control over their product, over their systems, over their process as possible.
So really, the only thing you can do is essentially boycott.
And it is actually the most powerful tool that anybody can have.
So I love this question because it's accurate.
It is. AI is heavily centralized, and it is going to provide, and already has provided, an immense amount of value and control to the few companies that have been able to centralize what they pretend is a social utility.
And we just have to be eyes wide open. I've already talked about OpenAI and how no one should
ever think about OpenAI ever again as a for-good company. They are a for-profit,
for-shareholders company. And the same goes for NVIDIA, right? The other gigantic company that is central to this AI story.
The chips, the professional AI training chipsets that are necessary, and the cost of those, are why only very, very rich companies can really build effective foundational models.
They don't have to be that expensive.
It's completely a controlled market.
And it's a made-up price, because it's just what people, the largest companies, are willing to pay for it. And yeah, it's inconvenient to criticize these companies, because they have a right, in a free capitalist society, to price things whatever way they want, as long as somebody is willing to pay for them. But then you're talking out of the other side of your mouth; we've got to stop telling the story as if this is a social good.
And to Daniel's point, we as consumers just need to know better. It seems like we haven't learned our lesson from the first few phases of the internet, where in those early days, whether it was Sergey and Larry at Google or Mark at Facebook, they were our heroes. And now it's the Sam Altmans and the Jensen Huangs of the world that are our rock stars and our heroes. For the Web1 and Web2 folks, now we're looking back at them and saying, oh, these may not have been heroes; maybe they were villains all along. And we haven't learned our lesson. We're still looking at the new titans of the industry and their
extraordinary new advances in technology.
I do know and I do appreciate
these advances in technology,
but the way that they're doing it
and the control that they're consolidating
while doing it is something we have to wake up to.
Very good point. Very powerful point.
All right, then. I think we can exit and wrap up with a final word from each speaker.
So let me give you all a clap.
The audience members, feel free to slap the clap emoji for our four fantastic speakers.
And we can wrap up with tell people where to find more about your project, more about you.
Would you like to start Ria from AthenaX?
How can people learn more about you?
I guess, pretty simply, just follow our Twitter and check out what's going on; we're going to post everything on Twitter. And I think this month is going to be big, because we're going to launch a podcast. I think I'm going to invite Metis and Laz.ai as well, alongside AthenaX. We're going to launch a podcast series where we invite top researchers, professors, and founders of projects to talk. I also welcome Gary and Daniel and the speakers here. I'm going to launch the podcast to educate the audience and to actually do something on the culture side: to get to know who you are as a person, why you stepped into the industry, what you're actually doing in the industry, why you do this, and everything correlated with that. It's going to be a 20-to-30-minute podcast.
Yeah, thank you for inviting me, Liam.
It's nice talking to you guys.
And that podcast sounds exciting.
Brian, I'm going to pronounce your surname correctly as we exit.
Brian Novell, head of BD at Lagrange.
Would you like to close us off, sir?
Thank you very much for having me and us here as a speaker on the panel.
And also wanted to say thank you to the other speakers.
A really awesome discussion and definitely happy to be here and having participated.
As far as chatting with us at Lagrange, we'd love that. Here on X, LagrangeDev is our handle.
So absolutely feel free to message there.
And if you want to get in touch with me directly for any BD-related activities, it's Brian Novell, as you see there.
And then the other thing to mention is that we will certainly be at KBW and Token 2049 coming up. So for anybody out there who would want to meet up in person,
I would love to do that while we're out there at those conferences. So thanks again,
and looking forward to talking more. Fantastic. Thanks very much, Brian. And Gary,
co-founder and CEO at Terminal 3, would you like to give us a closing line, sir?
Yeah, thanks to Laz and to Metis for hosting this again.
We'd love to talk to you guys.
Terminal3 is really co-creating the Agent Auth project with other builders in the industry. So if you're working on AI agents or AI platforms, whether for enterprises or consumers, and you want privacy, security, and trust to be native to your platform and your products, please reach out to us. We're an infrastructure partner, so our goal is to provide you with the tools to make the products you sell to enterprises or consumers as safe and secure as possible.
So you can find us on X at Terminal3.io.
Our website is Terminal3.io.
I'd be happy to chat more.
So thank you so much to Laz again.
Thanks very much, Gary.
And finally, Daniel Kwak,
Head of Marketing at Laz.ai,
would you like to give us a final line, sir?
Thank you, everyone, for being here. It always makes me excited when I'm on a panel or AMA, or whatever you want to call it, with really smart people; it makes me feel like I'm smart too. It was fantastic, thank you very much. And also, Ria, we will definitely hit you up for that podcast. And Brian, we are, I think, a title sponsor, is what you call it, for the side event at KBW. Very excited to meet you.
For Laz.ai, please keep an eye on our socials. Like I said, it feels a bit weird, because everyone's talking about these government contracts and all these regulations and heavy topics, and then I'm here showing how users can get involved.
But really our road up to mainnet and our go-to-market
is all about driving that playability, that interaction
that's actually generating value.
So the interaction itself is the value generator,
not some token that we've pulled out of nowhere, or just a one-way street kind of thing, right? So we're going to
have a lot of products coming out. We're going to have launches on our launchpad. So please keep an eye on our socials. We've got a lot of exciting things coming, and mainnet is also coming.
So we'd love to see you all there.
Thank you very much, Daniel.
All right, we will wrap it up there.
Thank you again to all of the speakers.
Thank you to all of the listeners.
This has been LazTalks episode 4,
Building Privacy and Trust in Web3 AI. For those of you listening in for the TaskOn campaign attendance code, that is laztalk4: lowercase laztalk and the digit 4. Fantastic stuff. I look
forward to future episodes and talking to the guests again in the future. And have a great day.
GM, GM, GM.