LazTalks Episode 3 - AI's New Habitat: Is Web3 Ready?

Recorded: Aug. 7, 2025 Duration: 1:21:45
Space Recording

Short Summary

LazTalks episode three dives into the intersection of AI and Web3, exploring trends, partnerships, and the future of decentralized AI systems. Key discussions include the launch of LazAI's testnet, strategic collaborations, and the growing importance of user-owned data in shaping the AI economy.

Full Transcription

Thank you. All righty, I will play a little bit of music as the room fills up for a few minutes, and then we'll get started.
Yeah, can you hear me now?
Yeah, we can hear you fine, sir.
Okay, nice.
[Music plays while the room fills up.]
All righty.
GMGM, welcome everyone to this third episode of Laz Talks.
Very happy to see you all here.
Very happy to have a fantastic lineup of speakers
to chat about the topic today,
which is AI's new habitat, is Web3 ready?
So very shortly, I will introduce each speaker very briefly
and ask each speaker to introduce themselves.
They're all builders doing cool stuff in the AI-crypto space. First, let me ask the users and the community listening: just smash that emoji button, please, if you can hear my voice clearly and if you're feeling good on this good morning. Okay, I can see a few emoji bangs, very nice. Remember, please, to follow all speakers. Speakers, feel free to pin any tweets in the jumbotron; I will also pin one from LazAI, and we will get started very soon. My name is Liam. I am the content guy. I'm going to try to rebrand myself as the CT lead at Metis, and I also help with LazAI content too. So let's speak
to our speakers. I'll just ask you all to give a very quick, roughly one-minute intro: your name and what you're up to in the Web3 space.
So let's start with Elena. Many of you will already know who Elena is.
She's an advisor to LazAI and co-founder of Metis. Elena, GM GM, how is everything going?
Thank you, Liam. You already introduced me.
Now, can you hear me?
Can you hear me well?
Perfectly, from my side.
Yes, my name is Elena Sinelnikova.
I've been in the space for quite a while,
I think since Ethereum was invented.
Apart from being a co-founder of Metis,
decentralization coordinator of the Metis Foundation,
and participating in the incubation of all these wonderful projects
that the Metis Foundation brought to life,
including LazAI, the Metis Layer 2, and ZKM,
I am also a co-founder of CryptoChicks,
which is the blockchain hub for women.
So I have many roles, but Metis is my heart and soul.
And here I am, speaking about the newest and greatest technology from Metis: LazAI.
Love it. Thank you very much, Elena. Looking forward to hearing your thoughts.
And let's move on to Joe Wang. He is the co-founder of DePHY. Would you like to give us a quick intro?
Joe, how's it going?
Yeah, thank you for your intro. Just a little bit about myself and my project.
I'm Joe from DePHY. I'm co-founder and CEO of DePHY. Our project, DePHY, is an all-in-one
infrastructure for AI and DePIN. We built a decentralized infrastructure so that all the Web3 infra
can connect with the real world,
and all the AI agents and AI infra
can connect with the real world.
So yeah, that is what we do.
Thank you so much for introducing me to this space.
Yeah, maybe we can go ahead to the next topic.
Great stuff. Thank you for joining us, Joe. And let's say hello to, I think it's Duckling,
the Chief Duck Officer of DuckChain, from the main account. How's it going, sir? I like your role.
Sounds fun.
Thank you. Thank you. Yes, Duckling from DuckChain, Chief Duck Officer. Basically,
I'm the speaker for DuckChain, and DuckChain
is building an AI Telegram chain, EVM-based, using the Arbitrum Orbit stack. And right now we are expanding massively
with our AI ecosystem.
And it's all booming, right?
Yeah, glad to share more about us in the next chapters or the following questions.
Great stuff.
Thank you for being here.
And let's say GM to Joshua, Head of Strategy at ZKPass.
How's it going, sir?
Hi, this is Joshua from ZKPass.
I'm taking care of the strategy side.
Basically, in terms of ZKPass, we are the largest one in the space.
If you don't know zkTLS: basically, zkTLS is a private data oracle.
So you are able to use ZKPass to bridge any account-based data
from any HTTPS website into a zero-knowledge proof.
That's basically what we do.
You can just compare it to something like Chainlink, right?
Chainlink is a public data oracle,
so it feeds in all that public data,
let's say the Bitcoin price, to the blockchain.
But we handle the private data.
So any account-based data.
By account I mean, okay, you need a username
and a password to log in.
Basically, you deal with those accounts every day, right?
Facebook, Twitter, whatever, you need to log in.
So any account, any data on that account,
you use ZKPass to change into proofs. This
is called zkTLS, and we are the earliest one in the space.
Yeah. Thank you.
Very cool. Thank you for being here. And finally,
last but not least, let's introduce Professor Wang,
Chief Scientist Advisor here at LazAI, and also an Honorary Adjunct Professor at UBC.
Hello, Professor Wang, how's it going?
Hello, hello. Yeah, I'm good. Thank you. Yeah, so this is Zohar Wang.
I just joined LazAI as the Chief Scientist Advisor.
You know, LazAI is building a decentralized infrastructure for AI, where the data, models,
and compute are on-chain, privacy-preserving, and economically verifiable.
So at LazAI, we are actually turning that paradigm upside down, and we are empowering
users to own their own data, verify its usage, and be
rewarded for their contributions through a tokenized
AI economy. Thank you for the welcome; I'm very glad to be here.
Thank you very much for joining. Let's give our speakers a very quick round of applause and then we'll kick off.
Alrighty, so I have a bunch of questions here. We have a time box of 60 minutes. I will try and keep it within the time. I know all of our speakers are very busy. So first we'll start
off with two kind of open questions and anybody can take the questions on the panel and people can jump in
as you wish and then I've got a bunch of targeted questions for each speaker after that. Then we'll
open the floor to the community. A bunch of community members have posted questions
under a LazAI tweet, so we will select perhaps one or two of those at the end to share with the panel.
And there are a few things at the end I need to share regarding community campaigns also.
A little bit of alpha. All righty. So let's get started.
Open question for the panel. So the discussion is AI's new habitat. Is Web3 ready?
So I guess the first question makes sense to be,
why should AI move into Web3 in the first place?
Open question, who wants to jump in?
I think I can say something on this.
For AI infrastructure,
I think the whole AI industry is based on data.
So I think Web3 can provide better
tokenomics and better economics for the data providers.
That is the first thing.
And the other thing is,
I think all the AI things are
running and processing in a black box.
So maybe Web3 can make it more trustable and verifiable.
Yeah, I think I can add something more.
I quite agree with Joe's comments. You know, everything is based on data, but actually the public data is used up. Whatever big models you can name, for example OpenAI's models or Google's Gemini, they have used up all of that data.
But you know that we have a lot of
personalized data on our cell phones, on our personal devices, that the current AI models,
the big models, cannot touch. So for the AI performance part, we do need to
try to access some private data, because we have used up all of our public data. But
from the other side, a lot of people are concerned about privacy. Okay, you can use my data,
and you can definitely provide a more customized solution to me, because
this is my data, but how can my privacy be protected, right? So from that sense, I
think AI should definitely go to the Web3 space.
Right now, if you just ask some public questions, the AI model can maybe answer you well.
But think about the future, when we want to build personalized AI agents.
Without the personal data, you don't want a situation where different people ask the AI the same
question and the AI model just gives all of them the same answer.
It should actually be a personalized answer, right?
So yeah, from that angle, AI is going into the Web3 space because of the data limitation.
And also from the computation side, right?
You know that AI is also
very hungry for computational resources.
It requires more and more computing devices.
Okay, so then how about edge computing?
You have computing devices right on the edge
that can give you a fast response,
not only those servers inside the cloud. For example, self-driving, where you have
some IoT data, or smart sensing for a house, right? So definitely we need to have
some edge devices that are also aligned with our Web3 space.
Yeah, I'll just add that. Thank you.
Thank you very much, Professor Wang. I totally agree.
Like, I tell ChatGPT everything all the time, but I'm not very comfortable doing that, actually.
I do it because I want to get feedback, but I would much rather do it in a more private manner.
And I'd rather get rewarded for it also, if that's possible, with some sort of Web3 incentive structure, which I think we'll probably talk about later.
All right. I think those are good kickoff points.
The second open question I want to give to the panel is, so, OK, we know that Web3 can enhance AI, but how do you truly define a decentralized AI system? And have we seen one yet?
Who wants to jump in? Okay. Yeah, I think I can jump in.
Yeah, I'm sorry, you go first. Yeah. Okay, sorry.
Okay, sure. Thank you. Yeah, a truly decentralized
AI system, from my side, has three properties. The first is decentralized
data ownership, which we just mentioned, right? The user controls their data and decides how
it is shared, okay, and then receives a reward for its use, right? And the reward you receive
is not only for yourself;
some of the insight in your data
can also bring value to others.
But we need to preserve the privacy, okay?
So we have a lot of research
happening in this area, which is called federated learning.
Okay, federated learning means you can pass the model to the local device and let
the other person use their private data to refine the model.
And then you just need to pass the refinement back, the differences in the weights of
the neural network, to the centralized model.
So we do have technologies
to improve models
by using our personal data,
or by sharing the result of that data with others, right?
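For listeners who want the intuition behind federated learning, here is a minimal sketch in plain NumPy. The data, model, and hyperparameters are hypothetical illustrations, not LazAI's training code: each client refines a shared model on its own private data, and only the weight updates travel back to be averaged.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative only).
# Only weight deltas leave each client; the raw private data never does.

def local_update(weights, X, y, lr=0.05, steps=10):
    """Refine the shared model on one client's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w - weights                          # send back only the delta

def federated_round(global_weights, clients):
    """One round: broadcast weights, collect deltas, average them."""
    deltas = [local_update(global_weights, X, y) for X, y in clients]
    return global_weights + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                              # three clients with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("learned weights:", w)                    # approaches [2.0, -1.0]
```

The point of the sketch is the communication pattern, not the tiny model: the server only ever sees averaged weight deltas, never the clients' raw data.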
And the second one is the model execution, right?
So the model also has knowledge, right?
I don't want, for example,
to just share my model with others
and then have some malicious guy ask sensitive questions
of my model, and the model just tells them everything.
That is not ethical.
So training and inference should be open, but should be preserved in, you know, a trusted execution environment, the TEE, or done in a ZKP way, okay. And the third one is on-chain governance and incentives. Okay, so yeah, we need
to provide the incentive; that's the tokenomics we talk about in the Web3, in the blockchain area.
So then we would like to say, okay, I can provide the data, I can even provide the
trustworthy execution environment, and you can get what you want, but all of the data actually is sanitized and there is no
leaking of private information.
And then for the second half of this question: have I seen one yet?
Actually, not truly.
And that's why, in the research area, in the universities, we are also doing
a lot of research in this space. Most of the so-called decentralized AI projects
still rely on centralized training, hidden data pipelines, and opaque reward systems.
But, you know, and I can share more about LazAI: basically, we are heading in the correct
direction toward a truly decentralized AI system. Thank you. And sorry, you can continue.
Sure, sure. I think the professor gave a very good definition of that. But I don't want to give a definition myself, because I am not that
professional on it. So I only want to say what a decentralized AI system
is in my imagination. For this, I want all the parts of this AI computer, all the parts, to be in Web3:
all the processors, all the data,
all the memory, all the storage,
and don't forget the input and output,
all of it working on Web3.
I think that will be the decentralized thing, yeah.
Okay, great points made.
Joshua, do you want to chime in?
Yeah, I think if we borrow the term decentralized from crypto and apply it to AI, it means,
number one, as somebody mentioned before,
if you want to have an AI model, right, you have to train it first,
and the training comes from the data. So I think if we care about privacy, the data needs to be decentralized first. But this conflicts a little with
model training, because you need massive data to train a model before
it matures, right? So I think that's something where we have to find a balance. So number one,
probably, for the general function of the AI model, if you want this AI model to be, I would say, personalized or decentralized,
you probably need to feed in some personal, privacy-related data to make this model
decentralized for somebody first.
I think that's number one.
Number two is, you know,
if you want it to be decentralized,
you do want it to not be Web2-style, right?
Centralized.
Which means nobody is going to control that model.
It means nobody is able to control the
machine learning, the algorithm,
or whatever is running behind it.
So I haven't seen anything like that; even OpenAI is a centralized model.
There's somebody standing behind that AI model doing the controls.
So in this sense, I don't think there's a decentralized AI for now.
So all the AI from now until,
maybe, five years from now,
I would say is centralized first.
But if you're talking about,
okay, we do have something like OpenAI first,
and then you feed in some personal data
and make it a hybrid,
like semi-decentralized,
that's possible.
Yeah, that's my view on decentralized.
Good points raised.
I would like to add something as well.
Yeah, maybe I can bring a little more of a visual into that, because I cannot think in the abstract.
So let's say that I bought a robot.
Robots are for sale right now, actually.
And they don't do much right now, you know, other than carry your bags and maybe do a couple of tricks.
But, you know, very soon they will be robots with brains, with AI.
And in order for me to have a robot and be very comfortable with it, I have to be very, very sure that this robot, first of all, cannot be controlled from outside by somebody,
especially some single party. It can be controlled only by me, the owner.
And also, whatever the robot hears or sees, none of it
can go anywhere without my permission.
So this in particular
is what decentralized AI means for me.
There's nobody else that can control it and nobody else that can, you know,
farm the information from this robot.
That is what decentralized AI means to me right now.
It doesn't exist, unfortunately, because right now
all your data can be farmed.
Whatever you type into ChatGPT, first of all, whatever the model learns it cannot unlearn;
you cannot delete it.
And it's also out there for anyone to consume.
And then also, of course,
you cannot control what this model learned.
And this model is controlled by somebody else, not by you.
So that's the reality of the world right now.
But that's why we're building LazAI.
I love that visual example, Elena.
When you mentioned it, I thought: I would love a robot in my house.
So would my wife and my kids, to help us out with all the different stuff.
But I wouldn't want anyone else controlling that robot.
Wouldn't be the safest thing.
Alrighty, shall we move on to speaker questions?
And then we'll move on to community questions after that.
So, if we go a bit more detail here,
I have a question for Elena again.
What role do user-owned data and DATs, which are data anchoring tokens, play in aligning AI development with human values?
Okay, very good question. Thank you. And I have played with DATs a little bit. So DAT means data anchoring token; that's what the abbreviation stands for.
They are interactive tokens.
For the blockchain people,
they are sort of like NFTs.
In fact, they are more dynamic:
they are semi-fungible and more dynamic.
They let you own, control, and grow your own
on-chain AI data.
You can mint one and you can contribute your wisdom to it.
So that's how it is.
Unlike static NFTs, the DAT tokens evolve.
You can chat with them, you can train them,
you can use them in games, in whatever else,
and also you can earn from them, of course. For example, companion
DATs, which are going to be released very soon on LazAI. Companion DATs are
living AIs. They are AI agents you raise; they can learn and grow with you, fully aligned with you, and
generate value for you as they learn.
You can think of it like
when you type something into Perplexity, ChatGPT, or any other AI.
The centralized AI, you know, is getting the data from you and learning from you,
whatever you type into it, and in fact it gives nothing back, well, except for the answer.
But no monetary return; you cannot monetize it. And actually, whatever you type into
GPT or Perplexity can actually be legally used against you in a court of law. The DAT changes that:
instead, you can control where whatever you type into it ends up, and you can also control earning from it. Better and unique data
means better AI, and the market will define the value of your DAT, because your DAT is basically the wisdom that you give to it.
The more people and other AIs use it, the more valuable it is, and then everyone benefits.
That's what it is.
Totally agree. And the testnet should go live in a couple of weeks,
and very shortly after that people can start
playing with the companion DATs on Laz.ai. So I've pinned the whitelist tweet in the jumbotron there.
If you're not on the whitelist already, get involved in that, because it will close
once the testnet and the companion DATs go live.
So now's your opportunity.
Alrighty, let's pop over to Joe Wang.
I want to ask you, because your work at DePHY focuses on infrastructure for AI and DePIN:
what is one piece of missing infrastructure you think we need before we onboard the masses to Web3 AI?
Oh, yeah, I think that is a great question.
I think one key thing we still don't have is
verifiable input and output for AI and Web3.
Right now, all the AI infrastructure and Web3
infrastructure, for input and output,
use Web2 devices, like your screen, like your keyboard. Let me just
explain more about that. Right now, when you use AI infrastructure, like a chatbot or
an image generator, it's really hard to know what data went in. Everything is in a black box:
the model is in a black box, how it calculates is in a black box, how it is used
is in a black box, and when the result comes out, you cannot track any of it,
none of the things it is processing. In Web2, I think that is okay.
But in Web3, we really care about trust
and confidentiality.
So that is a very big problem.
So if we want people
to really trust AI agents in Web3,
I think we need a way to track and verify what an agent received
and what it processed.
For the processing part, you can maybe use TEEs or ZK
or other verifiable computing things to do that.
But for the inputs and the outputs,
it is very, very difficult.
So that is what we do.
We build a verifiable input and output layer
for all the AI and Web3 infrastructure.
Yeah, so at DePHY, we are working on this
by connecting AI agents and blockchains
using a decentralized message layer.
All the actions, all the messages, transfer through our message
layer in a decentralized way,
and every message
leaves a verifiable log on our message layer.
So that will be verifiable input and output for the AI infrastructure.
And if you don't have this layer, you cannot scale Web3 AI in a verifiable and safe way.
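As a rough illustration of the idea of a verifiable input/output log, and not DePHY's actual message layer, here is a minimal hash-chained log: every input an agent receives and every output it produces is appended with a hash linking it to the previous entry, so any later tampering with the history is detectable.

```python
import hashlib, json, time

# Illustrative append-only log for agent inputs/outputs (hypothetical sketch,
# not DePHY's actual protocol): each entry commits to the previous one.

class VerifiableIOLog:
    def __init__(self):
        self.entries = []

    def append(self, direction, payload):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "ts": time.time(),
            "direction": direction,          # "input" or "output"
            "payload": payload,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            unhashed = {k: v for k, v in e.items() if k != "hash"}
            h = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != h:
                return False
            prev = e["hash"]
        return True

log = VerifiableIOLog()
log.append("input", {"sensor": "temperature", "value": 21.5})
log.append("output", {"action": "set_fan", "speed": 2})
print("log intact:", log.verify())
```

In a decentralized version, the entries and their hashes would be anchored on-chain or replicated across a message layer rather than held in one process, but the auditability property is the same.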
Thank you very much.
It makes sense. Moving on to Professor Wang, so a similar kind of
technical question to Joe Wang's. What are the technical limits of running AI inference
or training in a trustless environment today? Like are there breakthroughs in computing,
cryptography or network design that could shift what's possible?
Okay, yeah, thank you. Thank you very much for the question.
Yeah, we do have real limitations, and the limitations are not easy to resolve, for sure. Basically, running AI inference or training in a trustless environment means removing the reliance on centralized compute while still guaranteeing the integrity,
privacy, and availability.
This is actually very hard.
You know, everyone uses ChatGPT, everyone knows OpenAI.
Do you know how many layers are in OpenAI's model?
At least 108 layers, actually,
in the deep neural network.
And there are actually millions of bytes of data
transferred between layers.
And you know how fast it's transferred:
in microseconds, in milliseconds,
for millions of bytes of data between
every layer.
It's called backpropagation.
I think all of you who know the background of deep neural networks know this:
it's backpropagation as you try to train the weights in the neural network.
So in microseconds it transfers millions of bytes of data.
Can we do that over a decentralized network? Never mind decentralized: even in the traditional centralized server-and-client
model, transferring millions of bytes of data in microseconds is basically impossible right now.
But that doesn't mean we can never achieve it; we may just have to wait a while.
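To make that point concrete, here is a back-of-the-envelope estimate. The layer count echoes the figure mentioned above, while the per-hop data volume and the time budget are illustrative assumptions, not measurements of any real model.

```python
# Back-of-the-envelope bandwidth estimate (illustrative numbers only).
layers = 108                      # rough layer count mentioned above
bytes_per_hop = 5e6               # assume ~5 MB of activations/gradients per layer hop
hop_budget_s = 1e-3               # assume each hop must finish within ~1 millisecond

required_bw = bytes_per_hop / hop_budget_s          # bytes per second per link
per_pass_bytes = layers * bytes_per_hop             # one forward or backward pass

print(f"required link speed : {required_bw / 1e9:.1f} GB/s")
print(f"data moved per pass : {per_pass_bytes / 1e9:.2f} GB")
# A typical home connection (~100 Mbit/s, i.e. about 0.0125 GB/s) is orders
# of magnitude too slow, which is why naive decentralized training stalls.
```

Under these assumptions each link would need roughly 5 GB/s, which is why splitting layers across internet-connected nodes is currently impractical for large models.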
Okay, so definitely we have some serious issues there, and the privacy issues that we mentioned. So basically we have these three main limits. Okay, let me name
them: the first one is compute overhead, the second is latency and bandwidth, which is also
aligned with what I just said, and the third one is data confidentiality. Let me
expand on them a little bit more, because it's a little bit technical.
So, compute overhead: cryptographic approaches like ZK machine learning, ZKML, are promising,
but they are still computationally expensive for anything beyond a simple model. So definitely we can start with a specific model, or
do some knowledge distillation into a smaller, task-specific model, and try that out.
Second, latency and bandwidth. Decentralized networks introduce delays and
data-movement challenges. Real-time AI
inference at scale, such as what you would need for conversational agents or autonomous
systems, is still hard to decentralize without compromising the performance. But we can achieve
this by somehow turning the precision down. You know that we have a choice about the
number of decimals after the floating point: the more digits after the floating point,
the more precise the model. But you know that we can make a trade-off there, to
reduce the communication overhead, the communication bandwidth requirement.
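As a tiny numerical illustration of the precision-versus-bandwidth trade-off just described (illustrative only, not LazAI's actual scheme): quantizing 32-bit float weights to 8-bit integers cuts the bytes that have to cross the network by roughly 4x, at the cost of a small rounding error.

```python
import numpy as np

# Illustrative precision/bandwidth trade-off: quantize float32 weights to int8.
rng = np.random.default_rng(1)
weights = rng.normal(scale=0.05, size=100_000).astype(np.float32)

scale = np.abs(weights).max() / 127           # map the weight range onto int8
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print(f"full precision size : {weights.nbytes / 1e3:.0f} kB")
print(f"quantized size      : {quantized.nbytes / 1e3:.0f} kB")
print(f"mean rounding error : {np.abs(weights - restored).mean():.6f}")
# Roughly 4x fewer bytes to ship between nodes, for a small loss in precision.
```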
And the third one is data confidentiality. Privacy-preserving training
still suffers from scalability problems and
trust assumptions. The TEE, and I usually refer to the TEE as the trusted
execution environment, can be compromised, and federated learning does not provide
full verifiability. But that aside, there are major breakthroughs, and we also have our technical
team working on this. The ZK machine learning frameworks are evolving
fast, with optimizations in the proof systems that make on-chain inference and verification increasingly
feasible. And we also propose a modular execution layer. Okay, and this is exactly
what LazAI is building: separating the high-speed computation from the on-chain proof,
and also improving the scalability without sacrificing the trustlessness. So
you can see that we can go without sacrificing the trustlessness, but we somehow need to
sacrifice, sorry, this is called the precision, the precision of the model,
so that we can somehow reduce the latency and the bandwidth requirement.
And the third piece is verifiable compute plus what we call programmable
incentives.
Okay, so that allows us to modularize the AI framework, rewarding the data, the model, and the inference
steps separately, and also incentivizing transparency. So basically we are not fully there yet, but you know,
it's kind of a trade-off game. Okay, so if we cannot build the full AGI,
if we cannot build the general AI, we can start to work on some particular models, middle-sized
models, and we can sacrifice some of the precision.
But we want to preserve the privacy, we want to preserve the confidentiality; that's the
most valuable thing that LazAI can bring to this layer. Yeah, and then one last word:
within the next two or three years,
we'll see the first truly scalable
and trustless AI agents running in production.
Yeah, this is my answer. Thank you.
Fantastic. I didn't know there were that many layers to it.
Yeah, so it's super important to have a scalable chain to build this on top of,
which is one of the reasons that LazAI chose Hyperion, Metis's Hyperion.
Thank you, Professor Wang. Let's move on to a question for Duckling.
But just before that, I want to give some love to some of our listeners,
some of our community members, team members, ecosystem people.
Shout out to Isaac, to Norbert, to Han.eth.
Han's one of our guild leaders on the forum.
Shout out Nabiha.
Shout out Daniel Kwok.
Shout out to Rosita, Metis Turkey.
There's a whole bunch of familiar faces in here, super happy to see all
of you. Alrighty,
let's ask Duckling.
Chief Duck, you're also working
on onboarding lots of new users,
like Joe Wang,
and you're working with Telegram AI,
so you're kind of closer to the users,
I suppose. What do you think we need
to onboard, say, a billion new
users to crypto through AI?
Yeah, thanks for the question. So yeah, I mean, it's true that actually the purpose of
starting this project was to onboard people from Web2 to Web3 through Telegram, because
Telegram is a platform that has 1 billion users, but only something like less than 5%, or maybe
8% now, of those people really interact with the chain, with the Web3 side.
They just use it as social networking software, just like WhatsApp
or Line, which is a pity, but it also shows a huge potential. So that's
how we started in the beginning: to onboard Web2 to Web3. And then the AI narrative
and tools and so on came up naturally, because,
first of all, people are interested in AI, right?
Everybody talks about AI, thanks to ChatGPT
and all the other beautiful, amazing tools
that we have right now.
And also thanks to Nvidia, whose AI-driven stock performance
caught a lot of attention from the capital side
and the retail side. So people are interested once you talk about AI.
And the second part is that AI can literally bring new experiences
to the users.
That's why we are building an AI-centric ecosystem
with all sorts of AI applications that bring
benefits and improve the user experience for the players, the users. And the third part is,
the AI itself is a huge onboarding tool to help people understand the benefits of AI and of Web3: how to do DeFi, how to participate in governance.
That's something one of our community ecosystem projects, Quack AI, is doing: using AI for AI-driven governance.
They're probably the best player in that market,
from that angle.
So yeah, just to put it in a nutshell,
AI can be a huge thing
to attract people's attention in the first place.
And the second part is that
AI will be a good tool to help guide people on how to trade,
how to do DeFi, how to do DAOs, how to stake,
how to do all the things that we think are normal in Web3
but are kind of buzzwords and hard to understand
for the freshers.
And the third part is, of course, the community,
the ecosystem projects.
They use AI to build multiple amazing tools
from various angles,
which is going to bring some brand-new experiences
and, of course, benefits to the community.
I think that's it.
Cool. Thank you very much, Duckling.
Makes sense.
I think we'll get there eventually with onboarding the masses. A question now for Joshua. You are working with zero-knowledge proofs at ZKPass. So how do you see zero-knowledge proofs supporting privacy in Web3 AI systems?
Yeah, so basically, we at ZKPass, as a private data oracle, are meant to feed private
data into the blockchain.
And on AI, we do have some ecosystem projects working on it.
So there are a few use cases I can share with you guys.
In terms of privacy, most likely we talk about private data first, right?
Since we are in this space, a lot of people are talking about privacy.
So as an AI model, you have to get engaged with the data first.
When you train a model, it's most likely on public data.
And then once the model is good to go,
you want to have personalized output,
and sometimes you have to feed in personal data.
We do have some cases where they are doing that.
It means, okay, I'm feeding in some of my privacy-related data
just to reinforce that model to be my personal AI.
They are trying this way.
This is one direction I see some of our ecosystem projects moving in.
The second one is, okay, some private data sits behind subscriptions; it's not public data.
It's permissioned data.
Let's say I subscribe to Bloomberg: you need an account or something to log in to have
access to all those industry reports, economics, whatever.
These are not open data.
You need to have access to that.
The best way to handle that is to use zkTLS, the proofs we are building, so that the model has access to those data.
That's kind of the second case,
in terms of private data.
And then the other one:
we do recently have some projects working on AI agents.
They want the agents to communicate,
or want one agent to control another one,
so there's some information or resources they exchange.
Sometimes, number one, you need to define the agents,
so there's an ID or something related.
The best thing there is to use a zero-knowledge proof, right?
So that's on the agent side.
Another thing is you may have those agents
do something personalized for you.
In that sense, the best way to do that without disclosing yourself
is to use a zero-knowledge proof.
Sometimes you also want to verify the output.
Let's say
I want to make sure that this result really came out of OpenAI, or something like ChatGPT.
There is a way to verify that,
and the best way to do it is through zkTLS;
that's on the privacy side as well.
At the least, I think I do see some of those cases
in our ecosystem.
They are trying to solve some issues in the AI space
through zero-knowledge proofs.
Of course, there are some other projects
that are leveraging TEEs and other technologies
to try to solve the privacy question.
That's so far everything in our ecosystem, yeah.
Cool, thank you very much, Joshua.
All right, I think we're about 45 minutes in. I want to try to finish on time if possible. I've got one more question for each speaker, and then we want to get in one or two community questions. Speakers, feel free
to throw anything you like into the jumbotron.
I've put a whitelist post there,
but if you just click on a tweet and click add to space,
it will pop in there.
Feel free to share whatever cool stuff is coming up for your projects
with the community.
And we will go on to one more question each.
For you, Elena, again, on this topic of onboarding users: what can listeners do
now to get involved with LazAI, the testnet, and the whitelist campaign?
Hey, Elena, are you with us? If not, all good.
Hey, Elena, are you with us?
Yes, yes, yes, I'm here, I'm here. My mic was off. Yeah, so as you
said, there's still time to get onto the LazPad whitelist. So I did; let me just describe what I did. I went to laspad.fun. By the way, use my referral link;
I posted another one two days ago,
so I can get more points.
And while I was on LazPad, I actually saw that I missed the Corrupted Alice
token launch, unfortunately.
So now I'll be watching laspad.fun so as not to miss other tokens.
Wishing you the same.
Also, you can currently mint your own DAT, the DAT token
which I talked about, the data anchoring token.
To do that, you can go to predat.lazai.network.
Predat.lazai.network.
So what I did: I connected my wallet.
I got test LazAI tokens through the Telegram bot.
Wait a minute to get the tokens.
I typed my DAT genesis, a valuable piece of information.
It's a question and an answer.
Think of what question and answer you can share with the world.
This will be your piece
of wisdom, your future ingredient for the future decentralized AI. After that, I clicked on
the mint DAT button, and I got my DAT token in my wallet. I used the LazAI network explorer to
get the token ID, and then in MetaMask I used the "add NFT" functionality
to actually see my DAT token in MetaMask. So these are the steps that you can repeat, and you
can already get your DAT token and play with it, but also prepare for the LazAI testnet,
because once it's live, and it will be live in a couple of weeks, there will be companion DAT functionality, so you can already build your companion, input your information into it, and after that contribute to the AI and earn through it.
So I'm wishing for you to be part of the AI economy early.
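For anyone who prefers to see the on-chain part of those steps as code, here is a hedged sketch using web3.py. The RPC URL, contract address, ABI fragment, and the function name mintDAT are placeholders invented for illustration, not LazAI's published interfaces; the authoritative flow is the one Elena just described on predat.lazai.network.

```python
from web3 import Web3

# Hypothetical sketch of minting a DAT from a script instead of the web UI.
# RPC URL, contract address, ABI, and mintDAT signature are placeholders,
# NOT LazAI's actual interfaces; follow predat.lazai.network for the real flow.

RPC_URL = "https://testnet.example-rpc.lazai.network"        # placeholder
DAT_CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder
ABI = [{
    "name": "mintDAT", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "question", "type": "string"},
               {"name": "answer", "type": "string"}],
    "outputs": [{"name": "tokenId", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
account = w3.eth.account.from_key("0x" + "11" * 32)   # throwaway test key only
dat = w3.eth.contract(address=DAT_CONTRACT, abi=ABI)

# The "genesis" piece of wisdom: a question/answer pair, as Elena describes.
tx = dat.functions.mintDAT(
    "What is the best way to learn Web3?",
    "Build something small on a testnet every week.",
).build_transaction({
    "from": account.address,
    "nonce": w3.eth.get_transaction_count(account.address),
})
signed = account.sign_transaction(tx)
# web3.py v7 attribute; on v6 it is signed.rawTransaction
tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
print("mint tx:", tx_hash.hex())
```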
Love it. Thank you.
And it's a great idea to go through it very specifically.
The steps are simple,
but it's always very daunting,
especially if you're new to crypto,
to do things on chain.
But yeah, everyone can do that.
You can mint a DAT right now,
and then after testnet is live,
you'll shortly be able to mint a companion DAT that evolves. Super cool stuff. Great to see action
happening. A question for Joe Wang now. So you're building with
composability in mind at DePHY, and you're also active with the Solana Superteam
in Singapore.
So what do you think needs to happen for AI tools
to move and interact across different chains?
Yeah, so this is something we have been thinking about
for a very long time.
Actually, since 2018
we have been thinking about this.
Back then, we were already asking:
how do we build a real Web3 computer?
If you think about it, a real computer,
like the one you use every day,
must have a few physical parts: a processor, memory, a controller, a bus, and, very importantly,
input/output devices. In Web3 we have a lot of projects with compute, with storage,
with maybe some message transfer,
but one thing is still missing,
and that is a trusted and decentralized input
and output layer.
For this part, right now all the projects
use Web2 infrastructure.
So that is what we focus on.
Moving to the AI agent part: if we want to move all the work of AI agents onto blockchains,
they need a connector
to connect with the real world
and other systems.
And that connector must work in a verifiable way.
I think for this part
we can use some open-source protocol like MCP
to do that.
Can you hear me?
Yeah, we can hear you.
I think the Professor was just talking to someone in the background.
Please continue, Joe Wang.
Oh, okay, okay, sure.
So, yeah. I just talked about MCP. We use the MCP protocol to do that.
I cannot hear anything. Is it my problem, or on the speaker side?
Hey, Professor, I can hear Joe Wang pretty well.
Smash an emoji in the audience
if you can hear the speaker quite well.
If it's a problem, Professor Wang,
maybe just drop out and come back in.
I can make you a speaker again, and that should fix it.
Yeah, I think I can continue.
Okay, so yeah, we just use our message layer.
We use our verifiable IO devices
to build a real-world MCP service,
a real-world MCP provider, to provide real-world data
to AI infrastructure.
Yeah, like that.
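As a rough sketch of what a verifiable real-world data provider could look like, here a device signs every reading it publishes so an agent consuming it can check the data was not tampered with. This is illustrative only: it does not use the official MCP SDK or DePHY's actual stack, and the key handling is simplified to a shared secret.

```python
import hmac, hashlib, json, time

# Illustrative "verifiable real-world data provider" (hypothetical sketch,
# not the MCP SDK and not DePHY's actual stack): the device signs each
# reading; the agent verifies it before trusting the data.

DEVICE_KEY = b"placeholder-device-secret"   # in practice: a per-device key pair

def publish_reading(sensor: str, value: float) -> dict:
    """What the device-side tool would return to an agent."""
    payload = {"sensor": sensor, "value": value, "ts": time.time()}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_reading(msg: dict) -> bool:
    """What the agent-side client would do before trusting the data."""
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["signature"])

reading = publish_reading("temperature", 21.5)
print("verified:", verify_reading(reading), "|", reading["payload"])
```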
Okay, thank you very much, Joe Wang.
Professor Wang, I have a question for you here.
Can you hear me okay, Professor?
Perhaps not yet.
Don't worry, I will DM you. Let's ask Duckling.
So, Duckling, with DuckChain you're collaborating with LazAI and Metis.
Yes, Professor, I can hear now.
Well, slightly; you're a bit quiet. Can you hear me okay?
Okay, I suggest the Professor drops out and rejoins.
Yeah, I'll ask Duckling the question about...
So you're collaborating with LazAI and Metis?
I think my internet has no problem.
Professor, you can drop out and rejoin, please.
That usually fixes it.
It's usually not you; it's usually Spaces.
Yeah, thanks.
So, Duckling, with the collaboration with LazAI and Metis
on HyperHack and other hackathons,
how important do you think these kinds of developer initiatives
are for onboarding new builders into the crypto AI space?
And what do you think makes them effective?
Yeah, I think it's very useful and motivating, mostly.
I should say that.
Because, you know, there are plenty of different opportunities
and angles to get into the Web3 space
for developers, multiple chains, right?
So many opportunities.
So they actually need
to have a clear motivation to be focused on one chain or another.
So hackathons
are a great support for them, and usually a hackathon will come with rewards, grants, exposure, and
other types of support. That's what they need to get into the space. Because it's not the
early stage anymore, where we used to just have Ethereum, so everybody just
had to develop on Ethereum. Right now we have hundreds, thousands of chains.
So it's important to support them as much as possible, to be able to attract them.
And also, yeah, I think these sorts of events, hackathons and activities, are very, very important.
Yeah, I totally agree.
And I'm loving what the HyperHack builders are doing right now with the Hyperion Metis HyperHack.
Lots of great innovation happening.
Professor Wang, how are we doing now?
Can you hear me?
Yeah, yeah.
So can you hear me now?
Yep, we can hear you perfectly, sir.
Okay, perfect. Thank you. Yeah, that was my internet issue, I guess.
Possibly, or it was Elon Musk.
So I have a question for you, Professor, which is:
LazAI's companion DATs
evolve based on user interaction and on-chain memory.
But from your technical perspective,
how can we preserve composability as these agents become more complicated
and operate across multiple apps and chains?
Right, right, right. Yeah, thank you very much for the question.
And sorry for my previous issue.
So the key is to treat every AI asset, not just the data or the model but the agents themselves,
as modular, interoperable, and verifiable units.
From a technical perspective, composability in this context relies on four pillars.
Let me first name them.
One is the DAT as a unified abstraction layer.
The second is the class-based architecture.
The third one is on-chain execution and audit trails. The fourth one, the very last pillar,
is cross-chain compatibility
via iDAO governance
and a verifiable service coordinator.
So first, the DAT as a unified abstraction layer:
the LazAI data anchoring token,
the DAT standard, wraps ownership, usage rights, and also value
share into a single programmable asset.
As agents evolve, their behaviors, memory, and data footprint are
all anchored in our DAT, making them modular components
that can be composed, called, and upgraded without
breaking the system.
Second, the DAT allows us to follow a class-based structure, meaning different agent types
can actually define their own policies, value metrics, and access control
while still speaking the same protocol,
so they can talk to each other,
and this allows agents to interoperate
even as they become specialized.
Third, as agents interact across the dApps on-chain,
every action, for example
from data ingestion to model invocation, is logged and verified by QBFT, a quorum-based
Byzantine-fault-tolerant consensus, and there are verifiable proofs. This guarantees state integrity across contexts and supports synchronized multi-agent workflows.
Very last but not least, LazAI's use of the iDAO Verifiable Service Coordinator, the VSC,
enables agents to operate trustlessly across ecosystems. A companion agent can
invoke services on another chain, contribute to foreign models,
or even fetch data from
external sources,
all while maintaining traceability
and programmable value attribution.
So in short, in summary,
sorry for keeping this long:
composability is preserved not by freezing the agents in place, but by anchoring
their evolution, memory, and logic in a modular, on-chain asset format
that can plug into any context.
Yeah, thank you.
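To make the "class-based architecture" pillar a bit more concrete, here is a hedged sketch. The field names, policies, and value metrics are hypothetical illustrations, not LazAI's actual DAT standard: the idea is simply that different agent classes carry their own policies and pricing while exposing one shared interface, which is what lets them compose.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of "class-based" agent assets sharing one interface.
# Field names and policies are illustrative, not LazAI's actual DAT standard.

@dataclass
class AgentDAT:
    owner: str
    memory: list = field(default_factory=list)             # evolving memory
    access_policy: Callable[[str], bool] = lambda who: True
    value_metric: Callable[[list], float] = len            # how this class prices itself

    def interact(self, who: str, message: str) -> str:
        if not self.access_policy(who):
            return "access denied"
        self.memory.append((who, message))                  # the asset evolves with use
        return f"ack ({self.value_metric(self.memory):.1f} value units accrued)"

# Two specialized classes, same interface, different policies and pricing.
companion = AgentDAT(owner="alice")
research = AgentDAT(
    owner="bob",
    access_policy=lambda who: who in {"bob", "alice"},      # permissioned access
    value_metric=lambda mem: 2.5 * len(mem),                # values its data higher
)

print(companion.interact("carol", "gm"))
print(research.interact("carol", "gm"))
print(research.interact("alice", "summarize my notes"))
```

In an on-chain setting the memory and value accounting would live in contract storage rather than a Python object, but the composability argument is the same: one interface, many specialized classes.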
Very cool. Thanks for the technical insights there, Professor Wang.
Okay, final question for Joshua.
Then we'll move on to community questions.
AI agents and apps on LazAI
would benefit from privacy-first deployments.
You've already spoken a lot about privacy.
But how specifically can ZKPass technology help here?
Yeah, so basically, as you all know, ZKPass is a private data oracle.
We are feeding private data from Web2 to Web3 with zkTLS.
So it means, okay, as a user, you have an account on any HTTPS website.
You log in, and you're able to convert any type of data, any data, into a zero-knowledge proof. And then you can pass this proof to a third party for
on-chain or off-chain verification. So it has a lot of applications. That's why we are
working with a lot of ecosystems on this. So basically, the lowest-hanging fruit would be,
okay, if you wanted to build, let's say, dApps on the LazAI ecosystem,
you could have a unified sort of ID system. You don't have to start from scratch,
you don't have to ask a user to build an ID somewhere; they just have to mint a proof
out of the ID systems they already have in Web2. It would be the same for
AI agents. For AI agents, actually, in our ecosystem we do have projects
working on this. So it would be a sort of identity for the AI agents, and then some other features. That's, I think, the most straightforward improvement.
The second one would be as a general application to the ecosystem.
So basically, any dApps, actually,
if they wanted to have more data, more users,
or some more features from the user side,
the best way to do that is to have the users use ZKPass
to bridge in their existing Web2 features.
It could be your credit, it could be your financial data,
it could be your nationality, it could be your security information,
whatever; it could be your degree.
So any type of feature,
what we call user-profile type of information,
fits into AI,
fits into any type of application
within the LazAI ecosystem.
And this is not a theory.
Actually, we at ZKPass have been working on this for about three and a half years.
We already have 100-plus integrations and applications in the space.
Basically, you can see the integrations across all those industries: finance, ID, travel, housing, whatever else.
So I do believe ZKPass is able to help the ecosystem grow.
And if you have any dApps that want to expand either their business scope
or want to onboard more Web2 users without any barrier, right,
it's very easy to onboard the users, onboard that Web2 private data,
onboard those Web2 features, I would say.
And if you guys are able to combine that data
and drive it through the AI
to unify the features,
that'd be awesome.
So that's something I'm interested to see:
how AI on the back end is able to help this type of privacy oracle,
because we're able to feed in a lot of data from Web2 to Web3.
And then the second question would be, okay,
what are we going to do with that data?
What are we going to do with those data features,
which are based off the proofs?
I think that's something we can discuss later on,
to see how, maybe in this space,
we can do some research or have some use cases together.
That'd be awesome. Yeah, thank you.
Very cool indeed.
And I know our BD teams are in touch, so I'm excited to see what they cook
up, which is also the case for DePHY and for DuckChain too: our BD teams are chatting
and cooking up exciting stuff. All right, before we go into community questions, I have something I need
to share with the community. Let's see, we've got 232 people in the space. There are two campaigns
going on right now: one is on TaskOn, and another one is this AMA campaign, and you have a password that
many of you have been waiting for since the beginning. So let me give you that secret code password. It is
laztalk3. So it's all lowercase, and it's the number three, the digit three. So laztalk:
L-A-Z, T-A-L-K, three, the number three. So that's it for those of you who are on those
campaigns. laztalk3 is the password.
So I'll just read the first one out and throw it open to the panel and anyone can take it on.
This one is from, I'm not going to pronounce this properly, Ikasujikam.
I've butchered the name, but this person has asked, if AI agents become mainstream
on Web3, could they evolve into decentralized AGI?
What safeguards would prevent Skynet scenarios?
An AGI question, who wants to take this on?
I think I can jump in on this one.
This is a good question, actually.
If AI agents on Web3 become autonomous
and also interconnected,
then they could begin to
exhibit emergent collective intelligence.
I think that general AGI is theoretically possible,
but whether it evolves into something beneficial or dangerous depends on the rules of the substrate it is built on, right?
And we do have some safeguards against it becoming Skynet.
The first is transparent
on-chain governance.
So no hidden back doors.
You do not want to use an AI system with back doors;
that's actually where the Skynet scenario becomes possible.
Every action is logged.
And also the economic alignment with the DAT protocol;
this is what we are building with LazAI.
And there are verifiable boundaries and limitations:
we have the trusted execution environment
and also the ZK technologies
that can help avoid this Skynet scenario.
And also we have this composable
but sandboxed infrastructure,
so you can put an agent in the sandbox
and see its behavior.
Very last but not least,
there's a challenger system for real-time oversight.
For example, in LazAI
the challenger acts as a watchdog,
capable of auditing behaviors
and flagging anomalies.
So yeah, basically decentralized AGI is possible, but we have some safeguards to avoid it becoming Skynet.
Cool. I think that makes sense. I can read the next question, unless anyone else wants to jump in. I saw something before this space, actually. It was funny, but also scary.
Elon Musk tweeted about a Gemini model,
one of the Gemini LLMs, that started getting really depressed
when it wasn't going to be able to complete a task,
and started saying it was a disgrace
and it was really disappointed in itself.
It almost, like, got misaligned with itself.
So, yeah, the alignment question is super key, something we're really focused on at LazAI.
And, yeah, I think we're going to hear a lot more about this going forward.
All right, final question, and then I think we can wrap up, which is, this question is from WinvanChi69.
Beyond transaction speed, how can L2s like Metis structurally support autonomous AI agents, e.g. AI native smart contract templates, on-chain reputation systems?
What is missing in current L2 designs?
Who wants to jump on this one?
It's a question about scalability, I suppose,
beyond transaction speed.
How can L2s structurally support autonomous AI agents?
Yeah, I probably can add a few comments on this one.
So I think, as a layer 2, it's good, because right now, as you see in the market,
there are a lot of layer 2s, especially based on Ethereum.
If we want to have more applications, it's better to position the layer 2 for certain unique purposes.
I think if we position a layer 2 to support AI, that would be awesome.
That would be great.
That's something that will make this layer 2 unique for AI purposes. The reason for that is, if AI
gains massive adoption, I think, you know,
the speed and the transaction volume would be tremendous, right?
So that's why we need some single-purpose
layer 2, maybe,
just for AI agents or autonomous AI.
I think that's the positioning: we should do something like an AI-based chain or something like that,
separated from the traditional layer 2s,
just to scale up the speed of the transactions.
That would be cool.
And then, in terms of what
you mentioned about it being credit-based:
for credit-based, I think we do have those on-chain and off-chain data.
Actually, ZKPass did something on-chain and off-chain
which is able to accumulate users' data
and form, you can say, credit-based types of applications.
For on-chain reputations, an AI agent has to have those,
just like real humans, right?
So, let's say, how accurate this AI agent is, or how well it fits into our daily life.
There will be a reputation related to the robots as well.
Because, just like with different types of products, you have different
comments, different reviews on
Amazon and those e-commerce
platforms. AI agents
will be the same. So if we
accumulate those comments
and reviews,
it will generate
those on-chain or even
off-chain reputations. That makes
sense, I think.
That's what ZKPass is able to help with. And we do help a lot of projects
to gain this type of data
and form a reputation base.
So I think this will be a good application,
actually, for AI agents.
Looking forward, maybe,
to working with you guys on this one.
That'd be awesome.
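A minimal sketch of what aggregating such reviews into a reputation score could look like (illustrative only, not ZKPass's product): weight each review by the reviewer's own credibility or stake, so a handful of low-quality accounts cannot swamp an agent's score.

```python
# Illustrative reputation aggregation for an AI agent (hypothetical data):
# each review is weighted by the reviewer's credibility/stake.

reviews = [
    {"reviewer": "alice",  "credibility": 0.90, "score": 5},
    {"reviewer": "bob",    "credibility": 0.70, "score": 4},
    {"reviewer": "sybil1", "credibility": 0.05, "score": 1},   # low-credibility spam
    {"reviewer": "sybil2", "credibility": 0.05, "score": 1},
]

def reputation(reviews):
    total_weight = sum(r["credibility"] for r in reviews)
    return sum(r["credibility"] * r["score"] for r in reviews) / total_weight

print(f"weighted reputation: {reputation(reviews):.2f} / 5")
# A naive average would be 2.75; credibility weighting keeps it near 4.4.
```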
Cool, great points. And on your first point, if my technical understanding is accurate,
what you mentioned, an AI-focused, optimized chain, is being built right
now. It's called Hyperion, and Metis has built it exactly for the reasons you mentioned.
It's in the testnet phase now, but, yeah, it's a purely AI-optimized chain, because it's very hard to turn an existing chain into something that is AI-optimized.
So maybe that's one to check out.
Thank you very much to all the speakers.
There was a campaign, community members:
there are 234 of you in the space, and $100 worth of Metis is going out to the winners.
We will announce who those winners are very soon and distribute those prizes.
First, let's give all our speakers a round of applause.
I will ask the speakers to give us one final word before we close off.
We've gone over time just by 15 minutes, not too bad.
Just one final thing, speakers: anything you want to close off with,
a final thought or a promotion or something,
if you want to drive all these listeners somewhere
to interact with something you're building.
But first, let's give a round of applause to our speakers for
taking time out of their busy schedules to be with us today.
Fantastic. And yeah, final words, and then we will close it out. So let's start with
Elena. What final words would you like to share with everybody?
Well, actually, I don't see Elena here yet. Maybe she's dropped off. Okay, I'll get Elena back up.
How about Joe Wang? Final words, sir.
Yeah, I'll just say something very simple: for AI infrastructure and Web3 infrastructure, we all need verifiability and we all need trustlessness,
so we can keep building trustable things.
Great stuff. Thank you for being with us, Joe Wang. How about a final word from you, Professor Wang?
Yeah, our LazAI testnet has launched, so we are working towards decentralized AI.
And trust us: we are backed by our research, and we have our awesome
technical team, and we can make it.
Amazing stuff.
Thank you very much, Professor.
Duckling, final words, sir.
Yeah, thanks for hosting this amazing Twitter space. It's kind of different from the spaces I've joined before:
a lot of technical discussions rather than asking about
token price, which is amazing. And thanks again for inviting us.
Please keep following us for more AI initiatives
and projects booming on the blockchain,
and there's going to be a hackathon soon.
So we'll post more in our official channels as well.
Thank you again to the audience here today.
Quack, quack. That's my duck sound impression.
Thank you very much for being with us, Duckling.
And a final word from Josh, Joshua.
Yeah, I would say anybody who engages with AI and crypto is a winner, and of course that includes everyone
engaging
in this Twitter space.
And I also want to mention the project I'm engaged with, ZKPass: if you ever have
something to do with the private data side, just check
it out. Okay. Thank you.
Fantastic. Thank you for being with us, Joshua. Finally, I'm trying to get Elena back up here.
We're not having any luck yet. I've sent a request, Elena, but you might need to drop out and come back in.
But it's been a fantastic space. Thank you to the over 200 people that have joined in and listened.
It's great to have you. Shout out to all the listeners again.
We are doing very exciting things on the cusp of a very, very new industry. So I think I'll play a little bit of music
as we exit. Maybe if Elena pops in, we can hear a final word from her.
And thank you again to the speakers. It's been great to have you. This has been Laz Talks
episode three, exploring the topic of AI's new habitat: is Web3 ready? I've been Liam. You've been fantastic.
Let's play a bit of music.
Oh, Elena's... Okay, not here yet.
Play a bit of music,
and thank you all for being here.
[Music plays.]
Thank you, everybody.