OK, we'll start in a few minutes.
We'll let the room fill up and our speakers join.
I'll play us a little bit of music in the meantime. [Music]
Okay, GM GM everybody, I think we can get started. We have all of our speakers here and a nice audience of about 40 people. I think we can get into it.
It's a pleasure to be here with everybody to have a little chat about our topic, which is decentralizing AI data.
So in a second I will introduce our four special guest speakers.
First, just please spam that emoji button in the audience if you can hear my voice clearly.
Give me a bit of positive feedback here.
Okay, I can see some love hearts.
I think I am being heard.
Okie dokie. Sweet. So I will introduce our speakers and then we'll get into the discussion.
A little bit of housekeeping. Yeah, please reshare the space. Let's get the numbers up slightly higher.
And speakers, feel free to drop any of your own tweets in the jumbotron, the pinned-tweet area here. I've dropped one from LazAI, but as you speak you're very welcome to throw in your own announcements and exciting things there too. You just click on your own tweet and click the button to throw it up top.
Also, yeah, we'll look to have the space last about 45 minutes to one hour.
Yeah, so I think we can dive into it.
We're talking about decentralized AI data.
It's a very hot topic right now.
I'm excited because right now I pay $20 per month to ChatGPT,
and it also farms my data for free.
So I kind of pay twice, right?
So Web3 and decentralized AI has to have a better idea.
And a lot of our speakers today, and many other people in this space, are working on and thinking about these solutions.
So it's an exciting time.
So without further ado, let me introduce our speakers. Speakers, if you don't mind, just give us a quick intro as I go through each of you. Then we'll get into some open questions, and after that some questions for each individual speaker. All righty, enough from me. Our first speaker of four: we have Natalia Ameline.
I think Natalia doesn't need any introduction to this crowd.
She is the decentralization coordinator at Metis and a lead advisor for LazAI.
Natalia GM, how's it going?
And thank you, everyone, for joining the space. Happy to be here. As you heard, my name is Natalia Ameline, and I'm a decentralization coordinator and advise the LazAI team.
We look at the current AI landscape and we see some clear problems, such as opacity, unclear ownership, and centralized control, to name a few. And with that, we saw an opportunity to build something different
and to build a system where the data has a clear ownership and privacy is not just a word,
where data is verifiable and economic incentives are properly aligned.
And this is what LazAI built. LazAI is building a blockchain-based AI economy that allows users, projects, and stakeholders to start partaking in it.
Very cool. Thank you, Natalia.
Okay, let's get a quick introduction from a second speaker.
Now, I've used AI to try and learn how to pronounce your name, sir. Let's see if I can get it right.
Dr. Achim Struve, how was that? Hi, everyone. How are you doing? Yeah, that was quite close.
Can you hear me okay? Yeah, good
here. Fine, sir. Amazing. Yeah. So my name is Achim. I have a PhD in complex systems, just as a quick intro, from basically the classic engineering field. I've now worked at Outlier Ventures for three years, helping early startups and ecosystems with their AI architecture, token design, and token engineering.
So I work a lot on simulations and incentive optimizations
and how to design decentralized coordination systems.
And yeah, that also involves AI agents.
In other words, I basically help make early startups future-proof given all this AI disruption. And a quick word on Outlier Ventures.
Outlier Ventures is the most active accelerator in the decentralized ledger technology field, with a portfolio of around 400 companies. And we also pioneered the Post Web thesis, which some of you might have heard about. It essentially lays out how the whole startup landscape will converge towards decentralized, self-emerging systems. This is basically our thesis: systems that are, and will primarily be, established and run by AI agents.
Yeah, token engineering lead at Outlier Ventures. I think those services are going to be very much in demand, making projects AI future-proof. Cool to hear your insights as we move on.
Let's introduce Jerry, the co-founder of Delysium.
Hey, everyone. It's great to be here. My name is Jerry, the co-founder of Delysium.
I'd love to share with you guys how we're thinking about AI and the whole ecosystem, and also my previous experience with it. Yeah. Thanks.
Sweet. Great to have you, Jerry. Thank you for joining. And our fourth special guest, Mr. Bao L, or as he goes on Twitter, Bao Banger, a Web3 growth hacker. GM sir, how's it going?
Good morning, good morning, or good afternoon to everyone. I'm currently in Paris, you know, taking this very important chat with all of you while it's raining outside. Super happy to be here, super glad to be connecting with everyone and having good conversations. G-Bunny. My work in Web3 is bringing visibility and awareness to the projects I work with; among my successes are Archim Intelligence. Currently, I've worked with Republic Capital to push out AI products across Web3, and Graph AI has been one of them. So I'm super excited to talk about AI projects, since I've been involved with a few good projects in AI. I can't wait to have this conversation and see what you guys talk about.
Cool. Thank you very much, sir.
OK, so there are four special guest speakers on the stage.
We also have the Metis account, and we also have Isaac, who's part of the content team here at LazAI.
Great to have you both. So let's dive into some questions. I'll ask two open questions first; any speaker can feel free to chime in and give a take, and we can flow. Then I've got some specific questions for individual speakers after that.
Let's open it up with a basic one.
Who owns AI data right now?
Anyone can jump in and take that on.
Yep, jump in, Jerry, feel free.
Yep, thanks for giving me this first chance.
So, previously I was working at a gaming AI company, which was incubated by Y Combinator. And to answer the question directly: the big tech companies, Amazon and OpenAI among them, just own and control the vast majority of the world's AI data.
So every search, social media post, purchase, or smart-device interaction has become part of their data silos, right? And users really have no direct access to, or even control over, the data they generate. So for example, when you're using Google Search or uploading photos to Instagram, Google and Meta just own and monetize your behavior and content data. So that's the fact that we are facing.
And that also reminds me of the pretty centralized AI providers: when developers are using APIs from OpenAI, Anthropic, or Microsoft tools, their users' data, like prompts, outputs, and feedback, is often collected and used to further improve those companies' models. Just like how ChatGPT is fine-tuned regularly: it just leverages users' conversations and feedback, which contribute to the models' improvements with no user ownership. That's also another fact that we are facing.
And data brokers might just compile and trade user data collected from multiple sources, usually without users' knowledge or consent, like how Cambridge Analytica harvested Facebook user data for political profiling without any direct consent, right? So that's how the Web2 giants, and the AI industry, are moving now. I'm not saying it's good; it's just the fact, right? So those companies are experimenting with, let's say, download-your-data tools, like Google Takeout or Facebook's data download, but those are superficial and do not grant true control. You know, that's the fact.
And since we're doing crypto, what we're thinking about is how to leverage decentralized blockchain technologies to enable this privacy for users, or something like that. What I've noticed is some projects, let me think, like Ocean Protocol, which enables users to publish and monetize their data sets while retaining ownership of their contributions. And also Lens Protocol, right? They're doing pretty well in crypto social, and they allow users to own their social graphs or data as NFTs or other kinds of tokenized data.
And what we are doing at Delysium, one part of it, is to be among the first to bring this model to AI: wherever you interact with AI, your interactions become your own digital assets, or something like that, such as the choices, styles, and preferences that are logged and attributed to your identity on-chain. And users can clearly, you know, potentially monetize their AI agents or training data by licensing, selling, or just renting them out within the whole ecosystem, unlike on those Web2 platforms. So I do believe there's no very centralized answer here; you know, we're in a big era of fusing everything together. There's a lot of opportunity for us to let users own their data, or even just to satisfy users' requirements without anyone stealing their data. So, well, honestly speaking, I don't have a final answer; I've just tried to, you know, share some of the facts and what we're doing, and to discuss with you guys what the best way is. And yeah, that's everything. Thanks.
Can I piggyback on whoever was talking a minute ago? Because it didn't show who was speaking on my end. But basically, to be honest, I think the ownership of AI data is such a difficult issue to address, because there are open sources of data, like government public sources and websites like Wikipedia, Reddit, or even X, that AIs are being trained on. So this is considered fair use, a public fair game, right? So in this sense, no one really owns those types of data.
But then, you know, from working with Twitter algorithms, I can pretty much confirm that Twitter had to step in and put in some type of limits on AI model training on their platform. And it's not just X; it's also GitHub and Reddit as well.
There's one project in Web3 that's doing something pretty cool where they kind of go around this, where they kind of own their own AI data. You guys have heard about the yapping, like the yap token?
You yap on Twitter, you earn money, right? Basically, what they're doing is collecting the entire Web3 conversational flow to build an AI virtual assistant. The way they do that, from what I understand, is they use themselves as a third party: by you logging into their website and starting to yap online, they collect your tweets individually and use you as the way to train the AI virtual assistant. It's kind of like, you know, they're gathering a personalized line of communication, right? But that's just the AI training-model part. There's also, you know, AI-generated imagery. This is where fractionalized ownership gets split among creators, platform providers, data curators, and developers, right? There are still laws being written and implemented to crack down on, you know, what is being shared, what is being protected, and who's getting what, you know? So at the moment, the answer is very gray. Nobody really owns much of their AI data unless you're, like, a private data provider, I guess; then you own that kind of sector. But at the moment, every single thing you've written online is being put into AI learning machines, and they're learning about you.
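[Editor's note: to make the fractionalized-ownership idea concrete, here is a minimal sketch of how revenue from one AI-generated image could be split among the parties Bao lists. The share percentages and names are made-up assumptions, not any real platform's policy.]

```python
# Hypothetical revenue split for one AI-generated artwork.
# Shares are illustrative assumptions, not a real platform's terms.
SHARES = {"creator": 0.50, "platform": 0.20, "data_curator": 0.20, "developer": 0.10}

def split_revenue(amount: float) -> dict:
    """Divide a payment according to the fractional ownership shares."""
    assert abs(sum(SHARES.values()) - 1.0) < 1e-9  # shares must total 100%
    return {party: round(amount * share, 2) for party, share in SHARES.items()}

print(split_revenue(100.0))
# {'creator': 50.0, 'platform': 20.0, 'data_curator': 20.0, 'developer': 10.0}
```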
Yeah, that's really funny as well, because I don't know about everyone else in this group,
but I speak to ChatGPT all the time.
I probably speak to ChatGPT more than my own wife.
And I speak to it about all sorts of personal things.
And so it knows everything about me completely.
I just use it as an advisor for everything.
But yeah, I'm not sure how comfortable I am with it having that data.
But I give it to it freely for some reason.
OK, I think we can move on to the next question; we've got a good overview there about who owns the data.
And let's zoom in on the problem.
Why is data ownership broken in centralized AI?
Who wants to take that on?
So I think, you know, the ownership model is broken, as we say, with centralized AI, because to me, all the power is concentrated in the hands of a few tech giants. It creates a sort of great asymmetry in terms of distribution of power and access. On one side, we have centralized tech giants who control the data. They control the models, the outputs, and access to the platform, and they charge for that access.
But at the same time, the original creators have no control over if their data is used, how their data is used, and how the model behaves.
They have no say at all, right? And they receive no compensation for their creations.
So the centralized AI creates value for its overlords from our data without giving us any visibility, choice, or, you know, a cut. That to me is a very broken economic model and value-extraction model.
And another big issue, I would say, is privacy. There is no option for you to opt out, right? Someone talked before about how everything in the public domain is considered fair game, right? So let's say I'm a Redditor and I published my research on Reddit. Now it has become all fair game for Google to harvest my data, train a model, and essentially own my data. I don't retain any ownership rights, right?
So to me, it's a broken model, privacy-wise, where you cannot opt out. And another thing with AI is that once it learns the information, even if it's private information, let's say for whichever reason there was a data leak and AI got its hands, quote unquote, on it, right? Once the AI learns it, it cannot unlearn it. You cannot make it forget things. So with this opacity issue with the models, one cannot really predict where this private data will show up. So, you know, Liam, you were mentioning how you talk to ChatGPT and give it all your information freely, but, you know, who knows who it's sharing this information with. That's another concern from the privacy perspective for me.
Totally agree, totally agree.
I would also add security concerns overall. Having all the data controlled by one centralized party is a huge risk, right? Vulnerabilities, breaches, or whatever can happen there. It's basically a single point of failure. And I remember there was this data scandal with the data that had been collected by Cambridge Analytica on Facebook without any consent, which got disclosed in 2018. It just demonstrates how centralized systems can be exploited to abuse user data, and it's horrific to see. Also, I would say, with the centralized models, and this was also in the discussions over the last couple of months, there is the bias that they have in them.
They are trained on data, but what is this data curated on? Usually by a small group of developers or engineers, and they are essentially embedding their own biases into what kind of data should be used for the LLM. Their own perspectives are embedded in the model and then propagated to all the users. I think we need to be very aware of what actual bias is baked into a certain model.
And we should have a lot of transparency around it.
And additionally, I would say the lack of diversity overall in these data sets.
So, yeah, centralized parties only have access to a certain subset of data.
And I mean, we will dive into the opportunities we have in the decentralized world soon.
But yeah, this is another big bottleneck I see.
One of the points I do want to add to that too is, you know, it's just like Bitcoin. Decentralized AI data is irreversible: once you put down the stamp, you can't control the data or alter the original piece. You can add more onto it, but you can't really alter the original concept.
Yeah, that makes sense; all great points made there, bias included. So I suppose we should explore what everyone's working on, which is some of the solutions. Shall we start with Natalia? What are DATs, and what exactly are verifiable data contributions?
So DATs, big topic.
So DATs, or data anchoring tokens, are a new token standard for data assetization. What it does is essentially turn your data, models, and outputs into programmable assets.
For example, who contributed it, where it was used, which training models it went through, etc.
And so this basically creates a recorded verifiable data history on chain for a data piece.
So for example, in the case of an AI agent, each DAT can capture the agent's code, interaction history, behavioral metadata, etc.
So, you know, I would like to think of it as a semi-fungible token, which, unlike an NFT, is not static. It can grow through interactions and is memory-aware.
And blockchain technology is a very important building block and a perfect housing environment for this, as it offers immutable verification of data contributions.
So we talked about data opacity, right,
as one of the problems with modern AI systems.
So a blockchain is an immutable ledger for records such as data provenance, which enables anyone to independently verify the origins and integrity of the data feeding the AI models.
And that allows us to transform data sets from opaque sets into transparent and accountable resources.
So effectively, that addresses the issue of data opacity.
And so what are the key issues that we set out to address here, right? What does this innovation help solve? We talked about the issue of data ownership and control, and how currently it is all in the hands of OpenAI, Gemini, and the like. Since DATs live on chain, we can see them as smart contracts; they are smart contracts. Therefore, we can program them to govern their usage, and you can address privacy concerns as well through this mechanism that I just described.
And now imagine you are the owner of some data set and a model owner wants it in order to train their model. Or you could be a user who contributes to the development and training of an AI agent. We can use Web3 instruments, such as staking models and standards, and through that, the data sets can become assetized and explicitly owned, traded, and governed. That, in turn, enables contributors to receive tangible economic rewards, and tokenization is what makes it possible.
So in the DAT marketplace,
AI assets can become valued and ownable.
So all this, as a result,
breaks the centralized control over data
and addresses the issue of proper alignment of incentives.
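[Editor's note: to ground the DAT idea, here is a minimal sketch of a data record with an append-only, hash-chained usage history, the kind of verifiable provenance Natalia describes. The class and field names are illustrative assumptions, not LazAI's actual standard or contract code.]

```python
# Illustrative DAT-style record: a piece of data plus a verifiable history.
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class DAT:
    """Data-anchoring token sketch: data hash plus hash-chained usage log."""
    owner: str
    data_hash: str                                # hash of the off-chain payload
    history: list = field(default_factory=list)   # append-only usage events

    def record_usage(self, event: str, actor: str) -> str:
        """Append a usage event chained to the previous entry's hash."""
        prev = self.history[-1]["entry_hash"] if self.history else self.data_hash
        entry = {"event": event, "actor": actor, "time": time.time(), "prev": prev}
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.history.append(entry)
        return entry["entry_hash"]

    def verify(self) -> bool:
        """Recompute the hash chain to detect any tampering with the history."""
        prev = self.data_hash
        for e in self.history:
            body = {k: e[k] for k in ("event", "actor", "time", "prev")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or digest != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

# Example: a contributed dataset is later used in a training run.
dat = DAT(owner="0xContributor", data_hash=hashlib.sha256(b"my dataset").hexdigest())
dat.record_usage("contributed", "0xContributor")
dat.record_usage("used_in_training", "0xModelOwner")
print(dat.verify())  # True; any edit to the history would make this False
```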
That's exactly what got me excited when I first started researching LazAI. It flips the script with these DAT tokens, because instead of me paying ChatGPT and then letting it farm my data, I can assetize my data and get rewarded for it myself.
And it's also more private, more decentralized.
So yeah, this is very exciting.
Shall we go to Dr. Achim? What's your take? How can decentralized systems decentralize AI data?
Yeah, so we just heard a few solutions from Natalia, thank you so much. And I think one of the most obvious answers to this question is distributed storage and blockchain integration; these are the obvious ways we can use decentralized systems to decentralize AI data, right? In decentralized AI, it's about resources like data, models, computation, and so on, and how they are distributed across a network of nodes rather than relying on a single centralized authority.
And with blockchain technology, we essentially have a ledger on which we can see auditable trails of each decision that has been made. This ensures transparency in data usage and model training. One project that has already been mentioned is Ocean Protocol, which is also one of our portfolio companies via ASI and Fetch.ai, and they, for example, created a decentralized data marketplace where users can share data securely while maintaining full control over it.
Other aspects are, for example, around market-based coordination and tokenized incentives. Decentralized networks can actually compete with their centralized counterparts by deconstructing the AI stack into basic modular functions and creating markets around them. Decentralized AI platforms can use crypto tokens to reward the data contributors, developers, and also the compute or resource providers, and this empowers a community-driven ecosystem.
And for example, another of our portfolio companies is about to launch their token. It's DataEye, and they have built one of the largest, if not the largest, decentralized blockchain indexing protocols. They'll be using their token to scale developer contributions, the infrastructure, and usability by AI agents. Another aspect of how decentralized systems can decentralize AI data is all these privacy-preserving technologies.
Techniques such as federated learning, homomorphic encryption, and zero-knowledge proofs all essentially enable AI training on decentralized data sets without exposing the raw data behind them.
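[Editor's note: here is a toy sketch of the federated-learning idea Achim mentions: each node trains on its own private data, and only model parameters are shared and averaged. It is a deliberately minimal, pure-Python illustration, not any production federated-learning framework.]

```python
# Toy federated averaging (FedAvg-style): raw data never leaves a node;
# only model parameters are shared. Fits y = w * x across three nodes.

def local_step(w, data, lr=0.01):
    """One gradient step on a node's private (x, y) pairs."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

# Each node holds private samples of the true relation y = 3 * x.
private_shards = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
    [(5.0, 15.0)],
]

w_global = 0.0
for _ in range(50):
    # Each node refines the global model on its own data...
    local_ws = [local_step(w_global, shard) for shard in private_shards]
    # ...and only the parameters are aggregated.
    w_global = sum(local_ws) / len(local_ws)

print(f"learned w = {w_global:.2f}")  # converges toward 3.0
```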
And then, yeah, we have the DePINs, the decentralized physical infrastructure networks like Render or Akash, which leverage unused computational resources, as I already said, and they can source this globally, essentially reducing reliance on centralized cloud providers. And I read that this modular approach enhances scalability and can also lower costs by up to 80%.
Yeah, I think that this is incredible. And there is definitely a lot of potential here.
And obviously, there are plenty more DePINs around all the different kinds of resources, storage and so on. So yeah, there are a lot of different angles from which we can look at the decentralization of AI data.
Sweet. It's still very, very early, but it's great that there's a good amount of projects doing a good amount of initial building in this space.
Jerry, can I pull you in, sir?
How do you think we measure and reward data quality fairly?
Thanks for this question.
And to be honest, I think that in general, measuring and fairly rewarding data quality is a pretty complex challenge, but it's one that can be addressed by blending best practices from both traditional Web2 systems and emerging crypto technologies.
From my experience, in the traditional AI world, data quality has been assessed through centralized methods like automated validation scripts, manual reviews, or user feedback systems. Those work to some extent, but they often lack transparency and can be biased, you know, since control is mainly in the hands of a few parties that can just revise everything. From my crypto experience, I think we now have some opportunities to make data quality measurement and reward systems more transparent, as well as pretty decentralized and more fair.
For instance, some of our team have told me that smart contracts can automatically validate data, while community-driven reputation systems and even token-curated registries can allow members to vote for or challenge data quality, with economic incentives, of course. At the same time, high-quality contributors earn tokens or rewards, while those submitting poor data may lose their stakes or reputation.
So yeah, it's a pretty interesting angle: what sets the crypto approach apart is transparency, right? Everyone here knows the mentality and spirit of crypto and of building blockchain-based technology. Rules and metrics are public and auditable, and decisions about rewards and penalties are governed by the community and major contributors, not merely by a central authority. This can, I think, reduce bias, encourage collective improvement, and ensure the system evolves based on feedback in a positive way, right? So by combining that, you know,
robust Web2, or traditional, validation with transparent, incentive-driven, I'm sorry, crypto mechanisms, I think we have a lot of room to objectively measure data quality and also distribute rewards fairly. This motivates high standards and builds trust, which is pretty essential for digital platforms and applications to survive. Thanks.
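[Editor's note: a hedged sketch of the stake-and-vote mechanism Jerry describes, in the spirit of a token-curated registry: contributors stake tokens on a submission, the community votes, and stakes are rewarded or slashed. The reward pool, slash rate, and rules are illustrative assumptions, not any live protocol's parameters.]

```python
# Minimal token-curated data registry sketch: rewards and slashing
# are resolved from community votes. Numbers are illustrative only.
REWARD_POOL = 100.0   # tokens paid out per accepted submission
SLASH_RATE = 0.5      # fraction of stake lost on a rejected submission

def settle(submission: dict) -> tuple:
    """Resolve one data submission from its community votes."""
    accepted = submission["votes_for"] > submission["votes_against"]
    if accepted:
        # Good data: contributor keeps the stake and earns the reward.
        payout = submission["stake"] + REWARD_POOL
    else:
        # Poor data: part of the stake is slashed.
        payout = submission["stake"] * (1 - SLASH_RATE)
    return accepted, payout

sub = {"contributor": "alice", "stake": 50.0, "votes_for": 12, "votes_against": 3}
accepted, payout = settle(sub)
print(f"accepted={accepted}, payout={payout}")  # accepted=True, payout=150.0
```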
I think it makes sense that a lot of these solutions
end up solving the problems we spoke about earlier.
Collectively, they solve the privacy issue and the bias issue.
So it's an exciting space to be in. And this feeds into my next question. We know there are problems, and we know there are some cool solutions, which we've heard already. But why should content creators, people like you and me, care about who owns AI data?
You know, because if you want to get paid, you should care, right? And even if you don't, this should operate like the music industry, where the master of the art gets a royalty each time it's reproduced or altered. And it goes beyond that as well, right? Your reputation and your rights are being exploited here. Imagine me as a songwriter, which I am, by the way. Hello, check out my music. Sales pitch. Or an algorithm cracker, right? Let's say I wrote a song or developed an algorithm bypass code. It's going to be scraped and used in an AI training model. Basically, think about my ideas being ripped off, copied, and reused multiple times without my consent or knowledge, nor any type of compensation. You know, that pretty much violates my rights. It violates the copyright of my art and my work, right? Essentially, you and me are training another competitor for free. Why would you do that? You're basically losing money, losing time, and pouring your sweat work into a competitor, you know.
What separates me, or you, from other artists or creators is yourself: your persona and your individual authenticity, right? AI is basically a McDonald's fast-food delivery of a third-rate, fast-made version of you. If you don't see a wrong in this, I don't know; for me, this is a huge, massive problem we're going to face going forward if there's no real law to protect, you know, creators. So if you think you don't care who owns the AI data, just imagine you're working for free while the owner of that data generates billions of dollars off your voice, your persona, your self-intensity. By the way, if that's not a word, I just made it up: self-intensity. So just imagine your heart and soul, your sweat and tears, being exploited by other people.
I agree. We're probably going to have to get you to sing one line of a song before the space ends, now you've dropped in that you're a singer.
Oh my god, not me singing in the middle of a cafe; there's like hundreds of people here.
Up to you, no pressure at all. But now you've dropped it, I can tell you're a great singer. Then we can actually tokenize it, maybe, and make a DAT out of it.
We've got a few more questions left, and then we'll look to close it up.
By the way, speakers, feel free to drop any tweets in the pinned-tweet area at the top.
That's free for you to use.
So here's a question for Natalia, focusing more on solutions and infrastructure. How can infrastructure like LazPad, whose manifesto just got released and is also in the pinned tweets, and its play mechanisms help decentralize AI data?
We all need more fun in our lives, right? And no one thinks of data as fun. We often view data as dry and impersonal, making it hard for people to get excited about and quite easy to brush off. I can attest to it, being a finance professional. Nobody likes the data person, right? Boring. So at LazAI, we found a solution to make decentralized AI data engaging by integrating it with some fun, proven Web3 formats with high user traction. We're launching a new kind of platform, LazPad.
And as you mentioned, the manifesto just came out.
So, guys, feel free to read it.
So LazPad, you know, is where the AI agents are launched, and they essentially take on a life of their own, growing through play, interactions, and daily journeys. It was inspired by Pump.fun and Virtuals. Virtuals pioneered launchpads, right, for AI agents, and Pump.fun showed how meme coins could be not just fun, but rewarding for everybody. But LazPad goes beyond this launchpad model.
So basically, LazPad is a crossover between a pad, where a pipeline of AI agents is built and evolving, and fun, which speaks to a design philosophy based on play and the emotional bonding users experience while connecting with the agents during the so-called upbringing process. So users don't just mint tokens; the experience is so much more. You can play with AI agents. You can train them, evolve them, watch them grow like your babies, all while earning along the way. And once the agents are mature enough, they can be traded, their value determined by how valuable their traits and features turn out to be.
And, you know, I talked before about data anchoring tokens, and that innovation is actually what enables this emotional data economy. With LazPad, we introduce a fun element that we think is missing today, allowing communities to start playing by creating and discovering AI experiences together and watching them evolve through daily interactions. So essentially, we created an infrastructure for building an emotional data economy, and we're looking forward to you taking part in this revolution.
Cool, cool. I like it a lot. As you were speaking there, Natalia, crazy ideas came into my mind about what I would want to use an interactive AI agent for, like a pet or a girlfriend or all sorts of things. Yeah, it's a very, very fun space, and I'm really looking forward to what gets built out on LazPad. Okay, moving on to Dr. Achim. Here's a tough question.
Do decentralized data assets outperform centralized ones? Like, we all prefer them, but do they actually outperform them?
Yeah, thank you for this question. And as you said, this is, in my opinion, maybe one of the most important questions overall when we look at decentralized AI and data sets, and I have a little bit more nuanced take here. I think both sides essentially have their advantages and limitations, which might also change over time. So first of all, let's look at the efficiency trade-offs.
While centralized models generally have a raw computational power advantage, just due to their concentrated resources, I would say decentralized systems can improve efficiency in very specific areas, because you can essentially get data or knowledge from very specialized verticals, so to say, or maybe locally restricted areas. And I don't know if you have heard about this algorithm, D-PSGD, which stands for Decentralized Parallel Stochastic Gradient Descent. It's a training algorithm for distributed machine learning that allows multiple nodes to train models in parallel without requiring a central coordinator, which is very valuable for decentralized AI systems where all the computational resources are distributed across various locations. This algorithm can match the convergence of centralized methods and is often faster in low-bandwidth and high-latency environments. So there are cases and areas where the decentralized approaches and algorithms can out-compete and outperform the centralized versions.
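[Editor's note: a toy sketch of the D-PSGD idea Achim describes: each node takes a local gradient step and then averages parameters only with its ring neighbors, so no central coordinator is needed. This is a simplified illustration, not the published D-PSGD implementation.]

```python
# Toy decentralized parallel SGD: local steps plus gossip averaging
# over a ring topology, with no central parameter server.

def grad(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

shards = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)], [(4.0, 8.0)]]  # true w = 2
ws = [0.0] * len(shards)   # each node keeps its own copy of the parameter
lr = 0.02

for _ in range(200):
    # 1. Local gradient step on each node (in parallel in a real system).
    ws = [w - lr * grad(w, shard) for w, shard in zip(ws, shards)]
    # 2. Gossip: average with ring neighbors instead of a central server.
    n = len(ws)
    ws = [(ws[(i - 1) % n] + ws[i] + ws[(i + 1) % n]) / 3 for i in range(n)]

print([round(w, 2) for w in ws])  # all nodes converge near 2.0
```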
And the same goes for diversity and scale. Decentralized AI can aggregate data, as I said, from globally diverse sources. It can be gathered by or from individuals, IoT devices, organizations, machines, whatever; it's really open in this regard. And this is in contrast to the centralized, quote unquote, competition. And therefore, decentralized AI has the potential
to create also richer data sets overall,
as opposed to the more centralized data silos
that the big players have at the moment.
And I would say that this also increases the overall model robustness, especially for applications requiring, as I said, cultural or regional nuances. Scalability advantages are there for decentralized AI as well, because it essentially offers improved scalability compared to traditional AI systems due to the distributed computational layer; you can scale across essentially every machine that is available, in theory, and this is a big scalability advantage. Also, I would say the decentralized nature benefits innovation.
It basically encourages a broader range of offerings, ideas, and data sources, and facilitates open-source AI model development, which allows developers and researchers from different companies, different backgrounds, and different biases to contribute. Overall, that has big potential to increase the diversity and quality of the data. I will talk about this a little bit more in a bit.
And then there is the aspect of resilience and transparency.
So decentralized data sets are distributed across multiple nodes, reducing single points of failure and enhancing cybersecurity.
This is a big advantage of decentralized data sets.
And blockchain's immutability ensures data integrity, preventing unauthorized tampering with or changing of the data, and blockchains also enable traceability of data provenance, addressing this whole black-box problem on the centralized AI side and thereby potentially supporting an even more trustworthy environment. But, as I said, I also want to look at this in a more nuanced way, so let's also look at the challenges and limitations of the decentralized side.
So I already mentioned data quality and also coordination.
So we need to ensure consistent data quality across these decentralized contributors.
And this is a complex task.
And coordinating these disparate data sources can lead to integration challenges, impacting
essentially the whole model performance.
And speaking of performance, there is still a gap.
So the current decentralized platforms are often lagging behind the centralized ones, at least in terms of speed and usability due to the infrastructure immaturity that we still have in the decentralized AI space.
And yeah, centralized AI benefits from consolidated resources for their more rapid processing, making it sometimes better suited
for real-time applications and so on.
So there are some advantages.
And decentralized AI is still a little bit of complex.
So the complexity and lack of user-friendliness
limit the mass adoption a little bit,
at least at the moment but
i would also add that yes it is more complex it is more complicated this might also be less
developer friendly but i think ai can actually help here as well like the llms can be used to
essentially abstract away all this complexity and then make this complex technology more usable again.
And I think we can still conclude that there is at least not yet conclusive evidence that
decentralized data sets are universally outperform centralized ones yet. Again, I think this might
change. So there are great examples examples as we already said ocean protocol
flex almost data ai um and also and i cheer you on guys less ai um i think you all should like they
they all show high potential um but again like we are not there yet um but yeah this is also why we're here right we are early
Yep, totally agree. I appreciate the honest and nuanced take. We're definitely early, and that's where the great opportunities are, but yeah, the tech is still young. All right, I want to try to finish us within the hour, if possible.
So we've got a couple more questions.
Do remember, audience, thank you very much for joining so far.
Do remember, please, to follow all the speakers.
I'm very grateful to them for taking time out of their busy days of building
to be with us today on this space.
So give them all a follow.
Also, you are very welcome, everybody, to make your way to the LazAI Telegram. That's how you keep in touch with anything LazAI, and that's where you get all the latest updates. You can also visit the website and actually try to mint a DAT on the website yourself. Have a little play with it. Yeah, that's how you keep in touch.
So let's jump over to Jerry.
And this kind of weaves into something that Achim was saying about how AI agents can actually help in this process.
Do you think AI agents could earn or contribute data themselves like humans do?
Oh yeah, thanks. I'm just trying to keep this as brief as possible because, ideally, there are only seven minutes left. So, going back to the question: I think AI agents absolutely have the potential to earn and contribute to data economies much like humans do, specifically as we bridge the principles of the traditional AI industry and crypto technologies. From my experience, in traditional Web2 platforms, data economies revolve around human users who generate, curate, and exchange data, whether through social media posts, reviews, or, let's say, e-commerce activity. These contributions are typically directly or indirectly monetized, with rewards or recognition controlled by the centralized entities.
However, as time goes on and AI capabilities advance, we're seeing a shift where AI agents can autonomously gather, process, and even trade and develop data assets. And in a crypto context, because we are talking crypto, right, this becomes even more powerful. AI agents can interact directly with decentralized products, like what we're doing with Lucy at Delysium. They can supply real-time data to blockchain oracles, help curate high-quality information in token-curated registries, or even negotiate data-sharing agreements and communication on behalf of users or institutions.
And basically, we're using MCP to connect with all kinds of MCP services and data sources, so AI agents can initiate transactions by themselves, like DeFi and swap trading.
Crucially, I think smart contracts and transparent incentive mechanisms let agents receive payments, stake tokens, and build reputation in a pretty fully automated way, without needing humans to join in. This means AI agents can be rewarded directly and fairly for the value they create, whether that is providing accurate market analysis, identifying misinformation, or just maintaining the integrity of decentralized data networks.
As a result, to be sure, I think the line between human and AI-agent participation in data economies is becoming increasingly blurred, because by leveraging both traditional data infrastructure and crypto-native decentralized incentive structures, we're on the way to a future where AI agents not only contribute to but also actively learn from the data ecosystem, driving greater efficiency, innovation, and also further fairness in the way that data is valued and exchanged. Yeah, thanks.
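[Editor's note: a hedged sketch of the fully automated loop Jerry describes: an agent submits data and a contract-like escrow pays it with no human in the loop. The escrow class, validation rule, and price are stand-in assumptions; a real system would use on-chain contracts, oracles, and MCP tooling.]

```python
# Stand-in for an on-chain escrow that pays AI agents for accepted data.
PRICE_PER_ACCEPTED_POINT = 1.5  # illustrative token reward per data point

class DataEscrow:
    """In-memory mock of a contract escrow rewarding agent contributions."""
    def __init__(self, funds: float):
        self.funds = funds
        self.balances: dict = {}

    def submit(self, agent_id: str, datapoint: dict) -> bool:
        # Naive validation rule standing in for oracle or community review.
        ok = datapoint.get("value") is not None
        if ok and self.funds >= PRICE_PER_ACCEPTED_POINT:
            self.funds -= PRICE_PER_ACCEPTED_POINT
            self.balances[agent_id] = self.balances.get(agent_id, 0.0) + PRICE_PER_ACCEPTED_POINT
        return ok

escrow = DataEscrow(funds=10.0)
# An agent gathers and submits readings with no human in the loop.
for reading in [{"value": 101.2}, {"value": None}, {"value": 99.8}]:
    escrow.submit(agent_id="agent-7", datapoint=reading)

print(escrow.balances)  # {'agent-7': 3.0}: paid only for the two valid points
```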
I don't know if it makes me feel comfortable that AI agents are going to be doing more of this kind of thinking and creating stuff, but it definitely is very exciting and I'm all for it. Accelerationist. Great to hear about your work. Final question, for Bao: you mentioned earlier how you care about this problem, and so do we all, but how do we make the somewhat more abstract concepts, like the AI data ownership Natalia was talking about, resonate more with the everyday user?
You know, you make it relatable, and you've got to sprinkle some of that, like, emotion into it.
Boo-boo, let me tell you, the moment when everybody loves AI is when AI is trained to be that perfect boyfriend or girlfriend. And let me tell you, it is game over for all you therapists and situationships, you know? Like, AI is already replacing your job, your voice, your art. It's just a matter of time before it replaces your boyfriend or girlfriend or wife or husband. Plus, if AI companions can help our daily lives to be better and improve, oh, man. This is where people are going to start coming in and saying, oh, I love AI. In my daily life, you think I go to my therapist and tell her the problems I go through? Like, you know, people drive me nuts. I go to my AI, I chat with my AI, and I tell it, this is what X, Y, and Z did. And you know what? AI actually backs me. It actually tells me: you are right, you have the right to feel that way, X, Y, and Z. You know what? I feel better right after talking to AI.
I might yell at it, but, you know, it feels so good. So yeah, it's about our daily lives: when AI helps to improve our daily lives, you know, that improvement, I think that's where you get it to resonate with more users, you know? That's my hot take, and I'll land my plane there. Yeah.
Love it. If anyone hasn't seen the movie Her, that's quite a good movie where the main character falls in love with an AI. Then let me give our speakers the floor for final words.
Just tell us where to follow you, how to find out more about what you do.
Give us a call to action and then we'll wrap it up.
We're about to hit the hour.
Let's start with Natalia.
So, you know, it's been a very fun space and I learned a lot from all of you. Thank you very much for being here. On AI, I really would like you to reflect on our conversation around data ownership. Let's just reflect on it. Don't let yourself become the product again. You know how we talked about social media platforms, that, you know, you are the product. The same is happening all over again with AI platforms: you contribute all your data and you are the product again. So I would like you to try the different kind of economy that we are building at LazAI. And so you would ask me, where should I start? I think right now, one of the best ways to join the movement and become one of the OGs is by joining our Adopt Your Companion DAT campaign. That is what LazAI is running right now. You can mint your own DAT token, which is your companion, and play an active role in shaping its development through daily journeys and interactions, basically being its parent.
By the end of the journey, who knows, maybe your companion will end up the most valuable one. That will be determined by the market. You can learn more about it and join on our website, which is lazai.network. And also, please follow us on Twitter at LazAI Network. And again, thanks for listening.
And I hope that you will be part of our journey as well.
Perfect. Thank you very much, Natalia.
That reminds me also, there are some people doing the TaskOn campaign, and the attendance code for that campaign is laztalk2, all one word, all lowercase. That's one of the challenges: to listen to this space and find the code. So it's laztalk2, L-A-Z-T-A-L-K-2.
And to find out more about that campaign, just check our Twitter.
Okay, let's head to Achim.
Final words from you, please, sir.
I just noted down the code word. Thanks.
So my last words for this space, I just want to quickly zoom out.
So first, I just want to summarize.
I am convinced that we should have full transparency, privacy, decentralization,
interoperability, and modularity instead of all these centralized solutions in the AI space that we are currently seeing. But I think it might take some time to get there. And probably, at the end, the centralized solutions might give birth to the decentralized, quote-unquote, competition, as we can essentially leverage centralized AI now to build out the decentralized AI infrastructure, so that at some point the decentralized approach outperforms the centralized approach on every level. And you might ask why that is. I think it's because knowledge is getting commoditized more and more, and that might be the reason why centralized AI dissolves itself at some point.
And now my take, my important take comes here.
I think this has also broader implications,
not just for the centralized AI companies,
but basically for every company,
every startup that is starting right now.
And all these startups need to think about: OK, what does this AI disruption and transition imply for me, for my business, for the moat that I'm building? And really think about it. And that's why I also pinned one of my talks that I gave at ECC here.
So if you are interested in this thesis, where we really think that startups are really challenged at the moment, and in what aspects you should be looking out for as a founder, just have a look at the talk I pinned here.
And yeah, follow Outlier Ventures
and dive into the post-web thesis and our takes
on it. I think it will be very valuable for you, whatever you're building at the moment.
Thanks, everyone. It was a nice and awesome space with you. I learned a lot. Thank you all.
Thank you very much, Achim. Pleasure to have you. Jerry?
Yeah, so basically I have to say thank you so much for having me here and for the chance to share what we are building on this topic. And, you know, since this is the last chance for me, I need to do some of my job and let you guys know what we are doing at Delysium. So yeah, just a very short recap. Basically, our project, Delysium, is dedicated to building a cutting-edge agent network on the blockchain. At our core is an ecosystem called Lucy, which we call the agentic operating system for crypto, designed to empower users, developers, and businesses by providing intelligent, autonomous services that seamlessly interact with blockchain protocols.
Alongside that, we also developed a network called You Know I Love You; well, we want to show the love between AI agents and humans, so we call it YKILY for short. It serves as a social and transaction layer within our agentic ecosystem. We also have our native token, which is called AGI, and it acts as the fuel of the Delysium ecosystem. We're pretty proud to say that, you know, AGI has been listed on several major exchanges, including some Korean exchanges like Bithumb, plus Bybit, Gate, MEXC, KuCoin, and others. This broad accessibility ensures that both new and existing community members can, you know, easily participate in our network and benefit from its growth. So yeah, looking ahead, I truly believe that we are entering into
a new era where agentic operating systems like Lucy and other AIs will transform how we interact with crypto, our lives, AI, and even each other across our network. These AI agents are not just tools; they're collaborators that can understand our needs and requirements, build new capabilities on the fly, and help us navigate the complexity of all this.
Of course, I do believe there are lots of challenges around data and trust, but that is exactly what motivates us at Delysium: to make this a technology for everyone, not just a selective few, and to make agentic intelligence open, more transparent, and accessible for all. So yeah, I'll just stop my words here. Thanks again for your questions and your passion. Pretty excited to see how we can shape this future together, and also looking forward to connecting with all of you guys as this new era of agents unfolds. Oh yeah, follow me and follow the Delysium Twitter account here, so it's pretty easy for you guys to keep up.
Thank you very much, Jerry.
Yeah, at LazAI we also have Alith, our AI agent framework. So maybe Alith and Lucy should have a chat, have a coffee one time, and see what happens. Thanks, very good to have you. And final words, please: G-Bunny, Bao, give us a final take. How do we stay in touch with you?
Yeah, they always leave the Asian guy all the way at the back, you know.
Finally it's our turn, after everybody's finished. My final thought is, you know, yeah, AI is going to take over our jobs, our lives anyway. Don't follow your dreams. I give good takes, right? Throw some emoji if you think that's right, right? You can find me pretty much anywhere. My Telegram is literally "bao," but the next words are like "in your area." I'm always around every conference all over the world. But yeah, to shamelessly plug my music: you can find me on Spotify. It's called Promises of You, by Bao LG. Find me, give me a stream, give me a thumbs up. But if you're curious about marketing and, you know, growth, yeah, my DMs are always open. And yeah, we can take it to the side chat, without AI, you know? We can have real human interaction.
Thank you so much to everybody for being here, too. Like, you know, I enjoyed this conversation a lot, and you guys had amazing takes and everything. So, thank you for having me. And follow all the panelists, you know.
Thank you very much, Bao. Loved the fun. I will play a round of applause here for our speakers. If you're in the audience, spam that clap emoji and let's just show some gratitude.
Fantastic. Thank you again, Natalia, Achim, Jerry, and Bao, our speakers.
It's been a pleasure. I hope to have you all back very soon.
And we're on a very exciting journey to decentralize AI data.
Thanks to all the listeners. Follow all the speakers.
Keep in touch with LazAI and let's keep building.