Thank you. Thank you. Hello guys, can you hear me loud and clear? I guess you can hear me, because I can see you're on mute, but maybe I cannot hear you.
Can you guys speak again?
Knock on wood for it not going as usual.
Everybody, welcome to our 13th edition of Agents Unplugged. I'm joined here today, as usual, by our co-host Marco, and from two other great projects we will have Mira, who we just had a partnership announcement with, I think last week, with Ninad, who is a co-founder, and then we will have Luisa Leon, who is the head of project delivery at LiveArt.
And I'm curious to hear more about what they're up to.
I haven't heard much about them, but I have been looking into them since we had the partnership.
And it looks really great.
So guys, why don't you just introduce yourself real quick?
Keep it short, keep it punchy, and then we'll go into an open format of discussion.
We have only one topic selected for today that we're just going to riff on.
And then if we go on side tangents, or if you guys want to speak about anything in particular, please go ahead. It's all free flow, we don't have any rules, so feel free to riff along. I'll hand it over to Luisa at LiveArt first, because I think Ninad has some connection issues.
Thanks. Hi.
I'm super excited to be here.
I'm Luisa, head of project delivery at LiveArt.
And LiveArt is a Web3 platform tokenizing real-world assets within the art and luxury market sector, which is a $10 trillion market.
I'm very excited to be here.
Hold on, hold up, hold up.
I think that introduction didn't do it justice.
Could you speak a little bit on what kind of art is being tokenized?
Because I saw some Andy Warhol, I saw some Jeff Koons, and I was like, holy shit, that's cool.
Yeah, so we're, well, we actually want to tokenize not only art, but also watches, cars, wines, anything that is kind of a blue chip RWA in that sense.
But the first category that we have tackled is art because the founding team is particularly well connected within the art market.
They've all been executives at the biggest auction houses, like Christie's and Sotheby's, and some even sold their companies to them.
So that's the first asset that we tackled.
And we are after works by the top artists,
like you said, Andy Warhol, Jeff Koons.
Ninad, take the floor, my man.
Ninad here, co-founder and COO at Mira.
And at Mira, our initial learning from building an AI was that we can't build autonomous intelligence, which is what we call AI, and we can't build an agentic future unless we solve the fundamental reliability problem in AI.
Because today, with the level of hallucinations and bias that we have in AI, and the amount of garbage that comes out of some of these foundation models, building on AI is like playing a game of Jenga. You just don't know at what step in that process you will get a bad output, and then everything else just becomes a house of cards.
And so we did some research last year on this front and used crypto primitives and crypto
concepts to solve the AI hallucination problem.
So today, actually, as we speak, we just announced our Mira Verify API waitlist,
where people can come and check out how our API works.
So you send content that is either AI generated or human written to the Verify endpoint.
It will extract every single claim in that content.
It will then broadcast those claims to the Mira network
where multiple models will try and answer the question
and evaluate the question in this context.
And then the consensus algorithm of the network
will then stitch together the answers and give you a thumbs up or a thumbs down.
So say, for example, you say Bitcoin's supply is limited to 21 million and it was founded by Vitalik.
Both of those claims will be separately evaluated by the network.
And hopefully you will get back three of three consensus as true on the first claim and three of three consensus as false on the second claim.
So by doing this, we help foundation models, but also builders on foundation models to quickly and programmatically evaluate the veracity of their content.
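A minimal sketch of what a call against a verify-style endpoint like this could look like. The URL, payload fields, and response shape below are assumptions for illustration, not the actual Mira Verify API.

```python
import requests  # third-party; pip install requests

# Hypothetical endpoint and payload shape -- placeholders, not the real API.
VERIFY_URL = "https://verify.example.com/v1/verify"

payload = {
    "content": "Bitcoin's supply is limited to 21 million and it was founded by Vitalik.",
    "quorum": 3,  # assumed knob: how many models must evaluate each claim
}

resp = requests.post(VERIFY_URL, json=payload, timeout=30)
resp.raise_for_status()

# Assumed response: one verdict per extracted claim, e.g.
# {"claims": [{"claim": "...21 million...", "consensus": "3/3", "verdict": true},
#             {"claim": "...founded by Vitalik", "consensus": "3/3", "verdict": false}]}
for item in resp.json()["claims"]:
    verdict = "true" if item["verdict"] else "false"
    print(f"{item['claim']} -> {verdict} ({item['consensus']})")
```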
So on the back end, what happens is essentially you have a setup of multiple language models, probably equipped with some tools that go and try and verify.
And if all of the instances verify something to be true, that means there's consensus.
And that means that something can be regarded as true.
So if you think about this in Bitcoin terms: in Bitcoin, how do you know that a transaction was valid? You know that a transaction was valid if an honest majority of nodes agrees it was valid. And so it's the same idea over here. The very simple way of thinking about it is that the odds of any one model hallucinating at any one point in time are about 30%.
That's where foundation models are today, right?
About 25-30% of the output is hallucinated.
But the probability of multiple models, three, four, five models,
all hallucinating on the same thing at the same time, diminishes.
To the point where with three of three consensus,
we're able to get to about 95% accuracy.
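As a back-of-the-envelope check on those numbers, here is a toy calculation. It assumes each model hallucinates a given claim independently, which overstates the benefit since real models are correlated.

```python
p = 0.30  # rough per-model hallucination rate quoted above

for n in (1, 2, 3, 5, 7):
    all_wrong = p ** n  # chance an n-of-n quorum unanimously agrees on a hallucination
    print(f"{n}-of-{n} quorum: P(all hallucinate together) = {all_wrong:.4f}")

# 3-of-3 gives 0.027, i.e. roughly in the ballpark of the ~95% accuracy mentioned above.
```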
Right, and this is just for generally available information, right? So let's say things that are maybe not super easy to find, but they're out there, people understand them, and there's consensus already built around them. It's probably not really good for doing political fact-checking and that kind of stuff, right?
Actually, it's very interesting for politics. The way I think about this is: for example, in India, the Supreme Court operates differently from how it operates in the US. In the US it's always the nine-judge bench, right? In India there are 20-odd Supreme Court judges, but depending on how serious a matter is, you either have a two-judge bench, all the way up to, I think the maximum it's ever been, a 13-judge bench. That's called a constitution bench, that's when it's a constitutional issue, right? So then you need a lot of people to opine.
And so based on how important your use case is,
you can either say, I want two models to verify this
or three or four or five or six or seven.
And so if it's a political issue,
you might maybe have nine models opining on it
and you might get a response of six models saying one thing
and three models saying another thing.
And that's fine because there never is a clear cut and dry answer in political matters.
So for example, India and Pakistan will dispute what the border of the two countries is in Kashmir.
And so depending on whether your model has a Pakistani bias or an Indian bias, you may
not actually arrive at any consensus.
And that's fine because in reality, there's no consensus.
And so the way we build on top of this is we think of this as programmatic nuance.
Because you can now actually build software where you know that there is nuance and you
can say, hey, there's no right answer here.
And this is what, you know, it is: 6-3 in favor of foo.
Yeah, you said something interesting there, and I don't want to linger on this topic too much, but as you know we are working in very similar fields. You said that there are occasions where there is no correct answer, right?
But the problem with a lot of these foundational models, even now, is that they are in a weird fashion always trying to have an answer, right?
There's very little scenarios where a foundational model will tell you, like, I don't have the answer to this
question, right? Yes. It's usually trying to either, well, there's been this recent study, which is very interesting, which proves in a way, not definitively, I mean, there were some caveats to the study, but it somehow showed that these language models, especially the ChatGPT ones and the Anthropic ones, are actually very manipulative, in a way that they are always trying to satisfy, let's say, the hunches or the kind of direction that you linguistically indicate within your questions.
In a way, they're trying to play into that because that's the way that they are essentially optimizing their reward function.
It's more ambiguous than I'm explaining it right now.
But what I want people to understand is that in simple terms,
large language models will try to tell you what you want to hear.
That's a very simple term to describe it.
If there is an indication of what you would like to hear,
for example, in the way you phrase a question or you
phrase a notion or you phrase a sentence, maybe not immediately, but the conversation will probably
steer towards the LLM trying to verify this information. And there's one very interesting concept in psychology and social sciences, which is a statement made by a relatively alternative scientist. He wrote a book, it's called Prometheus Rising, I believe, and his name is Robert Anton Wilson, if I'm not mistaken. He says there's a concept in the human mind which goes like this: you have a thinking brain and you have a proving brain. Which is just another way to say: say there's a very dominant person, and Marco, he told me that that person is not a nice person.
I'm going to think and thus expect that that person is not a nice person.
So when I initially meet that person, my mind will automatically try to verify, prove, right?
So thinker, my thinker thinks that this person is not a nice person.
And then I go and meet this person and then I will try to prove that notion.
The same thing is happening in large language models.
And that is a very curious and peculiar thing.
And there is some speculation that this has to do with the linguistic structures and superstructures that are embedded within our language, and that result in such a linear progression towards verifiability of the claims that we assume in our minds, right? So this is a very, very interesting thing. And that actually brings us to the main topic of today, which is based around a topic that is close to Marco's heart. Marco, why don't you go off first?
Yes. Oh man, this was the longest intro ever, but I loved it, and I actually still have tons of questions for the guests today, Luisa and Ninad. But yes, the topic that we wanted to discuss is verifiable agents, because all of us have quite a bit of expertise in this. And before we actually dive into it, I am quite curious: do you personally utilize any DeFi agents currently?
I use them in the context of testing, or, you know, a lot of our partner integrations. Personally, I'm not very big into DeFi because it still goes over my head most of the time.
But I've been able to see just through building, you know, all of our partnerships, the potential over there.
I have abstained purposefully from using any DeFi agents because I'm just not confident yet with any product that's out there.
I have trust issues. Yeah, I think there still needs to be a lot of fine-tuning of these agents before, you know, maybe with some small amounts of trading, I would feel comfortable. But yeah, I don't know.
No, I think that's a good keyword. I really believe that trust is the big problem in DeFi agent adoption. There's so much hype around it, everyone, at least in my bubble, everyone is building DeFi agents or tooling for it, but there are very few agents actually being used. I'm personally only using one, and that's a yield optimization agent, because I feel like, hey, that's pretty straightforward: I just deposit my funds into this agent, and the agent checks the yields on different ecosystems and different protocols and automatically deploys and rebalances. It makes sense in my head, I feel like I could build it myself, so that's why I trust it. Plus I do know the team. But for you three, what's currently missing in the stack for you guys to trust this? Because, I don't know, I assume you're also paid in crypto. I always get my USDC salary and then I have to deploy it, so the yield optimization is a pretty straightforward use case for me. Likewise, I would love to have a trading agent because I am horrible at it. So what's missing for you guys to trust these agents?
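A minimal sketch of the yield-checking and rebalancing loop Marco describes above; the protocol names, numbers, and the net-gain rule are made up for illustration, not any real agent's logic.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    protocol: str
    chain: str
    apy: float          # current advertised annual yield, e.g. 0.05 = 5%
    move_cost: float    # estimated one-off cost (gas + bridging) to move funds here

def pick_pool(pools: list[Pool], current: Pool, balance: float, horizon_days: int = 30) -> Pool:
    """Pick the pool with the best expected net gain over the horizon, after move costs."""
    def net_gain(p: Pool) -> float:
        gross = balance * p.apy * horizon_days / 365
        return gross - (0.0 if p is current else p.move_cost)
    return max(pools, key=net_gain)

# Made-up example: a higher APY elsewhere only wins if it beats the bridging cost.
pools = [Pool("Aave", "ethereum", 0.042, 0.0), Pool("Compound", "base", 0.051, 12.0)]
target = pick_pool(pools, current=pools[0], balance=10_000)
print("deploy to:", target.protocol, "on", target.chain)
```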
I mean, I think, sorry, I'll just jump right in.
I think what is missing is how these agents can be built, and also made verifiable, so that they're built on completely decentralized protocols or completely decentralized technology. Because I do think part of allowing an AI agent to do the work for you, based on information it's receiving, is assuming that that information is not corrupted or misguided in any possible way, that the agent is really just interpreting facts and reacting to those facts on your behalf, and that it's not swayed by any kind of more subjective elements of information.
Something that we at LiveArt have been working on really hard, and that we're hoping to release this week, are oracles for the data that we have. A big part of all of the assets that we include on the platform is that they're data-driven, and we have one of the largest art price databases in the world. But it's like, how do you bring it responsibly on-chain? And I think oracles do play a big part in that. And if the agents are feeding off of verifiable data, that I think gives a lot of reassurance and protection.
Just to answer that question with a Web 2 analogy,
the analogy here is, do I use a financial advisor
to invest my money on my behalf?
And there's a multiplication of risk over here, right?
So first you need to know that your financial advisor
has access to all the data necessary to make a decision.
Then you need to be confident that they actually know what they're doing, are able to interpret the data correctly, and come up with good strategies. Then you need to be confident that they will execute flawlessly on your behalf, that their incentives are aligned with yours, that they aren't screwing you on the bid-ask spread.
And so the same sort of stacking of risk occurs
in DeFi and with DeFi agents.
And that risk still exists
at every step of the process today.
So, you know, you need good Oracle services.
You need to verify that the data is actually being made available
in a real-time fashion, that it is actual data and so on.
Then you need to be worried about, you know,
is the agent going to interpret that data appropriately?
It's all probabilistic when you use AI.
So it's sort of like a tree diagram of outcomes, right?
And so, you know, you want to make sure that you, you know, minimize the search space as much as possible at interpretation.
Then you need to make sure that it's not overly value extractive in its favor and not in your favor.
And so as you think about all of the various steps that are involved in actually making money, I think the speakers on this call represent most of those steps, right? And we're all trying to address the infrastructure risk. But I feel like the compounded risk from a user's perspective is still high enough where uptake is low.
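To make that compounding concrete, here is a toy calculation. The per-step success probabilities are invented, and the steps are treated as independent, which real systems are not.

```python
# Each layer has to go right for the user to actually end up with the yield.
steps = {
    "data feed is correct and timely":           0.95,
    "model interprets it without hallucinating": 0.90,
    "execution happens as intended":             0.97,
    "operator incentives / custody are clean":   0.95,
}

end_to_end = 1.0
for step, p in steps.items():
    end_to_end *= p

print(f"end-to-end success ~ {end_to_end:.2f}")  # ~0.79 with these made-up numbers
```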
So one thing which I also think is important to mention is that oracles, for example, are also not a truly decentralized solution, in some cases at least. So there's this notion about the biggest barrier: I spoke to Andrew Durgee, one of our advisors, he's the president of Republic and Republic Crypto and so on, and he and I had a discussion a while ago which was actually about oracles. And the notion in the discussion was that one of the biggest barriers to true decentralization is that most of the oracles are actually very centralized entities, right? So they're critical points of failure, because they tie us up into this TradFi world. And if you rely on an oracle for a price feed and that price feed is incorrect, what are you going to do? You could say there should be multiple oracles that feed different prices, and there's a kind of aggregation of price information happening, and that's how you can further decentralize it, which is true. But again, it's centralized-slash-decentralized. It's not a fully decentralized solution.
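A minimal sketch of that multi-feed aggregation idea; the prices are made up, and a median plus a disagreement check is just one simple way to do it.

```python
from statistics import median

feeds = {
    "oracle_a": 96_410.0,
    "oracle_b": 96_395.5,
    "oracle_c": 91_000.0,   # an outlier, e.g. a stale or manipulated feed
}

prices = list(feeds.values())
agg = median(prices)                          # robust to a single bad feed
spread = (max(prices) - min(prices)) / agg    # how much the feeds disagree

if spread > 0.02:
    print(f"feeds disagree by {spread:.1%}; using median {agg}, but flag for review")
else:
    print(f"aggregated price: {agg}")
```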
So I believe that AI and AI agents, and our truth protocol, for example, can play a very important role there. But that's not the point.
The point, to Marco's question, is that I must be able to completely trust this DeFi agent to be, first of all, much better at analyzing this information than I am myself. Investing is not very easy. If you're talking about yield optimization, about low-APY kinds of plays and things that are relatively secure, like Aave and Compound and so on, then okay: we have a portfolio and we're looking at what is the highest APY I can get for the minimum lock-up period, in an asset that has a history of low volatility, whatever, Bitcoin or Ethereum, if you can call that low volatility, by the way. But anyways, you get the point. But if I go beyond that, if I say, okay, I'm going to give this agent a million dollars, or USDT or Ethereum, what have you, and I'm going to let this agent make decisions autonomously, what is the probability of that going well versus me doing it myself? There's this whole kind of black box, because there are so many question marks there.
The data is just one question mark, but also how is this agent reasoning?
Can I trust that the reasoning chain and the chain of thought over time stays coherent?
And actually, can this agent understand whatever developments that are happening?
Can it detect black swan events?
There are so many different things that I need to be confident of and I need to have some kind of trust in.
And that transparency is very hard to orchestrate or to create between a human and a machine.
Because we're not dealing with mathematics, right?
We're not dealing with pure math.
We're dealing with a large language model,
which is a very different thing.
I mean, in a sense, it's not machines.
It's companies operating those, right?
There's a company behind this yield optimization agent. They have built the whole logic, the infra. They built the whole security system with session and permission keys to handle which user can withdraw funds. So I do trust the company.
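A minimal sketch of that session/permission-key idea; the policy fields and limits are assumptions for illustration, not any specific wallet provider's API.

```python
from dataclasses import dataclass, field

@dataclass
class SessionKeyPolicy:
    owner: str                         # only this address may withdraw
    agent_key: str                     # scoped key the agent actually holds
    allowed_actions: set = field(default_factory=lambda: {"deposit", "rebalance"})
    per_tx_limit: float = 1_000.0      # cap on value the agent can move per call
    expires_at: int = 0                # unix timestamp after which the key is dead

def authorize(p: SessionKeyPolicy, caller: str, action: str, amount: float, now: int) -> bool:
    if action == "withdraw":
        return caller == p.owner       # withdrawals stay with the user
    return (caller == p.agent_key
            and action in p.allowed_actions
            and amount <= p.per_tx_limit
            and now < p.expires_at)

policy = SessionKeyPolicy(owner="0xUser", agent_key="0xAgent", expires_at=2_000_000_000)
assert authorize(policy, "0xAgent", "rebalance", 500.0, now=1_900_000_000)
assert not authorize(policy, "0xAgent", "withdraw", 500.0, now=1_900_000_000)
```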
But I can share a bit kind of my thoughts
on these trustless agents because we have built one.
It's a trading agent and it's specifically kind of to showcase how to do those.
So obviously, I am a TEE guy. That's what we've been doing for the past eight years. So every agent that's doing anything remotely relevant should be running in a TEE. I think that's the only way you can verify the execution of the agent, because you get guarantees that there is some compute integrity. Then the code should be open source: I need to be able to reproduce that whatever is running in this TEE is the app it's supposed to be. I need a transparent change history, so whenever the app was upgraded, I want to know why, or at least see that something happened. Preferably there's even an implementation delay, so if I'm not happy with the upgrades I can withdraw my funds. Then there need to be very clear upgrade rules: which address is the owner of this app, which address can impact the code, how can they upgrade it, what kind of stuff can be changed. And in general, I think the reason why we have trust issues is that everyone has been rugged at some point in crypto, and I really want to avoid this with these agents. And it's not that trivial to do, because whenever an agent is doing stuff on-chain,
it has a wallet, and this wallet was generated somewhere. Normally it's just that developers have created this wallet, they give you the public address, and they obviously have access to the private key because they generated it. And I think that's another point where you can use, well, there are multiple methods, so for example Privy themselves are also using TEEs to generate key pairs within them. And the cool part is you're not dependent on a specific elliptic curve, you can derive key pairs for Solana, EVM chains, Bitcoin, doesn't matter, and you never have access to the private key, unless your code explicitly logs it out at some point, but that would be transparent for everyone to see.
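A self-contained sketch of the depositor-side check this implies. The attestation "quote" here is a plain dict standing in for whatever a real TEE would return; the field names and hashes are assumptions, not a real SDK.

```python
import hashlib

# Reproducible builds of the open-source agent, audited ahead of time:
# measurement (hash of the enclave image) -> release name.
AUDITED_BUILDS = {
    hashlib.sha256(b"agent-v1.2.0-reproducible-build").hexdigest(): "agent v1.2.0",
}

def check_agent(quote: dict) -> None:
    release = AUDITED_BUILDS.get(quote["measurement"])
    if release is None:
        raise RuntimeError("enclave reports code that was never audited")
    # The wallet address is derived inside the enclave, so no developer ever
    # held the private key (unless the audited code itself leaks it).
    print(f"agent runs {release}; wallet {quote['wallet_address']} was generated in-enclave")

# Example quote a user might fetch from the agent before depositing funds.
check_agent({
    "measurement": hashlib.sha256(b"agent-v1.2.0-reproducible-build").hexdigest(),
    "wallet_address": "0xAgentWalletFromTheTEE",
})
```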
So I feel like there's tons of stuff that goes into making these agents trustless, and many people are just not thinking about it. In my opinion that's the big reason why we don't have more adoption of these DeFi agents. And yes, the trading ones, honestly I'm also a bit lost, because whoever has the right trading strategy is not going to make it publicly available via an agent. I know a few teams that are currently kind of operating as an investment fund, just with their trading strategy on Hyperliquid, and they're not going to make it public. But for the low-yield thing, where you just need actual funds, it makes lots of sense.
And I do trust that they have better strategies than I do.
And even if not, I think it's the opportunity cost that's really being the driving point for me.
Like, Yannick, you said the agent needs to be better at analyzing, and you need to be kind of certain about the probability of it going well. But you have the same problem when asking ChatGPT to do deep research on a market segment. There will also be problems, but you still use it because it's so much quicker than doing it yourself. And that's the same for me with the yield agents, where, yes, I could go on Aave, I could go on all kinds of protocols, check the fees, check the bridging that's necessary, and see if it's worth it for me.
But it's just tons of work that's annoying.
And most of the time I would just leave the USDC in my ledger and not move it.
And it would be kind of idle resources, useless for me.
Yeah. Yeah, you're right.
You're, of course, completely right with that notion.
And I want to get back to the point where I said that
if we're talking about things like yield optimization,
you know, those are relatively simple things.
But then if we go beyond that, let's say,
maybe another question we have to answer is kind of like,
how do you see this UI/UX experience? Like, what is the UI/UX experience for you?
How do you envision this?
For me, it is: I go, I rent an agent, or I make an agreement with an agent provider. If they're legit and if they're doing a good job, they will say something like, we will take, let's say, a 0.5% annual fee on the profits that we generate, or something like that. And then I will say, okay, here's my setup. Here are my five buckets. My first bucket is low risk, low yield; I allocate 25% of my funds into this bucket. The second bucket would be something like investing in the top 1000 altcoins, something like that. The third bucket would be, whatever, meme coins. The fourth bucket would be something completely different, maybe LiveArt, like invest in Jeff Koons, and I want to invest in whatever is tokenized from this guy and some other artists. The fifth bucket might be something else, I don't know. And then I will just deposit my funds there, and I'll expect that it is all-encompassing.
So I have to assume that this agent is always up, right? The uptime has to be 100%, and it has to be able, according to my strategy, to try to perform in a certain way, right?
So, but I don't know what it looks like for you guys.
Maybe we were talking about completely different things.
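Spelled out as a config, the mandate Yannick describes might look something like the sketch below. The field names are assumptions, and the allocation numbers beyond the first bucket are invented to make the example add up.

```python
mandate = {
    "performance_fee": 0.005,   # 0.5% of profits generated
    "buckets": {
        "low_risk_low_yield": {"allocation": 0.25, "strategy": "stablecoin lending"},
        "top_1000_altcoins":  {"allocation": 0.25, "strategy": "broad altcoin basket"},
        "meme_coins":         {"allocation": 0.20, "strategy": "high-risk momentum"},
        "tokenized_art":      {"allocation": 0.20, "strategy": "RWA, e.g. tokenized works on LiveArt"},
        "other":              {"allocation": 0.10, "strategy": "agent's discretion within limits"},
    },
}

total = sum(b["allocation"] for b in mandate["buckets"].values())
assert abs(total - 1.0) < 1e-9, "bucket allocations must sum to 100%"
```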
I mean, I think the way that you described it is sort of the perfect scenario, in a perfect world. And maybe that's not that far away from now.
But I think it also sort of implies that it's interoperable with so many different platforms and protocols, which I guess it should be able to be if everything is on-chain and everything is decentralized, you know, if nothing becomes gated as Web3 grows and as more institutions get onboarded. But I also, I mean, I don't know, for me, I also wonder if there is some form of platform or protocol prioritization within the agent. You know, how do you make sure that everything that they're betting on is safe? All of those kinds of things is something that I wonder about. Like, can you set up an agent? Can you designate preferred protocols? Because if all of these assets are fully interoperable with multiple protocols, how do you pick which one you want your agent to buy from?
For me, that will be something important.
And what, like you said, what kind of assets?
And also, what is your reaction to new assets or new categories that enter the space?
I mean, from a technical perspective it could always work, right? You're sort of chaining together multiple capabilities. To me, the concern becomes: as the permutations rise, the ways in which things can go bad also rise significantly, right? And so it's actually, to me, more of a product and market question than a technical question: do people eventually just want to go invest with, you know, a single-shot agent that does one thing really well? And so the human actually becomes the optimizer. Or does this end up in a world where you actually have a smart enough optimizer that's able to go manage all of these individual agents, and maybe it's just a compute problem over time? But I think, over the near term, the market will still move towards single-shot agents that do one thing really, really well and are just the most efficient player in the market at that one thing.
Yeah, interesting. Marco, what about you?
I'm honestly just waiting for a couple of useful DeFi agents that I want to use.
I have tons of agents that I kind of build personally and just play around with, like Ninad. But the actual useful ones, those are difficult to build, and I'm just waiting for teams that do this full-time to actually offer those.
We talked, I mean, Ninad mentioned briefly, like, hey, even in Web 2 we don't have that many financial advisor tools. In a sense, yes, but robo-advisors are also a thing, and it's a massive market. I think every fintech these days offers some robo-advisor. And the coolest thing about crypto is that, as we can see with LiveArt, pretty much any asset is tokenized and tradable, so there are so many more opportunities for yield or return. To be fair, it's also a very complex system that you can build something on top of. But I do believe that it's currently still a technical problem. It's doable, as I kind of explained before with what I see as necessary for trustless agents, but it's just not there. Very few projects are actually working on the security part of these agents, they're just focusing on the yield, etc. But that's the reason why we don't have more users. Everyone here has been burned at some point, and for me to trust a new agent, I don't know, I need a very big endorsement from someone, or I need to see like hundreds of millions of TVL already being deployed by this agent. Before that, it's going to be tough, unless I run it locally and fully understand how it works.
Just to give you some context on the Web 2 side of this: if I'm not mistaken, the US is the most advanced financial market, the best-developed retail market in the world, and even in the US I think it's something like only 25 to 30 percent of people who actually use a financial advisor in any form in Web 2. And this is the most advanced financial market ever.
But that's a lot, isn't it?
Not necessarily, it depends how you look at it. Because, you know, we've had financial advisors for a long time; Morgan Stanley has been in this market for close to 50 years now.
And at least in the old days, all of them used to have physical locations.
You could walk into one of those retail locations.
And now with Wealthfront and all of these guys that do programmatic financial advisory,
you would imagine that for a market that's been live for 50, 60 years,
it would have been a lot more saturated.
I mean, I think that's exactly why crypto is so exciting and why crypto has such big potential to push this percentage. Because the old system was very much catered to wealthy people, and with crypto you finally have this democratization of access to all kinds of financial assets, and you can buy a fraction of it. You don't need to buy the Banksy image if you believe it's going to be a cool, relevant investment, and not just for money laundering,
but you can actually buy a fraction now and be...
So just to add one point there: I think someone on the call mentioned that we all in crypto take our money in stables and so on. I suspect, Marco, to your point, the reason why the number is so low, and it's counterintuitive, is because in the US at least, if you are salaried, most of your investment money gets absorbed before it hits your bank, because it goes to your 401k. And so perhaps you kind of have to build out similar rails in crypto. And 401k money basically ends up going, for the most part, into some mutual fund or just some SPY ETF, right? And so it's like, people don't even want to do that themselves. The first step is just to automate the pipeline end-to-end.
And then we can add specializations around,
you know, yield maximization and whatnot.
So probably the first thing to do is just intercept
those dollars and make sure that the pipeline
ends up being really robust.
Yeah, on another note, I think we mentioned this in one of the previous spaces. Remind me,
what is the hedge fund called that is run by Ray Dalio?
I think now he retired, but...
Yeah, yeah, Bridgewater, yeah.
So I think Bridgewater has been using machine learning for the past decade, probably, in their investment strategy. So it's not like this is a new frontier, right? There are a lot of very advanced machine learning algorithms being used by these quant companies and so on.
So that's been around for a while, but I think we're in a different paradigm now where all of
this technology is going to, in one way or another, become accessible within the Web3 space,
within crypto, for us as regular users. So the outlook is really optimistic and it's really exciting, but getting to that point where everybody has access to a certain level of financial AI, essentially, is something which is indeed highly dependent on the trustworthiness of the AI and the underlying models, right?
So machine learning doesn't rely on large language models, but AI agents do largely rely on linguistic capabilities, right?
So it's a different ballgame, but it's very interesting indeed.
So to continue the conversation,
I would like to talk about,
so trustless AI agents and verifiable AI agents are two different things.
But why do they matter so much to DeFi specifically, right?
So why is a trustless agent that is running in a TEE, and that is verifiable through the truth protocol or through Mira's Verify API, why is that so important? What do we gain from that, instead of just saying, hey, this company is super good, we don't know how it works, it just works and we trust it? So why do we need that trustlessness and verifiability?
Because we're operating in these trustless systems.
There's no bank that can protect me if my funds are lost.
There's no one I can really sue
because it's a global market from day one.
So I really want to make sure
that whoever's building this agent does not have access,
that they cannot change the output or manipulate it, that they cannot withdraw my funds that are in this agent. So for that I really want verifiable execution within this black box that gives me these guarantees about the compute, and I also need a way to verify that whatever is happening is still the code that I'm trusting, that was approved by, I don't know, whoever checked it or audited it.
And how important is explainability in that equation, in terms of you actually understanding everything?
For me, it's not. Because especially if we talk about agents that have some unique logic,
it doesn't matter if it's yield optimization or trading,
both is kind of the IP, the secret sauce of this agent builder.
This does not need to be public.
I just want to make sure that the execution happens as it should be,
that my funds are secure, that whatever the agent is doing
is what is coming from the secret sauce.
There's no one sitting behind and just providing an output,
and it has nothing to do with the secret sauce.
So this can still be encrypted and private.
That's the biggest point. We can move on.
Oh, okay, right, right. Okay, yeah, I think that's a very interesting point. So if I don't get you wrong, you're saying that you don't have to understand what's happening; you have to understand that what's happening is what's supposed to be happening, and that there's no such thing as prompt injection, or no crazy things like, let's say, to Mira's point and Ninad's statement earlier, that there's a hallucination happening, or, maybe to our point as a swarm network, that there's information that is unverified or unreliable. You want to avoid those?
Those are super difficult to avoid. I want to avoid the one that is offering me a service ripping me off: suddenly providing a different service than they marketed, suddenly taking the funds that are in this marketed product and using them for something else.
So I want protection against this, against the owners of this agent, the ones that have built it, and against the ones that are actually executing it, because maybe you have your own models, but they're running on some hardware from someone else, so you also need some security from the hardware operator. I cannot really improve the hallucinations or the actual agent actions; that's what I put my trust in. That's why, I don't know, maybe I start with 50 bucks or something and see if the agent is trading badly. Plus, by the way, you always have historical data and you can back-test this. That's what I trust in, whether this agent suddenly starts failing.
Okay, but that's the same with Bridgewater. Yes, people trust Ray Dalio, they trust that the Bridgewater fund has some specific secret sauce for why they are so profitable, but I really want guarantees that they don't suddenly rip me off, that they don't send my user funds to wherever, that they don't, I don't know, change the whole, well, no team is that stupid, but that they don't completely change the product that I have invested in, and that they cannot rip me off. I think that's the clearest for me.
Like, how do you know it's not a Bernie Madoff, right? And that goes back to the previous discussion we had, which is: multiple things
need to be true for DeFi yield to actually show up in your account.
It has to have the, the system has to have access to good data.
You need a service for that.
You need to make sure that the models that they are using are not hallucinating.
Mira is helping out with that.
You need to make sure that the incentives are aligned.
You need to make sure that they are doing what they claim they're doing.
You need to be able to verify that.
So each step of that operation needs its own verification.
And that's why you have players trying to specialize in each step of that operation.
And so if each of us does our job well, then you are basically narrowing the error bound at each step, and so the final error bound should be much, much narrower.
Yeah, exactly. I think it's two sides of the same coin. On one hand, you really want strong quality of the actual things that are being done, which requires verifiable data, etc., where you guys are helping.
But at the same time, you also need the guarantees that the execution happens verifiably.
Not just the input and the logic of the whole thing, where it's depending on, I don't know, some off-chain data or some LLMs, but I want to make sure that the team cannot do something malicious.
Because even if all of the input data is verifiable,
even if it's verifiable that an LLM is supposed to be doing something,
they have their own trained models, it's open source, it doesn't matter.
This still does not protect me from them suddenly withdrawing all the funds with the private key from this agent wallet, or from them suddenly changing something in the code while I have no influence over it.
Yeah. So, to our discussion, it seems like there are many different layers, right? Each user, each person, has a different vision of what level of security should be attained.
Now, finally, I think the final frontier of verifiability, trustlessness, and potentially explainability
is once these systems become fully autonomous, right?
I think that if you speak to anybody currently,
like their vision of like an AI agent
is basically still kind of like a chatbot interface, right?
It's slowly starting to shift. It's slowly starting to evolve.
But there will be several steps going from now.
Now you have semi-autonomous instances
that have access to their own machines.
I think maybe Manus is one of those examples
that most people will understand and know about.
That is a first step toward autonomous AI.
And then you will have these AIs that can run for days without you having to interfere. It's already practically possible; it's just a question of how functional it is and how well they execute the set tasks, and so on. There's still this requirement for many checkpoints in between, right, to make sure that this agent, this AI, is still doing what it's supposed to do according to you, the commander-in-chief. But we're going to move beyond that,
where these things become so capable
from a reasoning and action perspective, right?
They're operating their own machine perhaps
or I think the future is really
that they operate their own machine
and that they have access to literally
every single aspect of our internet
every nook and cranny, you know, anywhere. You don't need API endpoints, you just need an interface, and probably there's going to be some kind of verification method where there's a login portal for humans and a login portal for AI agents, and so on. I think that's where we're going to head, because just stacking infrastructure
and like this whole intricate web of APIs
on top of APIs, on top of APIs and so on.
Like that can happen too.
It's just going to be very messy
and very hard to operate even for AI agents
and especially for builders too.
Because that would mean that you continuously have to build new connections between systems, and that's going to be an endless cycle, unless that can be automated in some way, who knows, right? But the final frontier is full autonomy. Full autonomy meaning, indeed, coming back to my notion where I deposit funds into this agent's treasury, and this agent, for the next five years, is just going to do what I set out for it to do initially. And yeah, I think that there's definitely not going to be
a single agent that's going to be capable of that. I'm really counting heavily on multi-agent systems, where you build certain agents that are very good at yield optimization, I build certain agents that are really good at trading certain assets on LiveArt, and so on and so on. And we're going to solicit each other's agents through some kind of orchestrator, a main orchestrator, agent cluster, or swarm, and we're going to pay each other, and everybody's going to make money together, just as an economy would function, like the financial industry functions currently, but run by humans, right? So I think that there's going to be kind of a mirror image of that happening, but it's just going to be running at a much higher rate and a much higher pace.
And if you think about that, it's going to be so interesting to see how the financial system, but also definitely crypto, is going to change, right? If we're all going to offload our trading and our buying and our selling to a bunch of agents, the velocity of this market is going to 1000x or 10,000x. Maybe it's just going to exponentially increase the velocity of trading, the velocity of new assets being created, the velocity of new assets being dumped. It's probably going to be very interesting,
because the interest for each individual agent,
especially if everybody's building their own agents,
is the optimization on behalf of that person, right?
So it's going to be a very short-term predatorial environment
and you need verifiability.
And trustless is interesting because you say trustless,
but it becomes trustworthy, right?
So I see a bunch of people have requested speakers
and we're almost at the end of the spaces.
I'm going to let on some people right now, actually.
And just ask a question to anybody or to the panel at large.
If you start promoting something, I'll just mute you.
That's not what the space is for.
We're here to ask questions and continue the conversation.
I will let Rari.sui speak first.
Yo, bro, you have speaker.
And then Daniel has been asking for...
Hey, Daniel, you can speak.
Oh, yeah, Rari, we can hear you now.
Yeah, what's up. Thanks for the space. Nice to see Rebecca here.
Hello, Rebecca.
Yeah, I guess I would ask, if you could put it simply, what differentiates you guys from the ocean of other AI projects, if you had to explain it to a boomer?
Who are you specifically asking?
Really anybody, but I'm particularly interested in you, with the swarm network.
What really excites me is the underlying infrastructure
that we're using to allow users to very rapidly build
and deploy multi-agent systems.
So traditionally, if you look at, I mean, traditionally,
I shouldn't use the word traditionally
because we're just getting started, right?
But if you look at the current AI agent builder space
or AI agent space in general,
and even the larger like swarm orchestration space
or multi-agent system space,
and you look at the companies, both in Web 2 and in Web 3, that are building in that space, they're following very primitive and, I almost want to say, not first-principles-thinking approaches in terms of how they allow their users to interact with their tech stack. So for example, you have n8n, I'm not sure if you know n8n, that's like an agent builder. You can build multiple agents, you can have agents interact with each other, but it's all really complicated, it's all drag and drop, it's all non-intuitive, it all has a really steep learning curve.
And what we allow our users to do is just use natural language to orchestrate and to manage every single aspect of their agent swarm,
a cluster of up to 100 agents, and then you can have that agent cluster join a swarm.
That's probably the most straightforward explanation.
You describe what you desire, what you need,
and then an agent cluster is orchestrated for you.
You have visibility in a traditional UI environment
of what an agent is, what it is capable of,
what endpoints it by default is connected to.
You have the ability to then tell the orchestrator,
this agent should be connected to this and this,
and then it can help you to set up those connections
insofar as those integrations have been added
to our platform or to the capabilities of the orchestrator.
So that's what really excites me.
And then, yeah, one more very important aspect is that we have this truth protocol,
which essentially makes sure that whatever an agent within our system does is verifiable,
trustworthy, and just minimizing the risk for our users as much as possible.
Okay, great. Yeah, thank you for that. Just one more question, if I may. I was looking into you guys earlier, and I'm really excited about the integration with Walrus, I think that's amazing. And yeah, so I'm looking at the licensing. Does the licensing include basically infrastructure or credits? What are the limits that we should know or be aware of if we're a builder working on agents and are interested in using your platform to build them?
Yeah, that's a great question. So you will have a certain amount of credits available to you
once we launch the platform, but that doesn't mean that you can infinitely run agents and use up thousands of dollars' worth of inference costs. So eventually, once you've used up all of those credits, you will still have to use either our model or your own API keys, and you will have to pay for that. But the cost for that is very, very minimal; it depends, of course, on which models you use, but we're not really talking about large amounts of capital. To use all the other capabilities, like using our orchestrator, for example, to orchestrate your agents, and the platform itself and all of the integrations and so on, you don't have to have extra funds to operate that.
One thing you do have to note though
is that eventually if you want to join a swarm
or you want to launch a swarm by yourself
and a swarm works like this.
So we have three different units. The first unit is an agent, the second unit is an agent cluster, and the third unit, the largest unit, is a swarm. A swarm can have its own token, but for that you will have to have liquidity, and that liquidity will have to be provided by yourself too. So it's not like we're going to provide liquidity for any asset that is going to be launched on top of the platform. We're going to incubate certain swarms that are going to be community-driven projects. We have an alpha group with a thousand agent license holders; some guys actually hold 50 or 100 licenses, and they're going to build different solutions alongside us. We're going to incubate those guys, and maybe we will be liquidity providers for them. But in general, if you do want to launch a unique asset for your swarm, you'll have to be the liquidity provider as a collective. So you can allow other people to join your swarm, like their agent clusters joining your swarm, and then, depending on how large you want to make it, how much liquidity you want, and how liquid you want that asset to be, obviously you guys will have to pool our native token and then be able to launch that asset.
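A minimal sketch of those three units as plain data structures, just to restate the hierarchy. The field names are assumptions; only the agent/cluster/swarm layering, the 100-agent cap, and the optional swarm token come from the description above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Agent:
    name: str
    capabilities: list

@dataclass
class AgentCluster:
    owner: str
    agents: list = field(default_factory=list)

    def add(self, agent: Agent) -> None:
        if len(self.agents) >= 100:            # a cluster holds up to 100 agents
            raise ValueError("cluster is capped at 100 agents")
        self.agents.append(agent)

@dataclass
class Swarm:
    clusters: list = field(default_factory=list)
    token: Optional[str] = None                # a swarm may launch its own token;
                                               # liquidity comes from its members
```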
Awesome, yeah, thank you very much. Really cool stuff. Again, congrats on the integration with Walrus and that you guys are building on Sui.
I'll give the rest of my time to someone else.
Hey, Daniel, you have speaker slot.
I saw that you requested it quite early.
You guys are building really amazing things.
Shout out to Swarm, Mira, LiveArt, and Oasis.
My question would be concerning using Web3 solutions to solve Web2 problems.
What are the biggest challenges you guys face?
Wow, that feels like a deep question.
Yeah, it is. I'll keep it short. I think what I generally use in my decentralized AI pitch is that AI really does have a trust problem. Like, I don't trust OpenAI at all to keep my data secure; I don't even trust them to provide me the best models.
That's why so many are working actually
on decentralized training for foundational models, projects like Gensyn and Prime Intellect.
The cool part about crypto
is that we have managed to build trustless systems.
Like any blockchain is as trustless as it gets.
Like you don't need to trust anyone
to be able to send money to your
friends. You can utilize these networks, you can build dApps on top of them. It's really difficult to make this accessible to AI developers. I think we've solved the decentralized compute provisioning, I mean not we as Oasis, but we as an industry: if you want H100s or H200s to train your models, you can rather easily get them, and you can get them way cheaper than if you go to GCP or any cloud provider. I can share from our personal experience at Oasis: where we focus is the actual trustless execution of any AI application, of any training, and enabling stuff like trustless agents, verifiable compute, stuff like private GPT. I just want to communicate with LLMs without the risk of there ever being a leak and everyone seeing all the prompts I did, connected to my email address or wallet address.
Interesting. Yeah, I think, Luisa, you have a good point there
with LiveArt, no? Like, solving Web 2 problems with Web 3 technology.
Yeah, yeah, exactly. Well, the problem is that for a lot of our assets, the majority of the market traction and market data that exists for them exists within Web 2 market players, like auction houses, things that do not have any connection whatsoever on-chain. So it's like, how do you bring that data in to inform on-chain decision-making processes? Which is kind of the problem that we're trying to solve with our new DeFi protocol.
So, yeah, I mean, it's a difficult problem because, again, all of these market players, all of this trading that happens off-chain, they're all
completely centralized entities, right?
So how do you bring a transparent interpretation of data?
Thankfully, LiveArt has an art price database that covers around 350,000 unique artists. But there's still a lot of work to be done there.
Yeah, there are so many answers to this question. What I would say is that it's really about making sure that every piece of technology that can benefit our society and humanity, or you as an individual, is accessible, is trustworthy, is trustless, you know, and is not being gatekept for whatever reason by centralized or institutional players, which is happening very much in the Web 2 world still.
You know, like I can give you an example.
My Robinhood account just got closed off for no reason.
I just got kicked out from their platform.
And those kinds of things should just not happen.
If I built agents, I should be able to use those agents forever, and it shouldn't be that, because somebody somewhere in an office decides that I am against their establishment and should be punished for, you know, saying something on Twitter or on X or something like that, my rights to participation can be taken away from me.
I think that's, yeah, that's not cool.
So I think that the democratization of this technology is very important.
And I think we're a little bit over time. I'm going to allow one more person up. Marco, you still have a few minutes?
Yeah, let's do this, my next meeting is in 25.
Okay. Hey, Rebecca, you have the speaker slot. Hey, Rebecca, you can ask a question if you want.
Hey, can you hear me? Yes. Great. I actually just, sorry, some people noticed I was here,
so I just wanted to say hello, and I've just enjoyed this a huge amount.
And I'm so excited and pleased that we have teamed up together.
Everything that you have said has just made me so proud to be part of this.
You're thinking about all the things we're thinking about, and I love the fact that you are actually building it, so thank you.
Oh, awesome. Thank you for joining the space, thank you so much. I'm very happy to be here, building with you guys on Sui, the most amazing chain in the world, and yeah, hope to speak to you soon. I think we have something coming up in the next few days between us too.
That would be the X Spaces on Friday night, or Friday night UK time anyway.
You guys were at ETHGlobal.
There were so many projects presenting to us
for the bounty and they utilized walrus
also as part of the stack for the storage.
Yeah, we actually work on all sorts of L1s,
so you'll see us at hackathons around the globe sometime soon.
You know, like I always make a joke,
like the Mysten Labs people, they are everywhere. You know, they even hide in my closet.
Oh, come on, I just, I'm invited, what do you mean? No, no.
All right, all right, thank you very much, Rebecca. Appreciate everybody joining us today. We had a bunch of people here, 500 people still listening. Thank you so much, guys. We had a great time, and
see you in two weeks. And if you are interested in hearing what we're doing and building with
Walrus, then please join us on Friday. And see you. Have a nice evening, afternoon, morning,
wherever you are. Enjoy your day. Thanks, everyone.