Okay. Hello, hello, everyone.
Matt, great to have you on.
So I guess we can go ahead and-
Yeah, we can hear you just fine.
Yeah, great to meet you and thanks for joining us today.
With your busy schedule and everything,
but yeah, it's really exciting to have you on.
I'll just give a bit of an intro.
So my name is Dahlia, and yeah,
I just wanted to welcome our listeners
all to our AI Frontier series.
So if this is your first time tuning in,
we're just having a series and doing like
an open conversation with founders
building projects in the AI space.
So this episode is gonna be an exciting one
because we've got Matt, the founder of Gaia,
joining us and we're going to be covering a really hot topic right now,
which is all about AI agents and autonomous agents specifically.
So, yeah, Matt, great to have you here.
Before we get started on diving deep into the conversation, let's just start with a quick intro.
So can you tell everyone just a little bit about yourself
and what Gaia is building?
Yeah, of course, I can give a quick intro.
So basically I started my career
in corporate innovation, developer communities.
I worked for a small group called Angel Hack
and we organized hackathons in 60 different countries.
We also did accelerator programs
and various innovation events with Fortune 500 companies.
But really like around that time,
I got to travel the world and meet a bunch
of software developers building frontier technologies.
So it was a huge opportunity for me to see how developers
interacted with new SDKs or APIs, and how developers
from different cultures thought
and were building software.
And then around that time I got into blockchain
and I did a hackathon for Barclays in New York, ironically, in 2015.
And met a bunch of engineers at this event who were giggling at the fact that a bank
was doing a blockchain hackathon.
I didn't really get the joke at the time.
And blockchain hackathons back then were very broad.
People didn't know what to build on blockchains. And so the verticals were
like, you know, build a FinTech app, build something
in public goods or impact, build a supply chain project. It was very broad, but I
really quickly started to understand the why of blockchains and it just
pulled me in. And then a few years later, I actually kind of went forward on that
journey and went to work at JP Morgan on Quorum, their open source fork of Ethereum.
We were doing Ethereum work for banks and enterprises. And that was a really amazing opportunity because I got to understand open source software
and how to work with various enterprise engineers
and get them to adopt new technologies like Ethereum.
And then that project got acquired by ConsenSys in 2020.
So I was at ConsenSys Software building out
the dev community across all our products, built an accelerator program, did a bunch of Ethereum and
Web3 shenanigans in the DeFi, NFT, and DAO space. Just had a lot of fun learning about
the Internet of Value. And this next adventure in decentralized AI is kind of this next frontier for a dynamic
internet that has the ability to contextualize and think and execute. So for me, it's been
a very crazy adventure in the past 10 years, but it's all kind of boiled down to one thing,
which is I love working with developers. I love bringing developers together and solving complicated and creative problems. And, you know, AI is just one of
those next frontiers for us.
All right. Yeah, that's really cool to see how long you've been in the space. And yeah,
decentralized AI really is like sort of the next frontier. So I'm gonna pass it along to Ganesh now,
who's gonna just like dive deeper with you
on the topics that we wanna cover.
So yeah, Ganesh, you can just take it away from here.
Ganesh, if you're speaking, we can't hear you.
Yeah, your mic doesn't seem to be working.
Yeah, these Spaces are always like a little bit buggy, but maybe try rejoining and coming back in.
Sorry folks just one second. Sorry, Matt, just give us one second while we're trying to figure this out.
Okay, so third time's the charm.
Matt, I think you and I first connected when you were part of ConsenSys, probably around
the Snaps program that MetaMask was about to release.
I don't know if you recall those days, but there was a pretty exciting product launch
and a go-to-market that you guys had built.
Okay, now it seems like his mic isn't working now.
I think he's still set as a listener.
There we go. Oh my goodness. Yeah, we're all good. We're all set.
Weird. We actually kept bugging out. It was crazy. Anyways, I'm back. I'm so sorry.
No, it's not your fault. Yeah, it's just one of those days. But yeah, I'm sorry. Go ahead. So what was the question?
So I was just going to say that you and I had connected
a couple of years ago just as you were building that out.
And I don't know if you remember those days,
but those were exciting times.
Can you guys hear me now?
Yeah, I can actually hear you.
It shows that he's trying to connect.
I don't know what's going on here.
What is going on? Okay. It shows he's on, but it says he's a listener.
I got jumped like three times.
I'm not going to touch this device.
Let's just hope it works.
Yeah, man, I do remember.
So it's such a pleasure getting back in touch here.
I was going to say, let's jump into it
before I get rugged again. Awesome. OK. Sorry, go ahead.
OK, so let's get right into it.
So Gaia, that's how I pronounce the name of the project,
OK, so let's start with the vision behind Gaia.
And has this vision evolved as you've started to take this to market?
So I'll start with kind of the why
and kind of the big problem we're solving for.
If you look at the current AI landscape,
the centralized AI landscape, we're facing this time where
we're giving these systems our data.
We're training models further and entering a world where we're going to become super
dependent on these models. And if these AI infrastructures
and how we use them are censored,
or again, there's some bias being created,
it becomes really dangerous.
And so what we're really fighting for here
is censorship resistant, open source AI,
where the economics really enable user-owned AI and that we can build applications that
leverage our knowledge and we see that as living knowledge systems. So as we see the scale of
open source large language models, we believe that it's a zero-sum game. Creating these models is becoming cheaper and cheaper.
They're more and more available.
When we were first starting, there
was something like 200,000 open source
large language models on Hugging Face.
Now today, there's like 1.5 to 1.7 million available models.
And every week, we're seeing that another LLM is beating out
a benchmark of the incumbent.
So it looks like a race to the bottom.
And it looks like with centralized AI, the only way
you can monetize and recoup these billions of dollars
they raised to build out these original LLMs
is to start monetizing the customer and doing that
through either kind of selling data or perhaps advertising. And so we will enter a world where
if we're using AI on a daily basis, again, it would be very dangerous if one, we're in an emerging
market and we're using an AI infrastructure that is built by a small group of folks in Silicon Valley who don't
have any context of the culture.
But also worst case is we're being fed advertisements inside of our GPT or we're being fed answers
that aren't necessarily truthful.
And so for us, it's super important that we build censorship resistant open source systems
so that people can really own their own domain knowledge,
build censorship resistant applications on that,
and basically drive forth new economic models that
help us monetize our own knowledge in this AI universe
versus becoming the product ourselves. I think one of the problems we see coming down the line is,
one, you're training the models on your own data and you're just giving that freely to a centralized
provider and perhaps you end up becoming the product. But if you're a builder and you're
trying to build applications in these ecosystems,
this is happening time and time again where look at Google and Apple, you'll be charged
20 to 30% margins on your business or on your GPT application that you serve to customers. Or perhaps you're using OpenAI APIs and the fees
for utilizing that service make it economically impossible to scale your business. You have to
factor in those inherited costs and so it's really hard to compete at that pace. And so for us, it's super important that,
you know, high level, we build user owned AI, and that we help every company become an AI company
by leveraging open source. Incredible.
The AI space is a super dynamic space.
So I believe Gaia is a couple of years old.
And what you're starting to see now is basically every week,
there's some kind of new breakthrough,
either with DeepSeek or OpenAI's Operator or Manus.
And how is this initial premise that you started Gaia with,
has this evolved since day one?
Yeah, I think the ball has moved for sure. I think when we first were getting started,
nobody really cared about AI agents. We were building agents early last year
and people were just calling them chatbots.
People didn't understand, I say people, but I mean like VCs, developers, even savvy end users.
They still chat with GPT and they're like, oh, that makes a ton of sense. It's easy to touch and understand how it works.
But when it came to decentralizing
that whole component, so like the kind of compute component,
the data component, the middleware and dev tooling,
what we and a lot of other folks
call inference, it's still a mystery
to a lot of people in the world. I don't think they
understand, one, what it is, but two, why it's important. And we saw like in this past year,
that mindset has changed drastically. People now understand kind of where the AI attention is going
They definitely understand how the space
is constructed a lot more.
We've seen developer attention increase drastically.
So like Web3 devs and traditional devs are
finding open source LLMs much more interesting
than they did a year ago.
And I think the agent space is kind of like our demand economy. If you can imagine inference as like this gas station,
we're providing domain specific knowledge paired with an LLM.
If we create this library of the world's knowledge in this living and breathing system,
we need people to actually be demanding that service, demanding that supply of knowledge.
And so we are very interested in more agents being out there and more agents having utility
and having autonomy. We see that number has driven up drastically. I can't recall
where I got this exact number, so to be fair, fact-check me on it.
But I was hearing through somebody that we're now looking at like 100 million agents on the internet
today. And it seems quite plausible that we should have more AI agents
on the internet than there are humans on earth in the next one to two years. And that's crazy. I think we weren't originally thinking it would be moving this quick. We also didn't think that
the cost of all the Amazon fees would be dropping so fast. The speed of innovation is just tenfold
what we thought would be happening.
It's a much different world we live in today
than we did six months ago, which is just insane.
You have agents building agents.
You have an insane amount of dev tools
where you have this strand of like vibe coding
where non-technical kind of PMs or entry level devs
can spit out agents or apps in like 45 minutes.
I'll say like, if you're an engineer,
you're probably gonna pick apart this person's code,
but whether you like it or not,
this is how the internet's gonna come to fruition. We're going to see a lot of spaghetti code and quickly shipped agents and
apps that are either built by AIs or built by humans, but we have all these tools at our
disposal. And so we've just seen this radical shift in, let's call it shipping agents, shipping software.
And it's leading into kind of the next stage of things, which is, well, my agent has hit a wall.
It needs specific context to do the task I want it to do.
And you're faced with a couple tradeoffs.
One is, do you build your own infrastructure, your own inference with an open source LLM
and continue to train that as a kind of a business operator?
Do you leverage something like Gaia?
Or do you just use something out of the box like OpenAI or Anthropic or Grok?
And again, there's trade-offs.
If you have to build on your own, it's cumbersome.
You might not have access to a network of knowledge. If you have to
lean on a centralized provider, again, you're giving your data away. There may be some bias.
This thing has to crawl the entire internet to answer some small specific question you have.
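To make that middle option a bit more concrete: many self-hosted inference stacks, and Gaia nodes as they are described here, expose an OpenAI-compatible chat completions endpoint, so moving off a centralized provider can be little more than a base-URL change. A minimal sketch, assuming a hypothetical node URL and model name (both are placeholders, not real endpoints):

```typescript
// Minimal sketch: point a chat completion call at a self-hosted,
// OpenAI-compatible inference node instead of a centralized provider.
// NODE_URL and MODEL are hypothetical placeholders.
const NODE_URL = "https://your-node.example.com/v1";
const MODEL = "your-domain-tuned-model";

async function ask(question: string): Promise<string> {
  const res = await fetch(`${NODE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: question }],
    }),
  });
  if (!res.ok) throw new Error(`Inference request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible servers return the reply in choices[0].message.content.
  return data.choices[0].message.content;
}

ask("What is the boiling point of mercury?").then(console.log);
```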
And then I think there's the economic piece we mentioned,
but one of the bigger challenges is, like, you know, in crypto we think we're such a big
industry, but I don't think it'd be too hard for big AI to cut the cord on anyone in
crypto using OpenAI APIs. And that's censorship. Imagine, even in some regions,
OpenAI is actually blocked. Korea, for example.
So what happens when an AI agent does some behavior that Silicon Valley doesn't like
and they come to the thought of like, let's just block anyone.
Any agent that is doing a Web3 process,
signing a smart contract, deploying tokens, you name it,
any of those we will block from using those APIs.
So that's the next phase of things,
where we'll start to see some things break between centralized AI and crypto.
And that's what we're trying to help people transition to once that happens.
Sorry for the long-winded rant.
Incredible. That's just giving me more ideas to ask follow-on questions.
So if you think about crypto, open source, and these trustless environments,
there's a couple of keywords that you just shared.
For example, it's permissionless,
it's censorship resistant, it's trustless,
it's open source, of course.
And out of these different attributes,
I would love to hear your thoughts
on what actually matters for agents to become autonomous,
as opposed to just being able to tell these agents what to do.
I mean, the whole point of agents is to have intelligence and autonomy.
So out of these attributes, what, in your opinion, is, I would say, the sequence of
events before agents become autonomous?
Yeah, so I guess, first things first, let me explain why we want them autonomous, because
I think a lot of folks are nervous about the idea of an autonomous agent. I'm excited about them.
I think a lot of folks are also wary about whether they would actually be performant.
And so I think the one advantage that Web3 has is that we can create agents that solve the big three issues we have in the AI space.
One being the problem of hallucination. Two,
you have the agentic problem. And then you have this trust problem. So
hallucination being, in centralized AI or in an LLM, if we ask an agent a question
and it's not specifically trained to answer that question,
it might provide you the wrong answer.
We had an LLM that we trained as like a chemistry agent.
And it was specifically trained on
being a chemistry tutor.
And we were asking questions to the chemistry tutor
versus OpenAI, a similar set of questions.
And you'd receive more hallucination from OpenAI around this one question, which was, what is the temperature of mercury?
And if you ask that to the science agent, it'll tell you the temperature of the chemical element.
And if you ask OpenAI, it starts telling you about the temperature of the planet. And so there's like this hallucination problem where you could use Web3 primitives like attestations
to actually guide agents toward doing the right things, doing the right tasks.
Just as we would humans in Web3, we can give them tokenized rewards or attestations for doing the right tasks.
And, you know, this is early. There's so many cool things we
can do in the future, but for now, there's unique ideas that Web2 doesn't have.
And then for the agentic problem, we have the issue of, okay, in Web2, your agent is bound to the context it has,
the resources it has, and the execution environment.
So for us, or for the agent field,
there's this issue of having the agent come up
with unique context of what it needs to be doing.
And if it's telling itself what to do,
and it starts going down the wrong path,
it starts hallucinating or kind of
not accomplishing the right process,
you end up wasting resources,
perhaps the agent doesn't have permission to do what it needs to do.
And so I think there's a huge opportunity
to build out identity for agents, reputation systems.
There's a project like Hats Protocol
where you can pass roles to wallets.
Any intent that a wallet can do, an agent
will be able to do or inherit.
So imagine you tell your agent, hey, go stake this ETH and borrow against
it and then go trade on this DEX. Go purchase these liquidity positions or these tokens.
If you don't find those tokens, take this path. If you want it to really think on its own, it'll come back to you and say that it did
a task and if you give it positive reputation, like thumbs up, like, hey, that was perfect,
then this thing can actually start to learn natively what it needs to do next because
it has some sort of a track record essentially.
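To make that feedback loop a bit more concrete, here is a minimal sketch of recording a thumbs-up or thumbs-down for a completed task as an on-chain attestation the agent can accumulate into a track record. The reputation registry, its ABI, and its function names below are hypothetical illustrations, not Hats Protocol's or any real project's interface.

```typescript
import { ethers } from "ethers";

// Hypothetical reputation registry interface (illustrative only).
const REGISTRY_ABI = [
  "function attest(address agent, bytes32 taskId, bool approved, string note)",
  "function reputationOf(address agent) view returns (uint256 upvotes, uint256 downvotes)",
];

async function reviewAgentTask(
  signer: ethers.Signer,
  registryAddress: string,
  agentWallet: string,
  taskId: string,
  approved: boolean
): Promise<void> {
  const registry = new ethers.Contract(registryAddress, REGISTRY_ABI, signer);

  // Record the reviewer's judgment of this specific task on-chain.
  const tx = await registry.attest(
    agentWallet,
    ethers.id(taskId), // hash the task identifier to a bytes32 key
    approved,
    approved ? "Task completed as instructed" : "Task deviated from instructions"
  );
  await tx.wait();

  // Anyone thinking of "borrowing" this agent can check its running score.
  const [upvotes, downvotes] = await registry.reputationOf(agentWallet);
  console.log(`Agent ${agentWallet}: ${upvotes} up / ${downvotes} down`);
}
```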
And then if you're a third party and you want to borrow my agent to go do these things,
you know, this agent would need to have some sort of identity or reputation, some kind of credit score essentially, or some green check mark that shows that this agent is verified
and trusted by a certain party,
it's being hosted by someone you trust. And then that starts to lead into this trust problem where, you know, if we're
working with an internet of 8 billion agents, how do we know which ones to trust? How do we know that the data they're
trained on is trusted? That's kind of what we focus on at Gaia, more of the verified
inference on the back end.
But for the agent itself, they're
going to need to be trusted as well.
And so there's all these really cool Web3 projects,
infrastructures that have been building for years,
that now are able to position themselves for this new community of AI agents.
That's been phenomenal for me.
I think a really fun example,
I guess, one of my good friends,
Billy Luedtke from Intuition,
we had a conversation last year at some point
around Intuition, around his project.
They're building an attestation service
and much more, but attestations being like
an on-chain agreement that something is true.
And he had a lot of these examples of humans
doing that behavior being like,
oh, we're building Intuition so that like a human
can verify that they are the owner of this Reddit account,
or their preferences, like a latte at Starbucks.
And you can take this and program that into different situations in Web3.
And during that conversation, I was like, yo, everything you're building, like your customer is not humans, your customer is agents. And so
they've really pivoted a lot of their work into, you know,
thinking about the largest user of Web3 that
is not here yet, which are agents. And so, you know, I'm
more interested in working with existing Web3
companies that are now positioning themselves this way.
I've seen a lot of folks in different blockchain ecosystems
try to build all this stuff from scratch.
And I don't think that's how software should be built.
I think it's in our best interest
to leverage what's already out there.
And it's been really exciting to see all these people shift their interest to, again, this new community of AI agents that will slowly start playing a role in the work we do.
Amazing. Okay, thank you for that. My next question is about the role of a blockchain.
I'd love to hear your thoughts on two, I would say,
facets of blockchains and agents.
One is people say that agents,
for them to be completely autonomous,
it's almost like a smart contract,
which is able to execute on-chain completely.
And the second facet is access to on-chain data so that the decisions that the agent is able to
make comes from a cryptographically secure and cryptographically verified information source
so that you cannot really poison the agent and torpedo its decision-making
process. I'd love to hear your thoughts around the role of a blockchain in a system like Gaia.
Yeah, so a few things here. One, you have
the verification of inference and compute data
and how all that is coordinated as resources.
The blockchain is a perfect place to...
Let's reverse engineer it. If an agent needs to move money or hold value, it can't do that efficiently on incumbent banking rails and
value systems. An agent's not going to go walk into a Wells Fargo branch and open up a credit
card. Good luck. I worked at JP Morgan. It's not going to happen. If a human wants to be in the
loop for that process and be liable, sure, go nuts.
But again, this is why a lot of Web2 folks
will never have autonomous agents, really.
There's going to be business.
Until it becomes possible for an agent
to be its own entity legally in a jurisdiction,
it's going to be really hard to do that. And so there's this kind of coordination
of resources piece.
I see that he's connected, but I think he's reconnecting.
Give us a moment.
Okay. I think we can wait for another minute and then if Matt's still having problems, we can just wrap it up.
Yeah, I'm trying to see on the back end if he's responding, but I haven't heard from
him yet. Apologies, everyone. We're having a bit of an issue with the Spaces today; it's just one of those days.
It's not just today. I've done probably a dozen Spaces so far this year,
and every time I need to connect at least two or three times before it works.
Yeah. Okay, let's wrap up here then.
Yeah, I guess that's all. But thanks, everyone, for your time.
And Ganesh, thanks for joining the AI Frontiers.
Yeah, we'll see you all next week for another episode.