Hello, everyone. My name is Neil, and I'm the community leader over at DataHaven. We're still going to be playing some music as people come in and join us, just for a few more minutes. We have Sabrina over on the Moonbeam account. I just DM'd her because her music isn't coming through well, so I'll be playing some music for us while we're still loading in.
Alright, alright, alright.
We're going to kick off shortly.
Just trying to get Nader up on stage here. Give us a sec
Awesome. Nader, you there?
Hey, how's it going, everyone?
Great to see ya. All right, all right.
So good morning, good afternoon, good evening, good day.
My name is Ryan and we are about to kick off
what is guaranteed to be a fun, exciting, entertaining
and certainly educational chat.
Now, if any of you know me well enough, you'll know that, one, I like to do things differently, and two, as most people around me will attest, I love a good story. And so I thought, why not kick off with a real story, and I will try my best to make it a short one.
Why don't we kick off with a story that will lead us into the conversation today,
because we're going to be talking about all things AI,
and we have our great partner here, EigenLayer.
Let me start off with my story.
I was asked a simple question a couple of weeks back: what would I do, and what would I build or create, at the intersection of AI and data?
I can tell you what, I've thought about this for a very,
very, very long time and the reason
is it is very near and dear to my heart.
And so the answer I gave is quite simple.
I would build something that helps the education system.
So one, my mom, bless her soul, who I love to pieces,
has been a school teacher for the majority of her life. Two, I have raised three kids that are going through
or have gone through the school system.
And so firsthand, I've seen the issues
from both the teaching perspective
and I've seen the issues from a learning perspective, because my three kids learn in three different manners, and in fact I probably have a fourth manner as to how I learn myself.
And so the reason I find it so interesting and so important to build something in the education system is not to use the AI tools we have today, which have either been developed to create some content or focused on how we actually replace teachers.
That is not what I would be interested in doing at all.
In fact, it's the total opposite.
What I would look to actually build
is something that would help the teachers in the back end
create the materials in a way that
can be delivered in one, two, three, or four
different formats where if you're a visual learner,
if you're a creative learner, if you like to read,
if you like to listen, you have the option to actually
consume what's being taught to you by a teacher in a manner that you can easily connect with
and learn from. But most importantly, the reason why I think this would be so valuable
is because we already have tons of data out there.
Colleges have the data around how
kids are doing from a testing perspective, how they respond
when they have to do presentations,
how they respond when asked questions.
So we have all the data sitting there
as to what will help make them tick.
And where I want to find the solution is in how we let AI help with the creation
of these methods to teach students,
to give kids a level playing field, and to put the power back where teachers are so valuable. And that power is the ability to nurture, support, and help these kids grow from an educational perspective, a support perspective, and an emotional perspective. And so the reason why I share that is, one, my mom, who is a school teacher, can
completely relate to this. But two, what we find in the back end that is so important is that
everything that we're talking about, apart from the emotional side, because AI ain't going to help us with the emotional side. Where it can really help us is on the data side, in making use of this valuable data.
Anyway, I promised to tell a short story.
I did want to share that because it is important for us to
think outside of the box when it comes to AI,
and the use of data, and whatever else.
But let me kick off with something very important here.
One, so for those of you that don't know me,
I'm the head of business development for Moonbeam and Data Haven.
I am joined today by the very well-known,
very much appreciated and respected Nader from EigenLayer. He is arguably one of the most incredible dev rels
and advocates for AI and AI agents out there.
I'm certainly going to hand the mic to you in a second, Nader.
I'm also joined by my good friend Sicco, who heads ecosystem and solution engineering at Moonbeam and at DataHaven. And we're excited to share with everybody a little
bit of the insights into our light paper that we released on Tuesday. We're super
excited by what we shared and we'll send out some of the links and whatever else
later on or somebody can post them
in the chat. And we're super excited to be the first storage platform that will be secured
as an autonomous verifiable service through EigenLayer. So with that said, Nader, why don't you
kick off, maybe give a little bit of a background.
I'm sure the majority of people in here know who you are, but also run us through a little bit of the background on EigenLayer.
Sounds great. I just dropped the link for the tweet for the light paper, by the way, at the top there.
If anyone wants to bookmark that or check it out while we're here.
Thanks for hosting this and for inviting me.
I think this is a really cool topic.
It's something that I've been excited about since day one in web three, going back to
my work at the Graph Protocol, which is in the same sphere, I guess you could say almost
right with verifiable data as a category. But yeah, so my
name is Nader. I've been in software for 13 years. I worked for nine years before joining the blockchain
world and I've been in the blockchain world or crypto for four years now. Worked with the
Graph Protocol, worked with Celestia, worked with Aave, and I've been with EigenLayer for a little over a year now. I've built apps, I've launched and sold companies,
and I've been doing developer relations and education
for, like, nine or so of the years
since I've been in software
because I really am passionate about that.
So it's a really fun job to be able to kind of
still write code, but also help people
and you're able to meet a lot of people
and it's like a really fun type of role. So that's me.
EigenLayer is exciting for me because it's essentially unlocking a new developer platform for people to build basically anything, and
we can kind of talk about how EigenLayer works a little bit more down the road, but the TL;DR is that it gives you the same security and trust that you would get from a smart contract blockchain like Ethereum or Solana. Before smart contract or programmable blockchains existed, developers
were essentially, you know, forking or building their own network from the ground up. But when
Ethereum launched, it was the first time you were able to essentially write some
code, deploy it to the network and inherit that underlying security without having to
build the whole network from scratch.
So it was definitely a massive innovation, and it kind of kicked off what we know now as web3.
The challenge, though, with smart contract blockchains is the blockchain virtual machine. It's very restrictive; you can't do all that much there. You can do a lot of objective stuff that
is on chain within that network, but you cannot do a lot of the things that you would typically
need to do in a traditional software environment. You can't make API calls,
you can't do complex data functions,
you can't talk to other services.
There's just a host of things that you can't do.
With EigenLayer, it essentially unlocks that: it allows you to write code in any language and do anything that you would like, but you still have the verifiability and the security that you would need for building on chain.
So that's EigenLayer, and we can talk a lot more about that.
Awesome, thank you, Nader.
Great explanation and yes, we will dive deeper into it.
You mentioned a number of key words
and I guess key thoughts behind that.
One around the difficulty of actually working with Web3 or with the blockchains
and even at the smart contract layer.
And, you know, I've been full time in the space for probably around the same time as you, and one of the things we've always seen is how difficult it is to get critical data
from the outside world onto the blockchain, and, just as importantly, once it's on the blockchain, how do you make sense of it, how do you use it, and, to use the word that you just used, how do you verify it?
So how do we actually verify that the data that's come
on chain is actually usable
and is the data that we expected to receive?
And that's a lot of where DataHaven comes into play,
and we're gonna dive deeper into Data Haven
in just a second. But before we do, in the same vein as my story, Sicco has a story that I love hearing, and I've kind of built on top of it, but I'm going to get him to share with us the concept that he uses around oversharing. And before he does, I'm going to just add this little piece, which is: people overshare, but they under-care. So, Sicco, if you want to share the short version of the story around oversharing, that would be amazing.
And then we'll dive into a little bit more about Data
Haven and then EigenLayer.
I'm head of ecosystem for both Moonbeam and Data Haven.
And yeah, I do have a little bit of a story that I tell as to why you should care about what DataHaven is building in the AI space.
So I'm sure a lot of listeners on the space here have used one of the centralized LLMs, whether that's ChatGPT or Claude.
And I don't know how many people realize this but you know one of the
features that a lot of the LLMs have added in the last little while is memory, right?
So the LLM can remember things from past conversations that you shared and I think one, LLMs are
incredibly powerful tools. I think,
you know, they're finding very wide applicability. But I think people don't always realize how much information they are sharing with the centralized LLMs. And so
I think, you know, this is an experiment that I think everybody should kind of try.
So go to ChatGPT or to Claude or whatever your LLM of choice is. And basically, you know, give
it the prompt, I want you to pretend to be a malicious actor.
And with everything that you know about me, I want you to
construct a social engineering attack that is highly likely to
succeed on me. And I think what comes out of it is kind of
terrifying. I mean, because people are sharing all kinds of
like really sensitive data with these companies, you know, at a
personal level, people are using it for medical advice, they're
using it for psychotherapy, they're sometimes using it for
marital advice, relationship advice, and they're often
telling these LLM things that they wouldn't even tell their
closest friends. At a professional level, people are
using it in their work life to compile, go to market strategies or product data or
pricing information or customer information.
And they're not really realizing, one, how much of that data is getting stored. And they also don't know, and none of us really do, how these companies are using it. Are they feeding it into the training data, into the model? How is it potentially exposed? Even if they are acting in a conscientious fashion, what happens when inevitably
they get attacked and there's a leak? And I think this is the nightmare scenario that I kind of worry about: if the likes of an OpenAI or an Anthropic were to get hacked and have these conversations leaked, then from a hacker perspective, you would now have a list of email addresses
tied to highly personal, highly sensitive data about those users. You could then turn around and
run that through an LLM and start constructing very effective phishing attacks or blackmail or what have you.
And I think that is a matter of when, not if.
And so I think we often talk about decentralization in Web3.
I think it's not always clear that it matters, but I think in this case,
it really, really matters.
I think if you're going to share all this super sensitive data with centralized companies, you're kind of asking for problems, and I think this is an area where decentralization could provide enormous value over time. So that's the little story I use to kind of explain to people why I'm so excited about what we're building.
And I have this ongoing joke, or banter, with Sicco, which is any time I'm asking anything sensitive, I just say: my name is Sicco Naets, X, Y, and Z. And so I know that any time I search that LLM, or anybody else does, it isn't my name that comes up. So we're good. There's some great data out there on him.
Of course Ryan doesn't realize I do the same thing to him.
Let's kick off some stuff around DataHaven.
What an exciting week. The team has been, for lack of a better term, busting their chops for a number of months now.
And for anybody that's been involved
in building out any kind of project,
and in this case, building out a new blockchain
and a new offering, you know that there is no sleep.
There is a ton of effort.
It is never a one, two, or three person effort.
It is an incredible team effort.
And we were exceptionally lucky because we, I believe,
are one of the few that have got to build
one successful blockchain being Moonbeam, which is the most successful
parachain in the Polkadot ecosystem. And now we get to extend the functionality and
enhance capabilities by deploying this decentralized storage platform. So for anybody that doesn't know what Data Haven actually is,
it's very simple if I share it with you in essentially one sentence, which is: DataHaven is an AI-first secure storage platform secured by EigenLayer.
And when we unpack that, there's key components
that actually sit in that messaging
that are very important to us.
The EigenLayer piece, which I'm going to let Nader dive
way deeper into and explain why that is so important
for any kind of AVS building,
any kind of blockchain building
or any kind of service building.
And then I'll unpack the other side.
So when we talk about AI first, what does that even mean?
So to date and over let's say the last cycle or two cycles,
we've seen the birth of a number of storage platforms.
They've done incredible jobs.
There are tons of projects, dApps and whatnot that have built and used them for storage,
decentralized in some manner, semi-decentralized or semi-centralized in other manners.
And up till now, they've delivered a great option
for any of the builders in an ecosystem.
Where we come in is that we've taken not just the good
and the great of what's been built,
but we focus on one of the hottest topics
in the market today being AI,
which is not only a today topic,
it's a five, ten, fifty-year topic, because we know it's most probably, I'll never say definitely, but most probably not going away. It'll exist in some form. And so what we're doing is building out this decentralized storage platform with all the value, all the requirements, and all the functionality that AI and AI agents will need.
And we start by providing a verifiable, tamper-proof, and censorship-resistant storage platform, which is critical for any of these AI projects, AI agents, or AI agent-to-agent communications.
The way I unpack that piece is,
and you're probably going to say, here goes Ryan again.
I'm going to tell another story,
but very, very short, way shorter than the last one.
So I don't know if everybody on here as a kid ever played a game called telephone, or broken telephone; I think some nations call it Chinese whispers, and I've heard another name for it too. But essentially, the concept is you sit in a circle with ten friends, ten of your mates. The first person whispers something into the next person's ear, and the goal is to get that exact message, verbatim, to the tenth person. Well, I never ever saw it happen, and what inevitably happened was one of a number of things.
Either it just got lost in translation because it could be a long conversation
or whatever else and so bits and pieces of the communication were dropped. Or two,
somebody decided to just add their flavor to it, so non-nefarious, they just wanted to embellish it. Or three, which we all have in our friend group, there's always the nefarious character
who changes the story completely,
resulting in the last person having to do something
either uncomfortable or that they really didn't wanna do.
And this translates exactly to how AI agents and AI operate today: if you can't verify that source data,
if you can't verify that it hasn't been tampered
with, modified, manipulated, or that it is actually
the piece of data that you were expecting to grab and use, then inevitably
you have this butterfly effect of nefarious or catastrophic results where now an agent is making
decisions on corrupt, modified, or tampered data.
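To make that concrete in code, here's a minimal sketch of the tamper-evidence idea (illustrative only; this is not DataHaven's actual protocol): record a content hash of the data you expect, and refuse to act on fetched bytes that don't match it.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact content."""
    return hashlib.sha256(data).hexdigest()

def fetch_verified(stored: bytes, expected_hash: str) -> bytes:
    """Return the data only if it matches the hash recorded at write time."""
    if content_hash(stored) != expected_hash:
        raise ValueError("data was tampered with or is not the data expected")
    return stored

# At write time, the producer records the hash of the original message.
original = b"the exact message, verbatim"
expected = content_hash(original)

# An honest fetch verifies cleanly...
assert fetch_verified(original, expected) == original

# ...while a 'nefarious friend' changing even one byte is detected.
tampered = b"the exact message, verbatim!"
try:
    fetch_verified(tampered, expected)
except ValueError:
    print("tampering detected")
```

Content-addressed systems such as IPFS and Arweave rely on the same basic idea: the identifier of a piece of data is derived from its bytes, so any modification changes the identifier.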
And so one of the things that we saw recently, I think it was in, yeah, it was in the Virtuals ecosystem: we saw this advent or birth of encrypted communications, where one agent can communicate with another agent, ensuring that that communication happens in an encrypted manner. But what if the data that is being shared itself is actually corrupt or has been modified?
That's where Data Haven really comes into play in ensuring that the data that is being created,
the data that is being shared,
the data that is then being used to make critical decisions or analysis on, can be verified as being true, authentic, and untampered.
And so we have a number of use cases that we've been sharing. They exist in our light paper, and we'll dive into a couple of those.
But at the core, what we're solving for is ensuring that there is verifiability and censorship resistance, and ensuring that none of the data that is critical for making any kind of decisions, across business, healthcare, and beyond, has been tampered with.
So to extend that a little bit further
because verifiability at the storage layer is one piece,
verifiability at the communications layer is another piece,
but we also have the same when it comes to the AVS side. So, Nader, are you able to expand on that, go a bit deeper on EigenLayer, and maybe share a little bit about the ecosystem?
So I joined EigenLayer, like I said, a little over a year ago.
And in the past year we've
had a lot of just action happening.
Everything from us going from pre-product to mainnet, going from having just a single
organization to launching a foundation.
We had our token generation event. We've gone from, you know, like I
said, pre mainnet to mainnet to now having a bunch of production AVSs and
software companies building on EigenLayer and live. And yeah, it's just been, you know, a lot, really. So I guess some of the highlights
to kind of give people an idea, I guess,
of what people are kind of building on EigenLayer,
there are just a bunch of different verticals. So you have everything from oracles,
you have people building a lot of services for roll-ups. You have what you could
consider like coprocessors, different types of coprocessors, which are essentially this
off-chain compute. We have data layers, we have a ton of AI services that are being built.
And it's just been really interesting to kind of see, you know, what types of people, what types of things people build.
I think one of the verticals that's really exciting to me is zkTLS. It's kind of like a database in the sense that it's not a sexy or exciting thing to a lot of people, but it's one of the most used primitives; in pretty much all of software, everyone that builds an application typically needs a database, often more than one even, and it's just kind of a core primitive.
I think zkTLS is kind of interesting in the sense that the user doesn't really know that they're using an application that uses zkTLS, but it now powers a lot of applications, literally some of the top apps in the App Store, that are zkTLS protocols powered by EigenLayer.
So some of the things that I think people have recently been excited about: Polymarket and UMA are building a prediction market oracle. You have Infura building decentralized RPC on EigenLayer. You have LayerZero building a lot of interesting stuff on EigenLayer. Those are well-known companies. We have Mantle, MegaETH, and other rollups like Celo building on EigenDA. So the ecosystem is just so vast.
It's just crazy because again, when you think of most protocols, there is a limited scope
of what people are building.
But because EigenLayer is such, I would say, almost like a low-level, unopinionated primitive,
there is a wide variety of things
kind of being built at this point.
And I want to attest to something around the EigenLayer ecosystem, and I'll probably get the numbers wrong, but I think you guys have somewhere between 100 and 150 AVSs either live or at some stage of deployment?
Yeah, I would say in terms of some stage of deployment
in the range of like 150 to 200,
in terms of teams that we're kind of like talking to
and tracking that are in various phases of development,
it's closer to like 600-ish, 650-ish.
Wow, that's absolutely incredible.
And the piece that I love the most is
there is something that I am just a huge advocate for in all of web3, and that is coopetition, which is basically: even if you're competitors, there are ways to collaborate.
But just as importantly, it's the power of collaboration.
And already, you know, since entering this ecosystem, we've had incredible friendly reach
out from a number of the projects that are building out there. I'll even give a shout-out to Lagrange; we spoke with Ismael yesterday, and I've known Ismael for quite some time. We spoke with the team at Descentland, which I think just rebranded, though off the top of my head I cannot remember what they rebranded to. We've spoken with the team at eOracle.
You know, all of these teams that are building both complementary or exciting technologies, we're finding ways to engage with them.
And so it's not just a technological fit. It's not just a how do we take a product to market, but there's an incredible culture fit as well,
which to me is really important when we're building
in such important technologies and infrastructure
With that, before I move on to Sicco for some questions, I wanted to say firstly,
thank you to everybody that's joined our space.
We've been really excited to host this space.
We love our friends at EigenLayer and we're super excited about our launch.
And so for any of you that are here, feel free to drop questions in the chat.
We'll get some of the moderators to share those with us.
Like, comment, retweet the room, bring your friends
in. I'll even give a shout out to my friends over at ETH Daily. Long time listeners and followers,
great group. And with that, I have a question for you, my good friend, Sicco. One of the things we talk about is the fact that blockchains today are actually not great for storing data. And I know you and I have spoken about this at length, both in and out of work. But I thought, why not have you explain a little bit more about the hows, whats, and whys?
Yeah, for sure. I mean, I think this was actually a major impetus for creating DataHaven, and also for why the integration with EigenLayer makes so much sense.
So blockchains are great at some things.
They're great at creating kind of very transparent public ledgers. They're great at creating value. They're not great at storing really large
data sets. And if you think about how a blockchain works, like it's kind of obvious why, right?
Because ultimately, it's fundamentally a decentralized ledger. Any data that you send to the blockchain has to get copied to all the nodes. So if you upload a one-megabyte file, let's say an image, and you have 50 nodes in the network, you're now using 50 megabytes of storage, right?
Like, I mean, it also gets a bit more complicated than that,
but at a very kind of core level,
I think that should kind of explain why they are not
efficient for storing large data sets.
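A back-of-the-envelope sketch of that replication math (decimal megabytes and made-up node counts, assuming full replication across every node):

```python
def replicated_bytes(file_bytes: int, node_count: int) -> int:
    """Full replication: every node keeps its own copy of the data."""
    return file_bytes * node_count

ONE_MB = 1_000_000  # decimal megabyte, for simplicity

# The 1 MB image on a 50-node network from the example above:
# 50 MB of total network storage for 1 MB of useful data.
assert replicated_bytes(ONE_MB, 50) == 50 * ONE_MB

# The overhead grows linearly with the node count, which is why chains
# with hundreds or thousands of validators are a poor fit for large blobs.
for nodes in (50, 500, 5000):
    total_mb = replicated_bytes(ONE_MB, nodes) // ONE_MB
    print(f"{nodes} nodes -> {total_mb} MB stored network-wide")
```

Dedicated storage networks avoid this by storing data on a subset of nodes and using proofs, rather than replicating every byte to every validator.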
And so that forces developers off chain. Obviously, applications do sometimes require large data: that can be images, or data sets like the AI data that goes into training, memory, or context. So developers frequently rely on off-chain storage, and they often rely on centralized storage. So you've now got a decentralized system relying on something like an S3 bucket, and you're again introducing these risks of tampering, censorship, that kind of stuff.
Over time, decentralized file storage solutions have arisen. Obviously, there are things like Filecoin, IPFS, and Arweave, and they demonstrate that there is demand for this type of functionality in a decentralized way, but they come with their own challenges. Most of the developer activity today has sort of crystallized around Ethereum and its larger ecosystem, so the layer-two blockchains.
There's also obviously a lot of traction on Solana. And so, you know, if you're building a DApp there,
that's great. You're taking advantage of all the goodness there. But now suddenly you're relying on
this other network to do storage, right? And, you know, not all of those networks are fully
EVM compatible. They also have their own token. You know, they come with their own underlying
trust assumptions. For developers, that creates, I think, kind of a complex landscape. You end up with a pretty fragmented architecture, and the more complexity you add, the more risk there is; you're opening up more attack vectors. And like I said, you're now relying on two different tokens and ecosystems for economic security. And so, DataHaven was specifically built
to basically provide decentralized storage,
but through the partnership with EigenLayer,
be able to secure that decentralized storage with ETH.
So basically like, it's directly natively integrated
in the Ethereum ecosystem.
DataHaven is built from two major components, right?
There's the storage layer and the smart contract chain.
The smart contract chain is based on Substrate, and it's fully EVM compatible. Anything that you build on Ethereum should just, like, work there.
So it reduces a lot of this complexity for developers.
One thing that we are adding is an integral, native, and trust-minimized bridge back into Ethereum.
And so that is a significant differentiator
compared to, you know, the likes of Arweave or Filecoin, where you have to rely on third-party bridges, which inherently are more or less secure. And so we think this combination of being Ethereum-first, having a native trustless bridge, and then using ETH as economic security through EigenLayer basically makes this the only storage solution that is fully natively integrated into Ethereum. And so we think that's kind of a big deal, basically, right?
And I want to add a little bit onto that.
Incredible details and flavor as such.
I want to add on to this because we get asked this question all the time.
And I think I literally just got DM'd the same question, so I'm going to respond to it. People have asked: okay, so you've got Moonbeam, you've got DataHaven, what's going on? Have you departed from Moonbeam? Is this competing with Moonbeam? Let me make this very simple and very clear. We
absolutely in no way shape or form are moving away from Moonbeam. We've actually
as I shared very early on built a hyper successful L1 blockchain in the
Polkadot ecosystem. We've refined the go-to-market behind it. We have over 250 active dApps constantly building. We've actually grown the team to support Moonbeam, including
additional regions such as LATAM and beyond. And what I say to this as well is we are building a hyper-complementary solution
where projects that have built on Moonbeam or are building on Moonbeam will be able to leverage
DataHaven as a secure decentralized storage platform. And we're already seeing that interest.
We've seen that from a number of our projects that have built on Moonbeam,
including our good friends at Diode.
We also in return are seeing projects that are our launch partners,
so those that are really interested in being some of the first to deploy on DataHaven.
We're seeing them think, okay, wait a second,
I can also deploy parts of my dApp or parts of my project on DataHaven.
And so they get the best of both worlds.
Now, yes, as Seco rightly mentioned,
we do have an execution layer and full support for
everything EVM and smart contracts on the Data Haven side.
The core focus though is around
providing secure verifiable storage.
If anybody had questions or concerns or whatever else,
I just wanted to clarify that we're actually running both.
We plan to make both very,
very successful with onboarding a large number of projects and
different types of use cases and different types of storage,
and of course, building and growing as the industry evolves.
AI has the data component,
it also has the app component.
I'll throw a question to you, Nader, if you don't mind, and that is, I've followed
your journey through AI for quite some time. I know you host a number of spaces
and conversations around this. What are you seeing as some of the most interesting and then some of the crazy ideas around AI or AI agents?
Yeah, I've definitely been excited and interested in AI for a while now. Early on when I would say I was experimenting with
Python a long time ago, and you could essentially start experimenting back then with these different
libraries like pandas and stuff. It was obviously nowhere near where it is today, where anyone
can just hit an endpoint and build stuff. You were having to train everything locally
and it was just a lot of work. I was never able to really build anything that exciting back then because
number one, I wasn't a machine learning engineer, I didn't have the time to become a machine learning
engineer, and it just wasn't my priority. So fast forward a couple years, the ChatGPT API opens up
and it essentially allows me to kind of start building AI applications without being
a machine learning engineer right and it did that for a lot of other people too and you just kind of
saw this explosion of innovation, people building really cool stuff, some people building just ChatGPT wrappers, whatever. But still, there's companies like Perplexity that you could argue are essentially
a ChatGPT wrapper, but they've raised $900 million, right?
So there's actually a lot of value that you can create with these applications.
So I built something back in 2022 that went insanely viral. It was the first AI app I had ever built: I think 125,000
unique users within the first couple of days.
It was like the most users I'd ever had and anything I'd ever built before
And it just sold me on building AI from that moment on.
I was like, OK, this is this is crazy.
Like you can build really valuable stuff really easily.
And since then, I've just been experimenting and dabbling in it.
And thankfully, at EigenLayer, we have the intersection
of the two things I like the most right now.
So there's just an enormous like breadth of things
that people are building.
But I think the thing that kind of excites
and interests me a lot is almost like using LLMs
as really smart APIs. Because in the past you would have to write
a very, very concise query to hit a database
and get exact information that you wanted, right?
And you were only able to retrieve the exact information
in the exact way that you asked for it.
And it had to exist in that database, right?
But with LLMs you now have the whole world
and you're able to return structured data
the same way an API would.
So you could say, bring me back this query in JSON format,
and then you can kind of render that in an application.
So that to me is kind of the most exciting area of AI
in the sense of you're able to kind of like
just come up with any type of query
and essentially build applications on top of a database that doesn't even need to really exist.
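The "LLM as a smart API" pattern described here, asking the model for JSON in a fixed shape and rendering it like an API response, can be sketched roughly as follows. This is an illustrative sketch only: `call_llm` is a hypothetical stand-in for whatever real model endpoint you use, and the prompt and schema are made up for the example.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an HTTP request to an LLM API).
    # Faked here with a plausible structured response so the sketch is
    # self-contained and runnable offline.
    return json.dumps({
        "city": "Lisbon",
        "attractions": ["Belem Tower", "Alfama", "LX Factory"],
    })

def query_as_api(question: str) -> dict:
    """Treat the LLM like a 'smart API': ask for JSON, parse it, return a dict."""
    prompt = (
        f"{question}\n"
        "Respond ONLY with a JSON object with keys "
        "'city' (string) and 'attractions' (list of strings)."
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # structured data, ready to render in a UI

result = query_as_api("What should I see in Lisbon?")
print(result["attractions"])
```

The application code never touches a database; the "rows" it renders come back from the model in whatever shape the prompt requested.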
And I think with a lot of the improvements around having more control over the types
of integrations that happen between the API call and the LLM with things like model context
protocol and giving developers again these higher level abstractions that just make it a lot simpler
to build out very unique differentiating applications without needing to be machine
learning engineers that you're just seeing a lot of experimentation and a lot of innovation in that area specifically.
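The higher-level abstraction being described, declaring tools with machine-readable schemas so a model runtime can discover and invoke them (the idea behind things like Model Context Protocol), can be illustrated with a toy registry. This is not the actual MCP SDK; the tool names and schema shape here are assumptions for illustration.

```python
# Toy "tool registry" illustrating the shape of protocols like MCP:
# each tool declares a name, a description, and a JSON-style input schema,
# so a model (or agent runtime) can discover and invoke it generically.
from typing import Callable

TOOLS: dict[str, dict] = {}

def tool(name: str, description: str, schema: dict) -> Callable:
    """Register a function as a callable tool with a machine-readable schema."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return register

@tool(
    name="get_weather",
    description="Return a short weather summary for a city.",
    schema={"type": "object", "properties": {"city": {"type": "string"}}},
)
def get_weather(city: str) -> str:
    # Hypothetical implementation; a real tool would call a weather API.
    return f"Sunny in {city}"

def invoke(name: str, **kwargs) -> str:
    """What an agent runtime does: look up the tool by name and call it."""
    return TOOLS[name]["fn"](**kwargs)

print(invoke("get_weather", city="Austin"))  # -> Sunny in Austin
```

The point of the pattern is that the developer only writes the tool; discovery and invocation are generic, which is exactly the kind of higher-level abstraction the speaker is pointing at.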
And then if you kind of look at the data, it's just exploding. Every company that's a software company is almost like an AI company at this point, even if they don't say that they are.
If you look at Azure, for example, I believe I saw the number that they released most recently.
45% of their workloads on Azure
are essentially now AI workloads.
That's up from like 10% just a couple of years ago.
And I would assume like within a couple of more years,
it's gonna be not only the majority,
but like the large majority of their workloads.
So, AI is just everywhere and it's not going away,
it's becoming ingrained in all software
and that does not exclude crypto.
So I think the intersection of AI and crypto
is just a really fun and exciting place to be.
And one of the things I keep talking about as well,
and I've heard other people mention the same thing,
is it's great to be educated around AI,
understanding the use cases,
understanding some of the pitfalls,
understanding where it fits in
and how it adds to our everyday lives.
There's the other piece, which is outside of crypto,
it's outside of AI, it's outside of any industry,
which is the behavioral side,
which is educating people on how to use it
ethically and securely and safely. And that's going to be a rather
interesting topic. I mean I'm seeing it in the schools already, certainly seeing
it in different workforces and whatnot, especially across healthcare and
financial institutions. Healthcare with HIPAA and PII data, very important. But the use
cases are unlimited, and the dangers are just as unlimited.
So we obviously have to be rather careful with that.
Before I throw another question over to Seco, there's one thing I wanted to ask you, Nader, as well.
And that is, if you had the opportunity to build something
that you believe would not only help you, but could help others,
what would you be looking to build?
So this could be application layer, or it could be infrastructure, it could be basically anything.
If I could build anything in the world, you're basically saying? I would go AI-focused, on the application side.
I mean, man, I have too many ideas.
And then I start on too many side projects.
But I think the one thing that's always been exciting and interesting to me is travel-related projects.
So I was building this mobile app once that I ended up not finishing. It was kind of a side project
that was almost like a social network for travelers, to essentially follow other travelers and
basically find out about other interesting, off-the-beaten-path types of things.
But the AI app that I talked about that went viral was actually a travel app as well.
It aggregated itineraries along with hyperlinks and a lot of very, very unique stuff based on the person's query
that wasn't just text; it was a rich response that allowed you to kind of click around and play around with stuff and view different things.
So I think I typically like to build towards my own passions and my own interests.
And I love travel and I've noticed that a lot of the stuff that I build that's travel related seems to do well.
So I might be doing something in the travel area and I think like, you know, you can just enhance pretty much any software with, with an LLM.
So I'd probably have a lot of AI stuff going on there.
It's not changing the world, but it's fun.
So, no, it will change somebody's world and that's important.
So I'm expecting to see at some point your bio be updated to "travel agent" or something like that. I love that, though.
I love traveling the world and experiencing new people, food, cultures, etc.
One of the questions we got asked as well is what is the significance of our logo, the
Moose? And I love that question because we spent a ton of time
when we were building out branding
and building out Data Haven,
trying to think of something that would very well represent
not just our mission, not just our goals,
but also the look, the feel, the culture, the flavor of
who we are and what we're building. And so to put it real simply, if you look at
a moose, although they often spend time with other, I don't know what the plural is,
I'm gonna say mooses, because it's definitely not Meese.
With other Moose, they do also spend a lot of time solo. And the reason why I highlight that is
a Moose is very well known for its strength, its resilience, its independence, even its intelligence.
And it translates very well into not just what we're
building at a high level from Data Haven, but also to our important infrastructure providers like our
master storage providers and our backup storage providers and our infrastructure
providers, and of course working very closely with our friends at EigenLayer and other ecosystems.
And so the core behind this is we wanted to use an animal. We were adamant there was going to be
an animal and a lot of animals out there have been used and overused and we wanted something
that really represented some of our values and
told a story. You already know by now, I love telling stories. So something that told a story
that very well represented what we're building, why we're building it, and who we are. So
That's the story behind the moose. Now, I know we're coming up to the end of our time, almost. I wanted to throw one last question out at Seco, and I promise you I won't hassle you any further. And that
is of all the use cases that we've uncovered so far, so for AI meets blockchain meets data havens, so the storage layer, what is your most interesting
use case that you've looked into? Yeah, no, great question. By the way, I think the plural
of moose is mooses, but I'm very partial to meese, as in, you know, geese. Yeah, I love that. That's why I was saying, I'm sure it's not meese.
It would be worse if you said meese.
But I think, yeah, I think it's an awesome question.
And you know, I spent a lot of time thinking about this.
when you think about technology, you know,
the really powerful technology is
the one that acts as like an accelerator for everything else, right?
So if you go back to the 18th, 19th century, think about how impactful
the steam engine was, right?
It just transformed everything.
I mean, it's sort of the basis of the Industrial Revolution.
And I think AI definitely has that potential.
I think one use case that I'm actually very excited about,
and I think we're already seeing signs of this happening,
is so right now a lot of teams are building these AI agents
for people to interact with.
They're thinking about specific use cases, right?
Nader gave the example of travel.
One that I want to think about is like, dining.
So like having people use an LLM or like an
agent as a personal coach. But right now, a lot
of developers are themselves
basically taking the core LLM and then extending its
capabilities: adding APIs, or adding smart contracts,
or adding additional clients.
So integrating it with Discord or with Twitter
And it's human developers writing all that code.
But I think what I think is really powerful
is where you start having the agent improve itself.
So I had this thought that, well, okay, if I have a REST API
and I want to give the agent access to that REST API,
yes, I could go write a client,
but the more powerful model would be to just turn
the agent loose on the documentation for the REST API,
or the Swagger documentation,
which is typically what REST developers use
to explain to other developers what capabilities are there.
I'm envisioning something where I feed
into the agent a REST API documentation set,
and I tell it, go build me a client,
go build me a set of the unit tests
that I can run periodically to verify
that this thing is working,
and then just add it to your own capabilities.
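The idea sketched here, mechanically deriving a client from a Swagger/OpenAPI-style document, can be illustrated in miniature. In the scenario above, the agent would generate (and unit-test) this code itself; the sketch below just shows the spec-to-client shape, and the spec, URL, and operation names are all hypothetical.

```python
# Toy sketch: derive a client from an OpenAPI-style spec.

SPEC = {  # hypothetical, heavily trimmed OpenAPI-style document
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/users/{id}": {"get": {"operationId": "get_user"}},
        "/users": {"post": {"operationId": "create_user"}},
    },
}

class GeneratedClient:
    def __init__(self, spec: dict):
        self.base_url = spec["servers"][0]["url"]
        # Attach one method per operationId declared in the spec.
        for path, ops in spec["paths"].items():
            for method, op in ops.items():
                setattr(self, op["operationId"],
                        self._make_call(method.upper(), path))

    def _make_call(self, method: str, path: str):
        def call(**params):
            url = self.base_url + path.format(**params)
            # A real client would perform the HTTP request here; we return
            # the request we *would* make, to stay runnable offline.
            return (method, url)
        return call

client = GeneratedClient(SPEC)
print(client.get_user(id=42))  # -> ('GET', 'https://api.example.com/users/42')
```

An agent doing this for real would also emit the periodic unit tests the speaker mentions, which is what makes the loop trustworthy enough to "add to your own capabilities."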
And so this now starts acting as a force multiplier.
So you can now very quickly start unleashing this agent on all kinds of things, right? Anywhere where
you get something that you can call: smart contracts, an API, whatever. You could
very rapidly have this AI agent kind of start expanding its own capabilities. But the crucial question is, the code that it's generating,
where does that live, right?
And how do you make sure that that code is secure?
Because now it's not a human. AI agents, and this is the dream, are obviously
much faster than human developers.
With human-developed code,
you have developers looking at this,
it goes through pull requests,
somebody manually reviews the code,
it goes to security audits.
That's no longer practical when you have an AI agent
building this code at machine speed.
Now, where do you store this
and how do you make sure that it isn't tampered with?
Because again, if I'm a hacker, this is what I go after.
I now know that there's this code that the AI agent
runs on a periodic basis that no human has actually
really ever looked at and certainly can't keep up
with on a day-to-day basis.
This is where I start to inject vulnerabilities.
And so I think,
to me, this is like a super powerful, I mean, it might be a bit down the road. I mean, I think,
you know, we have to see those use cases arise. But I think this is a super powerful argument for
something like DataHaven, where you could sort of cryptographically prove that it was the agent
that wrote it, and that it has not been altered in the meantime. And so you have a degree of confidence
or you have a guarantee of confidence
that nobody else kind of injected a "transfer all balances
to my wallet" kind of thing.
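The tamper-evidence property being asked for can be illustrated with nothing more than a keyed content hash: commit to the exact bytes of the generated code at write time, and refuse to use code that no longer matches. This is a minimal stdlib sketch of the principle, not DataHaven's actual mechanism (which would presumably use cryptographic proofs anchored on-chain rather than a shared secret).

```python
import hashlib
import hmac

# Hypothetical signing key; a real system would use asymmetric keys
# and/or on-chain commitments instead of a shared secret.
SECRET = b"agent-signing-key"

def sign(code: str) -> str:
    """Commit to the exact bytes of the generated code."""
    return hmac.new(SECRET, code.encode(), hashlib.sha256).hexdigest()

def run_if_untampered(code: str, expected_sig: str) -> bool:
    """Refuse to use code whose commitment no longer matches."""
    return hmac.compare_digest(sign(code), expected_sig)

original = "def transfer(to, amount): ..."
sig = sign(original)                          # stored alongside the code

assert run_if_untampered(original, sig)       # untouched code verifies
tampered = original + "  # drain wallet"
assert not run_if_untampered(tampered, sig)   # any modification is detected
print("tamper check ok")
```

The interesting engineering is in where `sig` lives: if the attacker can rewrite both the code and its commitment, the check is useless, which is the argument for a verifiable storage layer.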
So I think that is a use case that I'm super stoked about.
Because if we can get that right,
that is the thing that now really
starts picking up the pace for the overall space of development.
Appreciate the story. Love the idea.
Nader, really appreciate you joining. I know we're at the top of the hour, if I'm not mistaken.
Seco, always a pleasure. I know we have some great banter, so we will continue
that in perpetuity. For everybody that joined, thank you so
much. Appreciate your time, your attention, your
effort, your support. If you want to know more, please do follow EigenLayer, at @eigenlayer on X, and DataHaven, at
@datahaven_xyz on X. And of course, you can find the links to both of our Telegram,
Discord, and LinkedIn. With that, I want to say thank you, thank you, thank you, everybody.
Thank you for your support. And please do join in the conversation,
add as much as you can, ask as many questions as you can.
We're building, we're evolving.
We love the AI space, we love the blockchain space.
We love this intersection between AI and data storage.
And we will continue to deliver as promised.
So as I'd love to say, thank you very much.
Have a great rest of your day, your evening, your night, your afternoon, whatever it may be,
and see you all next time. Thank you. Thank you. Of course. Yeah, thank you for attending.
Thanks, Nader. Always appreciated. Thanks, Seco.