I think I just have to log out from my phone.
Should I? I'm going to invite you as a co-host.
Yeah, have you checked the last post?
Let me see if I have time to sign out of this one.
It's weird that we can't talk to each other.
Like me, you, and Garrison right now.
There should be a way that we do that.
But then isn't it public?
Hey, I can hear you guys.
We just have one more joining.
Am I still on the phone with you?
Okay, so we have everyone.
Let's maybe kick things off then.
So I'm Unique, one of the founders of Nibiru and the CEO.
So if you aren't familiar with us, Nibiru is a young layer one blockchain.
So it's pretty high throughput.
But right now it supports WASM contracts.
And then we're also adding support where it'll be like a fully EVM equivalent execution environment.
So people can deploy Solidity contracts there.
And we're going beyond some of the current EVM implementations to make one that's a lot more efficient, scalable, and powerful for smart contracts.
And in addition to that, there's a lot of work we've done on developer tooling: adding a lot of SDKs and a native indexer that's self-documenting, similar to what you get on The Graph, but at the base layer, developed by the foundation team.
And we're here on a space with myself, August as well from the Nibiru side, and then Garrison from Ionet.
Maybe, August, do you want to give a quick intro?
Yeah, I'm August, marketing lead at Nibiru.
I manage co-marketing initiatives along with, sorry about that, ecosystem dApps, social media, and all the fun aspects of marketing such as branding and events.
Hey everyone, it's really nice to meet you.
I'm glad to be here with everyone today.
I run marketing and strategy at Ionet.
If you guys are not familiar with io.net, we are one of the larger DePINs built on Solana, primarily focusing on AI compute.
Just recently launched our Cloud V2, and right now we are building towards our token launch for later this month.
So maybe let's start off with some of the basics.
So what sparked the idea of Ionet, and what problems are you guys solving or aiming to solve?
That's a really great question.
I think that no one can deny that compute is becoming an increasingly more important part of what we do, right?
Not just because of AI, but also within a lot of Web3 infra.
And in sort of talking to your team, I know that you guys have plans for ZK and some other compute heavy Web3 infra.
Hopefully I'm not revealing anything here.
But when we think about the landscape that we can solve with decentralized compute, right?
We're really looking at two areas of focus.
The first one is Web3 and AI use cases.
There's a lot of reasons why crypto plus AI makes a lot of sense.
Provenance, anti-censorship, et cetera.
But also the re-decentralization of Web3, right?
I think we look at all these L1s and L2s that are popping up, really, really fantastic, right?
A lot of app chain adoption.
The challenge with that is a lot of that compute and value accrual is happening with centralized cloud, right?
So a lot of these app chains and layer ones and layer twos that are popping up are using Google Cloud.
They're using Amazon Web Services, which makes a ton of sense.
You want to bootstrap fast, use centralized resources and partners to get to market quickly.
For us, we sort of look at that and we see an opportunity to work with individual compute providers,
independently owned data centers, miners, et cetera, who have compute capacity, whether GPU or CPU.
And we want to be able to use and leverage that to run decentralized validators, right?
And allow our partners to deploy natively on decentralized infrastructure.
And so I think for us, there's this really aspirational AI use case that we're trying to solve.
And at the same time, we're also trying to add a bit more decentralization back into our own industry.
And just for the listeners, what are graphical processing units or GPUs?
And then why are they instrumental in AI and machine learning?
I think most people probably call them graphics cards, right?
If you're a gamer, you've got a computer, you've got a graphics card.
Most people know NVIDIA, or know Supermicro (SMCI) or TSMC (TSM).
Like these are household names nowadays, right?
Because of sort of the appreciation that these companies have seen in the last year and couple of years.
GPUs have inherent advantages over CPU processing, really because of their ability to parallelize processing, right?
GPUs have significantly more cores than CPUs.
And so they're able to handle more computationally heavy workloads.
And then a lot of what has been possible with cloud computing and with AI has been a direct result of us shifting from CPU processing to GPU processing.
And you can probably see with companies like NVIDIA that it's having a lot of real world impact.
These companies have gained significantly in value.
And nowadays, when you think about some of the most valuable resources, right, for enterprises, but also for countries, it's GPU compute that is being deployed in data centers around the world.
Today, hardware for GPU processing, specifically NVIDIA chips, right?
Like upper tier chips like the A100 and H100 are exceedingly scarce.
They're just really difficult to get.
They're very back ordered.
They're extremely expensive.
And I think, as a result, GPUs are one of the only technology assets, pieces of hardware, where buying off the primary market is actually less expensive than buying on the secondary market, because they are in such high demand.
And given that point you touched on about parallel processing, transformers, the backbone of large language models, depend on that, right?
So I wonder, how has demand for io.net evolved since the popularization of the later ChatGPT models and newer AI projects?
And then I guess also who are your like biggest clients or renters of compute so far?
Yeah, I mean, I think ChatGPT going public, right, and becoming one of the fastest growing consumer facing AI projects, like really put this issue of kind of compute on the map because a lot of other projects, derivative projects, GPT wrappers, competing models, et cetera, wanted to go to market to try to capture this opportunity.
And as a result, there just became this like enormous demand for compute, right?
To kind of talk a bit more about the market, traditional cloud is extremely reliable.
There's a really strong ecosystem around it, but there's also not a lot of capacity, right?
It's also a little bit less flexible.
And so it's harder for some of these companies to access these resources to build their companies.
And so what io.net and a lot of these other DePINs are trying to do is aggregate underutilized resources that we see from independent data centers, from professional miners, in some cases from consumers and retail, depending on what kind of hardware they're operating, and put this into a permissionless network that allows supply and demand to meet each other, right?
And in this way, you can basically buy cheaper compute because you can access things more efficiently in a marketplace that previously didn't exist.
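To make that marketplace idea concrete, here is a minimal sketch in Python of how a permissionless compute marketplace might match a job to the cheapest available supply. The supplier names, prices, and the greedy matching rule are all invented for illustration; this is not io.net's actual matching logic.

```python
# Illustrative sketch only: a toy compute marketplace that fills a job
# from the cheapest idle-capacity offers first. All data is made up.

def match_job(suppliers, gpus_needed):
    """Greedily allocate a job across the cheapest offers first."""
    allocation = []
    remaining = gpus_needed
    # Sort offers by hourly price per GPU, cheapest first.
    for s in sorted(suppliers, key=lambda s: s["price_per_gpu_hr"]):
        if remaining == 0:
            break
        take = min(s["idle_gpus"], remaining)
        if take > 0:
            allocation.append((s["name"], take, s["price_per_gpu_hr"]))
            remaining -= take
    if remaining > 0:
        raise ValueError("not enough idle capacity in the marketplace")
    return allocation

suppliers = [
    {"name": "indie-datacenter", "idle_gpus": 8, "price_per_gpu_hr": 1.20},
    {"name": "ex-miner",         "idle_gpus": 4, "price_per_gpu_hr": 0.90},
    {"name": "retail-rig",       "idle_gpus": 2, "price_per_gpu_hr": 0.80},
]

alloc = match_job(suppliers, gpus_needed=10)
cost = sum(n * p for _, n, p in alloc)
print(alloc)  # cheapest offers are consumed first
print(cost)   # total hourly cost of the 10-GPU job
```

The point of the sketch is only that aggregating latent supply into one order book lets buyers discover a lower clearing price than any single centralized provider quotes.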
What io.net does is take that a step further, right?
We do both: the marketplace, right?
We're a first-party DePIN.
We aggregate other DePINs like Render, Filecoin, and now Aethir, and we also provide virtualization and orchestration.
So, you know, a lot of different concepts there, but what we do is slightly different than a lot of other marketplaces because instead of just matching you with a machine, right?
And giving you bare metal access, we also virtualize that server for you.
So we deliver you a virtual machine.
It can either be Kubernetes or Ray.
Ray is the one that is becoming increasingly more popular for enterprise workloads and for AI just because of the advantages of Ray networks.
And most of our customers today end up being these AI companies, right?
That are either model training, model tuning, inferencing, plenty of different use cases.
We support end-to-end AI workloads.
And a lot of these companies aren't really crypto.
They're just companies that are looking for access to compute as a commodity.
And on the other side, the customers that we're talking to again are on that Web3 infra side, right?
There's sort of an opportunity just to run very vanilla validators.
There's also the opportunity to run slightly higher demand or higher overhead requirements for ZK provers and validators for ZK chains.
And I think that's one of the areas that I'm personally very excited about is being able to support Web3 infrastructure.
And for either us or other DePINs to kind of get back into that decentralization conversation.
And is there a particular edge gained by involving the IO.net product with a decentralized blockchain network?
In other words, why have a decentralized cloud network for GPUs rather than a centralized one?
I think you said it there yourself, right?
The inherent advantage is decentralization.
Decentralization provides anti-censorship, it's typically more reliable at scale than centralized networks, and it also provides sovereignty and ownership over a network, right?
I think for chains like Nibiru, right, especially some of the younger L1s, one of the biggest challenges is that as more chains come to market, it becomes increasingly difficult to bootstrap node sets, right?
Because there's a finite amount of people who are operating nodes or who can operate nodes.
But there's a lot of demand for those nodes and even mature chains, right, continually want to diversify the node set so that they are more decentralized.
What we try to do is help bootstrap that, right?
Because we are using a marketplace like a compute marketplace to bootstrap our node set, when you look at the ability of a blockchain to bring on nodes in the early stages, it's tricky, right?
You've got to inflate a token, you have to provide people value in that token, and you need to convince validators that there's value in that token, right, such that they're providing you with hardware and compute.
The challenge with that is, because it's primarily token yield, most of the time validators are trying to provide the lowest possible spec hardware that meets minimum requirements in order to maximize earnings.
When you work with a DePIN, whether it's for Web3 infra or just bootstrapping a DePIN validator set, the economics are different, right?
You're pulling in dollars from customers that you then pass through to the node operators while simultaneously inflating a token and providing that value to your node operators as well.
So what ends up happening is that DePINs, especially compute DePINs, end up having significantly higher-spec hardware, right?
We look at the nodes that are operating on something like io.net or Aethir, and these are enterprise-grade GPUs, hardware that costs $17,000 to $40,000 a pop.
And then you're also seeing that the node set grows a lot faster, right?
DePINs, whether it's io.net, Aethir, Render, et cetera, have tens of thousands of nodes, even in this immature niche, whereas most blockchains, right?
Like, I was previously at Avalanche; I think Avalanche has 2,000 nodes, which, again, is very impressive for blockchains and validator counts for blockchains, but it's an entirely different set of economics.
So I think DePINs offer a really interesting way for blockchains to bootstrap nodes, as well as decentralize their node set.
And what sort of possibilities do you foresee Ionet unlocking for projects building in the Nibiru ecosystem?
I think it's two things, right?
One, it's at a chain level, like through this partnership, we can offer Nibiru's ecosystem the ability to increase the node set and decentralize the node set.
But two, also provide just straight up compute for dApps, right?
You know, even if you look at a highly decentralized blockchain, like let's take Ethereum as an example, right?
It's been around forever.
You know, most people would argue that it's very decentralized, a huge number of nodes.
Well, a lot of the dApps, right, especially the more compute heavy dApps that exist on chain, still host a lot of their infrastructure on traditional cloud, which makes sense, right?
You want to build something that is pseudo on chain, if you're storing the most important elements on chain, but then you've got all this other infrastructure in your web app, and that's got to get hosted somewhere, right?
And that's certainly not being hosted on decentralized cloud today, because it's such a young industry, it's being hosted on Amazon.
And so as we continue to work with projects and builders in the Nibiru ecosystem, our hope is that we're not only helping the chain, right, decentralize, but we're also helping the dApps by providing them with deeper, more flexible and alternative options to centralized cloud.
So I went through the process of actually putting up my GPU for rent.
And in just a few clicks and less than five minutes of connection time, I was all set up.
Can you talk the audience through the process of lending out GPUs and the process of renting GPUs?
Yeah, so there's multiple ways that you can do this.
But the easiest way, if you're a consumer and you've got something at home, whether it's your computer, a laptop, maybe a mining rig, is that you're effectively installing Docker for io.net, right, and running an isolated container.
So your file system does not have access to the container and vice versa.
And you're committing your capacity to io.net; you're connecting to the network.
What we do today is we then run a binary on your GPU to make sure that it's real, right, that it can actually do work, and then you're accepted in the network.
And then you begin earning, you earn rewards for simply idling and committing your capacity, whether it gets used or not.
And if you're hired, you earn additional rewards and earnings, which are basically paid by the person who is renting your capacity, in a standard marketplace model.
There are a lot of different flavors, though, right?
If you look across the DePIN landscape, there are projects that are doing this via a Chrome extension, which is quite interesting.
You've got some folks who are doing this with mobile devices, in some cases maybe a bit nascent, mobile compute being not that strong, with also less reliable connections.
And you also have what I think is really interesting, which is sort of fractionalized ownership of compute.
And there are a few projects out there.
Debunker is one that comes to mind, right?
Where if you want to own and rent compute and have exposure to this economy and this market, but you don't want to connect your computer, or maybe your internet isn't that strong, or maybe you don't want to buy a $17,000 enterprise-grade GPU, right?
There are some projects out there that are basically tokenized real-world-asset (RWA) projects, where you can buy an NFT or some other token that represents a portion of a unit sitting inside an enterprise data center.
Or you're simply buying the rights to a portion of time on that compute.
A lot of different models out there.
And then, of course, these RWA projects then deploy that capacity on io.net and other DePINs.
And so I think regardless of whether you're tech-savvy or not, or whether you have access to a GPU or bandwidth that makes sense, there's a lot of different ways to get involved in DePIN.
And I think it makes a lot of sense to just try it out.
Yeah, I highly recommend it.
I was just really surprised that someone not as tech-savvy as myself was able to spin up a cluster; that was pretty cool.
Garrison, in terms of earnings, what does that look like for users leasing out their resources?
And what are the costs for renting GPU clusters?
How does this vary between independent developers, AI startups, and AAA gaming studios?
So let's talk about the renting side.
The GPU marketplace is heavily commoditized because these companies are basically just looking for cost advantages.
You've got companies like AWS that sell compute, but also sell managed services and a whole ecosystem of integrations.
And that typically is a bit more expensive, right?
Centralized cloud comes with more bells and whistles, but probably costs the most.
In the middle, you've got companies like Lambda, which sell you very basic access to compute, long-term contracts to bring that cost down.
And it clearly works, because they have no spare capacity, right?
It's very difficult to rent a lot of compute from a company like Lambda because it's in high demand.
And then at sort of the other end of the spectrum from traditional cloud, you have decentralized cloud, right?
Probably the cheapest cost.
One, because you're bringing in latent underutilized capacity.
But two, because putting a token in the middle also helps subsidize costs.
And the trade-off is that Web3 typically has a bit higher friction, right?
But hey, it's a more nascent industry.
And so there's less of a supportive ecosystem around that, right?
Not exactly a ton of managed services here in Web3.
And when you think about cost, a standard enterprise GPU like the A100 is going to cost you about $1.50 to $1.60 per card per compute hour.
And if you're running an AI workload, you might use 64 of them or 128 of them, right?
And so you're looking at $200-ish an hour, which over time can add up, but that's still a pretty solid cost to rent compute capacity.
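As a quick sanity check on those numbers, the hourly cluster cost is just cards times the per-card rate. The $1.55 default below splits the difference of the $1.50 to $1.60 range quoted above; actual rates on any platform will vary.

```python
# Back-of-the-envelope cluster cost at the ~$1.50-$1.60 per A100
# per compute hour quoted in the conversation. Rates are illustrative.

def cluster_cost_per_hour(num_gpus, rate_per_gpu_hr=1.55):
    """Hourly cost of a cluster: number of cards times per-card rate."""
    return num_gpus * rate_per_gpu_hr

print(cluster_cost_per_hour(64))   # a 64-card workload
print(cluster_cost_per_hour(128))  # a 128-card workload, the "$200-ish an hour"
```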
Now, on the earning side for your average consumer, right, there's a couple ways to think about this, too.
You're generally going to earn some kind of idle fee from a network like IoNet where you are paid to simply be there.
And I compare that to if anyone's familiar with rideshare, right, or delivery.
The people who are delivering your food or driving around Uber or Grab, those people are paid to simply be driving around, right?
They're paid sort of this baseline rate just to be available in case someone needs a car.
And then when they're hired, they get paid even more.
That's exactly how these DePINs work as well, right?
You're earning some baseline rate.
At io.net, we try to get people to between a one and two year payback on their CapEx just on the idle rewards alone.
And then if you're hired to do a job, depending on the length of job, the frequency of hire, et cetera, you're adding on top of that payback.
And when we look at the compute marketplace: on average, if you're mining Bitcoin or mining crypto, you're making cents per hour, right?
Like maybe 10, 15 cents an hour.
If you are providing compute to a DePIN, your earnings rate is typically a multiple of that.
Three, four, five times if you're hired very frequently can be up to 10 plus times the earning rate that you would typically get for mining crypto.
And so at least today, right, obviously markets change very quickly and things get saturated.
But today, providing compute to DePINs, especially something like io.net or other compute DePINs, typically will yield better earnings, broadly speaking, than mining crypto.
The other thing to consider, too, is whether you're someone who is investing money into buying new equipment or whether you're just running something that you already have, right?
If you're a consumer and you've got a laptop sitting around and you want to just mine tokens and be part of a DePIN, great.
You're not really looking for a payback per se, right?
Like you already bought the device.
If you're looking to buy a new device, it's a completely different calculation, right?
You have to consider the cost of the hardware, the additional bandwidth that you might need to purchase, you know, what impact that has on sort of using your own network for other purposes and figure out whether or not that's worth it for you.
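That buy-versus-reuse decision can be sketched as a simple payback calculation. The hardware price and reward rates below are invented for illustration, not io.net's actual rates; real idle rewards depend on the network, the hardware, and demand.

```python
# Toy CapEx payback model for a compute provider. All inputs are made up.

HOURS_PER_YEAR = 24 * 365

def payback_years(hardware_cost, idle_reward_per_hr,
                  hired_reward_per_hr=0.0, hired_fraction=0.0):
    """Years to recoup hardware cost from idle rewards plus hired earnings."""
    hourly = idle_reward_per_hr + hired_fraction * hired_reward_per_hr
    return hardware_cost / (hourly * HOURS_PER_YEAR)

# A $20,000 enterprise GPU earning a hypothetical $1.25/hr idle reward
# pays back in under two years, in line with the "one to two years on
# idle rewards alone" target mentioned above.
print(round(payback_years(20_000, 1.25), 2))

# If it's also hired 25% of the time at an extra $1.50/hr, payback shortens.
print(round(payback_years(20_000, 1.25,
                          hired_reward_per_hr=1.50, hired_fraction=0.25), 2))
```

Someone reusing a laptop they already own would set `hardware_cost` to zero and skip the payback question entirely, which is the distinction the speaker is drawing.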
Is there a way for developers to set up clusters programmatically, or perhaps an environment similar to Google Colab where you can integrate with existing libraries?
Yeah, so we have a self-serve platform today.
You go in and click a few buttons, and every cluster deploys in a minute or two, depending on how big it is.
We do have managed services for enterprise customers.
Most of them want some level of handholding to deploy, especially because Web3 is so new.
And then we also have APIs.
So if you wanted to virtualize a cluster in that way, you could also do that too.
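Programmatic deployment through an API like that usually boils down to posting a JSON description of the cluster you want. The endpoint and field names below are hypothetical, invented purely for illustration; consult io.net's actual API documentation for the real schema.

```python
# Hypothetical sketch of what a cluster-deploy API call might look like.
# The URL, fields, and values are invented; this is NOT io.net's real API.
import json

API_BASE = "https://example.invalid/api/v1"  # placeholder endpoint

def build_cluster_request(gpu_type, gpu_count, framework="ray", hours=2):
    """Assemble the JSON body for a (hypothetical) cluster-deploy call."""
    assert framework in ("ray", "kubernetes")
    return {
        "gpu_type": gpu_type,
        "gpu_count": gpu_count,
        "framework": framework,
        "duration_hours": hours,
    }

body = build_cluster_request("A100", 64, framework="ray", hours=2)
payload = json.dumps(body)
print(payload)
# An HTTP client would then POST `payload` to f"{API_BASE}/clusters".
```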
When I was creating an account on Ionet, I noticed that when you sign up, you can actually connect your WorldCoin app.
How did this partnership come about and what does the future hold for both WorldCoin and Ionet?
Yeah, I guess it has not been shared publicly, like sort of in a press release or anything, but I can give you guys some alpha here.
One of the interesting things about DePINs is that as they get big, right?
And io.net is one of the larger ones, but let's just say one of these guys, whether it's us or someone else, grows 10 to 100 times.
They end up being like pretty formidable assets, right?
If you allow someone to hire thousands and thousands of GPUs, which is, you know, teraflops upon teraflops of capacity, for two hours, and it isn't super cost prohibitive, right?
Maybe you're spending 10, 20, 30 grand, you can cause some real damage, right?
And so what we needed to do was find a way to be able to enforce like moderation for bad actors.
But at the same time, we wanted to stay permissionless, decentralized.
We don't want to store user information, right?
Our goal is to not KYC users, right?
That's sort of the antithesis of what we're trying to do.
But at the same time, there needs to be a way to deterministically, and in a confidential and private way, identify bad actors so that they can be effectively rate limited or blocked from using the system.
And so the WorldCoin partnership allows us to do that, right?
We don't need to know who you are, but if you do something that is illegal, damaging, harmful, et cetera, on the network, or if you abuse the network, we need to have a way to enforce access.
And so this is sort of early stages of us developing a product called IO ID, which will be powered by a combination of Auth0 and WorldCoin ID.
And how is IO.NET different from similar or existing decentralized networks like Render and Filecoin?
You know, everyone focuses on a slightly different area, and I think everyone is really great at different things.
Filecoin primarily does storage.
They're like a decentralized CDN, right?
We don't do storage, and partnering with Filecoin is a really interesting opportunity because, again, people, even though all these different projects focus in their one lane, a customer tends to not just want one service, right?
Some people want compute and storage.
And so partnering with a storage provider allows you to kind of provide value-added services and build out that ecosystem that people have come to expect from traditional cloud.
Render, similarly, is primarily focused on renting and supplying single GPU instances, where if you've got a laptop or, you know, a high-end machine and someone needs to render something, they just rent that one device.
But they don't do clustering.
And so when they have leftover capacity that they can give to us, we can then cluster and provide it to an enterprise user who needs multiple machines.
Then you look out at, you know, like there's always Akash as well, right?
They primarily focus on model training because a lot of their clusters are co-located, right?
And they also have these enterprise devices.
And so they do that really well.
I think they don't do geo-distributed clusters, which is something that is slightly unique to what we do.
And then you've got companies like Aethir, which primarily focus on really enterprise-grade GPUs.
They've become a really valuable supply partner to our network, but they don't do consumer GPUs, right?
So they don't cluster kind of that middle or long tail end of retail GPUs and find use cases for it.
They really focus on enterprise data center partnerships.
They do a ton of work around virtualization of game workloads.
They do a lot of gaming use cases, which is very, very tricky, right?
If you're working with consumer devices, because the latency for like live gaming is insane, potentially even more difficult to handle than AI.
And so, like I said, we all kind of specialize in these different things.
And it's really up to the developers, right?
To decide who they want to partner with and which one works best for their use case.
Gotcha. Thank you for that. That was really insightful.
What would you say are your plans for the rest of the year for IoNet?
We've got a lot of big things in store, right?
So we just launched Cloud V2.
We're launching the token soon, which is going to enable collateralization and decentralized security for the network through staking.
And so that's going to help us, one, tackle a lot of the problems I'm sure everyone has seen, right?
Like we've had issues with spoofing, faking, and abuse of the network with fake GPUs.
Like none of that would exist if staking was live, because then there would be a cost, right, to enter the network.
And that kind of thing prevents Sybil attacks and spamming.
And so really excited to get that system online.
We're focusing a lot on onboarding.
We've got 20 or so customers that have been waiting in the wings to onboard.
And our team is moving as fast as possible to get them onto the network.
You guys have probably seen like public AI is online, Wendera is online.
We've got a few more that are going to be coming on board in the coming weeks as we slowly bring people into the network.
And then we're trying to build out a ton more features, right?
If you jump onto io.net today, you see that you can deploy Ray or Kubernetes.
But, you know, we want to look at other frameworks like Ludwig, PyTorch, etc.
So that we can expand the types of use cases that can easily go to market with our clusters.
And so we're coming up toward the last question.
Well, first off, I wanted to say, love what you guys are building, by the way.
And thanks for walking us long form through how to think about io.net.
But I guess my last one would be: what are the key takeaways you want builders and startups to understand about io.net?
Yeah, so I don't know what the audience makeup looks like, right?
But whether you're a builder, you're a consumer, or you're someone who's just in crypto and trying to figure out what to ape into next.
I think the takeaway is that DePIN is very early, right?
Even though Filecoin has now been around for its second cycle, DePIN is very early.
And I think, as you guys have probably seen by how fast some of these incentivized DePIN networks grow, there's definitely something here.
Right. And so, you know, I'm always a little bit cautious about saying things are going to be the one or that they're going to work out.
But I would say, if I was in crypto right now and, you know, I've done the DeFi thing, I've done the NFT thing, I keep seeing AI coins and memes and all that.
And I'm looking for something to kind of chew on next.
Look into DePIN. It's not all just compute DePINs.
There's more to the mix, right?
You've got Hivemapper, you've got Helium.
You're going to see more mobile DePINs pop up, right?
As people kind of latch onto the mobile narrative that's popping up.
But if you can get involved with DePIN, you should.
And the nice thing about DePIN is that, unlike a lot of crypto, which is primarily liquidity driven, DePIN is hardware driven.
And so from what I've seen, right, most of the DePINs, us, Grass, gaming, blah, blah, blah, blah, blah.
They're very easy to participate in. Right.
If you have a laptop, a gaming computer, or a mobile phone, generally speaking, you can participate in these DePINs, and you're not putting up liquidity.
And so it's a really nice way to get involved in something in crypto in a slightly different way.
Awesome. So that goes through the questions we wanted to run through.
I'd invite you guys listening or those on YouTube after we upload it to follow Yellow Noise here and also follow the IoNet account and Nibiru for future announcements on this.
As a last piece, how would you suggest people test drive the platform?
You can deploy a cluster with four or five, six, seven, eight, nine GPUs.
It'll cost you 20 bucks, right?
It'll cost you next to nothing to try it out, to feel what that looks like.
And then connect a worker.
You don't have to do it long term, right?
If you've got a work laptop or something that's lying around, connect it, take five minutes, see what it feels like.
And so as you're researching DePINs and getting more involved, you'll get a sense of how these networks work on the supply and demand side.
I guess we'll end it here, guys.
Thanks for having me, guys.
Looking forward to the next one.
Yeah, thanks for joining us.