Tech Corner: Discussing EIP-4844

Recorded: April 5, 2024
Duration: 0:59:19
Space Recording

Full Transcription

We're just going to give a few more minutes and let a few others join and then we'll get started.
Very good.
How are you today, Matt?
Doing great. How are you?
I'm okay. It's very rainy here in Scotland, as it usually is.
Ah, Scotland.
Yeah, it's supposed to snow soon, in April as well. So that'll be interesting.
I think we're just waiting for Josh to join as well.
But this one will be quite an interesting one for me because I have no clue what danksharding is. So I'm looking forward to learning a few things from you.
I'd be surprised if there are many people in the world that really know what danksharding is.
Yeah, when I was setting up the Space, I was like, what exactly are they going to be talking about? Trying to think of a title was quite hard. I had to go and look at a crypto.com article and stuff like that.
And even then, it probably didn't make it clear.
I read something about blobs. I don't know if that makes any sense.
Yes, there are blobs involved, for sure. Many blobs.
There we are. There's Joshua there. So I'll just invite you to co-host, Josh.
And then we'll get along with it.
Yep, there we are. Josh has it.
Okay, perfect. I'll now hand over to you, Matt, and feel free to take it away.
All right. Well, welcome, everybody, to another episode of Tech Corner.
This time, Russ will not be joining us. He's taking some well-deserved days off, so I'm going to be hosting today.
The topic today is going to be danksharding.
This is going to be an interesting one.
We decided to invite on one of the very few people on the planet who can probably speak about this topic, Joshua Primero. For those of you who don't know him, you certainly should.
He is the lead consensus architect at RDXWorks, so one of the main minds behind Cerberus and a lot of other really deep parts of the Radix platform.
Josh is often surveying other consensus approaches and various things like this, so he really would be the best person.
In fact, I'm going to rely heavily on Josh.
I'm going to be the first to admit here, I still don't really completely understand danksharding.
I've kind of been trying to catch up with this.
Fortunately, it's not something I really need to understand for a lot of the product stuff.
This is like really deep architecture stuff.
So this is going to be a little bit of like an education experience for me as well as Josh goes through this.
So Josh, can you – are you here?
Can you speak?
Hey, can you hear me?
So where I thought I'd start on this is – I don't know if this is me.
I'm getting a nasty echo through you, Josh.
I have my headphones on.
Let me see if it will be better without them.
What's that?
Maybe it's grabbing the phone or something.
How's that?
Is this better?
That sounds good now.
So I'm going to start with the – I'm going to assume that there's very little knowledge on this, because this is a really deep topic.
I mean, we're going to start with danksharding, and we're very quickly going to get into blobs and KZG ceremonies and all sorts of things like this.
But I wanted to start at the absolute top level of this.
I'm just going to go to, like, Ethereum's own description of this, basically.
It says: Danksharding is how Ethereum becomes a truly scalable blockchain, but there are several protocol upgrades required to get there.
Proto-danksharding is an intermediate step along the way.
Both aim to make transactions on Layer 2 as cheap as possible for users and should scale Ethereum to greater than 10,000 – I'm sorry, 100,000 – transactions per second.
So immediately, I think one of the main topics here is EIP-4844, right, which is actually technically about this thing called proto-danksharding, which is sort of on the road to, I guess, full danksharding.
In fact, there's a little diagram I found someplace where they mention that in the Ethereum ecosystem there's been this evolution of the thinking about sharding on Ethereum, where initially they started with this idea of full sharding.
We're just going to have these completely parallel shards that are conducting sort of a shared consensus.
Then they moved on to, well, maybe we can simplify that to what they call data sharding.
Then proceeded on to danksharding, which was kind of a streamlined version of that, as I understand it.
And then proto-danksharding, which is the thing that's, in theory, going live on the network pretty soon.
So I guess where I wanted to start with this is: what is danksharding?
What is it aiming to do?
And then within that, what piece does proto-danksharding play in that picture?
Yeah, so danksharding – I believe it comes out of Ethereum's push towards what they call a focus on L2s, or rollups.
So they're really committed to supporting more rollups as their method of scaling transactions on the Ethereum network.
And danksharding, I think, overall is a way to turbocharge these rollups by making them a lot cheaper.
Because currently, the way L2s implement themselves on top of Ethereum is using call data.
So, you know, with a transaction, you call a specific smart contract with specific call data.
And that call data is then used by the execution layer to run things.
And because the execution layer is required to have that call data to run that transaction, every node on the network is required to store that call data for eternity, essentially, as part of the block.
So let's back up a second.
So when you're talking about call data, this is basically like passing data as an argument directly to the smart contract.
Yeah, exactly.
And so the reason why you're doing that is you're trying to enable this idea of a rollup. Like, maybe – how does a rollup work?
Like, how does this create scalability for Ethereum?
So essentially, you can think of a rollup as a separate network.
This separate network runs a bunch of transactions, and a rollup is essentially bunching all these transactions into a single, rolled-up transaction, which is then posted on the Ethereum network as a single transaction.
And so you can think of them as using call data as sort of a hack, essentially.
That's where they're posting their data.
The Ethereum network doesn't actually run or execute anything on top of that call data.
They're just using it as sort of a blackboard.
This is the data that we know we want to execute on our separate network, essentially.
So you've got this L2 network that's doing its own thing.
It's got its own rules and it's kind of like summarizing what's happening on that chain and just kind of writing it to Ethereum.
Yeah, exactly.
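(For concreteness: a minimal sketch of the pre-4844 rollup pattern being described here, in plain Python. It's purely illustrative – names like L1Chain and post_batch are hypothetical, not a real Ethereum or rollup API.)

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class L1Block:
    calldata_posts: list  # raw bytes every full node keeps forever

@dataclass
class L1Chain:
    blocks: list = field(default_factory=list)

    def post_batch(self, compressed_batch: bytes) -> None:
        # The L1 never executes this data; it is only ordered and stored.
        # But because it arrives as calldata, every full node retains it
        # for the lifetime of the chain -- which is why it is expensive.
        self.blocks.append(L1Block(calldata_posts=[compressed_batch]))

@dataclass
class Rollup:
    l1: L1Chain
    pending: list = field(default_factory=list)

    def execute(self, tx: bytes) -> None:
        # Execution happens on the L2's own state machine, not on the L1.
        self.pending.append(tx)

    def roll_up(self) -> bytes:
        # Bundle many L2 transactions into one L1 post ("the blackboard").
        batch = b"".join(self.pending)
        self.pending.clear()
        self.l1.post_batch(batch)
        return hashlib.sha256(batch).digest()  # commitment to the batch

l1 = L1Chain()
rollup = Rollup(l1)
for i in range(1000):
    rollup.execute(f"transfer-{i}".encode())
print(rollup.roll_up().hex())  # one L1 post stands in for 1000 L2 txs
```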
Interesting.
So does that mean – how does an L2 interact with Ethereum?
Let's say you've got an application on the L2 – is there a way for it to interact with Ethereum tokens, or ERC-20 tokens?
It seems like that would be difficult if you're just, you know, writing information on the blackboard.
Yeah, they do have some bridging mechanisms, but I think it's not as simple as just posting a transaction.
Because you are moving from L1 data to an L2 sort of data thing.
There's some complexity there.
Like, obviously you can't directly interact with stuff on the L1, because the L2 has got its own rules.
So I assume it would have to be something where, if you've got this smart contract on Ethereum and the L2 is pushing data into it –
I assume that smart contract would have to implement some special logic to say, well, okay, you can give me some particular requests and I'll lock up some tokens or something like that.
Yeah, I'm not exactly sure.
I'm not sure there's even such a thing as L1 smart contracts that can communicate directly with an L2.
I think they're specific bridging smart contracts, if I understand it correctly.
Which is what anybody who wants to do that bridge needs to interact with.
So really, the L2 really does act as a separate network.
It's just that you have this record on the main chain of what happened on the L2, basically.
Yeah, exactly.
I think you can think of it as more like: the L2 uses the L1 for certain decentralized properties.
For example, the ordering of transactions – and that's about it.
So it's pretty much like a separate network on its own.
So when they're talking about danksharding – danksharding then is basically the same kind of technique, you're saying.
It's just a more efficient way of doing it.
So maybe it makes better sense to start with proto-danksharding, which is EIP-4844.
The idea with this is that, like I mentioned earlier, currently rollups use call data, and you're required to store that for the lifetime of the blockchain.
It's quite expensive because of that. With EIP-4844, with supporting rollups in mind, they've introduced this notion of blobs. A blob essentially acts the same as call data, but the execution layer never sees it.
As well, these blobs only need to be stored by an Ethereum node for about 18 days, which is enough time for rollups to be supported – permanent storage of that data is then pushed onto the L2 network.
So it's essentially like a blackboard that things get posted on for a deterministic amount of time.
And then it gets wiped, essentially.
And that's why it's a lot cheaper to use these blobs.
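(A rough sketch of the shape of the blob-carrying transaction EIP-4844 introduces, using field names from the EIP itself – blob_versioned_hashes, max_fee_per_blob_gas – but written as illustrative Python rather than client code.)

```python
import hashlib
from dataclasses import dataclass

FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT  # 128 KiB

@dataclass
class BlobTransaction:
    to: str                       # e.g. the rollup's inbox contract
    calldata: bytes               # still executed by the EVM as before
    blob_versioned_hashes: list   # commitments: all the EVM ever sees
    max_fee_per_blob_gas: int     # blobs have their own fee market

def versioned_hash(kzg_commitment: bytes) -> bytes:
    # Per the EIP: a 0x01 version byte followed by the tail of the
    # SHA-256 of the KZG commitment to the blob's polynomial.
    return b"\x01" + hashlib.sha256(kzg_commitment).digest()[1:]

# The blobs themselves travel in a sidecar on the consensus layer and
# are pruned after ~18 days; the execution layer keeps only the 32-byte
# versioned hashes, which contracts can read via the BLOBHASH opcode.
```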
So rather than directly writing the record to Ethereum – passing it to the smart contract and having it write it down and store it – you're basically saying:
there's this other piece of data in this blob, which you don't have to make part of the main chain, but you sort of let the smart contract know: hey, there's this blob.
You should pay attention to it; you've got 18 days to do what you need to do with it – I assume some sort of correctness check or something.
It's sort of examining the blob and asking, did the right thing happen? Or what's the purpose of the blob, I guess?
So there are different types of rollups. For example, for an optimistic rollup, you have these things called fraud proofs.
Essentially, if a certain number of transactions have been posted in these blobs, and let's say one of them was doing a double spend for whatever reason, or did something bad –
somebody can post a fraud proof and prove that the sequencer of the rollup did something bad, or whatever sequencing mechanism the rollup is using, and essentially slash these validators of the L2.
And I think they said at least seven days is a good amount, but they made it 18 days, which is equivalent to, I think, 4,096 epochs.
So they just picked a nice conservative number for this data to be available for that time.
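(Checking that figure against the consensus-layer constants – 32 slots per epoch, 12 seconds per slot, with MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS being the spec's name for the 4,096-epoch window:)

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096

seconds = MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
print(seconds / 86_400)  # ~18.2 days
```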
So you're kind of relying on the smart contract on Ethereum to do a little bit of validation that things happened in the right order.
Yeah, exactly.
Well, not even just right order, but also things like: this transaction created this state hash rather than a wrong state hash – any sort of incorrect execution, essentially.
So there is some notion of correct behavior on the L2.
And even if the L1 doesn't know everything that happened, it can kind of say: hey, I know the rules were followed on the L2.
Well, with optimistic rollups, it's more: I don't care until somebody proves to me that somebody did something bad.
Yeah, exactly.
Yeah, exactly.
It's like, yes, that's right.
It's not validating.
It's basically saying: I'm just going to sit on this data.
And within 18 days, if someone else can come in and say, hey, actually I've got proof that something incorrect was done –
then I've got the ability to punish people and so forth.
Yeah, exactly.
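(A toy sketch of the optimistic pattern just described: the L1 accepts a claimed result, sits on it for a challenge window, and treats it as final only if nobody proves fraud. The names OptimisticBridge and Claim are hypothetical, and a real fraud proof verifies execution on-chain rather than being handed a correct root, as this simplification does.)

```python
from dataclasses import dataclass

CHALLENGE_WINDOW = 7 * 24 * 3600  # e.g. seven days, in seconds

@dataclass
class Claim:
    batch_commitment: bytes    # points at data posted in a blob
    claimed_state_root: bytes  # what the sequencer says the batch produced
    posted_at: int
    challenged: bool = False

class OptimisticBridge:
    def __init__(self):
        self.claims: list[Claim] = []

    def post_claim(self, claim: Claim) -> None:
        # "Optimistic": accepted without re-executing anything.
        self.claims.append(claim)

    def submit_fraud_proof(self, claim: Claim, correct_root: bytes) -> None:
        # A challenger re-executes the batch off-chain and shows the
        # sequencer's root is wrong; here we just compare the roots.
        if correct_root != claim.claimed_state_root:
            claim.challenged = True  # the sequencer's bond gets slashed

    def is_final(self, claim: Claim, now: int) -> bool:
        return not claim.challenged and now - claim.posted_at >= CHALLENGE_WINDOW
```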
Interesting.
So yeah, you're using the Ethereum chain as kind of an enforcer, but it really is still a separate network.
It's just sort of using Ethereum as a difficult-to-attack enforcer, because it's a much larger chain.
Right.
So I think you gain sort of some of those decentralized properties of Ethereum.
Yeah, that sort of stuff.
You'd started – there's a whole network of terms here.
You'd mentioned optimistic rollups.
I know I've read in some of the articles on this stuff, they start talking about ZK rollups and so forth.
What's the relationship of these different kinds of rollups to proto-danksharding?
So yeah, essentially these blobs are available to use rather than call data.
And these blobs are a lot cheaper to use.
So that would be EIP-4844.
When we go into actual danksharding, the idea here is – because currently, all nodes on the network would still need to verify these blobs.
As in, know that all the data is available for these blobs.
Mm-hmm. This might be a little bit hard to describe, but we've got 45 minutes.
With danksharding, the idea is – I guess you could think of this as similar to execution sharding, but this is for data sharding.
The idea here is that every node does not need to download the entire blob that's being posted
to have some degree of confidence that the entire data blob is available somewhere on the network.
You're saying the different kinds of rollups provide you with this benefit, that you don't have to get the whole thing?
No, no, sorry.
This is danksharding – because essentially danksharding is about how we push further once we have this concept of blobs, right?
And essentially, they've taken the position that scaling transactions reduces to the problem of scaling how much data we can now process through consensus, without execution.
Right, okay, yeah. So basically, Ethereum ticks along at its constant rate – it operates at the speed it operates at – and what you're basically saying is: look, the way we're going to get scaling is we're going to have this large number of L2 networks that are all purpose-built for some application or whatever.
And so now our limit on overall scaling – if we call that whole thing the Ethereum ecosystem – the idea is: how much of this data that you're writing on the chalkboard, on the Ethereum main network, can you get through Ethereum at its standard throughput rate?
For a large number of these L2s.
Yeah, exactly.
So if you think about what you just said, the optimization problem now is: how much data can we fit into this pipe, essentially.
And at that point, it becomes a data availability problem, which is what danksharding tries to solve.
Essentially, the idea is that with a transaction –
let's say you have a hash that represents some amount of data.
And let's say that's a large amount of data.
How do we process so much data that a normal home computer can't process it by itself, because there are so many bytes moving along?
And essentially, danksharding is a mechanism by which a small node can download just a portion of the data and see that it corresponds to the hash – the root hash – of that data.
And have a great amount of confidence that the rest of the data is available somewhere on the network, essentially.
So this is related to – as I was going through some of the articles on this stuff, I saw this reference to a probabilistic mechanism.
It sounds like this is related: you're not trying to a hundred percent prove that it's correct, you're just doing a little quick sampling of it and saying it's very, very unlikely that all your samples would check out if part of the data were actually missing.
Right, right.
Yep, exactly.
And then once you get a large number of nodes, and you know there are a lot of other nodes doing this, then in aggregation you have very good probabilistic guarantees that all the data is available.
In some sense, it's very similar to proof of work, where there's an asymmetry between the cost of generating the proof of work and the cheapness of verifying it – you only need to check the number of zeros in front of a proof of work to see that the work's been done.
So in a similar way, with this data availability, it's very cheap for a single node to get an idea that this data is available.
So you have sort of that asymmetry.
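(A back-of-envelope version of that sampling guarantee, assuming the standard 2x erasure-coding rate, under which a block producer must withhold at least half of the extended data to make a blob unrecoverable:)

```python
def miss_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    # Probability that every independent random sample happens to hit an
    # available piece, i.e. the node is fooled into thinking the data is
    # all there when enough of it has actually been withheld.
    return (1.0 - withheld_fraction) ** samples

for k in (10, 20, 30):
    print(k, miss_probability(k))  # 1/2^k: ~1e-3, ~1e-6, ~1e-9
```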
It also kind of strikes me as like – if you go through airports pretty often, you find that every once in a while it'll be: you've been randomly selected to be inspected.
It's this idea that, look, there's no way we're going to fully inspect everybody that's coming through the security gate.
This is just impractical.
But if we do spot checks – a large number of spot checks – it's pretty likely that we're going to catch someone that's trying to sneak something through security.
So basically, you can push a much, much larger amount of data through Ethereum because you're not really checking all of it.
You're just kind of doing spot checks.
It's similar, but it's a little bit better than just random sampling.
They use erasure codes and that sort of thing.
But yeah, the idea is there.
It's more clever than that.
It's not tunnel vision – only looking at this one part – you're doing this sampling thing where it becomes very difficult to hide anything.
I don't know – my airport analogy is going to fall apart, but I think I understand what you're saying.
It would maybe be like – I don't know.
You'd sample everyone a little bit and then reduce that into a single number.
It's like everyone's walking through a scanning machine, but it's kind of a shitty scanning machine.
So you have to walk through a bunch of them – something like that.
I'm going to pause for a moment here.
So we're a good 20 minutes into the episode.
Definitely.
If there are any questions that anyone has, throw those in – I'm going to stop from time to time and try to grab some questions.
So if we're rushing along and it's like, you keep using this word, what does that actually mean?
Or if there's another question you've got out there, definitely throw it in and we'll start grabbing some of those as we go.
I don't think we have any yet, Connor – unless I'm looking in the wrong place.
None yet whatsoever, but feel free to just ask them in Telegram or Twitter or something and we'll get to them.
It's so interesting that there are no questions.
Everyone completely understands.
You've explained it so well, Matt.
That's what it is.
Well, so let's go to some of the implications of this a little bit, because one thing that strikes me is that when you talk about sharding, it really isn't a single thing.
There are all these different ways of thinking about sharding.
In fact, there's that little diagram.
I wish I could post the visual here, of the Ethereum community going through this process of starting with the idea of full sharding, and then data sharding.
And then proceeding down to kind of simplifying and streamlining the problem.
But it also strikes me that as they've gone, they've made different compromises in performance.
Like, my understanding of some of the original thinking about full sharding on Ethereum was that there was still going to be this possibility of, well, fundamentally you can have communication between these shards.
You can have direct coordination.
You can kind of share state.
Even if it isn't entirely atomically composable, you can still do these commits to create behavior across shards.
It's just a little bit more burdensome than if it was on a single network.
And it seems like danksharding maybe makes more compromises to that, from what you're saying.
It becomes – you really are treating these as separate networks.
So it's not even a matter of losing atomic composability.
It's kind of losing pretty much all composability.
Am I understanding that right?
Yeah, I think you got that exactly right.
It's not even that – from Ethereum's point of view, these blobs can be used for whatever.
They don't even need to be used for rollups or L2s necessarily.
It's just sort of another primitive they've added to the EVM, which happens to really support L2s a lot.
But again, Ethereum themselves don't try to design something that makes L2s more interoperable with each other, that sort of thing.
They kind of leave that to anything running on top of the EVM.
And I think that does end up with problems with composability, and with creating a platform that everyone can use.
If anything, it makes things more heterogeneous.
And I feel like they're trying to move away from creating a platform, let's say, and just trying to support any platform that builds on top of them.
Right, exactly.
It feels like they've backed away from the ambition a little bit.
They've kind of said, well, we're not really going to scale Ethereum.
We're just going to have this ecosystem of chains that have a little bit of a shared security model.
And then we'll leave it up to those chains to be able to talk to each other in some way.
And, you know, bridging can do some interesting stuff.
I mean, we've been talking to the LayerZero guys and so forth, and there are some really cool things you could do in terms of coordinated message passing between chains.
But if you're leaving it to the application level, it feels like that's really where you end up – with that kind of solution.
You're going to end up with something like LayerZero between these L2s, which, you know, is cool, but it's certainly not like "send this token".
If you're moving from one chain to another chain, you're really sending messages over the wall and trying to coordinate activity on both sides.
Right, right.
And even though LayerZero is cool, you still have different trust assumptions, as opposed to running directly on an L1 – you're always going to have different UX problems between them.
And that's not to mention supporting and maintaining these different bridges.
It just makes all of those things a lot harder.
I mean, honestly, I have to admit that I didn't realize how much they'd backed off. Because if you look at the roadmap now, it seems like they've said: well, we're going to do proto-danksharding, get this fundamental blob capability that we need.
And then we're going to move on to danksharding, and then that's kind of the end of the sharding story now – the scalability story.
They're saying, as it says on ethereum.org, that's what gets us to 100,000 transactions per second.
But really what that's saying is: well, we can have this collection of separate networks which in sum can do 100,000 transactions per second – which seems like a very different thing to me.
It seems like calling that Ethereum is kind of pushing the definition.
Yeah, yeah, exactly.
You could already do 100,000 transactions per second on centralized networks, or, you know, just a single server.
And it is kind of strange to me, because those 100,000 transactions per second – what type of transactions are these?
If they're all kind of independent from each other, then is it really 100,000 transactions per second?
Because now the scalability problem has really been pushed to the layer twos themselves, right?
Like Arbitrum or Optimism – they now need to solve how to increase the number of transactions per second on their particular state machine, right?
Right, right.
Because their 100,000 transactions per second is for whatever uses their underlying consensus, but not necessarily for their underlying state machine – which is a weird number they're aiming for, or a weird metric in that sense.
Right, right.
I mean, this has been the thing that always seems strange to me about L2s: if we assume that a given L2 is its own network, and, you know, for a decentralized network there are a bunch of different consensus techniques –
you've got Ethereum, which does about 10 transactions a second, and let's go to, I don't know, Solana and the way they do things.
I'm going to undercut their numbers a little bit and talk about, like, DeFi transactions.
It's like they're doing a few hundred, or maybe there are some techniques that get up to a thousand transactions a second by some reasonable definition.
But still, the whole L2 concept basically means different applications and assets are only going to be able to directly interact with each other within that boundary – within, you know, hundreds of transactions a second.
And if you ever need to do more than that – sorry, you can't actually directly talk to each other.
You've got to have your own chain, which seems like a pretty significant limitation.
I mean, the classic big example people talk about is the Visa network, which does whatever, a few thousand transactions a second in the world, or something like that.
What you're basically saying is: well, Visa has their own blockchain.
Sorry, you don't get any more interoperability than that – which feels like a huge backing away from the whole promise of DeFi and blockchain in general.
I thought the point was that we were going to be able to connect everything – I've got some digital assets in my wallet, and I can use those with anything.
Now what we're saying, in effect, is that in the Ethereum ecosystem it's like: okay, you've got some assets on this chain, but if there's an application on this other chain you want to use, you're going to have to move across a bridge of some kind.
So then maybe you've got a wrapped form of that token on the other chain, and then you can interact with that application, and then you've got to move someplace else.
It feels like it's pushing toward this kind of usability nightmare.
And not only that – not only UX and usability – but each of those L2s now also has different trust assumptions.
And there's different decentralization – you've got centralized sequencers in some, and different things like that.
So now there are kind of too many parameters to think about if you're trying to operate in that kind of L2 ecosystem.
Right. It's kind of the classic Ethereum mindset that says: well, hey, every chain can have its own trust assumptions.
That's up to the consumer. You make your own judgment – you, the user, should be making a judgment of whether this L2 you're going to be operating on has sufficient security guarantees for the application you're going to be running there.
It seems to come from the same mentality that says: well, you're going to write down your seed phrase to control your account.
It's this idea that if you're super into danksharding and you understand how this stuff works, then you can be an intelligent consumer of the Ethereum ecosystem.
Which seems like an extraordinarily high bar to set for people we're asking to come into DeFi.
It's already hard enough, and then asking people to understand these L2s is very burdensome.
It kind of makes sense, though. A lot of the activity with L2s – I remember in Singapore last year, at Token2049, there was a bunch of activity around different L2s.
And a lot of it was centered around these gaming applications, which were quite cool.
There's a lot of really cool stuff going on, particularly in Asia, around these mobile games and stuff that have tokenized assets in the games, and user-contributed content.
And the ability to move things between games. It was really cool. And for the most part, these games were shopping for an L2 that had enough throughput or whatever for them to build their game.
Or maybe the L2 had developer tools that were custom-built for building this kind of game, so they could get up to speed really quickly.
But the thing that struck me is that I think the reason those applications are really going crazy on L2s is because they live in their own little world.
You've got this game where you've got your own wallet just for that game, and all the assets move around within the game.
Or maybe you've got a publisher with three games, and you can move assets between them – but they don't ever expect those game assets to go out and interact with the larger DeFi world.
And I kind of like the practicality. They're basically saying: hey, we can launch a game right now.
That's cool on its own merits and has its own assets. But it also struck me as being very limiting.
It feels like, if that's where we end up, why do you really need a blockchain in the first place?
Why wouldn't you just build that on a database?
It feels like the power of having it all on a blockchain in the first place is the fact that I can take my game asset out and list it on a special marketplace that wasn't built by the game developer.
And it can develop its own secondary market value, or I can build my own game that interacts with someone else's game.
So it's cool that there's all this activity going on, but it would just be so disappointing if we gave up this ideal of being able to have things be atomically usable across applications.
That, to me, seems like a big part of the point.
Right, right. Yeah, exactly. I think you got that exactly right.
I think L2s do make it much easier to start up your own – like you're saying – your own decentralized game engine.
That's its own ecosystem.
It's a lot cheaper to do that than, you know, to create a new blockchain from scratch.
So in that way, it's a lot cheaper.
But like you said, if we're thinking about the real promise of DeFi – owning your own finances and more financial things – it's that composability that's missing when you focus on a purely L2-type ecosystem.
And one thing I was thinking of: what is sort of the end result of this focus on L2s, from a DeFi perspective?
And I was thinking: at some point – these blobs, you're still going to need to pay for them, but essentially they make things cheaper.
And at some point, you're going to have some L2 which is probably going to take up most of that blob space, because it's the state machine – it's the L2, the rollup – which has essentially won the DeFi war, right?
This is where all the assets are, where you'll want to build things.
Because it's cheap.
But at that point, if there is a single L2 which has taken up most of that space, then essentially Ethereum itself is supporting this one L2.
Except now there's this extra layer of indirection with these blobs and everything.
And it's sort of like you've artificially made the stack for that L2 a lot bigger.
You could have just started with that in the first place.
I don't know if that's making much sense, but...
Yeah, no – it's funny, because if you're saying, look, L2s are just going to be their own networks with their own rules,
then why don't you just let them properly be their own networks at that point, rather than having this kind of vestigial attachment to the Ethereum network?
I mean, I guess I can see it in the beginning stages, where it's like: oh, I've got this L2 that I'm building specifically for this one game or whatever.
Because there's not enough there yet –
you get a lot of benefit out of the security guarantees that Ethereum provides at that small scale, because your network itself, if it were standalone, could be pretty easily attacked.
But at some point, the L2 gets so popular that Ethereum is just this thing hanging off of your system that you're not really getting a lot of benefit from.
It's just adding complexity.
Yeah, well, it does have the security stuff, like you said – the ability to post things. Ethereum essentially becomes the bottom-most layer, the one with the blackboard for blobs.
That's what this L2 that's taken over the network essentially uses.
Right.
And you do wonder how that L2 even got to that point, because what you're saying is: if this is going to become overwhelmingly popular, that L2 is going to have to deal with its own scaling problem.
Like, how do you get enough capacity to handle all the applications that want to run on your L2?
Do you just then go to an L3, and that starts fragmenting things?
The L2 itself will need to implement its own sharding mechanism – whether that's an L3, or execution sharding, or whatever that is. But yeah, exactly.
Well, maybe we can change direction a little bit here for the last 20 minutes.
We don't have any questions coming in.
That's fine.
Maybe for those listening who haven't been following
Radix for a long time and aren't deeply familiar with some of our tech – maybe you could contrast the approach that danksharding describes for adding scalability.
And we've kind of talked about this.
It's sort of a fragmenting of your ecosystem.
So you have these different application-specific chains that are all kind of using Ethereum just as a way of providing a little bit of security.
Maybe contrast that with the Radix approach – where we're going with Cerberus, braided consensus, and some of those sorts of things.
We often talk about our approach as being parallelism.
How does that parallelism differ from what danksharding would provide?
So danksharding – I don't even think they use the word sharding.
They say themselves that sharding is not the right term for what they're doing.
Like we said, it's more about sampling – data availability sampling, essentially.
Whereas for us, we're looking into sharding from the data all the way down to the execution.
So essentially, being able to parallelize execution so that we can run multiple transactions at the same time.
Which is more similar, I think, to the initial approach Ethereum was looking into – though their model is still a little bit different, I think. But it's more similar to that.
Like, originally they were thinking: okay, we're going to figure out this way of being able to shard, but still have some communication and coordination between our shards, and kind of a shared validator set and all this kind of stuff.
But one of the big differences, I guess, is that even back then, they were thinking in terms of a limited set of shards, where it's still easier to communicate within your shard.
And then there's kind of this other process for moving cross-shard.
Whereas with Cerberus on Radix, there's no difference, right?
With Radix, the coordination approach is different.
Maybe you could describe how it works.
So with Ethereum – well, let's say the original Ethereum vision for sharding – you can think of it as maybe multiple EVMs, each with their own state root hashes, executing their own thing.
You could think of it as multiple CPUs running with their own memory, independent from each other.
And then you have some sort of mechanism for sending messages between those CPUs, or those virtual machines.
So on a given shard, if you've got a bunch of smart contracts that are all running on the same CPU, obviously that's super easy.
But if you want to talk to the next CPU over, you can do it, but it's like you're throwing a message over the wall.
You're not running in the same machine.
Yeah, yeah, exactly.
You're sending some sort of message.
At least that's what I recall from their original vision.
I think it was – because they never did any sort of synchronized block finalization cross-shard, I believe.
I also think that's right.
If there are any Ethereum people on the call that want to correct us, feel free.
But I think that's how it works.
Or how it's supposed to work.
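(A conceptual sketch of that message-passing shard model – independent shards whose cross-shard effects land in an inbox and only apply in a later step on the receiving shard, with no atomicity across the two. Purely illustrative, not how any client implements it.)

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Shard:
    state: dict = field(default_factory=dict)
    inbox: deque = field(default_factory=deque)

    def send(self, other: "Shard", msg: dict) -> None:
        # Cross-shard effects are "thrown over the wall": they land in
        # the other shard's inbox and only take effect in a LATER step
        # on that shard. There is no atomicity across the two shards.
        other.inbox.append(msg)

    def process_inbox(self) -> None:
        while self.inbox:
            msg = self.inbox.popleft()
            self.state[msg["key"]] = msg["value"]

a, b = Shard(), Shard()
a.state["x"] = 1
a.send(b, {"key": "x", "value": 1})  # step 1, on shard A
b.process_inbox()                    # step 2, later, on shard B
```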
The Radix approach, though, with Cerberus – the idea is, rather than operating at that level, to operate sharding at a little bit lower level: at the level of a given transaction.
Knowing the dependencies of that transaction, we can know statically how to split it up – or rather, the shards which are required to execute the transaction.
So I guess a simple way to put it is: each piece of state is part of some shard.
And given a transaction that touches some set of states, our consensus can execute that transaction in a synchronized way, rather than in multiple steps.
So we still get atomic composability, essentially.
Yeah – I'm trying to continue the metaphor you started.
One of the things you didn't mention, which I think is one of the important parts, is that with Radix – with fully sharded Cerberus – we will have this extraordinarily large number of potential shards.
You aggressively scatter everything across them.
So you can think of it as having this almost unlimited number of CPUs, but we've built a system – basically like a bus between the CPUs – that makes it really, really low-friction for any set of CPUs to talk to each other, agree on something, and then go their own way.
Whereas with the original Ethereum approach – not the danksharding approach, but what they were thinking about and gave up on – you'd have a relatively small number of CPUs, but the bus connecting them was kind of simple.
It was basically: you can just pass a message, but there's no such thing as directly coordinating between multiple CPUs.
Right, right, right.
Yeah, exactly.
I think their approach was a simpler mechanism, I guess you could say.
Theirs is more of a message-passing sort of thing.
Whereas ours is more of – if we're taking the CPU analogy – a shared memory model between CPUs, with some primitive which helps synchronization between them.
Right, yeah – maybe that is the way of thinking about it, right?
A shared memory model.
Basically, we have one combined memory space – we're not fragmenting the memory space across the multiple shards.
There is one Radix state, but a given CPU kind of has the right to operate on a part of it.
And so you can coordinate between these things, and you can write to different parts of that memory in parallel.
Right, right.
Yep, exactly.
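(A toy sketch of that shared-memory idea: a transaction that touches substates in several shards commits atomically only if every involved shard votes to prepare it. All names here are hypothetical, and real Cerberus runs BFT consensus among each shard's validators, which this single-process sketch elides.)

```python
from dataclasses import dataclass, field

@dataclass
class Shard:
    substates: dict = field(default_factory=dict)

    def prepare(self, reads: dict) -> bool:
        # Vote yes only if the transaction's view of our substates holds.
        return all(self.substates.get(k) == v for k, v in reads.items())

    def commit(self, writes: dict) -> None:
        self.substates.update(writes)

def execute_atomic(parts: list) -> bool:
    # `parts` pairs each involved shard with the reads/writes the
    # transaction declares against it; the shard set is known up front
    # from those statically declared dependencies.
    if all(shard.prepare(p["reads"]) for shard, p in parts):
        for shard, p in parts:
            shard.commit(p["writes"])
        return True
    return False  # any shard's no-vote aborts the whole transaction

s1, s2 = Shard(), Shard()
s1.substates["alice"] = 10
s2.substates["bob"] = 0
ok = execute_atomic([
    (s1, {"reads": {"alice": 10}, "writes": {"alice": 0}}),
    (s2, {"reads": {"bob": 0}, "writes": {"bob": 10}}),
])
print(ok, s1.substates, s2.substates)  # True {'alice': 0} {'bob': 10}
```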
Very cool.
Very cool.
Well, still no questions.
I'm starting to get disappointed.
Like, this is supposed to be participatory.
Come on, guys.
No question is too dumb, because this is a really challenging topic.
So I'm really interested if anyone has any questions about, you know, what the hell does this mean?
So we've got another 10 minutes or so.
If we have any questions along those lines.
I guess I have a question for you, Matt.
With all this tech stuff – rollups and whatever – what do you view from a product point of view?
Does this relate at all to the end user, or do you view these things as things the end user should never really even think about?
Or is it abstracted from the end user?
To what level do these things come up, let's say, in the wallet and that sort of thing?
Well, yeah.
I mean, I'm curious what the Ethereum viewpoint is on it.
If they're proposing something like this, what do they imagine the wallet experience would look like?
I guess we kind of know what it is, because it's going to be the same experience you get with L2s.
You've got to connect your wallet to the L2; the L2 might even have its own custom wallet.
You kind of have your accounts on that L2.
So maybe MetaMask is smart enough that you can sort of switch between networks, but it's really not a cohesive experience.
It's just having one UI that you can connect to different ecosystems, and it gives you a similar experience.
I mean, I guess I can say: setting aside technology, if we just think about what we would want the ideal state for the user to be –
I don't think they want to think about this kind of stuff at all.
Because one of the big things about user experience is that basically anything you present to the user should be something that meaningfully helps them directly solve the problems they care about.
And as much as possible, if there's something they don't care about, you should simply hide it from them, because it doesn't matter.
Which is often one of the really difficult challenges.
It's really easy in UI design to just go: well, it would make the tech a lot easier if the user had to make this choice – you can kind of offload the problem onto the user.
But any time you do that, it kind of dilutes the rest of the experience.
The user goes: okay, I'm trying to solve a particular problem, and you're just introducing friction that provides me no benefit.
So it's like – what's a good metaphor?
Imagine if the World Wide Web was designed differently. Rather than having a single URL scheme and a single TCP/IP layer underneath, what if you fired up your web browser and had to connect it to the specific server that was hosting the webpage you wanted?
If you go back to the eighties and nineties, that might have sounded like a simpler approach from a technology point of view: oh, okay, we're just going to have all these little networks.
You can just connect to this network and do some stuff there.
And then maybe that network says: hey, if you want to do this other thing, you're going to have to link over to this other network.
And then the user disconnects their wallet and, you know, basically bridges the message across – but obviously, the World Wide Web would have never taken off
if that's what the user experience was. The whole point was that it just felt like one system.
It felt like there is a single web and I can do whatever I want.
I can send a message from this application to this other application.
Everyone's on the same email system.
Everything is inherently universal.
And so when I think about Web3 and DeFi, it feels like the user experience should, if anything, be even more so like this, because that's the problem we're trying to solve.
Right now, if you've got assets, if you've got money that you hold, or your identity, or anything – all these things are already in these separate silos.
We basically have that experience where it's like: okay, if I want to connect to this bank, they've got their own app.
I connect there.
I do something there.
Now I've got to go log into a different bank to do something over there.
And the only way you move between the banks – particularly if you're an American, you're probably familiar with this –
moving between banks is a huge pain in the ass, because it's kind of like this message-passing thing.
It's like one bank sends a SWIFT message, and then this other bank acknowledges it.
And then three days later, your money shows up.
It feels like the L2 thing is heading in that direction from a user experience point of view.
At which point, once again – why did we bother with blockchain?
We just have different silos now,
run by different people.
Right, right.
Yeah, I mean, I think it's a usability nightmare, from my point of view.
And I think one of the great things about building the Radix wallet around where we're going with the Radix platform is that the wallet never has to think about this stuff.
There's nothing in our wallet implementation that understands anything about Radix consensus.
Not even, really, the Radix Engine.
It really just understands this high-level interface of: oh, there are accounts.
Those accounts hold some tokens.
I know how I can push those tokens around with transactions.
Transactions are very direct to build.
You offer this layer that speaks the language that the user interface wants to speak, which to me is absolutely crucial.
If we didn't have that capability at a platform level, there's no way we could have built the wallet the way we did.
Right, right.
I think it's a little tricky, though – the difference in the blockchain world is that there are quite different trust assumptions, right, with whatever we're using.
Related to EIP-4844 and rollups in general, you've got different trust assumptions for all these L2s.
Do you see that as something users don't need to care about?
For example – a dumb one – with Bitcoin, you wait six confirmations before viewing your transaction as finalized.
Right – which clearly is something that no normal user is going to want to have to think about.
It's like: oh, on this L2, I have to wait this many blocks, or whatever – and understand it.
I mean, I think if the world went that way – if everything was built out of L2s – then realistically, it's going to end up once again just replicating what we have in the traditional world today.
It's not like I trust Chase Bank because I've audited their back-end system
and concluded: okay, you guys have built a good system; it's going to track my money in a logical way.
Basically, I trust them because it's Chase Bank – they've developed a reputation.
Other people have told me: oh yeah, Chase Bank is fine.
They've got FDIC insurance from the government – you can trust your money there.
I don't have to think about their technology.
I think if you look at any mass-market technology, it has to get to that kind of abstraction point, where the user never has to think about the technology – all they have to think about is what they actually care about.
And right now, what we basically have to care about is the reputation of the financial institution, or the company, or whatever you're dealing with – which, you know, works.
But if we're going to do a blockchain mechanism, you'd really like to get to the point where it's basically just: all I have to do is trust Radix.
I understand that Radix assets – all my stuff on the Radix network – this is safe.
I don't have to worry about the trust assumptions, because I know the Radix network is safe, by reputation.
And all the things that I want to do are available on the Radix network.
Therefore, this is not a problem for me.
I think that fragmentation – again, imagine: when I'm online, generally, in the Web2 world, I just have a mental model for how the internet works.
I just know that if I send a message, there's a possibility that message could be lost partway, and I might have to resend it.
You get used to what things are like on this one system.
But I think people have a very limited capacity for trying to understand different systems.
It has to be simplified down to a business card, or a bumper sticker: oh, okay, I use this thing, and this is how it works.
Right, right.
Interesting.
Well, we're getting close to time.
Still don't have any questions.
That's fine.
It's Friday.
I understand you guys are tired.
I think we might wrap it up at this point.
It's been a really interesting conversation for me.
I hope those of you who are sitting silent have at least found this informative.
I know I certainly have.
I have a much better understanding of danksharding than I did before going into this call.
As always, I guess the only shade I'll drop here is that I'm even less concerned about Ethereum than I was before.
I mean, honestly, I think it's very cool tech.
But again, does the tech actually fit the end usage – the overall goal of what we're trying to achieve?
And maybe Ethereum's goal is different from Radix's, where we're trying to decentralize finance, essentially, right?
But it seems like maybe their goal is a little bit different, and their rollup focus is better suited for their goals.
Right. I mean, it seems like it's taking this very technology-first approach.
It feels like: hey, a lot of people are using Ethereum, so we need more scalability.
Okay, well, one approach to scalability is sharding.
Okay, sharding is a thing we need.
Okay, let's start working on that problem.
Well, actually, if we define sharding in this way, we can build this cool technology that does that.
Oh, okay, let's do that technology and see where it goes.
Right, right.
But there's a lot of cool research there.
I mean, at a technology level, I did find myself going: oh, this is really cool – this idea of subsampling the data and all this kind of stuff.
Some of those things may yet find their place.
Right, right.
Okay, well, on that, I think I will turn everybody loose.
Thanks, everyone who listened and joined in.
And we'll see you guys for the next one.
Cool. Thanks, Matt.
Thanks, Josh.
Thanks, Josh.