And, for example, if the user wants to make a trade on Uniswap, a pre-confirmation would
give them a promise on exactly what price they would get for their trade on Uniswap.
And what we really need is a way to punish the pre-confirmer for not respecting their promises.
And one way to do that is to use EigenLayer, whereby you can add additional slashing conditions
for not respecting these pre-confirmations.
So, for example, if you have a transaction request from a user and you say, okay, I'm going to include it at position 10,
and then you don't fulfill your promise — at position 10 you end up putting a totally different transaction —
then the user can say, hey, hold on, I've been cheated, and I'm going to get you slashed on the 32 ETH that you're staking.
Really interesting. And I think it might be worth digging into how pre-confirmations work a little bit more.
I think this, to me, is an endlessly fascinating topic as it relates to based sequencers.
So, as I understand, what you're saying is that some validators will opt into restaking their ETH and will offer pre-confirmations,
and then if they either put different transactions in a particular slot, or if they miss a slot, they will be slashed.
And so I have, I guess, two follow-up questions.
It sounds like these folks are going to be taking on some risk,
so I guess they should be getting some reward in exchange for taking that risk.
And then a related question: for folks who aren't taking on that risk, what happens to those transactions?
Say the only validator offering pre-confirmations is 100 slots out.
Does that mean that transactions will have to wait, those 100 slots, or is there a way for those transactions to still make it to Ethereum sooner?
Right. So, let me answer the second question first.
So, on Ethereum, we have what's called the look-ahead, specifically the proposer look-ahead.
And what it means is that in the current slot, slot N, you actually know who will be the next 32 to 64 proposers.
So, you have at least 32 slots of visibility.
And what that means is that if you have, let's say, 10% of proposers that have opted in to become pre-confirmers —
so they've opted into these new slashing conditions —
then, with very high probability, there's going to be at least one pre-confirmer within the look-ahead of 32 slots.
And so, as a user, what I have to do is to look at the look-ahead and just start talking to the very first pre-confirmer.
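That "very high probability" is easy to check: with a 10% opt-in rate, the chance that none of the 32 proposers in the look-ahead is a pre-confirmer is 0.9^32, so:

```python
p_opt_in = 0.10   # fraction of proposers opted into pre-confirmations
lookahead = 32    # slots of proposer visibility

# Probability at least one proposer in the look-ahead is a pre-confirmer.
p_at_least_one = 1 - (1 - p_opt_in) ** lookahead
print(f"{p_at_least_one:.1%}")  # ~96.6%
```

Note this treats opt-in as independent per slot, which is a simplification; in practice the same opted-in validators recur across epochs.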
And what we're going to do is we're going to set precedence rules saying that the next pre-confirmer,
even if it's, let's say, 10 slots in the future, they have precedence over the sequencing of the next block that gets settled on the roll-up.
So, all the prior proposers can still get transactions on-chain, but those transactions either need to be signed by the pre-confirmer —
that is, they need to be in the very specific order specified by the pre-confirmer 10 slots in the future —
or the proposers can include their own transactions, but then you don't have guarantees on ordering.
You only have guarantees on inclusion.
So, if the pre-confirmer wishes, they can change the ordering.
So, that's how you get, like, pre-confirmations at every slot, even if only 10% of proposers have opted in to becoming pre-confirmers.
Now, in terms of your first question, about incentives and risks,
when you provide a pre-confirmation to a user, you can request a pre-confirmation tip.
So, in the same way that we have an inclusion tip, which is a small amount of ETH, you know, to get your transaction included,
you can have the exact same mechanism for getting your transaction pre-confirmed.
And one of the things that we can do, actually, is make sure that the transaction is only valid if it is pre-confirmed at a very specific position.
And so, it's not like a pre-confirmer can, you know, just replay transactions and change the ordering and things like that.
It basically has a very simple binary decision to make.
Either it provides a pre-confirmation or it doesn't.
And if it doesn't, then it's going to be a lost opportunity, a lost financial opportunity.
So, for example, if users are willing to pay, let's say, one cent per transaction,
then every time a request comes in, it's kind of this binary decision around either getting the one cent or not getting the one cent.
And if, you know, you're providing pre-confirmations for tens of thousands or hundreds of thousands of transactions per second across all the based rollups,
then that's going to be a very significant part of your income as a proposer.
And so, I guess to implement this, eventually, there might be a new envelope transaction type that's maybe only used in L2s that has a second tip field that specifies a tip to one of these pre-confirmers.
Yeah, that's exactly right.
You can either do it with account abstraction, or you could have a new native transaction type in the rollup's virtual machine.
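A rough sketch of what such an envelope might carry — these field names are hypothetical, since no such transaction type exists today:

```python
from dataclasses import dataclass

@dataclass
class PreconfTransaction:
    # Standard fields (heavily abridged).
    sender: str
    payload: bytes
    inclusion_tip: int   # wei paid for inclusion, like today's priority fee
    # Pre-confirmation extras.
    preconf_tip: int     # wei paid to the pre-confirmer for the promise
    target_position: int # tx is valid only at exactly this position,
                         # which rules out replaying it in a different order

tx = PreconfTransaction(sender="0xabc", payload=b"", inclusion_tip=10**9,
                        preconf_tip=10**9, target_position=7)
```

The `target_position` field is what makes the pre-confirmer's choice binary, as described below: honor the promised position and collect the tip, or don't and forgo it.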
And then, of course, I guess the power of all of this is that you get that instant confirmation.
And effectively, through slashing, you're basically putting some economic security behind something that feels like kind of one slot finality,
where once you have a transaction pre-confirmed, you can feel pretty good about it making it on chain.
And I guess the economic security that's offered here would be some portion of that 32 ETH that that pre-confirmer has restaked.
And I guess one thing that I'm particularly curious about is in some of the non-based decentralized sequencer designs,
there's work being done to use BFT-style consensus to get multiple nodes to effectively sign off on and in effect create these pre-confirmations for kind of batches of transactions.
And the consequence of that is that you have more stake potentially to slash.
And so I'm curious if this is something that you're also thinking about in the context of based sequencing.
Once you're adopting EigenLayer, I guess the design space is actually quite big.
And so I'm wondering how far down that path are you thinking?
So it's a very reasonable question to ask:
is 32 ETH, you know, a sufficient amount of economic security for 12 seconds' worth of pre-confirmations?
And, you know, if the price of ETH were to go to, let's say, a million dollars, then yeah, maybe $32 million is enough.
But let's assume that it's not enough.
It turns out that there's multiple reasons why we want pre-confirmers to be sophisticated.
And that's somewhat at odds with, you know, Ethereum validators that are meant to be running on a home internet connection on a Raspberry Pi.
And some of the reasons why we want sophistication is, one, around just bandwidth, right?
If you're going to be providing pre-confirmations for tens or hundreds of thousands or even millions of transactions per second, then you need the bandwidth to be able to just simply download all these transactions in real time.
Another thing that you want to do is provide a low latency service.
And so you need, like, some amount of networking sophistication.
And then, as you said, like, really, 32 ETH might be insufficient, right?
You want, let's say, 1,000 ETH or 10,000 ETH of economic security.
And then there's other reasons like the fact that as a validator, I don't really want to be leaking my IP address, right?
This is a denial of service vector.
And so I don't want to be communicating directly with users.
Instead, it would be better if the pre-confirmer was sophisticated and had, you know, anti-denial of service infrastructure like Cloudflare.
And then, like, another reason for having sophistication, which is a bit subtle, is around tip pricing.
So it turns out that whenever the transaction is MEV rich, it can basically impact the amount of MEV that the proposer can expect at the end of the slot.
And that's going to affect, you know, how much the tip should be provided.
So as an example, if a user is making a Uniswap trade and they're trading in a direction which will lower the expected amount of MEV at the end of the slot, so it kind of lowers some sort of arbitrage opportunity, then the proposer needs to be compensated for this.
And that would be reflected in the tip pricing.
Vice versa, if a user makes a trade kind of in the wrong direction and it increases the arbitrage opportunity for the proposer at the end of the slot, then you can have a negative tip.
So you can basically have the proposer willing to pay the user to pre-confirm that transaction right now because that increases the expected amount of MEV at the end of the slot.
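One way to picture that pricing logic — the numbers and the simple additive MEV model are made up purely for illustration:

```python
def preconf_tip(base_tip, mev_before, mev_after):
    """Tip a pre-confirmer would quote for a transaction.

    If the trade lowers the MEV the pre-confirmer expects to capture at the
    end of the slot, the user compensates the difference; if it raises it,
    the tip goes negative — the pre-confirmer pays the user to trade now.
    """
    return base_tip + (mev_before - mev_after)

# Trade shrinks the end-of-slot arbitrage by 5 units: user pays extra.
assert preconf_tip(1, 20, 15) == 6
# Trade grows the arbitrage by 5 units: negative tip.
assert preconf_tip(1, 20, 25) == -4
```

Estimating `mev_before` and `mev_after` in real time is exactly the kind of work that demands a sophisticated operator rather than a home validator.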
All of this to say that there's multiple reasons why we want pre-confirmers to be sophisticated.
And so really what we want to do is we want to have the layer one proposers delegate their pre-confirmation rights to a sophisticated pre-confirmer in the exact same way that they're delegating their block building rights to sophisticated builders.
And so just like we have proposer builder separation, we can ask ourselves, you know, can we have PPS, proposer pre-confirmer separation?
And it turns out the answer is yes, you can, you can do that.
And the cleanest way to do it is actually to do it at the L1 itself.
So one of the things that I'm working on right now, in collaboration with Mike Neuder, who's helping with the write-up, is a proposal — well, we haven't finalized the name just yet.
But basically what will happen is that instead of having one single type of proposer, there are going to be two separate types of proposers.
There are going to be beacon proposers, which are the current proposers, and then execution proposers.
And the idea is that these execution proposers are meant to be sophisticated.
And in particular, they can take on the role of being pre-confirmers.
And in that instance — you know, just like we have a fairly centralized builder market, where something like 20 or 30 builders land a meaningful amount of blocks on chain —
there might be similar amounts of centralization at the execution proposer level.
And those entities can provide, you know, large amounts of collateral — let's say 1,000 ETH or 10,000 ETH, or whatever is sufficient for users to feel safe about their pre-confirmations within 12 seconds.
One thing I do want to stress is that, you know, the worst case really is that a user was given a pre-confirmation and then, you know, they have to wait an extra 12 seconds or something like that.
It's not like the transaction is at risk of being replayed or reordered or anything like that, because of the envelope that you talked about.
Like we can have protection on the user side to prevent replay in case the pre-confirmation falls through.
Yeah, I'm going to have to try to wrap my head around that, because I'm guessing that if a validator ignores the pre-confirmations they've given and puts other transactions in,
then the global state of Ethereum may no longer be what those previous folks expected.
Just looking at the clock, we have just a few minutes left and I really wanted to touch upon this idea that you talked about, this eventual sequencing merge, so to speak.
I think that's really fascinating. Can you share just a little bit more about that?
Right. So that goes back to shared sequencing and synchronous composability.
So today on the L1, every single smart contract composes synchronously with every other smart contract.
It's like one unified chain, one unified experience.
And that allows, for example, modules like Uniswap to combine with other modules.
And you have these money Legos and then you have these very strong network effects.
Now, in some sense, in some real sense, rollups break down the composability, right?
Arbitrum is not very composable with Optimism.
And so you can ask yourself, okay, how do we go back to this amazing user experience where everything is synchronously composable?
And it turns out that when you do two things, you magically get synchronous composability.
Thing number one that you need to do is you need to have a ZK rollup with instant proving, with real-time proving.
So what this means is that when you put a rollup block on chain, you immediately have a ZK proof that proves that this block is valid.
So basically settlement — proving the execution of the transactions — happens at the exact same time as inclusion, when the transactions are included on chain.
So that's ingredient number one.
And then ingredient number two is you need the rollups to have shared sequencing.
So if you have two rollups, A and B, they're both ZK rollups with real-time proving, and they have a shared sequencer.
Then suddenly the shared sequencer can offer synchronous composability across these rollups.
So they can do flash loans, they can unify liquidity, they can do like crazy bridging in and out as many times as they want.
The rollups basically act like one big meta rollup.
And, you know, I guess what will happen is that, because of the network effects of synchronous composability, the rollups are going to start merging with each other.
There's going to be some sort of shared sequencer.
And the question is, is that shared sequencer going to be based or not based?
There's some projects like Espresso or Astria or even, you know, some people are suggesting that, you know, Solana could be a shared sequencer for rollups.
Like whatever it is, I think there's going to be this winner-take-most shared sequencer and all the rollups merging together.
But I have an even stronger thesis, which is that the preferred shared sequencer is going to be the L1 itself because it provides the most amount of economic security.
And it's actually a security assumption that rollups have already bought in, right?
If I'm buying into Ethereum layer one for the security of data, of data availability, I might as well just reuse the exact same security assumption for sequencing as well.
Because if I don't, then I have two modules and the security falls to the weakest link.
But also there's aspects of credible neutrality, right?
You want to find this neutral playing ground where every rollup and their competitor are willing to opt in to the shared sequencer.
So imagine, for example, that Arbitrum comes up with some amazing Arbitrum sequencer.
Well, is Optimism going to want to use the Arbitrum sequencer?
Not really, right, because it's a competitive project and they don't feel this is a neutral playing ground.
But, you know, it doesn't get more neutral than the Ethereum layer one itself.
And so I think that's going to be a big reason for having this Schelling point around based rollups.
That is really interesting, really exciting.
I think we can all attest to how exciting it will be to have that sort of composability between L2s.
And, yeah, it's fascinating to see just how the shared sequencer space is evolving and to think about just where it may go.
And I guess I'm kind of curious: if a non-based sequencer does become kind of the winner that takes most — I guess in the traditional corporate world, the bigger company would consider buying the smaller company.
Is that kind of world at all too out there for the broader Ethereum ecosystem?
Could Ethereum as a community take new and interesting public goods infrastructure and somehow acquire or consolidate with them and effectively make them more credibly neutral?
Is that something that folks have talked about?
I mean, I think it would be quite difficult for, you know, the Ethereum governance — the social governance at layer zero — to somehow agree to mint a bunch of Ether, let's say a million Ether, and use that to go acquire some non-based sequencer and somehow, you know, make it based.
The good news is that I don't think this is going to be required.
I have this relatively strong conviction that the L1 will ultimately win — that it will just end up winning.
One of the nice things about using the layer one is that all the gadgets and all the fancy designs that the layer one will have, the based rollups will be able to inherit.
So for example, we want to have inclusion lists for censorship resistance.
We want to have enshrined PBS.
We want to have MEV burn, which happens to improve censorship resistance in addition to inclusion lists.
We want to have encrypted mempools.
We want to have VDF-based randomness so that the leader election is even more robust than it currently is.
There's all these incremental upgrades that will improve the layer one sequencing.
And, you know, our mission, I guess, is to make the layer one sequencing World War Three resistant.
And so there's this big incentive, I think, for rollups to just say, hey, you know, we're happy using the L1, which is providing all these services.
And there's really no trade off.
Like the main trade off that I talked about was that you have to give away your MEV.
But if MEV is 1% of your income stream, then it's a minor sacrifice.
And I call it the MEV gambit.
When you're playing chess, you might be willing to sacrifice a pawn in order to win the game by having a structural advantage.
I think it's the exact same thing here.
You know, you're giving up 1% of your income stream.
But now you're suddenly tapping into the credible neutrality, the security and the network effects of Ethereum layer one.
That's, I think, a really great metaphor.
Well, you know, I think this might be a good opportunity just to transition to one other topic that I think might be interesting for the audience.
And something that you briefly talked about in Istanbul.
And, you know, I think you said that roll ups, they effectively need three things from an L1.
They need data availability, they need sequencing, and they need some way to kind of validate execution.
And, you know, with danksharding, we have the first.
With based sequencing, we'll have the second.
What about the third point?
Or, yeah, how are you thinking about that third point?
So, Vitalik, I think just yesterday, published a tweet and a write-up on what he called enshrined zkEVMs.
Now, people have, like, misconstrued what the term enshrined means, and they're kind of unhappy about it.
And so, the new proposal is to have the word native instead.
So, I talk about a native virtual machine as opposed to a custom one.
Now, Ethereum, you know, has the EVM at layer one.
And so, you could say, you know, are all the EVM equivalent roll-up projects native roll-ups?
And the answer is no for a couple of reasons.
So, if you take, you know, Optimism, for example, there's kind of two big reasons why they're potentially not native today.
Like, number one is that there might be bugs in the implementation.
It's extremely hard to build an exactly equivalent virtual machine, especially if you're doing a zkEVM, where there's tons and tons of complexity.
And the second, you know, maybe more fundamental issue is that every time the EVM upgrades — every time there's a hard fork at layer one —
it's very hard to keep track of it.
You basically need to introduce governance at the roll-up layer.
And governance in and of itself is an attack vector for execution because you can 51% attack the on-chain governance.
And so, there's this proposal out there to provide an opcode or precompile within the EVM, which allows for verification of EVM execution within the EVM.
So, it's basically a way for roll-ups to be able to publish blocks on-chain and then you pass it through this precompile and then it kind of returns true or false.
Is this a valid block or is this not a valid block?
And in terms of, you know, the bugs, you benefit from client diversity at the execution layer.
So, you know, every execution client has their own implementation of this opcode.
And if there happens to be a bug, well, you know, that's something that we can fix off-chain.
And if we have client diversity, it won't cause issues.
And then whenever the EVM forks or changes because the layer one hard forked, well, the definition itself of this opcode will move along.
So, you can be EVM equivalent at all times, even in the future when the definition of the EVM itself changes.
And you can do that without on-chain governance.
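A rough Python model of the interface being described — the real thing would be an EVM precompile, so this only shows the shape of the check, with `evm_execute` standing in for each client's own EVM implementation:

```python
def execute_precompile(pre_state_root, block, post_state_root, evm_execute):
    """Return True iff executing `block` from `pre_state_root` yields `post_state_root`.

    Because `evm_execute` is each client's own EVM, the check inherits client
    diversity, and automatically tracks the EVM's definition across hard forks.
    """
    return evm_execute(pre_state_root, block) == post_state_root

# Toy stand-in "EVM": the state root is just a hash of (old root, block).
toy_evm = lambda root, block: hash((root, block))

valid = execute_precompile(0, 1, toy_evm(0, 1), toy_evm)       # True
invalid = execute_precompile(0, 1, toy_evm(0, 1) + 1, toy_evm) # False
```

A rollup contract would call this with its last settled state root and the newly posted block, accepting the block only if the precompile returns true.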
And so, what I think will happen is that a very large part of the roll-ups, those that are basically EVM roll-ups, will have the option to make a huge security upgrade at essentially no cost.
They're going to be able to remove all their bugs from whatever EVM implementation they have.
And they're going to be able to be, like, forward compatible with whatever the future definition of the EVM is.
And so, as you said, once you adopt this opcode and you boost your security, you become a so-called native roll-up.
And so, I think the end game for a lot of roll-ups will be to be a based and native roll-up.
And then, once you use, you know, L1 data, L1 sequencing, and L1 execution, then you basically have the exact same security properties as the L1 itself.
And one of the things that you can do is you can actually deploy a roll-up with very, very few lines of code.
With, let's say, ten lines of code, because a lot of the complexity will be encapsulated in this opcode.
And I call a roll-up that uses all three of those an ultrasound roll-up.
So, it's very, very easy to deploy these ultrasound roll-ups.
And you may ask yourself, you know, would the ultrasound roll-ups be competing with the non-ultrasound roll-ups — with the kind of commercial roll-ups, I guess?
And I don't think that's the case.
And there's all sorts of reasons we can talk about.
But, like, one of them is the fact that there's going to be network effects.
And the commercial roll-ups have, like, many, many years of head start to build these network effects.
But maybe a more fundamental aspect is that, you know, a project like Optimism, for example, is working really hard on its tokenomics
in order to incentivize public goods being built on its chain.
And that's the kind of thing that, you know, a ten-line smart-contract roll-up cannot do.
Wow. Ultrasound roll-ups.
That sounds really exciting, I gotta say.
What a great way to end this episode.
Yeah, thank you so much, Justin.
And that was, I think, both incredibly insightful, incredibly interesting.
Really, really appreciate your time.
And, yeah, just really, really excited to see how all of this research and work evolves.
And, yeah, and the future launch of ultrasound roll-ups.
Yeah, it's a little sci-fi.
We're at least half a decade away, but it will happen.
And basically what we need is the same amount of client diversity that we have today, but for zkEVMs.
So we need, like, five production-grade implementations of zkEVMs.
And at that point, we can snarkify the EVM itself.
So we can make the current EVM a roll-up — specifically, a based and native roll-up.
So, you know, we have at least one ultrasound roll-up.
And then if you want more instances of ultrasound roll-ups, then you can just deploy them as smart contracts.
Wow. Does that mean that there will be a reverse merge in the future?
Well, there will be kind of a splintering of the state.
So each roll-up instance has its own state.
But this is merely, like, a superficial thing because all these states can merge together through synchronous composability.
I guess one kind of very interesting detail is how the gas markets will work out.
So remember how I said that I think there's going to be at least one roll-up instance, which is going to be the Manhattan of roll-ups, where people are willing to pay a very high rent.
So they're willing to pay, let's say, 100 gwei per gas.
But then there's going to be this long tail of maybe more application-specific roll-ups, where it's going to be essentially free.
There's going to be no congestion.
You might have a roll-up for a specific game or one which is, you know, for a specific, you know, banking application or whatever it is.
And because it won't be congested, it will be essentially free to use.
Now, you may ask yourself, you know, how does synchronous composability work when you have these different gas markets?
And in my opinion, it's going to be, like, very easy.
If you have a transaction which synchronously composes with roll-up A and roll-up B, you're going to have to pay the sum of the gas prices on roll-up A and roll-up B.
So, for example, if you have roll-up A, which charges 100 gwei per gas, and roll-up B, which charges one gwei per gas, then any time you have a transaction which is blocking both at the same time —
basically consuming these precious sequential resources on both roll-up instances at once — you're going to be charged on both sides.
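Sticking with those example numbers, the pricing rule is simply additive (the rollup names and prices here are illustrative):

```python
GAS_PRICE = {"rollup_a": 100, "rollup_b": 1}  # gwei per unit of gas

def cross_rollup_gas_price(rollups):
    # A synchronously composing transaction blocks every rollup it touches,
    # so it pays the sum of their per-gas prices.
    return sum(GAS_PRICE[r] for r in rollups)

assert cross_rollup_gas_price(["rollup_b"]) == 1
assert cross_rollup_gas_price(["rollup_a", "rollup_b"]) == 101
```

This is what makes staying inside one rollup cheaper than composing: a transaction local to the cheap rollup pays 1 gwei per gas, while one that also touches the "Manhattan" rollup pays 101.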
And this is one of the big reasons for just saying, you know what, I'm going to stay within a specific roll-up — I'm going to be on Manhattan, willing to pay the high rent —
because then I don't have to pay this additional gas to synchronously compose with another roll-up.
So, it's not like the roll-ups are going to be exactly homogeneous because there's going to be some gas market differences, but other than the gas markets, they're really going to look like one big meta roll-up.
Really, really interesting. Well, that is a feature that I think, again, we can all get really excited for. And I think it's a perfect place to end. I think we're just a tiny bit over time. But yeah, once again, thank you so much, Justin. That was amazing.
Absolutely. Thank you for having me, Marek.
Of course. Great. Well, once again, this has been L2 Unplugged. And if you liked this episode, come join us next week, where we'll be talking to Ben Fisch, co-founder of Espresso Systems.
He'll actually be continuing this conversation, talking more about shared sequencing topics. So, I hope to see you then.