L2 Unplugged Ep. 3 w/ Vitalik Buterin

Recorded: Nov. 3, 2023 Duration: 1:03:49
Space Recording

Full Transcription

And as in past episodes, we are recording this live.
So if you have any questions, please tweet at us and we will do our best to include them.
Now, in this episode, we are going to talk about how things are going with the surge and generally where Ethereum and L2s are going next.
Welcome, Vitalik, thank you so much for joining us.
You recently wrote a post about the different types of layer 2s and the trade-offs they make, and Kobi and I are just really excited to dive into this post.
And maybe to start, you know, in your post, you talk about the different trade-offs that L2s can make,
trading off security for lower transaction fees.
And, you know, you start by talking about why this might be useful.
And so maybe just to set the stage a little bit, can you share a little bit more about this?
Sure. So I think we're seeing a lot of projects of different kinds,
either building different kinds of things that they call L2s on top of Ethereum for various purposes,
or in some cases, either independent L1s like Celo or sometimes even completely centralized projects
seeking to become L2s or seeking to get some kind of like deeper technical integration into the Ethereum world of some form.
And I think when people do this, they have two kinds of motivations.
So one of those motivations is just to try to kind of be more aligned with the Ethereum ecosystem in a kind of some spiritual sense,
kind of be on the same team as, you know, team Ethereum,
while at the same time still having your own autonomy and being your own thing.
But then at the same time, there's also this technical goal of basically trying to get more security
and create a thing that people can trust if they're living inside of your thing
without having to, like, convince people that your own tiny bespoke validator set
or your own server or whatever actually is trustworthy, right?
And there's a lot, I mean, it's important to remember that those two objectives are separate to some extent,
though they're definitely also aligned to some extent as well, right?
Because it's, you know, if you're going to be on team Ethereum,
then there's even more value in making it easy and safe for people who currently live in the Ethereum universe to also go and do things in your own ecosystem as well, and vice versa.
But, you know, there's also a bit of a difference between those two goals, too.
And then within the space of, like, trying to gain security by creating some kind of tie to Ethereum,
there's a lot of different strategies that you can take.
And a lot of those strategies really depend on, like, what exact trade-off between security and scale
and functionality you're looking for, right?
So the kind of most pure one that we're the most familiar with is roll-ups, right?
And a roll-up basically has computation happening off-chain but attested to on-chain using either ZK-SNARKs or a fraud-proof system, like Optimism and Arbitrum and Fuel do.
And it also has data on-chain, right?
And so enough data gets published on-chain that it's possible to reconstruct the state
so that if every other participant in a roll-up disappears,
other participants can come right back in and extend it
or at the very least have enough information to start withdrawing all of their assets.
So roll-ups are really the gold standard.
They're basically Ethereum-equivalent security,
with the exception that there's thousands of extra lines of code
that you have to trust not to have bugs.
Which, like, to be fair, is, you know, like, that's a very big if
and there's a lot of these separate discussions going on
on, like, security councils and multi-provers
and different ways to try to, like, minimize that security gap.
But, like, that security gap is something that exists in all of these systems, right?
So I think if we're going to compare different types of layer twos,
it's worth sort of putting that aside as a separate problem.
So we have roll-ups, but then we also have various kinds of options
that take compromises of different types.
So Validiums are probably one of the most famous ones.
A Validium is a system that uses ZK-STARKs to prove the validity of Merkle roots — like, state roots — that are being published to chain.
So state roots are being regularly published to chain, but data is kept off-chain.
And a Validium has the property that if an attacker decides to be, like, maximally evil, then that attacker can cause people's funds to get stuck forever, but they cannot steal funds, right?
As the operator, because you're constrained by a ZK-STARK system, you can disappear, but you cannot cheat.
So that's a Validium.
And the benefit of Validiums is that because you don't have to pay for on-chain data, the data costs are much lower.
You still have to pay the costs of ZK-STARK proving, right?
But the data cost is significantly reduced — well, mostly gone, basically.
You only have to pay an O(1) data cost, like a fixed amount per period, to publish the roots and the proofs.
And then there's other interesting ones.
So one of these hybrids that I think people don't talk about as much,
but it's important to explicitly recognize as a category,
is pre-confirmations, right?
So a lot of these roll-ups, they want to kind of really appeal to end users,
and a lot of them have this kind of explicit consciousness of, like,
Ethereum is the thing that tries to be a neutral and safe platform,
but we are the, like, really opinionated thing.
We do business development.
We appeal to users.
We try hard to be user-friendly.
And one thing that that requires is faster block times.
And the problem is you cannot do roll-ups or Validiums with a faster block rate than the Ethereum chain itself, just because, like, getting the roll-up or Validium level of security requires having something published to the chain.
And so what they currently do in practice is there's just, like, a signature by some central operator that gets trusted, but what they're at least planning to do is have signatures that get published by, like, some set of consensus nodes specific to that layer two.
And then there's a fraud-proof system where, like,
if they say they're going to publish one thing,
but then they later end up actually publishing something
completely incompatible, they get penalized.
So there's, like, an economic security level, right?
And that's a kind of trade-off of security in exchange
for the functionality of having much faster block times.
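[Editor's note: a minimal sketch of the pre-confirmation idea described above — an operator signs a promise, and a later published batch that contradicts it is grounds for a penalty. This is an illustration with hypothetical names, not code from any existing sequencer.]

```python
from dataclasses import dataclass

@dataclass
class PreConfirmation:
    slot: int                # L2 slot the promise covers
    tx_hash: str             # transaction the operator promised to include
    operator_signature: str  # signature over (slot, tx_hash), verified elsewhere

@dataclass
class PublishedBatch:
    slot: int
    included_tx_hashes: set

def should_slash(promise: PreConfirmation, batch: PublishedBatch) -> bool:
    """Operator is slashable if the batch it later published for the promised
    slot is incompatible with the pre-confirmation it signed."""
    if promise.slot != batch.slot:
        return False  # this batch makes no claim about the promised slot
    return promise.tx_hash not in batch.included_tx_hashes
```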
Then there are systems like Plasma,
which have Validium-level costs, but roll-up-level security.
But the trade-off is, of course, that Plasma is kind of much more limited
to specific sets of applications.
And, I mean, I think there's an interesting argument to be made
that we've kind of swung toward Plasma-style constructions
actually being underrated now,
and there's an opportunity to resurrect them a bit.
But, like, even if we do,
they're inherently much more application-specific
than roll-ups or Validiums.
So that's, like, another security trade-off, right?
So there's this, like, broad space
of different security trade-offs.
And the way that you think about, like,
what is the thing that they're trading off
is basically, yeah,
if you have an asset on the layer two,
then, like, what level of guarantee do you have
that you can actually convert that into an asset
on the underlying layer one, right?
And the answer in the case of a roll-up is, well, always.
The answer in the case of a Validium is, well,
your asset could disappear if the operator is evil,
but it can't get stolen.
The answer in the case of a Plasma is, well, you can if you're only using one of these kind of fairly simple, limited classes of applications.
The answer in a pre-confirmation system is, well,
you have a lower economic guarantee
if you're willing to only wait one second,
but if you're willing to wait, like, let's say,
one minute or one hour, you have a much stronger guarantee, right?
So different choices have different answers.
And then separately, there is this other interesting question
of, like, what is your chain's level of ability
to read Ethereum, right?
So all of these — like, the roll-up, Validium, Plasma guarantees — they're all about withdrawing, right?
If you have an asset on your Layer 2, then, like,
how do you convert that into an asset on Ethereum?
Reading Ethereum is something you need to do
to be able to deposit things onto Layer 2 in the first place.
And in that case, there's basically, like,
two kinds of answers.
There is the kind of ideal answer,
which is basically every block in the top chain,
like, in your Layer 2 points to a block in Ethereum.
And if Ethereum reverts, then your chain reverts as well, right?
So, like, you have to have a fork choice
that, like, really is dependent on the Ethereum fork choice.
And, like, this is something that's not too hard to do
from a fork choice perspective.
Like, I have 50-line Python POCs that are about five years old,
but it does require your Layer 2 to, like,
be capable of reverting,
which is something that you totally can do
if you're forking, you know,
Geth or some other Ethereum clients,
but, like, it's an extra challenge.
And then the other approach is
you only make your chain aware of finalized Ethereum blocks,
in which case it's, like,
the code is much easier.
You generally don't have to revert.
And there are, like, exceptional cases
where, like, what happens if Ethereum gets 51% attacked?
But, like, you could just say,
well, you know,
we're not going to try to write code for that.
We're just going to admit that
if that happens, we're going to hard fork.
And then if you do that, your main weakness is basically that, like, if Ethereum enters an inactivity leak, then you don't have deposit functionality anymore. And so there's some interesting edge cases that happen there.
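[Editor's note: Vitalik mentions 50-line Python proofs of concept for this; the snippet below is not that code, just a rough sketch of the "tight coupling" option: each L2 block anchors to an Ethereum block, and only L2 blocks whose anchor is still on the canonical Ethereum chain are eligible to be the head, so an Ethereum revert forces an L2 revert.]

```python
def l2_head(l2_blocks, eth_head, eth_is_ancestor):
    """l2_blocks: objects with .number and .eth_anchor (an Ethereum block hash).
    eth_is_ancestor(a, b): True if Ethereum block a is an ancestor of, or equal to, b.
    Returns the highest L2 block whose Ethereum anchor is still canonical."""
    eligible = [b for b in l2_blocks if eth_is_ancestor(b.eth_anchor, eth_head)]
    # Stand-in for a real fork-choice rule: just pick the highest eligible block.
    return max(eligible, key=lambda b: b.number, default=None)
```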
And then there's the interesting question of, like, if what you are today is a separate chain and you want to convert yourself into being an Ethereum Validium, then, like, what are the specific changes that you need to make?
And, I mean, I remember a few months ago there was this big discussion about, like, oh, you know, is the concept of roll-up or Validium even real? Is it all basically just bridges?
And, I mean, I think one of the conclusions that you can take is basically that, like, converting a separate chain into a Validium is definitely easier than it seems.
So, like, it doesn't require massively re-executing a lot of things. It basically does just require having bridges.
But those bridges need to have, like, some very specific properties.
Like, probably, to be a Validium, you need, at the very least, to have a validating bridge — like a bridge on Ethereum that actually checks your chain's entire execution. So, like, either a ZK-EVM or whatever else gives you this kind of secure-enough two-way bridging.
And then for being able to interpret Ethereum, you need to, like, have a clear answer to what happens in extreme cases like 51% attacks or hard forks.
But it's, uh, it's, like, less hard than it seems. And there's a, yeah, a pretty viable path to doing that.
And then my conclusion of this is basically that there's just, uh, a lot of options. There are important considerations to think about.
There's a lot of different actors that are going to be interested in increasing their level of connectedness to Ethereum, right? Chains like Celo are a great example.
But then, in a completely different sphere, imagine if you're a video game company, you know, your game is currently on a server, and let's say your game is, uh, one of those games where, like, players have items and those items are potentially super valuable.
And then you want to make your game a Validium to some extent to basically prove to players that you're not cheating, right? This is totally something that you can do. And it's, like, less hard than it seems, right?
Like, uh, five years ago, if you wanted to be a project that makes some kind of, like, halfway-house trade-off between being a centralized system and a decentralized one, it would probably have made sense to go make a permissioned consortium blockchain.
But, like, permissioned consortium blockchains basically failed. And the reason why is, like, they tried to be a compromise between centralization and decentralization, but they ended up being the worst of both worlds: you have enough decentralization to make the development really annoying, but you also have enough centralization that people are not going to trust it.
A Validium is also one of these compromises, but it's, like, a compromise in a totally different direction: you gain a lot of trust properties, but you still can keep your server — it's still code that's run on a server, it's still very easy for developers to work with — but at the same time, you know, you gain a lot of benefits from, like, having security guarantees and having more connection to the Ethereum ecosystem.
So that's kind of my summary of the post, but definitely happy to dive more into any particular things.
And I really liked the pre-confirmation idea that you talked about before as one of the hybrids. Especially as you add consensus among multiple nodes, it starts to look a lot like what some people are calling decentralized sequencers, which are really interesting, and we'll talk a little bit about that maybe towards the end of this. Kobi, any questions that you want to add?
Definitely. First of all, that was a great summary, though I would still recommend for people to read the blog post — it's a really good and nuanced version of what types of trade-offs we have today, because there are a lot of security trade-offs that you can make, and there has been a lot of bike-shedding on the definitions, so it's a really recommended read.
But maybe to touch a bit on one of the definitions you gave: you mentioned that a Validium is something that uses, let's say, succinct zk-SNARKs to prove the validity, and then you have data availability available somewhere else, because data availability on Ethereum is expensive. You started talking about what that implies — and there are actually some social consensus requirements, which you touched on — but maybe expand on the social commitments a chain, or a Validium, would need in order to be secure.
I think the big social commitment is basically the commitment to stay faithful to the Ethereum fork choice, in the sense that if Ethereum reverts a block that you locked in on, then your chain also has to revert that block. That's the really big one. So basically, if something extreme happens to Ethereum, you also need to respond.
And the other one is just that if Ethereum hard forks, you might also need to hard fork in response. So like one specific example of this is if you're trying to convert something which is a separate chain into being a Validium of Ethereum: one route that you might take is to put a STARK-validating bridge of Ethereum inside your chain. But then the question is, like, well, if Ethereum changes its rules — if Ethereum has a hard fork — then as a result of that, the STARK that you need to correctly verify Ethereum becomes different. And that's true for pretty much every hard fork. Even if you take the simpler, lazy route of not verifying execution but verifying consensus, there's still going to be hard forks that change consensus.
So like I think, for example, in the medium term, Ethereum is exploring single slot finality. I'm hoping that we explore deeper changes to staking that address some of the centralization risks and at the same time reduce some of the signature requirements in terms of validating consensus. But then basically the logic that you need to verify an Ethereum block changes. So basically, if Ethereum hard forks, then whatever gadget you use in your thing to verify Ethereum would also have to change.
For projects that have the mindset of starting from Ethereum, it's a bit easier. Because if your mindset is that you're starting as a roll-up, then every node in your system is going to be a node of Ethereum in some sense, and you have that connection maintained in a very automatic way by default. But if you're going with the mindset of starting off as an independent chain, then you don't have that as a software thing — to describe it, yeah, you don't have that feature already where, like, all of your nodes are running Ethereum nodes as well. I mean, so you're going to want to have, like, the simplest possible way to introduce that connection. And if that simplest possible way basically ends up being a gadget that allows you to verify Ethereum blocks inside of your chain, then, like, there are these extra considerations that you need to worry about.
That's really interesting. And I guess one thing that's also potentially different between roll-ups that start with Ethereum and, you know, L1s transitioning to joining Ethereum might be around finality. You touched upon this a little bit with your overview. You know, there's two ways to provide finality as a Validium: you can only accept L1 transactions from finalized blocks, or you can accept transactions from the latest blocks. And I guess that has a big impact on, kind of, whether or not the Validium will reorg if Ethereum reorgs. So I'm kind of curious how you think about that. You know, you mentioned there's some advantages to each, and it's definitely elegant that L2s can kind of pick and choose what's more appropriate for their users. I'm kind of curious what you think about this.
I think the trade-off is generally that accepting the latest Ethereum block, and having the functionality to revert if Ethereum reverts, is significantly better from a user's perspective, because deposits are just always going to take exactly one slot — they don't need to take any more time. But it's just, like, a lot more technical work to actually accomplish. If you want the simpler route, then, like, you could do the thing of only verifying finalized blocks, and, like, that's just much less software work on your side. But on the other hand, there is the trade-off of, like, you have to wait longer — and especially in the inactivity leak case, you have to wait much longer in order for your chain to be able to read Layer 1 blocks, so deposits might take a really long time.
One other kind of just very bespoke consideration is, like, if a chain eventually, let's say, wants to be a Validium of Ethereum and of, like, let's say Bitcoin at the same time — which is, like, not possible today, but it's possibly possible 10 years from now if Bitcoin adds the functionality to make that possible, to, like, actually verify bridges on the Bitcoin side, which is, like, very far-fetched, but, like, there's definitely sort of cultural and software paths that Bitcoin could take that could make that a reality in 10 years. Then, like, you basically have this interesting property that you might have to revert if Ethereum reverts or revert if Bitcoin reverts, and there's, like, extreme cases where, like, it becomes impossible to satisfy both. But, like, you could end up basically trying to maintain that position for as long as possible, and then if having the tighter coupling becomes even harder, it starts to make more sense to try to have looser coupling. But that's, like, a very kind of far-fetched and almost sci-fi thing at this point.
I love it. So I think, like, a lot of what we've talked about today even, it means that Ethereum moving to one-slot finality will be a big difference. And you've talked a lot about — about a year ago you wrote about how aggregating and verifying BLS signatures is getting faster and faster. And maybe you can talk a bit about what it means more deeply for the finality of the L1 and what it means for L2s interacting with Ethereum and so on.
So I think my thinking definitely changed a lot about this, probably over the last three months, so let me try to kind of summarize both kinds of thinking.
Ethereum has this concept where you have a block, and then it has the concept of epochs, and a block gets finalized after two epochs. And the reason why this happens is because Ethereum has a lot of validators — like, there's about 700,000 validator slots. Those are obviously not 700,000 separate individual actors — some individual actors have multiple slots, and, you know, some of them are not individuals at all — but that's just, like, a different story. It takes a lot of work to actually process a signature from each of the validators, and so it takes an entire epoch to actually go through all the validators and get a signature from each one.
The way that consensus algorithms that have, like, single slot finality work — like things like Tendermint — is they do require two rounds of signing from every validator in order to finalize a block. Tendermint takes centralization trade-offs, because Tendermint can only effectively process a few hundred validators, but, like, this is basically the trade-off that it makes. And one of the arguments is, like, might it not make sense to actually convert Ethereum into being a single slot system in the same way that Tendermint is?
The argument is basically that there's a lot of value in having, like, that level of faster finality, basically because Ethereum finality today takes, like, something like 15 minutes, and a lot of people are not willing to wait that long. And so the value of having 15-minute finality is, like, actually much lower than the value of having, like, 12-second or even 30-second finality. The current approach also adds a huge amount of technical complexity, because the consensus spec has to deal with all of this complicated stuff of managing a fork choice and managing finality logic and managing, like, slashings and deposits and withdrawals at the same time. And so from a complexity perspective, we get the worst of both worlds.
And so the question is basically, yeah, you know, might it actually be an improvement to just, like, bite the bullet and go all the way to a single slot? And the way that BLS plays into this is basically that it's getting easier and easier to verify a large number of BLS signatures. And we did the math and we figured out that, like, theoretically it is totally possible to make a system that aggregates and verifies hundreds of thousands, potentially up to a million, BLS signatures within a single slot.
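[Editor's note: a small sketch of the BLS property being referred to — many signatures over the same message can be combined into one aggregate and checked with a single verification call. It assumes the py_ecc library and is only an illustration of the mechanism, not the consensus code itself.]

```python
from py_ecc.bls import G2ProofOfPossession as bls

message = b"example beacon block root"
secret_keys = [bls.KeyGen(i.to_bytes(32, "big")) for i in range(1, 11)]  # 10 toy validators
public_keys = [bls.SkToPk(sk) for sk in secret_keys]
signatures = [bls.Sign(sk, message) for sk in secret_keys]

# Aggregate 10 signatures into one, then verify them all with a single call.
aggregate_signature = bls.Aggregate(signatures)
assert bls.FastAggregateVerify(public_keys, message, aggregate_signature)
```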
So that's the
general thinking,
right, that it's
possible to get
there and there's
lots of benefits
in doing that.
And like it
simplifies even
things like bridges,
for example,
because those could
just verify
finality directly.
But the more
recent thinking
that we have is
basically that
like Ethereum
clearly has a
staking, you know,
like centralization
risk problem.
And it's like there's a lot of ways in which it's currently not really benefiting from the capability to have 700,000 validators.
And if you
try to kind
of extend
this philosophy
of like we
want everyone
to be able to
solo stake in
its current
form, then
like it doesn't
really extrapolate
well, right?
Because if you
imagine Ethereum
becoming a system
that gets used
by a hundred
million people,
then like you
can have a
million solo
stakers, you
can maybe have
10 million solo
stakers, but
like you're not
actually going to
be able to
process a hundred
million signatures
in a slot.
Like there's just fundamental impossibility results against doing that.
And so there's
some discussions
happening within
the Ethereum
research circles
of like whether
or not to find
ways to kind of
reform that.
And instead of going all in on making the current version of solo staking work for everyone, basically either modify how solo staking works to make it require less work for users — like, not make it slashable, for example, if you have under some large quantity of ETH — or, if we can make liquid staking, like, better and have better decentralization properties, then, like, we might as well go all in on that and increase the minimum ETH requirements.
and once you
do that, like if
you do either of
those, then it
actually might
become possible to
change Ethereum's
consensus so that
we have single
slot finality and
we have, for
example, less than
10,000 signatures
happening every
slot, right?
Currently, I think Ethereum has — what was it — I think we're at about 800,000 validators now, and divide that by 32, and so we have 25,000 signatures per slot. So we can have fewer signatures per slot and have single slot finality, and if you do that, then it also becomes viable to have SNARK bridges that actually, like, fully verify consensus, instead of verifying this totally separate thing — a gadget that we call a sync committee.
So, there's
some interesting
improvements there
and I think it'll
be really interesting
to see the
community try to
come to consensus
on a specific
model around
that over the
next three to
five years or so.
But, yeah,
that, basically,
yeah, once that
happens, it's going
to be a boon for
making bridging
easier and more
secure as well.
Exciting.
Yeah, definitely
something that I'm
sure a lot of L2s
are looking forward to.
Maybe switching
gears a little
bit, I wanted
to talk a little
bit about your
post around L2
stages and
kind of training
wheels, as you
called them.
I think it was
about a year ago
you wrote this
really nice post
about kind of the
different stages of
L2s and the
training wheels that
are being deployed
kind of today as
everyone works
towards delivering
kind of these
truly robust L2s, whose eventual existence can kind of mark the coming of the next phase.
I'm just kind of
curious, yeah, if
you could maybe
just recap for the
audience kind of
these three stages
and talk a little
bit about why you
chose them.
Sure, so the
way that the
motivation behind
this, right, is
basically that we
have a lot of
these roll-up
projects, but a
lot of these
roll-up projects are
really roll-ups in
progress in the
sense that they
want to become
fully secured
systems, but
their fraud-proof systems or ZK proof systems are still being developed, and
while they're
doing that,
like, basically,
a lot of the
time they're still
calling themselves
roll-ups, and
in some cases
they're making
kind of security
claims, and
there's even,
unfortunately, in
some cases, even
projects that kind
of make security
claims about, like,
what they currently
are, but without
really having any
serious intention
of actually doing
the work to have
a proper proof
system, and then
there's a lot of
projects that, like,
have their own
opinions of, like,
oh, this is some,
like, special
reasons for why
we're already
secure enough or
why we're all
roll-up already,
and the goal was
to basically
recognize the need
to take a staged
approach to security
where you have
some of these,
like, override
mechanisms while
the fraud-proof
and ZK-SNARK
systems are not
yet mature
enough to
really take on
the full burden
and be fully
trusted with
billions of
dollars of
security, but
then take those
training wheels
off slowly over
time and reduce
the level of
influence that
they have in a
way that is
like, responsible
and makes sense
given the
maturity of the
proof system
and standardize
all that in
such a way that
users can better
see, like,
what, basically,
to what extent a
particular project
actually is a
roll-up, to what
extent it actually
has satisfied all
of the security
events requirements
and, like, what
are the actual
security properties
of the thing that
they're signing up for.
So, to summarize
the three stages,
basically, stage
zero basically
says that, you
know, you still
are a multi-sig,
you have a
security council
that, like,
really ultimately
controls with a
majority vote
where the money
goes, but you
have been doing
some basic stuff
that clearly puts
you on the track
of being a
roll-up, so,
like, it is
possible to run
a node of your
system, it is
possible for,
independent actors
to go download
software, figure
out what the
current state
is, you have
data that's being
published on-chain,
and just all of
these basic things.
Then stage one
says you have a
proof system, so,
which could be a
fraud-proof, could
be a zk-snark, that
actually is running,
it is on-chain, and
it is the default
thing that is
validating blocks.
You can have a
security council that
can override that
system in case that
it has a bug, but it
needs to have a high
threshold, right?
Like, it needs to be
a 75% threshold of
at least eight
participants.
And the motivation
behind this is
basically that it
still lets you have
an override system in
case of real
emergencies, and you
are allowed to have
other rules.
Like, for example,
you could have a rule
that says that, like,
two security council
members can delay
everything by a week,
and then that gives
you time to, like,
gather up the other,
you know, like, four or
seven or whatever
security council members
that you need to
actually get up to 75%.
But the system is still,
kind of, biased strongly
in favor of trusting the
proof system, right?
So, like, if your
security council is, like,
roughly split 50-50,
then, like, it's not a vote.
Like, the proof
system actually has
teeth and actually has
a very large share of
the say in which way
the answer goes.
And so you're basically
going about halfway
between fully trusting
the security council to
fully trusting the
proof system.
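[Editor's note: a toy sketch of the stage-1 balance described above — the proof system decides by default, a supermajority of the council can override, and a small minority can only delay. The numbers are the ones mentioned in the conversation; the function names are hypothetical, not any project's actual contract logic.]

```python
COUNCIL_SIZE = 8           # at least eight participants
OVERRIDE_THRESHOLD = 0.75  # at least a 75% threshold to override the proof system
DELAY_QUORUM = 2           # e.g. two members can delay everything by a week

def council_can_override(votes_for_override: int) -> bool:
    return votes_for_override >= OVERRIDE_THRESHOLD * COUNCIL_SIZE  # 6 of 8

def council_can_delay(votes_for_delay: int) -> bool:
    return votes_for_delay >= DELAY_QUORUM

# A 50-50 split (4 of 8) cannot override, so the proof system's answer stands.
assert not council_can_override(4)
assert council_can_override(6)
```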
And then stage two says: fully trust the proof system. So stage two does kind of leave the security council one — well, two — roles, right?
One of those roles is
to do upgrades.
So you are allowed to upgrade the system, but an upgrade has to have a delay of at least 30 days.
And so you cannot use
an upgrade as a way to,
like, override what the
proof system does, right?
And an upgrade has to
give users enough time to
kind of safely and in an
orderly way, migrate all
of their coins out of
the system.
And 30 days is just an
arbitrary cutoff that
feels like it gives
users enough time to do that.
And then the second role
that a security council
has is it can have the
right to take over only
in the specific case
where the proof system
disagrees with itself,
So what I mean by that
is, like, one example
of this is if you can submit into the chain a SNARK that says if you apply this block, the state root is going to be X, and then you can also submit another SNARK that says if you apply that same block, the state root is going to be Y.
Then, like, the ZK
snark system is
contradicting itself, and
that is an ironclad proof
that the ZK
snark system has a bug in
it, right?
Because the output can't be
X and Y at the same
time, and so in that
case, the security
council is allowed to
intervene.
Or another case is a lot
of these systems are
experimenting with
multi-provers, and so
what that means is you
might have an optimistic
proof system and the
snark system, or you
might have two different
snark systems, or you
might have, you know,
some kind of fancy two
out of three between,
like a fraud proof and
a snark and a, you
know, like, some kind
of trusted hardware, but
like, if you have
basically at least two
of the, like, actual,
you know, trustless proof
systems disagreeing with
each other, then that
proves that you have a
bug, and that the
security council can also
intervene.
Or, on the flip side, if,
let's say, nobody submits
a proof for a week, then
that might, that would be
evidence that the
proof system is kind of
bugged in the other
direction, and that, like,
instead of accepting too
much, it's accepting too
little, and it's stuck,
and then in that case,
the security council is
also allowed to
intervene, right?
So, basically, in stage
two, the security council
is only allowed to
intervene in real time
in situations where the
proof system provably
has bugs, because it
creates behavior that is
just obviously incorrect
in a way that is very
easy to detect, even if
you don't understand the
EVM yourself.
And that's something
that's probably fine to
keep having for the
long term.
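[Editor's note: a rough sketch, with hypothetical names, of the stage-2 rule just described — the council may only step in when the proof system provably misbehaves: it accepts two different state roots for the same block, independent provers disagree, or it accepts nothing at all for a long time.]

```python
WEEK = 7 * 24 * 3600

def council_may_intervene(accepted_roots_for_block, prover_outputs, seconds_since_last_proof):
    """accepted_roots_for_block: state roots the proof system accepted for one block.
    prover_outputs: state roots claimed by independent provers (e.g. fraud proof vs. SNARK).
    seconds_since_last_proof: time since any proof was accepted at all."""
    contradiction = len(set(accepted_roots_for_block)) > 1  # proof system contradicts itself
    prover_disagreement = len(set(prover_outputs)) > 1      # multi-prover disagreement
    stuck = seconds_since_last_proof > WEEK                 # proof system accepts too little
    return contradiction or prover_disagreement or stuck
```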
But, yeah, those are
basically how I think
about the three stages. It's basically all about what portion of the trust you are putting in your security council — or, like, whatever you call your kind of emergency, figure-things-out-if-they-go-wrong team — and what percent of the trust you're putting in your proof system. And that's a slider that
needs to really basically
move from zero percent to
100 percent over time.
And when it needs to move really depends on the maturity of the individual system. So there is a dashboard by L2Beat.
Like, if you go to
L2Beat.com, then you can
see the dashboard that
shows the security
properties of different
projects across a bunch
of different metrics,
And so right now we can
see that Arbitrum 1 is,
according to them, the
only EVM roll-up that
has, like, actually
achieved stage 1.
There's others that have
come close, right?
So, you know, Polygon ZK-EVM
is very close.
It says no mechanism in
case of sequencer failure,
but that's something that
I'm sure they're going to
resolve very quickly.
Then, you know, we have
zkSync, which,
I mean, my understanding of
them is that they've
basically made a decision
to just go full speed ahead
on getting to stage 2 at
some point fairly soon,
and they're exploring with
multi-provers.
Then the two things that are stage 2 today: there's, I guess, DeGate version 1, and then there's one other.
And both of those are not
EVM systems, and so they
get to be simpler, but,
you know, they are running
with basically kind of the
full level of security.
So, you know, that's roughly
how the stages work.
Yeah, really interesting.
And, you know, L2Beat is
certainly, like, an amazing
public good for folks to
take a look at what these
L2s are actually offering.
And we had Bartek on the
show two episodes ago, and
so if you're curious to
learn more about L2Beat,
definitely check out that episode.
Really cool.
And I guess, you know, it's
been a while now since you
wrote that post and came up
with those stages and those
requirements.
And, you know, I'm just kind
of curious if your thinking
has evolved, though.
Are there any new
requirements?
You know, you were just
talking about decentralized
sequencing.
Is that something of
interest?
Does it fit the framework, or
is there anything else that
that's kind of bubbled to the
top as something that maybe
might, you know, be worth
adding to these stages?
Yeah, I think probably the
big, one of the big sticking
points that's kind of in the
way of the Security Council
becoming kind of effective is
like whether or not the
Security Council members are
made public, right?
Because, and the arguments in
favor of making public, making
them public is obviously if
they're not made public, then
like, how do you know the
Security Council isn't just
one guy, right?
Like, who cares if it's six
out of eight, if the eight
keys are all controlled by the
same person?
But then the reason why people
are hesitant against making
the Security Council public is
basically because the
individual members are afraid
of personal safety risk, they're
afraid of legal risk, and that
there's other risks that they're
afraid of, and so that is just
something that's proven hard to
navigate.
I mean, Arbitrum has, I believe,
been willing to actually make
their Security Council public, but
a lot of other projects are not.
So that is one of those things
that is a bit of a sticking point.
The, I think, realistically, there's
basically going to be, yeah, like
two good ways out of that, right?
I mean, one of that, one of those
ways is possibly for there to start
being more specialized organizations
that are willing to be Security
Council members for multiple
projects and that kind of have a
good security setup and a good
legal basis.
And then the other approach
would be to basically, yeah,
well, just full speed ahead to
stage two to the point where the
Security Council has no power
except in those cases where
there's a software bug and
basically have more security by
relying more on the code rather
than less.
And then there's also this
interesting third route that I
hinted at at the end of my post
from a month before, which is
basically this enshrined ZKVM
path, right?
Basically this path where the
Ethereum protocol gets a native
functionality for verifying
EVM computation using ZK-SNARKs.
And then if it's like native
functionality, then if there is a
bug, then that bug just has to be responded to at the protocol level, potentially by forking Ethereum itself.
And so that's kind of a gold
standard of security, right?
So this is something that is still
kind of very early stage and it's
still controversial in certain
respects.
And of course, that would benefit
EVM projects, but it would not
benefit non-EVM projects.
So there's like a lot of
interesting considerations in that
option as well.
And like, even if that does get
adopted, it's realistically a much
longer term thing.
Yeah, I completely agree that the
whole topics of security councils
and enshrined rollups is going to be
extremely important to figure out
correctly in order to not lose the
benefits of rollups.
And yeah, maybe talking a bit about
a different topic, you've also
written about L3s and how they
relate to L2s and what are their
benefits.
And one thing that you mentioned is
that L3s have this nice ability to
bridge assets between them without
having to go to the L1.
But there is this aspect that you can
also achieve something similar when
you aggregate multiple L2 proofs and then share the bridge. Wondering if you can talk about that a bit.
And so the L3 thing is interesting,
right, because like L3s are a
software architecture classification
much more than they are a security
classification or rather they're
like not a security
classification at all, right?
Like, so like, for example, the
StarkNet ecosystem is good at having
a lot of these L3s, but there are
some L3s in the StarkNet ecosystem
that are rollups, but then there are
some L3s that are Validiums.
And so it just kind of gets
bungled a bit that way, right?
But the reason why you would want to
do an L3, I think one of them just is
that you want to make it easier for
people to deploy new Layer 2s and
have regular proofs of Layer 2s
being committed into Layer 1 at
lower cost, right?
So one of the big expenses of a Layer 2
is basically you have to like regularly
publish these roots and proofs to Ethereum.
And if you're just submitting a root,
then the cost is a bit lower, right?
It's like about 40,000 gas to send
the transaction to just like update
the storage slot — replace X with, like, hash of X and Y or whatever.
But if you're adding a ZK-SNARK, then
a ZK-SNARK has a verification cost that's much higher — or if you're StarkNet, then it's like in the millions of gas.
And that's, it's like a big, a much
bigger cost, right?
And so if you imagine having 10 of
those systems on Ethereum at the same
time, then like you just get this
really annoying trade-off: either each of those systems has to commit their proofs to Ethereum much more rarely — and so deposits and secure withdrawals have a higher latency — or they basically just have to pay some pretty extreme costs.
And so what would be ideal is having
functionality to somehow share like that
root publication function to Ethereum.
And basically, instead of having like 10
different ZK-SNARKs, have one recursive
SNARK that represents those 10 different
SNARKs at the same time.
And one of the properties of the StarkNet
ecosystem is that basically it just does
that as part of what it is, right?
Because if you have 10 different L3s that
are all roll-ups, then they, they, when
they publish the chain, they publish the
chain as part of the Layer 2.
And then the Layer 2 could, can just
aggregate the proofs and the proof
verification can get done inside of the
Layer 2 instead of being done inside of
the Layer 1.
And that just really lowers gas costs for
them, right?
But that's only one way to do it.
So if you go back to my post on Layer 3s
on my blog, if you just, it's called
What Kind of Layer 3s Make Sense. It's actually from about a year and a month ago. And you scroll all the way down to, like, the diagram at the bottom.
Like, I basically show that there is a design
where you can have this kind of, like,
proof batching system where it just accepts an aggregated proof for roll-ups that essentially present their state management and their verification keys
in a standardized way.
And so you could get the benefits of proof
batching without actually having to, like,
have a full EVM layer in the middle.
And that just, like, reduces security bug
risk, basically.
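[Editor's note: a minimal sketch of the proof-batching design referenced above — one aggregate proof covers many roll-ups' state-root updates, so each roll-up only pays for the cheap root update on L1. The names and the `verify_aggregate` stand-in are hypothetical illustrations, not a specific protocol.]

```python
from dataclasses import dataclass

@dataclass
class RollupUpdate:
    verification_key: bytes  # standardized key identifying the roll-up's proof system
    old_state_root: bytes
    new_state_root: bytes

def verify_batch(updates, aggregate_proof, verify_aggregate) -> bool:
    """verify_aggregate stands in for a recursive-SNARK verifier that checks, in one
    shot, that every (vk, old_root, new_root) transition has a valid underlying proof."""
    statement = [(u.verification_key, u.old_state_root, u.new_state_root) for u in updates]
    return verify_aggregate(statement, aggregate_proof)

# With batching, the expensive proof verification happens once per batch, so each
# roll-up's marginal L1 cost is roughly the ~40,000-gas root update mentioned above.
```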
So, yeah, I think there's a lot of different
ways to do this.
And I think there's also a lot of complicated
trade-offs between doing it one way,
doing it the other way, potentially just a
lot of different projects giving up on making
their own Layer 2 at all and instead creating
some kind of sub-ecosystem inside of another one.
Another consideration that's, I think, worth
also bringing up and throwing into this is
privacy, right?
Because, like, so far we've been talking a
lot about layers that provide scalability,
but there's also a big need for layers that
provide privacy.
And we have Aztec, we have Nocturne, we have
Railway, there's just going to be a growing
list of these.
And it would be, like, it's important to just,
like, think about the need of these things as
well, right, because if you have a bunch of
different privacy layers, then realistically
those privacy layers are going to have to live
inside of some kind of scaling layer, just
because Aztec-style transactions directly on Layer 1 are way too expensive.
And so that's another place where you're also
going to have to have something that, like, from
an architecture perspective, looks like a Layer 3.
So that's also going to be interesting to see.
Really interesting.
And it's really cool.
You know, you've been writing about this for over a year, and yet these topics just keep coming up.
And, you know, I think they become more and more, I
think, timely now as more and more L3s are
launching and more folks are talking about sharing
bridges at the L2 level.
Really neat.
Well, we're getting close to time, so I thought
maybe we could end with kind of more of a topical
question.
You know, Celestia launched this week, and, you
know, it's obviously more of a Cosmos project, but, you
know, kind of curious what you think of the impact that
it might have generally on the Ethereum community.
Yeah, so what Celestia is is it's basically a data layer, right?
So it's a chain that is optimized for storing large
amounts of data, which is something that is usable by a
lot of other projects.
I think the really important thing to keep in mind with data
layers is that the level of security that you get from data
layers really depends on, like, which assets you're doing
things with, right?
So, like, for example, the thing, like, the thing that is less
secure is having assets whose home is in one ecosystem and then
having the data availability come from another ecosystem.
So, like, if you try to make a system whose data is on Celestia
where that system's, like, the assets that that system is managing are
assets that come from Ethereum, then, like, your security is basically
only kind of as good as the security of Celestia, right?
Like, if Celestia gets 51% attacked, then they can either steal all of
your assets if you're, like, if you're based on fraud proofs or you
basically just become a Validium if you're using ZK-SNARKs, right?
So, if you have something that is using, like, that is managing assets that
are homed on Ethereum but is using Celestia for data availability, then
you're basically a Validium, right?
Whereas, you know, if you use Ethereum native data availability, which is
going to come from, like, 4844 and then other things, then, like, you
could actually be a roll-up.
And roll-ups have unconditional security, like, even if Ethereum gets
51% attacked, you are safe because an Ethereum chain where any data is
unavailable is, by definition, a non-canonical Ethereum chain.
So, but at the same time, if you are managing assets whose home is not
Ethereum, right?
So, like, for example, if you are one of these, you know, like, a video game
company that wants to put its assets somewhere else or if you're focusing on
stablecoins that have an issuer or if you're focusing on assets where, let's
say, you know, you're managing your, like, people tokenizing their own stocks or
just, like, releasing their own NFTs.
So, and then if all of those things are homed inside of your chain and then you
just, and then your chain just is a thing that is based on Celestia, then, like, you
do, you do have that kind of full unconditional level of security, right?
So, yeah, I think it basically, the big thing to remember is just, like, the level of
security depends on what your application is.
And then additionally to that, what level of security is okay for you also depends on
what your application is.
So, I think it's all application-specific.
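[Editor's note: a toy restatement of the rule of thumb above — what security class you end up in depends on where the assets are homed and where the data availability comes from. This is the editor's framing, not a formal taxonomy.]

```python
def classify(asset_home: str, da_layer: str) -> str:
    """asset_home: ecosystem the assets are native to; da_layer: where the data is published."""
    if asset_home == "ethereum":
        if da_layer == "ethereum":
            return "rollup: keeps full Ethereum-level security"
        return "validium-like: security capped by the external DA layer"
    # Assets homed on the chain itself (issuer stablecoins, game items, native NFTs)
    # keep the full security of whatever stack the chain is built on.
    return "self-homed: full security of the chain's own stack"

print(classify("ethereum", "celestia"))  # validium-like: security capped by the external DA layer
```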
Really interesting.
Well, I think that's all the time we have.
Vitalik, thank you so much.
Yeah, as always, that was, you know, just incredibly insightful.
I really appreciate your time.
And, yeah.
Yeah, thank you guys, too.
Yeah, it was great.
It's been great.
Well, once again, this was L2 Unplugged.
If you enjoyed this episode, I hope to see you at the next one.