What's going on, everybody? This is Smoky behind the Polygon account. While we get this stage set, we're going to play a little bit of music.
All right, perfect, so why don't we get started. Again, this is Smoky behind the Polygon account and the community manager at Polygon Labs, and today we are going to introduce the type 1 prover. To start off, why don't we go around the room and give some brief introductions. Let's start with Brendan.

Hey everyone, I'm Brendan. I work on the Polygon Zero team at Polygon.

Perfect. All right, let's go on over to PG.

Hi, I'm Paul. I'm part of the product team here; I work on product research, and I got lucky enough to work with Brendan, Daniel, and the team on getting this type 1 prover out the door.

Perfect, thank you, Paul. All right, let's go on over to Daniel.
Hey, I'm Daniel, also from Polygon Zero. I work mainly on prover libraries and on the zkVM as well.

Thank you, Daniel. And then let's go on over to Robin.

Hi everyone, I'm Robin. I work as a cryptographer at Toposware, and I'm happy to be here and happy to have collaborated with Zero on this zkVM.

Thank you, Robin. And we do have one more person down in the audience: we have John. I did invite you up to speak, so if you just want to accept that invite. All right, perfect. Hey John, do you want to give us an introduction, please?

Hey, hey everybody. My name is John. I work at Polygon and I lead the dev tools team, so I'm mainly focused on running the software and getting it optimized and performant.
Perfect, thank you. All right, well, let's get into it. Why don't we start with a high-level overview and the significance of the type 1 prover. I'm going to direct this question to Brendan: can you tell us what a type 1 prover is?

Sure. For the type 1 zkEVM prover we use the naming from Vitalik's framework for classifying zkEVMs, and it's a really exciting development that allows any existing EVM chain to immediately upgrade to ZK. With the type 1 prover we can generate proofs for any EVM chain, whether it's a sidechain like Polygon PoS, any optimistic rollup, or even Ethereum mainnet itself. What we've shown today are benchmarks, along with the open-source code we've been using to prove mainnet Ethereum blocks. The exciting thing is that if you asked most people how efficient and practical type 1 zkEVM provers were, they would say not very, and we've shown that we can generate proofs at an average cost per transaction of two to three tenths of a cent. And we're just getting started in terms of optimization. This might be a little optimistic, but I think there's reason to believe we will reduce that cost by 30 to 50x by the end of the year.
Wow, super impressive. Thanks, Brendan. So is it that a type 1 is better than a type 2 prover, or what are some of the trade-offs between the two?

Yeah, to back up for context: the EVM is notoriously ZK-unfriendly. It uses a lot of cryptographic primitives and data structures that are just not very conducive to being proven with a zero-knowledge proof. The way we got around that for the last year was by using a type 2 zkEVM. A type 2 has full compatibility with all existing Ethereum smart contracts, all wallets, all dev tools. It's a really great solution for a new chain, and it's more efficient to generate zero-knowledge proofs for. But there are a lot of chains that already exist and have been running; they have state and they have users, and it's better to generate proofs for those chains as they are. The type 1 allows us to do that. It doesn't require changing the data structures or the primitives those chains use; we can immediately start generating zero-knowledge proofs for chains as they're running, without any changes on the client or the wallet or anything. So the type 1 is really for upgrading existing EVM chains to ZK. Part of the reason we built this is that we want to onboard those chains into the Polygon ecosystem, and it's a really powerful thing to be able to take any existing EVM chain and allow it to instantly join Polygon's AggLayer and share liquidity, state, and user base with other Polygon chains. They're able to just immediately move over and launch with the type 1.
Perfect, thank you. And do you want to get into the difference with a type 2?

Yeah, so a type 2, like I said, just uses different data structures on the back end. It uses a different representation for the state trie, so to the user and to the developer it's an identical environment to Ethereum, but you can't use a type 2 prover to generate proofs for existing EVM chains. That's the main difference.
That's perfect, thank you, Brendan. And can you dive a little deeper into the applications of the type 1 prover?

Yep. It's really any existing EVM chain. We see this as being useful for basically one thing: there are a lot of users, liquidity, and state locked in EVM chains. They might be sidechains like the Polygon PoS chain, or they might be optimistic rollups, and these chains are making compromises, either in terms of not having full Ethereum security, like the Polygon PoS chain, or in terms of imposing real economic costs on their users, like optimistic rollups do with seven-day withdrawal delays. The type 1 fundamentally unlocks these chains and allows them to upgrade to ZK, which means that optimistic rollups, for example, can get rid of the seven-day withdrawal delay. If you look at the real cost that optimistic rollups have to bear, or rather that the users of optimistic rollups have to bear, it's that they have to pay third-party bridges to avoid having their funds locked for seven days. In aggregate, optimistic rollup users have paid tens of millions of dollars to third-party bridges to avoid this and to get around the capital inefficiency that's inherent in optimistic rollups. ZK allows us to change that.
Perfect, thank you, Brendan. All right, for this next question I'm going to direct it to Robin. It wasn't long ago that the notion of a zkEVM, an EVM-compatible ZK rollup, was considered years away, much less a zkEVM that was Ethereum-equivalent. What are some of the breakthroughs in ZK R&D that account for this acceleration, and who are some of the people and teams not on this Spaces whose work contributed?

Well, there are different reasons why this became practical over the years, and part of the people who made it possible are the people speaking here right now, the Zero team. Basically there are two things. First, improvements in ZK research in general: proving systems that are better optimized for these kinds of ZK-unfriendly primitives. For example, in the past few years we have seen considerable improvements in lookup arguments, which are a big thing for proving something like the EVM, or really any kind of VM. And then there is something that's a bit of a friendly fight in the space, but it's field size. The usual SNARKs, I mean the ones with pairings and everything, require large fields, which are extremely wasteful for certain operations, typically bitwise operations. Plonky2, with the Goldilocks field it introduced, basically reduces this waste, and the push towards smaller and smaller fields is making this more and more possible. We see it with Plonky3 and its focus on 32-bit fields, and also in concurrent work like Ulvetanna's with towers of binary fields. All of these improvements over the years are what make a type 1 zkEVM possible today.
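To make the field-size point concrete: the Goldilocks prime used by Plonky2 is p = 2^64 - 2^32 + 1, so a field element fits in a single u64 and products can be reduced from a u128 intermediate. Below is a minimal Rust sketch of that arithmetic, illustrative only; Plonky2's real implementation uses a specialized reduction rather than the generic modulo shown here.

```rust
// Illustrative sketch only: the Goldilocks prime used by Plonky2 is
// p = 2^64 - 2^32 + 1, so a field element fits in a single u64.
// This naive version reduces through u128 arithmetic; the actual
// Plonky2 field code uses a much faster specialized reduction.
const GOLDILOCKS_P: u128 = 0xFFFF_FFFF_0000_0001; // 2^64 - 2^32 + 1

fn mul_mod_p(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % GOLDILOCKS_P) as u64
}

fn add_mod_p(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % GOLDILOCKS_P) as u64
}

fn main() {
    // Small-field arithmetic maps directly onto 64-bit CPU words, which is
    // part of why these fields waste so much less work than ~256-bit fields.
    let x: u64 = 0xDEAD_BEEF;
    println!("x * x mod p = {}", mul_mod_p(x, x));
    println!("x + x mod p = {}", add_mod_p(x, x));
}
```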
Thank you, Robin. And for this next question: what were some of the main technical challenges from an engineering perspective? Or, put another way, what part of the EVM do you hate the most?
Well, proving the EVM in ZK is not meant to be easy, so that's definitely something. It really depends. As a cryptographer, what's really hard when you want to make statements about something in ZK is that you have to be sure you're not forgetting anything in terms of constraints in your circuit, otherwise people may be able to forge fake proofs. That makes it quite hard, because on the one hand you want to respect all the specifications in the Ethereum Yellow Paper, and on the other hand you need to make something as efficient as possible. So typically all the requirements around opcode specifications, where you need to operate over large words, memory copies, things like that which are not trivial to do efficiently in ZK, those were probably not the most fun parts to work with. I wasn't involved in the initial design of the Keccak table, for example, used to prove the Keccak permutations, but that must not have been super fun for the Zero team to work on and to make as efficient as it is now.

Yeah, maybe we could hear from Daniel about what that experience was like, because I know Daniel drafted the original MPT and RLP logic in the type 1. So I'm curious, Daniel, if your PTSD isn't too bad from that period, what was that like?
Yeah, that was a pretty rough few weeks when I was trying to get the initial MPT code working. I think the whole design of Ethereum's MPTs has a lot of legacy aspects to it: the way it uses an arity-16 trie, for example, which isn't really ideal, and also the use of Keccak as the hash function. But we knew we had to deal with those things, so we spent a lot of time on Keccak and came up with a pretty good way to arithmetize it and prove it fairly efficiently. One of our teammates, Angus, came up with some more insights and some clever tricks to arithmetize Keccak in a more ZK-friendly way. So I think right now we can prove a few hundred Keccak hashes per second, and when we switch to Plonky3 that should increase to a thousand or more.
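To picture the arity-16 trie Daniel is describing: a lookup key unfolds into hex nibbles, and each nibble selects one of 16 children at a branch node, so verifying a single path means hashing a lot of sibling data at every level. Here is a minimal sketch of that path shape, illustrative only and not code from the prover; the placeholder key stands in for the keccak256 of an address.

```rust
// Minimal illustration (not the prover's actual code) of why Ethereum's
// arity-16 Merkle Patricia Trie is awkward for ZK: the path to a leaf is the
// key's hex nibbles, so every branch node on the path has 16 child slots that
// must be committed to (hashed) in order to verify one lookup.
fn key_to_nibbles(key: &[u8; 32]) -> Vec<u8> {
    let mut nibbles = Vec::with_capacity(64);
    for &byte in key.iter() {
        nibbles.push(byte >> 4);   // high nibble selects one of 16 children
        nibbles.push(byte & 0x0F); // low nibble selects one of 16 children
    }
    nibbles
}

fn main() {
    // In the real state trie the key would be keccak256(address); here we use
    // a placeholder 32-byte value just to show the shape of the path.
    let key = [0xABu8; 32];
    let path = key_to_nibbles(&key);
    // 64 nibbles => up to 64 branch levels, each with 16 children to hash over.
    println!("path length = {} nibbles, first steps = {:?}", path.len(), &path[..4]);
}
```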
Awesome, awesome. So a big thing, I think sort of the headline for this announcement, was the benchmarks. I personally was very optimistic about the type 1, but I didn't think we could get to this level of performance this soon, and I think what people might not realize is that it was a somewhat long road to get here. So maybe Paul, and then also John, could talk about where we're at in terms of performance and efficiency, what it was like to get here, and where we expect to be in the short term.

Yeah, sure, of course. John, I'll probably call on you to talk about some specifics, but at a high level I'll describe the benchmarks we published today and what sort of testing we've been doing. The main benchmarks we showed today were for three mainnet blocks, and it's worth noting these are real benchmarks: they weren't synthetic benchmarks we constructed, and they weren't benchmarks of just transfers in a particularly favorable case. These are benchmarks from real Ethereum blocks. The three we have as our sample are interesting mostly because they were previously reported on by RISC Zero, so we have some other information to compare against. The main one is block 17,354,424. That block is an interesting one: it had about 16.5 million gas, so it's over the normal gas target, it had 182 transactions, and there's a variety of items in it. One of the things we were looking at when designing these benchmarks was what kind of workload we're benchmarking with, and what we really wanted were cases that reflect real, average Ethereum usage. If you inspect this block you'll see there are Uniswap trades, there are withdrawals, there's random stuff, there are some small contract deploys, and so on. So this is a real Ethereum block. And really what we did was go through the process of first just getting those proofs working, and then sticking John on the problem and saying, okay, let's come up with a way to decide what sort of machines run these well. John, if you could pop on, maybe you could walk through the types of machines we've tried and where we ended up.
Yeah, sure. I think the beginning of the process was just: okay, can we run it at all? And starting to understand the boundaries and the knobs we can tweak in terms of getting the prover to prove a real block. We started with the hypothesis that we would need a lot of memory, a ton of memory, and that that was going to be complicated. So the first thing we worked on was optimizing and getting it to complete properly on a machine with adequate memory. Then over time we started working within the GCP environment to figure out, from a systems perspective, how to reduce the memory footprint so that we could run more and more processes in parallel. What we learned over the course of this is that by overdriving on parallelization we can really push the prover and reduce the cost per unit of gas. We're able to drive up efficiency by still using machines with a decent amount of memory but overcommitting them and running more and more provers in parallel, in order to reduce the overall cost you pay per transaction or per proof. That's been the essence of the exercise, and it still feels like there's wiggle room there just from a systems perspective, let alone all the crazy improvements Daniel was talking about.
Yeah, I'll just say this, Brendan: by doing this, when we started trying to benchmark in earnest, we were seeing proving costs of around $300 per billion gas, and that was on pretty large machines with large amounts of memory, I think 700 gigs or so is what we started on. Using more or less the same code base, we were able to drive that down to about $21 to $30 per billion gas. If you think about what that means, that works out to a sub-dollar proving cost for a normal, full Ethereum block.
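As a rough sanity check on these figures (an illustrative calculation using the numbers quoted in this Space, not an official benchmark script): at $21 to $30 per billion gas, the roughly 16.5-million-gas, 182-transaction benchmark block comes out to about $0.35 to $0.50 per block, or roughly 0.2 to 0.3 cents per transaction, which lines up with the per-transaction figure quoted earlier.

```rust
// Back-of-the-envelope check (illustrative only) tying together the numbers
// quoted in this discussion: $21-$30 per billion gas, and a benchmark block
// with roughly 16.5M gas across 182 transactions.
fn main() {
    let cost_per_gas_low = 21.0 / 1e9; // dollars per unit of gas
    let cost_per_gas_high = 30.0 / 1e9;
    let block_gas = 16_500_000.0;
    let tx_count = 182.0;

    let block_cost_low = cost_per_gas_low * block_gas;   // ~ $0.35
    let block_cost_high = cost_per_gas_high * block_gas; // ~ $0.50
    println!("proving cost per block: ${:.2} - ${:.2}", block_cost_low, block_cost_high);

    // Per transaction this lands around 0.2-0.3 cents, matching the
    // "two to three tenths of a cent" figure quoted earlier.
    println!(
        "proving cost per tx: {:.2} - {:.2} cents",
        100.0 * block_cost_low / tx_count,
        100.0 * block_cost_high / tx_count
    );
}
```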
And I think that much improvement from the operational side alone is really impressive, but it's also a testament to how Plonky2 works and how the zkEVM works here, in that it's really designed to run on commodity machines. If we compare this to, say, RISC Zero, which uses GPUs as part of its prover, they're just much more constrained in the types of instances they can get. There's a lot less of a competitive market for those, and you can't really consider that consumer hardware from the perspective of running at scale. Whereas on our latest runs we're using basically bog-standard T2D instances with a couple hundred gigs of memory, and I don't know if anybody's checked recently, but the world is covered with those now; Google and Amazon have plastered the earth with these machines. So they're abundant, they're available, and they're really easy to scale across. And since the prover here is horizontally scalable, we can both hit these costs and increase throughput just by scaling.

Yeah, the other quick comment is that what we've seen is that we can run on commodity hardware and basically scale that way, and then, on top of that, as we've experimented with bare metal or other higher-end servers, if a server costs twice as much to run, we often see just as much of a throughput or performance improvement. So I think that's really positive in terms of where we can go even with faster hardware, but we can get a huge amount of scale just with commodity servers.
Yeah, I think that's a great point. One thing that might be helpful for the listeners: we really express all of these benchmarks in terms of cost, because cost is, I would say, our most important metric; it determines how much users have to pay. But I just want to be clear that latency is something we actually have a lot of control over, and we can affect latency by parallelizing. We generate proofs at the per-transaction level for now, and then we're going to move to implementing a RISC Zero-style continuations mechanism in the future. I guess, Paul and John, maybe it's worth getting into the knobs we can turn for latency and what that might look like, because I think people might have the perception that it's going to take multiple hours to generate these proofs, and that's just not the case.

Yeah, it's a really good point. Like you said, we're currently parallelizing per transaction. What that means is that if we take a block of 186 transactions, you have a couple of different ways to approach it. One is that we could prove each transaction in serial in order to get through the whole block. Another is that you could spin up 186 machines, and each of those machines individually generates a transaction proof, and then we aggregate those proofs together: you pairwise-aggregate them, you create a little tree, and by doing this recursive aggregation you get a final proof. What that ends up looking like is that the worst-case time for proving the whole block is the time to prove the slowest transaction, plus the time it takes to aggregate all these proofs together. The aggregation is extremely fast, so you end up being bottlenecked primarily by the slowest single thing you can prove at a time, which right now is one transaction. So if you had one transaction that used 30 million gas (these are extremely rare, because they're extremely expensive), that would be the bottleneck. But what you actually see in Ethereum workloads is a lot of transactions that aren't really doing very much: a lot of transfers, a lot of smallish trades and those sorts of things. So we're able to parallelize heavily on those, while keeping the latency down to basically the time it takes to prove the slowest transaction in a block.
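Schematically, that fan-out and pairwise aggregation looks like the sketch below. It's illustrative only; prove_tx and aggregate are hypothetical stand-ins, not the actual prover API. The point it makes is that block latency is roughly the slowest leaf proof plus a logarithmic number of cheap aggregation steps.

```rust
// Schematic sketch of the proving strategy described above: prove each
// transaction independently (in parallel in practice), then pairwise-aggregate
// the proofs into a tree until one block proof remains. `prove_tx` and
// `aggregate` are hypothetical stand-ins, not the real prover API.
#[derive(Clone, Debug)]
struct Proof(String);

fn prove_tx(tx_id: usize) -> Proof {
    // In reality this is the expensive step; each leaf can run on its own machine.
    Proof(format!("tx{}", tx_id))
}

fn aggregate(left: &Proof, right: &Proof) -> Proof {
    // Recursive aggregation is cheap relative to leaf proving.
    Proof(format!("agg({},{})", left.0, right.0))
}

fn main() {
    // Leaf layer: one proof per transaction (run concurrently in practice).
    let mut layer: Vec<Proof> = (0..8).map(prove_tx).collect();

    // Pairwise reduce: each pass halves the number of proofs, so the tree adds
    // only ~log2(n) aggregation levels on top of the slowest leaf proof.
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| match pair {
                [a, b] => aggregate(a, b),
                [a] => a.clone(), // odd one out is carried up unchanged
                _ => unreachable!(),
            })
            .collect();
    }
    println!("block proof: {:?}", layer[0]);
}
```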
As we move forward with a continuation-style approach, what that fundamentally means is that we're able to take a block of transactions and split the work up at arbitrary boundaries, so it doesn't necessarily need to be at the transaction level. That will give us a lot more control over where we split and over how much you want to scale across the cluster of machines in order to reach the latency levels you want. The interesting thing there is that you're still using approximately the same total runtime, just spread over more machines, so as you scale up and down quickly, you can reach the same low latency.
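Here is a minimal sketch of that segmenting idea, illustrative only and assuming the execution trace can be cut at arbitrary step boundaries; it is not the actual continuations implementation. The trace is divided into fixed-size ranges, each range becomes its own segment proof, and segment proofs are then aggregated just like the per-transaction proofs above.

```rust
// Illustrative sketch (not the actual continuations code) of what
// continuation-style splitting buys you: instead of one proof per transaction,
// the block's execution is treated as one long run of VM steps and cut into
// fixed-size segments, so the number of parallel provers -- and hence the
// latency -- can be tuned regardless of how gas is distributed across txs.
fn split_into_segments(total_steps: usize, steps_per_segment: usize) -> Vec<(usize, usize)> {
    // Returns (start, end) step ranges; each range becomes one segment proof,
    // and segment proofs are aggregated like the per-transaction proofs above.
    (0..total_steps)
        .step_by(steps_per_segment)
        .map(|start| (start, (start + steps_per_segment).min(total_steps)))
        .collect()
}

fn main() {
    // e.g. a block whose execution is ~1,000,000 VM steps, split into 16
    // segments that provers can work on in parallel.
    let total_steps = 1_000_000;
    let segments = split_into_segments(total_steps, total_steps / 16 + 1);
    println!("{} segments, first = {:?}", segments.len(), segments[0]);
}
```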