Will building verifiable systems be the only way for us to maintain trust?

Recorded: July 31, 2025 Duration: 1:02:24
Space Recording

Short Summary

In a recent discussion, industry experts explored the transformative potential of zero-knowledge proofs (ZKPs) in blockchain technology, highlighting the launch of innovative projects like Lurk and Proofbase. As ZKPs gain traction, they promise to enhance privacy, reduce transaction costs, and simplify user experiences, marking a pivotal shift in the crypto landscape.

Full Transcription

Thank you. Hey, John, how's it going?
Yes, loud and clear.
Can you hear me?
Yes, I can.
Perfect, perfect.
How's it going, John?
I'm doing just fine, thank you.
How about you?
Great, great, great, great, great.
I'm doing great.
I'm seeing Nicholas has joined as well.
Paul G, welcome.
Hi, Nicholas.
Nicholas, let me send you a request to become a speaker.
You should receive a notification right now.
Okay, all right. Can you guys hear me? All right, loud and clear. Perfect. Great, thank you for joining today as well, Nicholas. Welcome to Paul G. I'm sure more people will join along the way.
So it's great to have you here, Nicholas.
And like, welcome to today's space, Geek Space.
We hold this space every week where we have different guest speakers,
different topics.
So I'm really excited to have you here, and John as well. So we're diving into
something that's at the core of where crypto is heading, basically, which is zero-knowledge proofs,
or ZKPs, as I pronounce it. So we'll be discussing that: what are they,
how do they work, and most importantly, can they actually help us build the trustless future we're all aiming for?
So, Nicholas, if you would allow me, let's kick it off with you.
Could you introduce yourself, your background, and tell us a bit about the work that you're currently doing?
Yeah, happy and also excited to be here.
It's not often I get the chance to talk about ZK kind of like outside of the ZK space,
which allows me to talk about it at a much higher level, which is often way more fun for me.
So yeah, so my name is Nicholas Ramsrud.
I am the co-founder and CEO of Lurk Lab.
We're building Lurk, which is a ZK programming language.
But most recently, we've announced that we're building Proofbase,
which is a high-performance privacy-focused Layer 2 network on Ethereum.
What that really means is Lurk, the ZK programming language, is a way of creating zero-knowledge proof programs.
And Proofbase is a way of deploying those and controlling blockchain with zero-knowledge proofs.
So, yeah, that's a little bit about what we're doing.
Now, my background, I got into ZK.
Well, it's kind of hard to say.
So I really got into ZK back in 2023-ish as I started a ZK interoperability project.
And then previous to that, I had gotten introduced to ZK back in like 2018, I believe, and was just fascinated with
cryptography and the beauty of these systems.
But even back then, I knew ZK was going to be so impactful.
But the complexity of it back then at that point was just astronomical.
And so it really came a long way in the years following. And then in 2023, I decided this technology is just going to change the world.
It's going to be the key to unlocking so many different use cases for blockchain.
I'm just going to go all in and invest myself in it.
So that's kind of how I got to this point here.
Awesome. That's great to hear.
Thank you for the quick background and for what you're
doing right now as well, Nicholas. It's great to have you. And I think it's important to have
people like you as well in these spaces to share knowledge, not only between our audience, but
for everyone, right? Because it's an interesting topic for sure and important as well.
And so, John, let's get into it, I'd say. Your opinion, like a lot of people hear ZK
proofs and think privacy or magic math, right? But can you break it down at a high level?
Well, I mean, I know something, but Nicholas is clearly the expert here.
Yeah, I can go ahead and kind of break it down.
Because this is like what I have to do with my family.
Like I'm in the special space where not only do I have to explain that I work in crypto,
but then I have to explain that I also work in ZK, which is like a layer deeper.
So ZK, at its surface, is actually quite a simple concept, right? Where you have a program,
like in the natural sense, you have a program, it spits out a result. Like imagine your calculator
on your phone and you put in some numbers and it spits
out 42. But ZK, what ZK adds to that is a certificate of authenticity to that program.
So not only do you get a result, but you get a certificate with that result that says that it
came from a specific program and that program ran correctly. And so you can imagine how useful this kind of thing is in not
just in blockchain, but in systems widely, because you don't have to trust the result that's being
communicated to you, but you can also verify that it was done correctly and that it came from that program. So it's got kind of like
a fingerprint from that program. In blockchain, this is highly useful because it allows you to do
an unbounded amount of computation and then prove it to a resource-constrained computer.
And so back when ZK was first theorized, the use case they described was a supercomputer that took up a huge room at a university campus.
It spit out a result, and you wanted to be able to verify that result on a simpler computer.
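To make the "certificate of authenticity" idea concrete, here is a minimal Python sketch of the flow being described: a prover runs the program on inputs the verifier never sees and hands back a result plus a certificate tied to that exact program. Every name here is illustrative and the hash-based "argument" is a stand-in, not a real SNARK or anything from Lurk or Proofbase.

```python
# Minimal sketch of the prove/verify flow (NOT real cryptography): the prover
# runs the program and returns a result plus a certificate; the verifier
# checks the certificate without re-running the program or seeing the inputs.
import hashlib
import json
from dataclasses import dataclass

PROGRAM_SOURCE = "def program(x, y): return x * y + 40"   # the outsourced computation

def program(x: int, y: int) -> int:
    return x * y + 40

def program_fingerprint() -> str:
    """Acts like a verifier key: identifies exactly which program ran."""
    return hashlib.sha256(PROGRAM_SOURCE.encode()).hexdigest()

@dataclass
class Proof:
    fingerprint: str
    claimed_output: int
    argument: str        # in a real SNARK this would be a short cryptographic argument

def prove(x: int, y: int) -> Proof:
    """Heavy party: runs the program on private inputs, emits a certificate."""
    out = program(x, y)
    transcript = json.dumps({"fp": program_fingerprint(), "out": out})
    return Proof(program_fingerprint(), out,
                 hashlib.sha256(transcript.encode()).hexdigest())

def verify(expected_fingerprint: str, proof: Proof) -> bool:
    """Cheap party: never sees x or y and never re-runs the program.
    (Soundness here is fake; real soundness comes from the SNARK math.)"""
    transcript = json.dumps({"fp": proof.fingerprint, "out": proof.claimed_output})
    return (proof.fingerprint == expected_fingerprint
            and proof.argument == hashlib.sha256(transcript.encode()).hexdigest())

proof = prove(1, 2)
print(proof.claimed_output, verify(program_fingerprint(), proof))   # 42 True
```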
And so that is basically what we use it for in blockchain as well, because Ethereum or
other blockchains are very resource constrained. So you can run those computations external to
the blockchain and prove the validity of that result on chain.
So from what you're saying, it doesn't sound like privacy is directly involved. I'm just trying to find an economical way of showing something.
Yes, and privacy is just one feature. These things are called SNARKs:
succinct, non-interactive arguments of knowledge.
The whole point of them is to reduce the argument that I know something down to something much smaller.
And then there's technical mathematical terms around this, whether it's logarithmic in size
and complexity to verify the statement. And then we have other protocols to make them non-interactive. I like to describe ZK by example of if I had two billiard balls,
and I had them and they're different colors, and I wanted to convince you that they're different
colors, but I don't want you to know anything about what color they are. How would I do that?
Well, I would blindfold you first. I'd give them to you and I'd say,
go ahead and put them behind your back. And then you can switch them around behind your back as
many times or as little as you want. And then show me, pull your hands back forward. And I'll
tell you if they've switched hands. Now, at first you would think, oh, 50-50 chance,
you know, maybe he just got lucky. After the second time,
you'd be a little bit more convinced. And after several iterations of that process,
you'd be very convinced that they are indeed two different colors, but you would not know
anything about what colors they are. And so there's the privacy aspect of it.
I can actually prove things to a verifier,
being the prover. I can prove things to the verifier without the verifier learning anything
about it. And in general, you can design these protocols such that that's completely configurable,
right? You can configure it to hide all of the information and just get a true or a false
that a program ran correctly without revealing
anything. Or you can have it so that some of the information is public, some of it is private.
So it's really a configurable tool for doing a lot of different things.
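Here is a toy Python simulation of that billiard-ball protocol, assuming an honest prover can see the colors and a cheating prover can only guess; it just illustrates how confidence grows round by round (a cheater survives n rounds with probability about 1/2^n) while the colors themselves stay hidden.

```python
# Toy simulation of the billiard-ball protocol: the verifier learns *whether*
# the balls differ, never which colors they are. Purely illustrative; real ZK
# protocols replace "seeing colors" with cryptographic commitments.
import random

def run_protocol(ball_a: str, ball_b: str, rounds: int = 20) -> bool:
    """Returns True if the verifier ends up convinced the balls differ."""
    for _ in range(rounds):
        swapped = random.choice([True, False])            # verifier's hidden choice
        if ball_a != ball_b:
            prover_answer = swapped                       # honest prover is always right
        else:
            prover_answer = random.choice([True, False])  # cheating prover must guess
        if prover_answer != swapped:
            return False                                  # caught: reject immediately
    return True                                           # survived every round

# Identical balls: a cheater survives 20 rounds with probability ~1 / 2**20,
# so this count is almost certainly zero.
cheat_successes = sum(run_protocol("red", "red") for _ in range(10_000))
print("cheater convinced the verifier", cheat_successes, "times out of 10,000")
print("honest prover accepted:", run_protocol("red", "blue"))
```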
But what you described actually sounds like it increases computational effort.
Yeah, absolutely.
Right. I could just look at the balls and then I would
know.
Yes, yes. So there are two separate dimensions which don't appear to have a direct
relation. One would be to hide information. And that's what I would think about. I mean,
the word zero knowledge suggests that I do something without having to know something, or something like that. But this idea of making parsimonious proofs of complex operations,
that's a whole other thing — it seems like it doesn't even have to be related, although perhaps it is.
Yeah, it is perhaps related, but in different use cases they can be taken advantage of in different ways, right?
So there is a trade-off space where, cost-wise, it becomes very economical to do that computation off-chain versus doing it on-chain.
And that's in, obviously, a couple of different dimensions. There's the cost of
generating the proof, but there's also the cost of verifying the proof. So that's why things like ZK Layer 2s on Ethereum make a lot of sense economically, because now, instead of paying a single transaction fee, you can do 100 transactions for the same cost, but with the same level of verifiability.
So there's levels and many dimensions to using this in practice.
So that's interesting because that's part of our sort of motivation here, which is to move computation off-chain.
We're not big believers in smart contracts. We don't have them, in fact.
But what you're suggesting is that you go ahead and do the computation off-chain, commit a proof or an element of a proof to the chain, and that then will allow people to take off-chain data and then know that whatever you've asserted is true.
So that's conformable to what we're trying to do, except, of course, we don't do that last step.
We don't do anything on chain, any type of computation.
Go ahead. I'm sorry.
No, I was just going to say that's exactly right.
In the ZK L2 space or ZKEVM space, privacy isn't necessarily part of it, right? All the transactions in the ZKEVM are still
happening in a transparent execution environment. You're still seeing all those transactions happen
on chain, but Ethereum, as the resource-constrained computer, can't look into that execution
environment, so it has to verify the proof, right? But in other systems using ZK, you can actually
have your execution environment be completely private and not reveal anything about what's
happening inside of it, the state, et cetera, your assets, and then prove that those state
transitions are correct to the blockchain, without the blockchain knowing anything. All it has access to is the proof and a commitment to the state.
So it hides it, but it forces you to be consistent in your state.
So yeah, it can be used in both dimensions.
So let me walk through this as a naive user wanting to use this kind of approach.
So you've done something magical off-chain, and what I assume is that you would have — we can talk about arithmetic circuits later — but somehow you've reduced an algorithm to something which is systematic in a way and can therefore generate these kinds of proofs. Okay, hope I didn't drop off there. Robbie, you still got me?
Okay, yeah, go ahead.
My phone did something weird. I apologize. Okay. All right.
So the first thing you have to do is commit something to the chain which is not large
and doesn't in itself demand a lot of computation.
Ethereum is constrained in both ways.
So let's, I don't know what that is, but maybe you commit a root or something like that.
Now, to prove that, I guess I'm imagining I have to have access to some off-chain information.
I have to actually know either the fundamental blockchain protocol, or the algorithm that described it, or the arithmetic circuit that is equivalent to it. And then I could run an input, which I guess I'd have to have as well, and then discover if the output is what you say. In other words, what would I have to do in practice that is off-chain to prove to myself that what you're saying is correct?
Yeah, so this definitely puts the onus on the user to maintain their state. It's not a transparent blockchain ledger, where the state is maintained globally and you therefore only need to maintain your private key so that you can sign a valid transaction.
When you start moving into a private execution environment, your permission to create a state change is much more complex, right? Because now it's a proof of knowledge. You need to have access to everything that is in that state in order to iterate that state, because it requires it as an input.
Correct.
So your model is right on, and it's essentially how Lurk works, which is: your state is Merkleized into a hash tree, right? So you have that collision-resistant state as a commitment. So you have your root commitment and you read that in;
that's what you're committing to the blockchain, right?
And so your first step of your computation
is always going to be the previous state commitment.
And then you have to, as a private input,
provide the private state that gets Merkleized into that root.
And so the program first checks all of that to make sure that,
okay, yeah, we're starting with the same state
and the user knows that state.
And then you have your program logic, whatever that may be,
that changes that state, generates a new commitment to the state
and the proof.
And that's kind of how it works.
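A minimal Python sketch of the state model just described: the private state is Merkleized, the previous root is the public commitment, and a state transition first checks that the prover actually knows the state behind that root. The hashing, layout, and function names here are illustrative only, not Lurk's actual construction.

```python
# Toy private-state transition: prove-you-know-the-state, apply logic, re-commit.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    nodes = [h(leaf) for leaf in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:                       # duplicate last node on odd levels
            nodes.append(nodes[-1])
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def state_transition(prev_root: bytes, private_state: list[bytes],
                     program_logic) -> bytes:
    # Step 1 (inside the proof): check the prover knows the state behind the root.
    assert merkle_root(private_state) == prev_root, "unknown or stale state"
    # Step 2: apply the application logic to produce the new private state.
    new_state = program_logic(private_state)
    # Step 3: commit to the new state; only this root (plus a proof of the
    # steps above) would ever reach the blockchain.
    return merkle_root(new_state)

old_state = [b"balance:100", b"nonce:7"]
old_root = merkle_root(old_state)
new_root = state_transition(old_root, old_state,
                            lambda s: [b"balance:90", b"nonce:8"])
print(old_root.hex(), "->", new_root.hex())
```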
So two things there. So one, you're saying I might have to know the entire — I think you're saying I have to know the entire Merkle tree. So if I'm looking at USDT, I have to know the leaves that describe the 5 million balances of all the users who have USDT. Is that right?
So, in general, that is correct. You have to know the entire state, but in this application and programming model, we look at state differently.
So in a private state architecture like this, every user state is separate. And so you're operating only over your own state, your own account. You're not operating over a global state; there's no notion of that. You're immediately getting to some of the hardest questions in private blockchain, of which only now Aztec, Miden, and what we're doing with Proofbase are getting to the point where this technology is going to become live for the first time. These things are just coming live this year for the first time.
So you're asking the hard questions.
The application topology becomes completely different.
It becomes completely different.
So you have to encode the rules,
the global invariants of the logic you want to imbue, into the program logic that
is then replicated through all of the private execution environments.
So this means then that there is going to be some minting authority, as there is in a public blockchain. And when you're transferring from one private state to another, it's more like you're burning your own and sharing a secret
that allows the recipient to mint their own. And the only way that you can do that is if you've
proven that you've burned your own and then they can then mint their own, right? So it's a completely different application logic than many are used
to thinking about. So let me ask two questions and I'm going to see if I can maybe help the
viewers at home. So the two questions are, one, if you didn't have that approach, wouldn't it be
the case you'd get concurrency problems where, you know, I know the current state, so I want to update my state, knowing
the current state. But then, of course, when I update, it's no longer the current state.
So nobody else can update until they know the current state after my update. So I don't
know how you would solve the...
Yes. Yes. So these things are naturally asynchronous.
You have to design them as naturally asynchronous systems.
So that's you.
But without that, you would have that concurrency issue.
So you cannot have conflicting state, which means then going back to the point where you...
And this is natural with all ZK, right? You can only make a statement
about state you control, which then you have this very natural concurrent asynchronous concurrency
model where I can only make a change of my own state. I can't change Bob's state. Bob can only
change his state. He can't change mine. And then you need a mechanism by which to interact with them, which differs: Miden and Aztec use a note-and-nullifier system. We use encrypted
messaging between them. And those encrypted messages have domain routing, et cetera, that
prove to the, not only the verifiers of the blockchain, but also the programs inside them, that that logic was carried out in a verifiable manner.
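Here is a hedged sketch of that burn-and-mint flow under toy assumptions: the "proof of burn" is a simple hash commitment and the encryption is a stand-in, so none of this reflects Proofbase's real message format or proof system. It only shows how a recipient can mint only after a matching burn, while validators relay ciphertext they cannot read.

```python
# Toy burn-then-mint private transfer; all helper names are invented.
import hashlib
import json
import secrets

def commit(value: int, salt: bytes) -> str:
    """Hiding commitment to a note (toy: hash of value||salt)."""
    return hashlib.sha256(str(value).encode() + salt).hexdigest()

def sender_burn(value: int):
    """Sender 'burns' a note of `value` and keeps the secret that lets the
    recipient mint an equal one. In the real system this step would be a ZK
    proof that the sender's private state transition removed the value."""
    secret = secrets.token_bytes(16)
    return {"burned_note": commit(value, secret)}, secret

def encrypt_for(recipient_key: bytes, payload: dict) -> bytes:
    """Stand-in for encryption: validators only ever relay this ciphertext."""
    blob = json.dumps(payload).encode()
    return bytes(b ^ k for b, k in zip(blob, recipient_key * len(blob)))

def recipient_mint(burn_statement: dict, value: int, secret: bytes) -> str:
    """Recipient can mint only by proving knowledge of a matching burn."""
    assert commit(value, secret) == burn_statement["burned_note"], "no valid burn"
    return commit(value, secrets.token_bytes(16))   # fresh note in recipient's state

statement, secret = sender_burn(25)
inbox_msg = encrypt_for(b"bobs-key", {"value": 25, "secret": secret.hex()})
# Validators verify the burn proof and place inbox_msg in Bob's inbox; Bob,
# knowing the secret out of band, consumes it and mints on his next update.
print("Bob minted note:", recipient_mint(statement, 25, secret))
```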
Sure. Okay, but that's fine, of course. But the problem I think I'm pointing to — I mean, I think you solved the problem — but the problem I'm pointing to is different. It's not so much that I can only alter my private state, which of course is necessary. It's that I would have to also know the global state in order to alter my private state.
You seem to have broken that link.
And that's what makes it possible, I guess, to solve that concurrency issue.
Yeah, yes, yes.
So yeah, there's no...
And it's different for every application, right?
Because it's going to be completely configurable.
There's applications where the notion,
there is no notion of global state for that application,
at least that you can view.
And that's fine, right?
So like sending tokens around,
you don't need to have a global state.
That was my second question.
I'm not sure it's fine
because if I don't know the global state, how can I know that the generalized rules have been adhered to?
Well, because you can only make a proof with a valid execution of those generalized rules.
So your global invariants are enforced by local invariants proven.
Well, so that's okay.
Well, let me make one more try.
So, you know, I go to a bank and the bank has a closed, you know, I can't see inside.
And I say, you know, the agreement when this bank started was it was only going to issue $100 worth of scrip.
And so I go in there and it gives me a piece of scrip,
and it gives a guy behind me a piece of scrip, and so on.
But the global rule, that there are only a hundred of them, is a global state.
And how can I verify unless I'm just doing it incrementally,
I suppose, I mean, you may be,
what you may be saying is that if every interaction
followed the rules, then the global state has to be correct.
That's right. That's right. And in some applications, there does need to be a centralized coordinator, right? And that's true of, like, USDC or USDT — there is an issuer. So in certain applications, there does need to be one application
state that users then interact with. You can think of like a DeFi exchange, that still applies.
So nobody can see my internal state of what I own, but interacting with this external application
also happens in an asynchronous fashion, but there is another third party involved, right?
If your applications have to have that kind of separate execution, you know, if there can be no relationship or dependency between executions in one place and the state of a different account, that excludes some logical constructions.
No, not necessarily. So the language itself is Turing complete, which means that you can express any form of logic as a program.
Then the enforcement of local invariants.
Those are enforced by the virtual machine itself, right?
So the virtual machine dictates what is valid as a state transition based on the logic of the application
and defined within
that virtual machine as its set of rules, right? You think of it like an operating system, right?
An operating system can only execute in a certain way. And when that happens,
you can produce a proof. You can't force it to do something that is not valid. And then by enforcing message consumption, that happens on the blockchain side
of things. So on the blockchain, the blockchain becomes the verifier of the proofs and then is
also the message passing layer. So let's say I send tokens to Bob. It's a proof, state transition
proof for me and an outbound message that's encrypted.
The validators of the blockchain put that in Bob's state or Bob's chain's inbox as an inbox
message, and they force him to consume that message on his next state update. And the only way Bob can consume that,
because again, it's just a root committed to,
and that he has to also have the pre-image for,
which we communicate out of band to Bob,
that then he has to consume that
in the generation of his next proof,
which means also that the logic must be verifiably executed.
So you enforce that.
You just can't make an invalid proof.
Well, so you can't make a proof that fails what you set up logically.
The question is, can you set up everything logically?
And so here's an example of what I might have in mind. Suppose that there's some sort of pivotal mechanism where if, say, the staking goes below a certain amount for a staking pool, then the pool loses all votes altogether.
Now suppose I withdraw some of my stake. That's a local interaction with me and the pool or something. Maybe it's a local
interaction of me and an agreement with other people, but it affects everybody else's abilities.
It affects whether or not their vote is valid or how much, if they're still in the group of 100
that actually gets to count. So what if there's a spillover, something like that?
At what point is there another transaction that forces somebody to
engage, to absorb the fact that even though I'm passively just staking my stuff to you,
now suddenly the whole stake is exploded or something?
No, okay. So this is an inherently shared state application design. So there's always going to be those applications that are shared state.
Like staking, where it needs to be transparent because accountability has to occur.
Right. Those things are shared state models. Right. And there's no getting around some of
those, which is why we're going to have a shared state chain so that, like, even for validator entry and exit, you need to have that.
You need to have it for several different application designs.
So this is why I was suggesting in that sense it wasn't Turing complete, because you couldn't accommodate that kind of logic.
Well, so Turing completeness and application, possible application design, topologies are different things.
Turing complete being a language-specific construct is different than being able to accommodate different application types.
So more accurate would be to say that your topology is not Turing complete.
Yeah, I mean, I think that
the formalisms
matter here.
I'm with you altogether.
Yeah, Turing completeness
is a very specific thing.
Now, whether
you can do shared state
model applications
requires a
shared state model.
Well, I would say this, so maybe this would be the
accurate thing to say.
An application that I can express in a Turing complete
language like, you know, like Vyper or something isn't
necessarily something that you can incorporate into your
zero knowledge proof system.
Right, just like you can write a program
that doesn't work well on a smartphone.
You have to design the application
for the intended infrastructure.
None of it is a criticism,
just trying to understand the limits of what's going on.
Yeah, yeah.
Okay, so let me see if I can bring this
to the viewer at home a little bit.
So if I were trying to put this more in layman's terms, what is it that you're doing?
So you're trying to let people do a couple of things.
One is you're trying to generate the possibility of privacy-preserving applications.
So an example might be that there's private transfer of tokens.
So we want to make sure
that everybody has done things appropriately according to the rules but
it's nobody's business that I gave tokens to you, and that's difficult to do
in a pseudo-anonymous ERC-20 environment. So you want to improve that and make that possible. And to do that, I don't think the tokens can exist as an ERC-20 contract, because then they would simply be visible. So it would be a different kind of token, I think, that maybe still operates under the rules of ERC-20 in the sense that
the transfers still follow the same logic of signatures and so on. But the state is hidden,
but nevertheless provable because of the ZK developers kit and everything else that surrounds it
that you've developed.
So have I gone wrong so far?
Nope, that's bang on.
Okay, good.
All right.
Okay, so that sounds very useful.
I mean, that's a thing, actually, I should say, that Geek can't do, because it is a transparent and open ledger.
So natively, that's not a thing that we can accommodate.
I value the other.
Yeah, correct.
I mean, these all come with different trade-offs. I think there are some areas of interesting use where, in particular, I think ZK is going to be a massive enabler: all of blockchain UX up to this point has been public key/private key
pairs, whether it's multi-signature or some MPC systems. You have to have that. And that's
a real big blocker for a lot of big enterprises and institutions, because they're new workflows,
high risk, and they don't work well. That kind of thing doesn't work well with legacy systems.
Now, ZK, because you're running these things in a private
environment, you can have completely configurable authorization controls, even so much so as having
taking the existing legacy system controls and importing them into the environment. And then
you're not running off of public key, private key pairs. Again, it's just proof of knowledge. Do you know the state? So that is much more amenable to legacy systems and also can change the blockchain UX entirely.
Well, let me ask you this. Isn't just a simple signature effectively a zero knowledge proof?
Yes. Yeah, yeah, yeah, exactly. Exactly. Which, which actually, once you really get into understanding the mathematics behind behind the like the elliptic curve ZK systems, it's essentially that right, you're creating a public key for a certain program. And the output of that is a signature of that program.
And so what that satisfies is I don't reveal something to you, but I show that I know it.
And it's ZK, but it's also very simple computationally.
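For readers who want to see why a signature behaves like a proof of knowledge, here is a toy Schnorr-style identification protocol with deliberately tiny, insecure parameters: the prover convinces the verifier it knows the private key x behind the public key y without ever sending x. The parameters and names are illustrative, not any production scheme.

```python
# Toy Schnorr identification over a tiny group (p = 2039, q = 1019, g = 4).
# Far too small to be secure; only meant to show challenge/response.
import secrets

p, q, g = 2039, 1019, 4            # g generates the subgroup of prime order q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # private key
    return x, pow(g, x, p)                   # (private, public)

def prover_commit():
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)                   # keep r secret, send t = g^r

def prover_respond(x, r, challenge):
    return (r + challenge * x) % q           # s = r + c*x  (mod q)

def verifier_check(y, t, challenge, s):
    # Accept iff g^s == t * y^c (mod p), which only a prover knowing x can arrange.
    return pow(g, s, p) == (t * pow(y, challenge, p)) % p

x, y = keygen()                              # prover publishes y, keeps x
r, t = prover_commit()                       # prover: "here is my commitment"
c = secrets.randbelow(q)                     # verifier: random challenge
s = prover_respond(x, r, c)                  # prover answers with s
print("verifier convinced:", verifier_check(y, t, c, s))   # True, yet x never sent
```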
And I suppose that it's interactive in the sense that if I want to know it's you, I send you a challenge that says, you know, is it really you?
Yeah. And that's the —
You sign it with your private key and send it back. And you have the private key. And if not, you couldn't.
Right. And that's —
I can't take that proof and make it static, because you don't know how that exchange evolved.
Make it static? What do you mean, make it static?
John, can you hear us?
I can't hear you. I hear you now. Okay. Go ahead.
So, Nicholas has a question:
what do you mean by making it static?
Well, so a static proof would be,
I claimed, let's see.
So static proof would be something that,
it's the same thing with like with snarks,
that it sits there and it can be used
independently to show a fact. So if you have a static proof that shows that at some point
a signature took place or here's a correlation between a challenge and a response,
but you're not sure that that exchange
was accomplished by the right people
or that I might still have that.
Maybe, you know, so I can look at it
and mathematically prove it's correct.
But unless, I guess, depending on the application,
I have more knowledge,
I don't really know what it proves to me.
Oh, I see what you're saying.
So this is, you know, you can do this with application.
Oops, did I lose you guys?
Oh, can you hear me?
Yeah, we can still hear both of you.
For some reason.
I hear everything.
John, can you hear us?
I can hear you, but Nicholas, I have zero knowledge.
Nicholas, can you hear us?
I sure can.
So are we being selectively
censored for one another?
That's interesting.
What's going on?
Why don't I go
and come back and let's see if Elon will put us back together.
All right.
All right.
Nicholas, in the meantime, I got a question for you as well, because you touched trade-offs earlier also.
So about types, ZK proofs can be interactive or non-interactive.
Could you explain a bit the difference and what the trade-offs are between them?
Yeah, so all proofs are inherently interactive, right?
Like there's always going to be a prover and a verifier.
It's always a two-party thing, correct?
So there's always going to be some protocol by which the prover convinces the verifier that my statement is true.
Now, that doesn't work in practice very well, right?
You can't send something to Ethereum and then Ethereum asks you back.
And so what we actually do is encode that process into the mathematical proof, right?
So we encode the... you can think of like going back
to the billiard ball example. Instead of me just doing that interaction directly with the verifier,
we have a separate machine that I put the balls into and we go through that process and it spits
out a receipt. And then I just hand that receipt to the verifier. They don't need to go through the process.
They can just see, oh, okay, he did this a million times.
It's correct.
And so we actually do that in ZK-SNARKs.
We take what is, you always generate the initial proof in an interactive way, and then you
have a protocol to turn that into a non-interactive proof via mathematics.
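Continuing the toy Schnorr sketch from earlier, the code below applies the idea Nicholas describes, commonly known as the Fiat-Shamir transform: the verifier's random challenge is replaced by a hash of the transcript, so the interaction collapses into a static "receipt" anyone can check. Parameters remain toy-sized and insecure; this is only the shape of the transform.

```python
# Fiat-Shamir: derive the challenge from a hash instead of a live verifier.
import hashlib
import secrets

p, q, g = 2039, 1019, 4

def fiat_shamir_challenge(t: int, statement: bytes) -> int:
    digest = hashlib.sha256(str(t).encode() + statement).digest()
    return int.from_bytes(digest, "big") % q

def prove_noninteractive(x: int, statement: bytes):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                              # commitment
    c = fiat_shamir_challenge(t, statement)       # "the machine asks the questions"
    s = (r + c * x) % q
    return t, s                                   # the whole proof is just (t, s)

def verify_noninteractive(y: int, statement: bytes, proof) -> bool:
    t, s = proof
    c = fiat_shamir_challenge(t, statement)       # verifier recomputes the challenge
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)
proof = prove_noninteractive(x, b"I know the key behind y")
print(verify_noninteractive(y, b"I know the key behind y", proof))   # True
```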
So let's go one step deeper, higher, sideways.
I'm not sure which way, but all right.
So you have to do this stuff.
You have this program that you want to run.
Maybe it's an analog of an ERC-20 token on Layer 2.
If I understand the process right, to generate these ZK proofs, the first thing you do is you use
an arithmetic circuit generator,
which for people that are civilians,
it's a kind of logical mathematical construct.
It's a little bit like binary logic —
AND, OR, if and only if, that kind of thing,
standard binary logic — except it's arithmetic,
meaning that it can handle numbers,
integers, instead of just binary. But it doesn't matter.
Anyway, it's, it's mathematical.
You produce this, this logical construct,
which is equivalent in a sense to this code,
which might be Rust or PHP or something.
And so now we've got this thing in a standard format,
and then there are rules about polynomial proofs:
it's either true or false —
if you found a root, it's true;
if you haven't found a root, it's false.
So if you can get to true with this proof,
then it proves that the execution was correct.
Roughly speaking, that's the general notion.
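A small worked example of John's description, assuming a toy constraint format rather than any particular proving system: the claim "I ran x**3 + x + 5 and got 35" is flattened into gate-by-gate constraints that an honest witness makes evaluate to zero.

```python
# Toy flattening of a program into arithmetic constraints; the gate layout
# is invented for illustration, not any real circuit format.

def flatten_x_cubed_plus_x_plus_5(x: int) -> dict:
    """Witness for the claim x**3 + x + 5 == out, one value per wire."""
    return {"x": x, "t1": x * x, "t2": x * x * x, "out": x * x * x + x + 5}

# Each constraint is written so that an honest witness makes it exactly 0.
CONSTRAINTS = [
    lambda w: w["t1"] - w["x"] * w["x"],            # t1 = x * x
    lambda w: w["t2"] - w["t1"] * w["x"],           # t2 = t1 * x
    lambda w: w["out"] - (w["t2"] + w["x"] + 5),    # out = t2 + x + 5
]

def satisfied(witness: dict, public_out: int) -> bool:
    return witness["out"] == public_out and all(c(witness) == 0 for c in CONSTRAINTS)

print(satisfied(flatten_x_cubed_plus_x_plus_5(3), 35))   # True: 27 + 3 + 5 = 35
print(satisfied(flatten_x_cubed_plus_x_plus_5(4), 35))   # False: wrong witness
```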
Am I okay so far, Nicholas?
All right.
So given that, you have a program.
Now, how would I know, if I'm going to believe your proof, the first thing I have to know is that your program was put through an arithmetic generator that was correct and was the one I know works correctly and hasn't been hacked or anything.
So I have to know that you actually used the right program to
generate the arithmetic circuit. And that what I'm seeing is in fact the output from the input,
which is the program that you claim that you want to prove to me.
Correct. Yes. And so in all of these systems, when you go through it, it's roughly like a
compilation process, right? You're compiling your program, and it generates a prover key and a verifier key.
And those things are tied to the program.
And so it's kind of like an MD5 checksum,
like for the people out here that have done code and open source code,
where you check to make sure that the resulting program came from the
source. And so that's linked to the program, linked to the source code. So you have to have
this way of sharing code also so that the verifier, because it's all about trustlessness,
right? So the verifier can not just take the verifier key they're given,
but also have a process of going and taking the source,
recompiling it, even auditing the code themselves to make sure it's doing what it says it's doing
and doesn't have any side effects.
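A sketch of that checksum analogy, assuming a hypothetical compile step that just hashes the source: the point is only that the verifier key is bound to one exact program, so recompiling the audited source and comparing keys is the trustless check John describes. Real systems derive structured proving and verifying keys rather than a bare hash.

```python
# Toy "compiler" binding a verifier key to exact program source.
import hashlib

def compile_program(source: str) -> tuple[str, str]:
    """Returns a (prover_key, verifier_key) pair tied to this exact source."""
    digest = hashlib.sha256(source.encode()).hexdigest()
    return "pk:" + digest, "vk:" + digest

def verifier_key_matches(audited_source: str, published_vk: str) -> bool:
    """Recompile the source you audited and check it yields the key you were handed."""
    _, vk = compile_program(audited_source)
    return vk == published_vk

audited = "def transfer(state, amount): ..."
_, published_vk = compile_program(audited)
print(verifier_key_matches(audited, published_vk))                                # True
print(verifier_key_matches("def transfer(state, amount): # backdoor", published_vk))  # False
```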
That's even a step backwards.
Right, that's a step backwards.
But so you say the compiler —
this is not the compiler into machine code,
you're talking about the compiler
into the arithmetic circuit.
And so I tend to look at ZK as if it's a different computer.
It's a polynomial computer, right?
So instead of compiling code to run on hardware,
you're compiling code to be turned into polynomials
that you can then make statements about.
And then you can configure those statements to make proofs of different sizes, right? And for different situations. So there's multiple
different paths you can go. Writing a circuit is kind of the simplest, also the hardest,
and oldest pathway of doing it. Now there's much more dynamic and configurable systems with ZKVMs,
where you essentially, like I said, you make a computer,
a virtual machine that has all these instructions that are themselves
arithmetic representations of those instructions.
And then you can dynamically compile to dynamic representations of those as a number of
iterative steps of that machine, and then make statements about that. Okay, but for this to be
trustless, I have to, let's assume that your code works. That's already a problem with smart
contracts generally. Even people that are well-meaning make mistakes
and have exploits.
And people, of course, may not have good intentions.
But let's assume that we all agree
that the original code is, in fact, what we want.
So if we got that far,
then, to make it trustless,
I would have to get that code.
I'd have to run it through your arithmetic compiler.
And then you would go ahead and do that.
And you would provide me proofs of whatever it is I asked.
And then I would, having generated the output of the arithmetic compiler,
I could then verify the proofs that you offered me.
But I would have to have the arithmetic, what I would have called the arithmetic circuit,
but now apparently is something a little bit more elaborate.
Yeah, you can just think of it as the source code.
You just need the source code for the program.
So just like an all kind of open source software.
So I need the source code.
Well, if I only had the source code, I would have to have the literal inputs and then the literal outputs, which would destroy privacy.
Nope, because the verifier key is program dependent.
It's not dependent on the inputs.
I think we're talking,
we're using program in different ways.
we have the source code and then we have the arithmetic circuit that is
derived from the source code.
I need the source code to derive the arithmetic circuit for myself.
And then of course,
if I have your proofs,
I don't need the original inputs.
I can just run it through the arithmetic circuit, and it's verified as correct.
And I know the arithmetic circuit is correct because I compiled it itself.
But if I only had the source code itself without the arithmetic circuit,
then I'd actually have to have the inputs and simulate it myself to figure out that it's correct.
No, you still don't.
So think of it just kind of like it's a program, right? You get the source code,
you get the compiler.
No, no — okay, I agree. But I'm saying that without the compiler, that would be my —
No, yeah, you'll always need the compiler. You need the compiler because the
compiler gives you the output of the keys by which you can generate a proof and verify a proof.
Understood. Understood. In the new world, we need the compiler for sure.
This new method. Yes. Okay. Okay. So, and then of course
we also have to assume the compiler is the right compiler and
the smart guy that wrote it didn't make a mistake. Yes, 100%. And this is, you know, ZK security is a big area of research where, you know,
because you can have bugs or exploits in ZK that are much,
much different animals than in other, you know, traditional software security.
So it is, yeah, all of that is a big issue
and something that is hotly debated.
Yeah, and it was the case that you had to have a trusted setup. I believe — I may not be entirely up to date — but I believe that the most current iterations of SNARKs don't require a trusted setup.
Well, there are different types of SNARKs used
for different systems, right? So you have Groth16, which does require a trusted setup.
And those, those are going to be your most high overhead, but they produce the smallest proofs.
Like the proofs are on the order of a couple hundred bytes.
And then you have PLONK systems, which are a little bit more configurable.
And you still end up with proofs of like 700-ish bytes.
So they're really great for resource-constrained verifiers.
And so you still see those in systems that are stateless often, right?
So stateless applications like identity, et cetera.
That's what like Privy with ZK Login.
That's what WorldCoin uses with their ID system.
And then you have the ones where you're wanting to do dynamic programming,
general programming and verifiability of high-level language programs compiled.
And those are ZKVMs, and those are all STARK-based systems, which are hash-based,
meaning that there is no trusted setup.
It's all transparent.
And the downside there, though, is that the proofs are huge, right,
on the order of tens to hundreds of megabytes.
So they require compression
to be used on chain.
Good gracious. Yeah, I mean, right. So a couple of questions then occurred to me. So first, the source code: even if I did verify it, I'd have to acquire it. Well, at least, I guess, if I did it once and it never changed, then the proofs that you gave me could still be verified. So at least I'd have to do it once. Everybody that wants to have a trustless verification of
what happens has to do it once. Okay. And that's, of course, beyond the reasonable ability of most people.
And in any event, almost no one would do it.
So we're really relying on somebody else to do it,
just like we do with open source software.
We hope somebody else has done the checks.
But is it the case, though, that let's suppose that there's somebody who's lazy and I know that they're lazy.
So if I'm a very alert guy and I make sure that all my inputs and all my outputs are correct, you never cheat me.
Couldn't the global state, which is hidden,
still be wrong because you played silly buggers
with a bunch of, you know, Satoshi coins
that nobody ever looks at?
Yeah, so this is why the problem of...
The problem of program-specific verifier keys is kind of like a very complex issue,
and a lot of these ZK deployment networks and prover networks,
these are problems that are hotly debated and trying to be figured out now.
But what we're doing to solve this is the verifiers
are the validators of the blockchain. And that forces them to have the verifier key up to date.
Now, if you had all programs having a different verifier key, you get into this really terrible
problem of how do you launch a new program, how do you get it to distribute amongst the validator set,
et cetera. But what we've designed is there's one program and it's called an operating system.
And that operating system can evaluate any Turing complete language or
our Turing complete language, any programs written in that language inside of it.
And so then you only have a single program that's ever run. And then updates to that are just like
anything in blockchain. You update the client software, everybody migrates to the new client
software, and now you're running a new program.
So you reduce the problem to a single program that you have to distribute.
But then that program has to be open source, openly audited for maximum trustlessness.
But does it have to be updated with every block? Nope, it is an execution environment. So it's basically like your RISC-V hardware instruction set for a CPU.
This is the Lurk OS, which is just the operating system.
And so you only need to update it if you've got something you need to change about how the operating system operates. But all programs can be distributed to people outside of the blockchain
and can be run inside of it without changing the verifier key for that program.
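A minimal sketch of the single fixed-program ("operating system") idea, using an invented toy interpreter rather than Lurk OS itself: user programs are just inputs to the one interpreter the chain knows about, so the verifier key derived from that interpreter never changes as new programs appear.

```python
# Toy universal interpreter: one program, one stable verifier key.
import hashlib

def interpreter(program: list, stack: list) -> list:
    """Fixed evaluator: this is the one 'program' the chain knows about."""
    for op in program:
        if op == "+":
            stack.append(stack.pop() + stack.pop())
        elif op == "*":
            stack.append(stack.pop() * stack.pop())
        else:
            stack.append(op)            # push a literal
    return stack

# One verifier key, derived once from the interpreter itself.
OS_VERIFIER_KEY = hashlib.sha256(interpreter.__code__.co_code).hexdigest()

# Two completely different user programs run inside it...
print(interpreter([2, 3, "+"], []))          # [5]
print(interpreter([4, 4, "*", 1, "+"], []))  # [17]
# ...and neither changes the key the validators verify against.
print("verifier key unchanged:", OS_VERIFIER_KEY)
```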
And the verifiers can verify this without knowing, not only without knowing the global state,
but without knowing anybody's particular state. So somehow we've got a ZK proof without knowing the state that I'm trying to verify.
Exactly, exactly.
Because what you're doing is you're committing to state
with every state transition of your private state.
So you're making a chain of yourself.
Now, you could just be doing complete nonsense
that matters to nobody.
They're valid state changes.
Like if you're on your Mac at home
and you're creating applications
that just make pretty
colors, you can do that. And I can prove that to the blockchain that that's all valid. It doesn't
affect anybody else's state. Okay. But I thought that you were going the other way, from the top down,
that I'm part of your environment somehow. And then you, as the validator, can, without my cooperation,
validate that, in fact, I changed red to blue.
No, I would have to send that.
Without knowing either red or blue.
No, I have to generate a proof and send it to the validators
for it to be attached to the blockchain.
So I can do whatever I want,
but if I want to affect the world of other people's state,
I need to prove it to the blockchain. I want. But if I want to affect the world of other people's state, I need to prove it to the blockchain.
I see. So that could be censored. I mean, you could. I mean, that's why you want a decentralized validator set to stop such a thing.
but all outputs and all state transitions would look the same,
whether I am sending a billion dollars or doing an art project inside of my environment.
I see. So let me pause again for the viewer at home, because I think I can put that simpler.
So I think what your architecture is doing is the following, that I might have at home a very complicated process where I have an abacus and I have my gerbil running around and somehow I add up numbers.
So what I do in the privacy of my own home is my own business and you don't care.
But at the end, I generate a proof, but a proof which contains no
information except truth. And I send that off to you and you verify it's true. And so you can verify
it's true. And so the updated state is a collection of all of these effectively encrypted receipts
that are independently
approvable. And so now we have a new, you know, we have this big, I don't know if you're probably
too young to remember this, but we used to have these, I didn't actually, I'm too young also,
but we used to have these places you put receipts. There was a big spike on a circle
and you dumped the receipt on top of that and spiked it. So you had this pile of paper that had a spike through the middle of it.
So that's kind of what you're doing.
You've got these proofs that you sort of stack up.
And collectively, they imply the state of the chain or of the application.
And I don't have to actually know the content of that particular receipt, just that that receipt is on the spike.
And then that's enough for me to operate with the updated chain state
if I actually happen to have enough information to need to do that.
Okay, all right, good.
All right, let me ask one final hard question.
I think it's a hard question anyway.
And maybe the answer is you're only doing this on L2.
But if I wanted to generate a ZK mechanism, as you're describing, for something like Ethereum,
am I correct in saying that if I deploy a smart contract on a chain like that,
I've effectively generated a new set of rules that can generate truth — say, your DeFi application:
there are inputs which you have defined, and there are outputs which are, again,
specific to your application. And there's a rule that connects
the two of them that makes them true. So wouldn't I have to generate effectively new source code
and then recompile to get a new arithmetic circuit to be able to prove that the states
in that new smart contract are, in fact, correct? Are you saying this to prove this to Ethereum?
I'm saying suppose I try to extend your project
and my ambition was to incorporate Ethereum as it stands.
Something which I believe allows code updates.
Effectively, it allows a code update when it deploys a smart contract.
And so if that's the case,
wouldn't I have to regenerate an arithmetic circuit
to show that all possible transactions were valid?
So this is where ZKVMs come into play, right?
Because ZKVMs allow you to have,
you define all of the possible instructions available,
and then you can define the rules by
which those can, they feed into each other, right? The outputs of one instruction go into the inputs
of this instruction. And so, and by doing that, what you create is a computer, right? You say,
these are all the possible instructions. And then you have a process by which you take a language, a high-level language,
and decompose that into those instructions in a certain order. And that allows you to have,
you don't need to recompile a circuit. Instead of the old notion of a circuit, you have this
blessed set of instructions, and you can make any arbitrary program and verify
any arbitrary program that is made by those instructions,
right? So you don't have to go through that process.
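To illustrate the ZKVM idea in miniature, here is a toy with an invented two-opcode instruction set: any program composed from those opcodes produces an execution trace, and "proving" reduces to checking that every trace row obeys its opcode's rule, with no per-program circuit. The opcodes and trace format are made up for illustration.

```python
# Toy ZKVM-style trace: fixed instruction set, arbitrary programs checked row by row.

INSTRUCTION_SET = {
    "ADD": lambda a, b: a + b,
    "MUL": lambda a, b: a * b,
}

def run_and_trace(program, inputs):
    """Execute a program (list of (op, src1, src2)) over a register file,
    recording one trace row per step."""
    regs = list(inputs)
    trace = []
    for op, i, j in program:
        out = INSTRUCTION_SET[op](regs[i], regs[j])
        trace.append((op, regs[i], regs[j], out))
        regs.append(out)
    return regs[-1], trace

def check_trace(trace):
    """What the proof really asserts: every row obeys its opcode's rule."""
    return all(INSTRUCTION_SET[op](a, b) == out for op, a, b, out in trace)

# (x + y) * x with x=3, y=4, without ever writing a bespoke circuit for it.
program = [("ADD", 0, 1), ("MUL", 2, 0)]
result, trace = run_and_trace(program, [3, 4])
print(result, check_trace(trace))       # 21 True
```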
But I have to know that it's a valid set of instructions for
the chain. You know, I could have a set of instructions that
says if you put in an input 2 plus 2,
the output is going to be 2 plus 2 plus 2.
Or one that says it's 2 plus 2.
Okay, yes.
You have to define the ZKVM over the instruction set.
If you want to increase that instruction set,
it's the same argument that I made with LurkOS.
If I want to add something to LurkOS,
that inherently creates a new version of that virtual machine, and you have to therefore update the verifier with it.
So you wouldn't want something that involved smart contracts, anyway, mounted directly onto your system, because then you would have to
be always recompiling to get a brand new arithmetic circuit.
Yes — unless you're adding instructions, you can do it just fine, which is what they do for ZKEVMs. That's
exactly what they do, right? But if you wanted to add an instruction to the overall instruction set, yeah, that is a
new proving system or a new proof.
Okay. Let me finish this by letting you say something. So here's your elevator pitch, or something like that.
Tell me how it's going to make me, an idiot consumer, better. How
is this going to affect my life? I'm just going to walk in the door, turn on my iPad, and suddenly
my life is better. How is this going to affect me? I think that it's going to become ubiquitous
with experiencing and interacting with blockchain. Because what ZK allows you to do is to hide all that complexity inside of a proof,
right? So you don't need to go through signing keys. You don't need to do any of that. You can
have much more complex experiences on blockchain without the need and the complexity in the UX of
like trading on Uniswap via Metamask, where you're clicking, you know, 15 times and having to worry
about your keys. This enables all of
that to be automated and to move that away from the device that you're on in a trustless way.
And so I think that this is going to totally change how people experience blockchain.
We've already built systems now where you can take a legacy system like an ERP system and automate the sending of
a payment without ever leaving the application as it's currently designed. And that's going to
happen across multiple avenues, multiple fronts, and all of blockchain UX is going to be abstracted
away by these types of systems. Well, that sounds great. Are you getting much uptake right now?
What's that? Are you getting very much uptake right now?
At the moment, yeah. So there's a tremendous amount of interest in the enterprise space,
which is where we're focusing at this moment because of the regulatory environment shift
and banks signaling interest in supporting the bank issued stable coins and automation through, you know, like Chase with Coinbase for using stable coins as a settlement to their traditional payment systems, which, you know, cost U.S. companies some $160 billion per year just in bank fees. And so there's a tremendous amount of interest in that
kind of shape of problem solver where you're able to automate out all the complexity.
Okay. That sounds great. I wish you a lot of luck. It sounds very interesting.
Well, I thank you guys so much for having me on. I really appreciate the chance to talk
at a more high level about the things I love, which I don't get to do enough.
Okay. I hope we get to do enough. Okay.
I hope we're above zero now.
All right.
Thanks, everybody.
Thank you so much.
This was really an incredibly interesting discussion.
Thank you so much, Nicholas,
for sharing your knowledge on this matter as well.
And I'd like to advise people,
the audience right now
people listening in later on
to give Nicholas a follow as well
to follow up on these matters
and thank you everyone as well
for listening in for today's spaces
Nicholas thanks again so much for joining
hope to maybe have you again in several months
to see how things have been rolling out
and everything, to have an update on how things have been going.
And I wish everyone a great day, morning, evening, wherever you guys are.
Thanks a lot, Robbie.
Appreciate it.
Thank you, guys.
Take care, guys.