New Paradigm of Light Clients for ORUs

Recorded: Jan. 26, 2024 Duration: 0:52:49

Hey, it's Mil. Thanks for joining early.
Of course.
Awesome. We'll just wait for the rest of the speakers to join and we'll get started when everybody's here.
Wonderful.
Hello. Hello, testing.
Hello. I got you.
Yeah, I guess we're waiting for Babis, but I think we can just go ahead and get started while we wait for him to join a bit later.
So yeah, welcome everyone to our Twitter space.
We're very excited to discuss an interesting new primitive that we're bringing to Mantle, in collaboration with the EigenLayer and Lagrange teams, called the state committee.
And yeah, we're going to discuss what this enables and why this is, in our view, and hopefully by the end in yours too, a new paradigm in the way that rollups will be able to interoperate with each other in the future, as well as with the broader Web3 ecosystem.
So, before we kind of jump into that, let's just do a round of intros, even though some of us here certainly don't need it.
I'll go first.
I do research at Mirana, which is the venture arm of Mantle, as well as, to some extent, the larger Bybit ecosystem.
And yeah, I'll pass it over to Sreeram.
Hi, I'm Sreeram. I work on the EigenLayer project, which is a mechanism for providing a universal framework for decentralized trust, using Ethereum stake and node operators, that can be consumed by general-purpose services built on top.
Okay, awesome. Good morning, everyone.
Awesome. So, I'm Ismael, I'm the founder of Lagrange Labs. We build infrastructure that makes it scalable for verifiable compute to be run efficiently over on-chain state.
One of the core primitives that we build is our state committee infrastructure, which is a light client for optimistic rollups.
It is based on our prover, and on the usage of restaked security on EigenLayer.
And Babis, our chief scientist, is here as well right now.
So I'll hand him the mic for an introduction.
Hello, I'm Babis Papamanthou. I'm chief scientist at Lagrange Labs. I'm also an associate professor of computer science at Yale University.
My research is in zero-knowledge proofs, verifiable computation, and, in general, enabling computation on untrusted platforms.
I've been working with Lagrange on Recproofs, which is a new cryptographic primitive that enables verifiable MapReduce computation on updatable and dynamic data.
And I'm happy to be participating in this call.
Awesome. Thanks a lot, everyone, for the intros. And yeah, Recproofs is a pretty epic name, and I'm definitely looking to dig into that a bit later on.
But for now, I just want to bring the topic back to light clients.
And in order to ground the discussion, I think it'll be useful to discuss what a light client is in the context of Ethereum, and not just a light client, but specifically an enshrined light client, which is a light client built into the base protocol.
So a light client is essentially, you can imagine it as a node or a reporter of state, which can validate transactions against the current block headers,
but it cannot actually tell you whether the current block header is valid relative to the entire chain history, which is why it's called a light client.
It doesn't have the same level of trust assumptions as you would have with a full node. And Ethereum actually has what's called an enshrined light client, because it defines this primitive called the sync committee, in which 512 validators are randomly chosen from the entire 600,000-validator set,
and they attest to the current state of the chain every single block. That's essentially how Ethereum does it. And this thing is secure because the validators are randomly sampled and have a certain anti-collusion assumption built in.
So you assume they're not going to all collude and lie about what the current block header is.
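To make that rule concrete, here's a minimal sketch (not from the call itself) of the acceptance check a sync-committee light client applies. Signature verification is elided, since a real client verifies an aggregate BLS signature, and all names are hypothetical:

```python
# Toy acceptance rule for a sync-committee light client: accept a header
# only if more than 2/3 of the 512 committee members attested to it.
# Signature checks are elided; a real client verifies an aggregate BLS
# signature against the known committee keys.
from collections import Counter

COMMITTEE_SIZE = 512

def accept_header(attestations: dict[int, str]) -> str | None:
    """attestations maps committee index -> header hash that member signed."""
    if not attestations:
        return None
    header, count = Counter(attestations.values()).most_common(1)[0]
    # Supermajority of the full committee, not just of the responders.
    return header if 3 * count > 2 * COMMITTEE_SIZE else None

# 400 of 512 members agree on one header: 3*400 > 2*512, so it is accepted.
votes = {i: "0xabc" for i in range(400)}
votes.update({i: "0xdef" for i in range(400, 460)})
print(accept_header(votes))  # -> "0xabc"
```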
So that's what this looks like for Ethereum, and we're now bringing this kind of primitive to optimistic rollups. To kick things off,
I want to start by asking Sreeram what he views as the role of EigenLayer in the creation of these new light client designs, for rollups or for any kind of situation.
Yeah, thank you so much. I think one way to think about EigenLayer is: you get crypto-economic security. So what is crypto-economic security? There is a service, and the service gives a certain output.
Let's say it's some computation or calculation on a state, or whatever. And you want this output to have certain integrity.
So then you ask what the methods are by which we can get this kind of integrity. And one broad class is crypto-economic integrity. Crypto-economic integrity means this claim, that some computation or output state is correct, comes with a certain amount of economic safety.
What does economic safety mean? It means that if that claim was wrong, I know that some amount of money can be slashed or burnt. So this would be like what happens in Ethereum.
Ethereum has a $75 billion stake, and the chain has crypto-economic integrity in that you know that two conflicting finalized blocks will not happen, because if they do happen, some large fraction of the $75 billion can get burnt.
So there's one kind of crypto-economic safety, which is purely the idea that if you run a service, the service has enough stake behind it, which can come through EigenLayer staking, where each staker either stakes fresh or restakes what they've already staked in Ethereum, and then they participate in making promises.
And if those promises are broken, they will lose their money. But what is one level deeper, and new, that we've brought in is what we call attributable security. Attributable security is the idea that when a service makes a claim, and that claim turns out to be wrong later on, not only do you know that some amount of money can be burnt,
but there is a portion of that money, your attributable security, which you as a service can uniquely redistribute to your own harmed users. Because, you know, at the end of the day, somebody's depending on the service; if they got rugged, let's say, by the validators, then the right thing to do, the right karma, is actually to snatch the funds away from the attackers
and redistribute those funds to the harmed parties. So this is a new framework we're building in EigenLayer called attributable security. This lets anyone run a company, run a service, and if this service is wrong, you know that you have a certain amount of economic safety backing it, and you will be able to redistribute it to the users who consumed that computation.
So this is a general phrasing of EigenLayer and what kinds of crypto-economic services can be built on top. Maybe Ismael can dive into particularly how this relates to light clients.
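As a rough illustration of the attributable-security idea Sreeram lays out, here's a hedged sketch; the burn/redistribute split, the function names, and the figures are assumptions made for the example, not EigenLayer's actual interface:

```python
# Sketch: when a claim is proven wrong, slash the signers' stake, burn a
# portion, and redistribute the attributable portion pro rata (capped at
# each user's loss) to the harmed users. Parameters are illustrative.

def slash_and_redistribute(operator_stakes: dict[str, float],
                           harmed_users: dict[str, float],
                           attributable_fraction: float = 0.5):
    slashed = sum(operator_stakes.values())          # total stake seized
    redistributable = slashed * attributable_fraction
    burned = slashed - redistributable
    total_harm = sum(harmed_users.values())
    payouts = {user: min(loss, redistributable * loss / total_harm)
               for user, loss in harmed_users.items()}
    return burned, payouts

# Two operators staked $40M and $60M; users lost $10M and $30M.
burned, payouts = slash_and_redistribute({"op1": 40e6, "op2": 60e6},
                                         {"alice": 10e6, "bob": 30e6})
print(burned, payouts)  # both users are made whole; the rest is burned
```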
I think that's very well put. And I think what makes attributable security such a powerful primitive with light clients is that, when we look at the design of the Ethereum sync committee, there is no slashing condition that in any way remits any type of payment, or any type of correction for an incorrect attestation, back to any harmed parties.
So as people use light-client-based bridging approaches from Ethereum, and are proving the validity of attestations from the Ethereum sync committee,
what you typically have is a very high degree of leverage on a somewhat small set of nodes, and no mechanism to actually deliver any type of remediation to anyone who has been wronged by a bad attestation.
And so with the state committee and some of the work that we're doing at Lagrange, we're designing a new light client that is specifically for optimistic rollups, and specifically with attributable security at its core, where in the event that there is an incorrect attestation of state, an incorrect block that has been signed by the committee,
there will be a payment made to the nodes, or the cross-chain protocols, that consumed that state.
And as we're designing the state committee, one of the very interesting properties that we get from it, thanks to some of the cryptography that we have underpinning it, is the ability to scale the number of nodes that are supported to be unbounded in size, or bounded only by the entropy of the set.
And what this lets us do is design committee structures that can scale to far larger sets of nodes, and start really being able to benefit from some of the superlinear security guarantees that you tend to find with very large node sets.
I think Babis can talk a little bit about some of the cryptography we have underpinning our light client design as well.
Right, right, right. So basically there are two things happening at the same time. The first thing is: what is the claim, what is the statement that the sync committee members are going to sign?
So this is basically: is the statement correct or not? This is one thing; if the statement is not correct, it's taken care of by slashing conditions and so on.
Now the other thing is, whatever is being signed, we need to ensure that the signature is an aggregate signature from a subset of nodes out of the total set of validators.
So how do you prove that an aggregate key that is being used to verify the signature from a subset of validators indeed contains a certain number of public keys from the total set of validators?
This is where Lagrange's technology comes into play. We're able to produce these proofs that these aggregate keys contain a certain number of public keys, of BLS keys, from the total set of validators.
We can compute this proof very fast, in a massively parallel fashion. And, you know, this proof contains a lot of things: you need to prove membership of the keys that are participating in a certain commitment, and then you need to do elliptic curve operations within a circuit.
It's very hard to do, right? So parallelism is very important. This is what we're doing. Now, this is one part of our technology, but the most important part of our technology, in my opinion,
and this is what really, really enables very fast public key aggregation, is what happens after the next block comes in, if some people from the validator set drop out or come in.
In other words, if the aggregate public key changes because some people decide that they're not going to sign next time, right?
Instead of recomputing this SNARK proof from scratch, our technology enables us to very quickly and very efficiently update this proof for the new aggregate public key, right?
So by having these proofs for the new aggregate public keys produced fast, we can feed these proofs into the circuit that consumes them and produces the final proof for the sync committee, and eventually have a proof of attestation for the new block.
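Here's a toy model of the incremental update Babis describes, with integers mod a prime standing in for BLS curve points (an assumption purely for illustration). BLS public keys aggregate by group addition, so when the signer set changes you can adjust the aggregate key, and in Lagrange's design the accompanying SNARK proof, incrementally rather than recomputing from scratch:

```python
# Toy stand-in for BLS aggregation: keys live in an additive group, so
# the aggregate of a signer subset can be updated in O(churn) when some
# signers leave and others join, rather than re-summed from scratch.
P = 2**61 - 1  # toy modulus; real BLS keys are points on BLS12-381

def aggregate(keys):
    acc = 0
    for k in keys:
        acc = (acc + k) % P
    return acc

def update_aggregate(agg, keys_out, keys_in):
    for k in keys_out:
        agg = (agg - k) % P
    for k in keys_in:
        agg = (agg + k) % P
    return agg

validators = {f"v{i}": (i * 7919 + 1) % P for i in range(100)}
signers = {f"v{i}" for i in range(67)}  # the subset signing this block
agg = aggregate(validators[v] for v in signers)

# Next block: v0 stops signing, v80 starts. Update instead of recompute.
agg2 = update_aggregate(agg, [validators["v0"]], [validators["v80"]])
assert agg2 == aggregate(validators[v] for v in (signers - {"v0"}) | {"v80"})
```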
Got it. Yeah, thanks everyone for running through that. I think that gives everyone a clear idea of what the state committee is, but just to recap really quick.
So with all the cryptography and the AVS architecture underlying this, essentially what it boils down to is a set of nodes which are attesting to the current state of the optimistic rollup, in this case Mantle,
and also attesting to the fact that the current set of committee members who attested to this state are valid and able to commit.
You don't want people coming in, attesting to the state, then leaving and withdrawing their economic stake. That would mean it's insecure, and you can't actually use what they've staked for attributable security.
So essentially that's what it is. We've got a set of validators, and the set can now, because of some of this amazing cryptography work, grow to be very large in size.
So we're no longer limited by some of the boundaries of Tendermint consensus and O(n²) messaging overhead.
You can get a very large set of validators attesting to this state, and they can attest to this state as the block actually happens, which means that you now have a secure way, granted, under different security assumptions than you would get from full finalization after some days,
a secure way of consuming optimistic rollup state, backed by a large amount of cryptocurrency capital.
And the fact that this capital is large and also distributed among a large set of validators is super important to get the level of security guarantees here.
Ismael touched upon superlinear security, and I want to touch on that later on.
But first, I just want to mention that at Mantle, we've been working with EigenLayer for almost two years at this point, and we've also been bootstrapping this state committee primitive with Lagrange since probably last September.
So we've been really trying to push this forward to address some of the current issues with optimistic rollup interoperability. And as a continuing show of that support,
I'm really happy to announce that not only are we going to be the first instantiation of the state committee for an optimistic rollup, but also, to show our commitment and to give the state committee the level of security it needs to actually function properly,
we'll be putting in $10 million of Mantle from our treasury to initially seed the state committee deployment.
This will eventually be permissionless: anyone can join and add their own stake or delegate to a node operator.
But this initial amount will provide a baseline security guarantee that will actually be enough for developers to build off of it.
But to get into why that is, first I think it'd be interesting to discuss what some of the pitfalls are with bridges and other kinds of fast messaging approaches that have been tried in the past.
I think everyone is familiar with some of the notorious failures of bridges in the past, but yeah, I'd like to turn it over to anyone who wants to chime in to discuss what's been wrong with the previous paradigm and how this fixes it.
Yeah, when we were designing the state committee, what we really wanted to move away from was this idea of isolated security guarantees per bridge.
We felt, when we looked at modern bridging architectures, that there were large numbers of nodes that every single cross-chain protocol had to run for every rollup it supported, which made it increasingly difficult, in a modular future, for protocols to effectively
interoperate across the thousands of potential rollups. And so what we wanted to do was shift the imperative away from the cross-chain protocol building an independent security guarantee around the rollup, and onto the rollup itself being able to demonstrate security over its state.
And so with the state committee, what we functionally do is enshrine a degree of security in cross-chain state into the rollup itself, backed by both restaked Ethereum and the staking of that rollup's native token, such that arbitrary cross-chain protocols can plug into it and start messaging and bridging rapidly without compromising or fragmenting security guarantees.
And it's the concept of pooled security, which becomes unlocked when you start having restaking, more efficient aggregation, and some of these more complicated primitives.
And Mantle's commitment to supporting this has been something that has been very helpful in the development of the state committee, and it's something that we're very excited to continue to work with them on as we move towards our public deployments of this.
Arun in particular has been working with us for quite some time, both on getting this to be something that can be deployed on Mantle, as well as theorizing ways that we can improve and enhance upon the design that we originally came forward with.
Thank you for the kind words, but I really feel like this is mostly your effort. But I'll happily take any credit you want to send my way.
Cool. Yes, Sreeram, I don't know if you have anything to add to this discussion. I'm curious about your thoughts on not just the state committee, but in general, how EigenLayer can potentially help interoperability.
Yeah, absolutely. I think there are some really amazing things about this particular set of problems that Lagrange is trying to tackle with the state committee, and how it fits with the EigenLayer roadmap,
and that's what I'm going to comment on. The first thing is, when you're looking at interoperating across L2s, one of the powerful things that you have is eventual correct settlement of these L2s on Ethereum.
So you will know, eventually, on Ethereum, what the correct state of these optimistic rollups is. I mean, even though the talk here is framed as a new paradigm of light clients for optimistic rollups, you could also apply this to ZK rollups, which only post proofs infrequently.
But the idea is, when you have an L2, you are basically getting eventual correct settlement of state to Ethereum. Why is this relevant?
When you want to build a system of crypto-economic safety on EigenLayer, you need to have a mechanism by which you can eventually slash, or prove that the set of operators who made a claim were malicious.
How would you do it if the L2 is settling back to Ethereum? Eventually, you know what the right state is, and you can use that to actually slash the malicious operators.
So that's the first point: the precise fit of this kind of technology with what we're doing with EigenLayer, and the beautiful complementarity.
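A minimal sketch of the slashing hook Sreeram is describing, assuming a hypothetical contract that compares what the committee signed against the root the rollup eventually settles on Ethereum; all names are illustrative:

```python
# Because the rollup eventually settles its correct state root on
# Ethereum, a wrong committee attestation is objectively provable after
# the fact: compare the signed root against the settled root and slash
# the signers on a mismatch.

def operators_to_slash(block_number: int,
                       attested_root: str,
                       settled_roots: dict[int, str],
                       signers: set[str]) -> set[str]:
    settled = settled_roots.get(block_number)
    if settled is None:
        return set()  # not settled yet; nothing is provable either way
    return signers if attested_root != settled else set()

settled = {100: "0xaaa"}
print(operators_to_slash(100, "0xbbb", settled, {"op1", "op2"}))  # both slashed
print(operators_to_slash(101, "0xccc", settled, {"op1"}))         # {} for now
```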
The second point is, when you think about L2s, the paradigm of L2s is what I would call unconditionally correct bridging.
The idea is, when you have two chains, and you want to construct a bridge between these two totally different chains, what could happen is, over time, one chain forks,
and because of whatever problem: the validator set was malicious, they signed on an invalid block, they promised a block header for which data was unavailable,
they withheld data or censored, whatever the case. There may be a case where one chain forks, and the bridge therefore de-synchronizes between the two chains,
because the bridge is not able to understand that the other chain has forked.
And so this leads to: there's no way to do unconditionally correct bridging across two disparate chains.
And the idea that Ethereum pioneered was having a common trust zone on which all these different systems settle, so that you have a common reference frame, which is Ethereum.
So L2s are a system of unconditionally correct bridging. But, you know, it is slow. That's the main problem.
So if this bridging is slow, what can we do to make it fast?
But if the thing we do to make it fast loses the rigid properties that you get from these L2s, which is that you're getting really high-grade security, then, you know, maybe there is not a big point, because most people are going to use the fast bridge,
and they don't get that much safety. So what's the whole point of these L2s?
And that's really where I see the Lagrange approach fitting in: because you get this measurable, sharp amount of crypto-economic safety, which is attributable and redistributable.
Imagine there's a bridge which is moving $100 million of weekly volume across the bridge.
If it holds an insurance bond of more than $100 million, then this bridge is unconditionally safe, because either the committee was correct and they did the right thing,
or the committee was wrong, and you will be able to slash and redistribute that $100 million.
So you're getting really the same high-grade security that L2-to-L2 bridging was originally meant for.
So that's the second point: the idea of these state committees and restaked security operating these L2-to-L2 bridges really upholds the high grade of security that the original L2-to-L2 vision was meant to provide.
And then finally, there is a super interesting feature of shared security, which we call the elastic scaling of security.
Imagine there are lots of different bridges, each of which needs different amounts of security at different times, because, you know, sometimes there is a lot of demand to move from Arbitrum to Optimism, or from some other thing to some other thing.
And each of these has randomly varying security requirements. Instead of provisioning a static amount of stake for each of these things to cover the worst case:
Let's say a bridge does volume between $10 million and $200 million in a given week.
Now, you know, if you were to design a separate thing for each of these, you would have to hold $200 million in security separately for each of these systems.
One of the really powerful things that EigenLayer offers is what we call elastic scaling of security, which means you go and consume only as much security as you need.
And you have a common pool which averages out these random fluctuations across many, many systems, including bridges, but also other things.
So what this does is it reduces the cost basis.
You know, Amazon's cloud service has what it calls elastic scaling of compute, because you can get as much compute as you need.
And that's really what we can replicate here is elastic scaling of security. You get as much security as you need. Security as you go, pay as you need kind of a model.
So that's the three things that come to mind in this context.
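To make the elastic-scaling point concrete, here's a small simulation under assumed numbers (the $10M-$200M weekly range comes from Sreeram's example; everything else is made up): because the bridges' demand peaks rarely align, one shared pool sized for the pooled peak is far smaller than provisioning every bridge for its own worst case.

```python
# Pooled vs. per-service security provisioning for 10 bridges whose
# weekly volume fluctuates independently between $10M and $200M.
import random
random.seed(7)

WEEKS, BRIDGES, LOW, HIGH = 52, 10, 10e6, 200e6
weekly = [[random.uniform(LOW, HIGH) for _ in range(BRIDGES)]
          for _ in range(WEEKS)]

separate = BRIDGES * HIGH                   # each bridge provisioned for worst case
pooled = max(sum(week) for week in weekly)  # one pool sized for the joint peak

print(f"separate provisioning: ${separate/1e6:,.0f}M")  # $2,000M
print(f"pooled provisioning:   ${pooled/1e6:,.0f}M")    # much less: peaks rarely align
```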
That's fascinating. Yeah, essentially, we have this insurance pool, although I think calling it a light client is much sexier and also better highlights
some of its capabilities. Because we have this, we're essentially able to do fast messaging and fast bridging without some of the non-explicit security guarantees that have occurred in the past.
And to highlight this, I think we can look at not only some of these previous bridges, which have had small quorum sets that have been compromised
and have created issues for their LPs, but also at how we open up the space of possibilities, not just for bridging of assets, but for arbitrary message passing.
We see that it's very important for this pool to have a certain level of stake behind it, because oftentimes the people who are hurt downstream are not even the people who are using the bridge.
For instance, imagine a cross-chain liquidation or a cross-chain lending protocol in which you have assets on one chain and you are borrowing on another chain.
You have to send a message back from chain two to chain one very quickly to liquidate, because the assets on the other chain have fallen in value.
If that message is invalid, then the party who has been harmed is the LP of the lending protocol, or the lending protocol itself, whereas the person who sent the message may be a liquidation bot operator or something like that.
So it becomes very difficult, I think, for protocols to build on bridging infrastructure that isn't secure, because they have to worry about these negative externalities from malicious actors who have every incentive to try to harm them.
But if these messages are being passed through this kind of credibly neutral light client, or reporter of state, suddenly the level of trust becomes high enough that it becomes possible to build applications like this, maybe even for the first time.
And that's something that we're very excited about at Mantle, exploring these kinds of cross-chain applications.
Not only for assets, but to send messages from the L2 to the L1, from L2 to L2, without waiting for seven days to pass, or, on ZK rollups, without even waiting a few minutes for a ZK proof to settle on the L1.
And also potentially even passing messages between Mantle and other L1 ecosystems like TON or Bitcoin. Really, the possibilities here are endless, and I'm very excited to see what developers build off of this primitive for that reason.
But I guess that's an interesting digression. Let's turn it over to Babis really quick to talk a little bit about how this message passing actually occurs, because there's some really interesting stuff under the hood with Lagrange's ZK MapReduce (ZKMR) framework.
Yeah, sure, absolutely. So basically the idea is that whenever you want to verify, you're presented with an aggregate public key and a claim that says: look, this aggregate public key represents two thirds of the total set of validators.
Right. So you have two options: either you trust the statement or you don't. Clearly you cannot just trust such a statement, and you cannot easily check the statement.
So the way to do it is to produce a proof that basically says: this aggregate public key is the product of specific keys that are contained in this commitment to the whole set of validators.
OK, so there are different ways to do that. One way is to compute a sort of monolithic SNARK for that. But this is too expensive.
So we have a way to do it in a massively parallel fashion, so we can compute these proofs fast. Now, what happens whenever you change the aggregator set, the people that sign?
The question is, how do you update this proof? One way is to compute it from scratch, but what we're doing instead is, we found an algorithm that can update these specific proofs extremely fast, so that we can keep up with the production of new blocks. Right.
I think keeping up with the production of new blocks is a very important point here, especially when we're talking about generating these proofs for optimistic rollups, where the block time is two seconds for Mantle and 0.4 seconds for Arbitrum, and where we really need to be able to generate proofs over very, very large quorums very quickly.
And the updatability of the aggregation is the key differentiator that lets us do this.
Yeah, that's a really important point. There's a tremendous amount of optimization going on under the hood, both on the protocol side and on the underlying cryptography and ZK side, to make this happen.
It's pretty fascinating if anyone wants to dive down the rabbit hole.
But yeah, I alluded to a few different applications that I thought were interesting to build off of this, but in the time we have left, we'd love to discuss more.
What types of things are you all dying to be able to build using this architecture? Like, what would be a killer application? What kinds of things do you see people doing with it?
Not just now, but maybe even in the future, when we have a network of these light clients all interacting with each other.
I think, sorry, go ahead.
No, no, go ahead.
I was going to say that I think what makes it very interesting when you have more fluid forms of state interoperability is not just bridging, where it's very easy to underwrite the value that's being transferred, but more complicated message passing,
where the value of a message that might have a chain reaction in some downstream cross-chain protocol is very, very difficult to assess a priori.
Where you're thinking about a price feed, a TWAP of some asset on Ethereum, that's going to be used to price an auction on Mantle, or vice versa.
Where you don't really know the maximum amount of value that could be compromised in the event that there's a breach in the cross-chain protocol.
And so when you start having more robust forms of state security, you can start building things that compose on top of cross-chain state in new and dynamic ways.
And there's a lot of applications you can build to this end that we're quite excited about.
So I think you're going to see an increasing number of DeFi primitives between chains that are going to be very powerful.
You're going to see primary-market LST rates from the L1 consumed on the L2, and vice versa, as LSTs start structuring aspects of their pricing based on multi-chain state.
I also think you're going to see new cross chain pathways open up, where it becomes easier and easier for emerging ecosystems to have a robust set of cross chain integrations out of the gate.
Where you spin up a rollup today, you might really not have access to the cross-chain protocols that originate the majority of the volume, right?
It typically takes time for your new app-specific rollup to have LayerZero and Axelar and Hyperlane integrations.
But as we think of this future state where people can permissionlessly create block space, there needs to be the ability to permissionlessly create interoperability endpoints for that block space.
And we think of state committees really as this way to create these endpoints.
And so as you have these networks at scale, you have cheaper access to state between chains, more secure access, and you also have permissionless access to state, which we think opens up a lot of other opportunities as well.
I have a crazy new idea.
It's related to the whole recent discussion in the Ethereum community on the client diversity.
That is, to face the problem briefly, you have different execution clients, like Go Ethereum, Geth, and many others like Nethermind and so on.
So one of the things, one of the principles that Ethereum was designed with is to have many different implementations of the execution logic.
And the one problem is that Ethereum has no kind of way to understand who's running what client and to specifically incentivize diversity.
So there was this whole discussion in the community about, like, hey, what happens if there is a bug in one of the clients, let's say in Go Ethereum? Then what happens to the state?
What happens to people who depended on this output? What should we do?
There is a protocol in place whereby, if it's a majority client and it finalizes, then those validators can get slashed. There's all this complexity around it.
And one of the things, as I was thinking about what Lagrange is doing broadly, is it might be really interesting to say, for example, a state committee, when it's signing off on a roll-up state, not only verifies it according to one execution engine,
but verifies the state, and maybe there's a fraction of nodes, maybe 20% of the nodes verified with Go Ethereum execution, 20% verified with Rust Ethereum and so on, and different ones verified different things.
And the quorum is only formed if you have many of these different things agree on it.
And so, you know, there is a way to bring client diversity to these L2s, even before client diversity becomes common on these L2s.
And also, you could bring client diversity to L1 light clients, even if Ethereum itself doesn't have the client diversity profile that we would ideally like.
So that's a really, you know, it's a new idea. It's not well thought out, I just thought of it right now, but I would love to hear from Ismael or others if there are any thoughts.
That's very fascinating. Yeah, I mean, you really could impose any really arbitrary condition for the diversity of clients, you know, with this mechanism.
I think the only question I have is on the degree to which it's attributable, whether you could determine from the signature what client they were running, back on the L1.
But I think there's definitely ways that we could likely design something like that, and I think it would open up a ton of very interesting use cases and a ton of very interesting ways to build L2s that are potentially more robust,
or even build something that monitors Ethereum state with a very similar design to the Ethereum light client, that is focused specifically on incentivizing or punishing client diversity.
Yeah, there is the high-grade requirement of, like, everything being on-chain verifiable, but there are more social ways that, you know, we can incentivize this kind of diversity when people are opting in and saying that, hey, you know, if you're opting in in this quorum, you have to run this particular client.
And, you know, that's how they get their incentives and rewards. So there is, because it's a new layer of the Lagrange State Committee, there is a potential to innovate on the kind of both positive and negative incentives to make sure that people comply with these diversity requirements.
That's fascinating. And I imagine, especially if we think about other execution spaces that have potentially a lower number of overall validators and even less diversity, this becomes increasingly relevant.
Or if you think of, like, a gnosis, for example, that is going to have less client diversity than Ethereum.
Absolutely.
Yeah, actually, I think it's a really cool idea, and it definitely addresses a very core issue with Ethereum, which is client diversity. Like people said, there's some level of adherence at the social level.
Like, I remember back a month ago, people were making a lot of noise about Lido's growing dominance. We saw Lido's dominance hit about 38% and just kind of stay there, for whatever reason, which is pretty funny.
I'm not sure what it is right now, but for a while it was stuck there. I think there was some level of people feeling the need to diversify out and prevent Ethereum from suffering some kind of potential attack vector from centralized staking.
Yeah, but I think if you can enforce it with incentives, it becomes a lot more robust, for sure. And something like this could definitely do that. You could incentivize it, for instance, with some kind of,
I mean, don't take my word for this, some kind of balancing mechanism, which balances incentives based on the relative rarity of a particular client infrastructure.
And then at that point, verifying them is very difficult. But there are a few ways potentially to do it. I guess the most computation-intensive way would probably be to basically ZK-prove the state root calculation.
And then you can essentially verify which client was used to calculate it, just because you have that proof. But even then, you would need a way to differentiate between the clients, and it puts a lot of burden on those clients to calculate a proof whenever they submit an attestation.
But yeah, definitely fun to think about. Thanks for bringing that up. I don't know if Babis has any thoughts on that particular aspect of it.
Yeah, actually, the fact that we can have many clients attest to the root, I think this is definitely going to increase security. And from a ZK perspective, the fact that you don't have to put all of this in one monolithic circuit means you can potentially highly parallelize this attestation and use a sort of recursive SNARK technology to compute such proofs even faster.
Okay, nice. Yeah, that was super cool and interesting, the kind of thing that can really happen when you get a bunch of high-level experts on a call just jamming on the subject. But yeah, that was a really fun conversation.
I guess in the last few minutes, I would love to dig into this concept of attributable security versus superlinear security, and the actual quantification of some of these things, which is a bit of a tricky topic, because I think we want these light clients to have a quantifiable amount of value they're securing relative to the amount of stake that's being committed,
and ideally for the amount of value being attested to to be less than the actual amount of stake securing it. But I think in practice this might break down, or you'd need to put limiters in place to make sure it holds.
And also, once we open up arbitrary message passing, it becomes very difficult to, for instance, determine the economic impact of sending a primary market rate from Mantle to Optimism.
Like, what is the potential impact of that? It really depends on, you know, how many different protocols are using that rate for critical protocol functions, and things like that.
So yeah, it's a very general question, but how do people view this paradigm of attributable security moving forward, in a situation where such security can't be easily quantified and the state committees could potentially become over-leveraged?
So, in our view, we think of the concepts of pooled security and attributable security as things that have a very high overlap, where effectively, if you pool security for a number of cross-chain protocols that would today be running independent sets of validators attesting to the state, what you get is holistically a more secure system.
It can support more leverage: larger pools of collateral can empirically support higher degrees of leverage than smaller pools of collateral.
And I think the concept of having something that is leveraged is oftentimes uncomfortable, but oftentimes very reasonably secure, where the attributable security you would need for a very, very large cross-chain ecosystem might not be at parity with the actual amount
of value secured, if the amount of value secured is very hard to quantify. And the example I'd look at is Ethereum itself: the Ethereum validator set functionally secures a higher value of ETH than is behind the consensus validation.
And so functionally, what you have is a validator set that secures more than it actually has at stake, but given the size of that validator set and the cost of attack to compromise it, it can do so.
And when we think about this concept of being able to have very large pools of capital that support a higher degree of leverage, the concept of attributability for us is really about what degree of insurance you need on that amount of capital.
For example, think of a bank and FDIC insurance: the amount of insurance the bank has on the assets secured there is going to be less than the total assets, but still enough that, in the event that something happened with the bank, a large number of participants would be able to be made whole.
So holistically, the system is a lot safer than if you did not have that degree of insurance.
Yeah, that makes sense to me. Sreeram, I don't know if you have any additional thoughts on this topic also.
Yeah, so, you know, the way the EigenLayer protocol is designed is for each AVS to come up with their own navigation between how much pooling is enough, how much attribution is needed, you know, what is the scale at which they perceive security to be good enough.
All of these are subject to each AVS making these determinations. From our end, some of the hard problems that we've had to solve are, for example, how do you guarantee attributability unconditionally. What I mean by unconditionally is: different sets of stakers opt in to different subsets of services.
And, you know, what may happen is that, at any given time, any subset of validators may collude and attack any subset of services.
And what we have to guarantee, when we're talking about attributable security, is that no matter how many services trigger slashing simultaneously, everyone's attributable security is honored.
So if there are, you know, many protocols, and they all trigger slashing simultaneously, we have to ensure that the protocol is solvent to honor all this attribution, because the attributable portion of security that EigenLayer is offering is totally over-collateralized.
So suppose there's $10 billion at stake on EigenLayer; the total amount of attributable security sold will be less than $10 billion.
Not only that, we have to honor whatever we're selling to each of the different services; the protocol has to be solvent, which means if you trigger slashing simultaneously for any subset of services, for any colluding subset of stake, the protocol is able to honor all of it.
So we've just built these crypto-economic, you know, Swiss-army-knife kinds of tools. And it's up to each AVS to understand and calibrate what is enough for each of these applications.
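A tiny sketch of the solvency invariant Sreeram states, under assumed names: the attributable security sold across all services must never exceed the total stake, so that every service's claim can be honored even if all of them trigger slashing at once.

```python
# Invariant: sum of attributable security sold <= total stake, so the
# protocol stays solvent even if every service slashes simultaneously.

def can_sell(total_stake: float, sold: dict[str, float], amount: float) -> bool:
    return sum(sold.values()) + amount <= total_stake

sold = {"bridgeA": 3e9, "oracleB": 4e9}   # $7B already sold
print(can_sell(10e9, sold, 2e9))   # True:  $9B  <= $10B of stake
print(can_sell(10e9, sold, 4e9))   # False: $11B >  $10B, would be insolvent
```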
Got it. That makes sense. So essentially you're solving the problem that's one layer up from what the state committee is solving.
Okay, cool. I know we're at time. I want to thank everyone for joining. This is a super interesting discussion. Hopefully, people learned a bit about what the state committee is and kind of what it enables and are very excited to use it and build off of it.
I guess before we sign off, if anyone wants to say any final words about anything, that was your chance. Otherwise, we can just end the space.
Yeah, I guess I just want to thank everyone for coming to this. And I want to thank everyone who has supported us as we've been working on the state committee, as well as some of our other ZK MapReduce products. I particularly want to thank EigenLayer, who has been a tremendous thought partner and a phenomenal partner of ours overall, as well as Mantle Network,
who has been very close with us as we've been developing this. So thank you, everybody.
Yeah, I want to reciprocate by again thanking Mantle. You know, like was stated, we've been working with Mantle for nearly two years. They were the
first project to work with us at the very, very early phase. They were like, okay, you know, there's some real value here that we should actually double down and work with, and we've been really, really delighted to have them as a partner across a whole variety of things, you know,
seeding EigenDA now, partnering with Lagrange on some of the state committee stuff. Really delighted to have Mantle as a partner. Likewise, Lagrange is bringing a lot of really high-quality ideas into the space and solving some really important problems. So I'm really thrilled to have the opportunity to work closely with both of these teams.
Yeah, absolutely. I can't speak for all of Mantle, but it's been a delight to work with everyone at the EigenLayer team and the Lagrange team. It's been a really, really amazing collaboration, and I'm very thankful for it.
Yeah, it's been really, really great to work with both Mantle and EigenLayer. It's pretty fantastic. Thanks.
For sure. Okay, cool. If that's it, then yeah, thanks, everyone, for your time today and for the great discussion. And I guess we can end the space then.
Yeah, thanks, everyone. Have a good day. Thank you.