Thank you for joining us.
I'm Ilya, co-founder of NEAR Protocol.
Really excited to be here to talk through these topics, do a bit of a retro, discuss some interesting next steps, and answer questions.
Bowen, you want to go next?
I'm Bowen, head of protocol at Pagoda.
Very happy to be here to discuss both the NEAT minting and what we're going to do in the future: phase two of sharding and beyond.
I'm the CTO of Pagoda, which is one of the main engineering arms of the NEAR ecosystem.
Excited to be here and talk to all of you.
Maybe a good way to start is a quick review from Eric and Bowen on how the overwhelming demand we've seen with the NEAT launch affected different parts of the infra stack.
What were the unintended consequences, and then we can discuss what we can do to make sure that the quality of service for everyone stays stable in cases like this.
Eric, if you want to start.
So, first of all, I'll say all of this information is secondhand.
We didn't know about the launch of NEAT.
The blockchain is an open permissionless decentralized system, and anybody can do whatever they'd like on it within the constraints of what the protocol allows.
So, somebody decided to build NEAT.
It's what they're calling the first inscription standard.
Essentially, it's a token launch.
Not unlike any other token launch, including the NEP-141 tokens you could launch on NEAR.
The difference with NEAT is that it was intended to launch a ton of tokens that you could mint yourself on a one-by-one basis, paying a pretty low gas fee for each particular mint.
And the creators of it decided to kind of gamify it.
Again, it seems like they're having some fun with this, and we don't mind people having fun with this.
But what that resulted in was a lot of people minting NEAT, sending millions and millions of transactions to the NEAR protocol, each of which was designed to mint what they've now actually rebranded as one NEAT token per transaction.
Then other people picked up on that: the creators of Sender Wallet (if we've got any Sender Wallet users here, shout out to you) built another website.
It's unaffiliated with the original NEAT one; you could just type in any number of tokens, and it would go and mint that many from your Sender Wallet.
So this resulted in a flood of transactions to the NEAR protocol.
It ended up having the effect of stress testing the network, kind of an unscheduled stress test, to see how much traffic the network could take.
Now, a lot of things went well.
A lot of the protocol held up just fine as advertised.
Some parts of it didn't do as well as we'd hoped, especially some of the tooling that exists sort of outside of the core protocol.
Things like RPCs and indexers, some of that underlying infrastructure, and then the applications that are built on that infrastructure, like wallets and end user applications.
So we can get into a little bit about what happened there and what we're doing looking forward, but I'd love to pass it over to Bowen to talk a little bit about what happened within the core protocol and how it handled the massive increase in the number of transactions.
Yeah, so basically what happened is that there were a lot of transactions to this contract that resides on one of the shards.
Today, on NEAR, there are four shards, and the NEAT inscription contract, inscription.near, lives on shard 2.
And what happened is that because there were a lot of transactions calling into this shard, it created congestion: essentially, delayed receipts piling up in the shard.
And it's actually not only that shard, because there's high demand coming from other shards, with transactions going from other shards into this one.
In some cases, not always, it also caused some congestion on shard 0, and occasionally on shard 3.
And what this means is that for some users, if you were interacting with, let's say, another contract on shard 2, you may have seen your transaction take a very long time.
And unfortunately, all the tooling was not really well equipped to handle this case, because it's not the normal case.
And so there were some unexpected errors.
People saw a timeout error when they sent a transaction, but it would actually end up being successful.
It just took a long time to be processed.
And then another effect of this is that the protocol has a mechanism to defend against spamming by adjusting the gas price.
Essentially, the idea is that when the protocol is heavily loaded, or it's reaching the point of congestion, the gas price increases, and actually increases at an exponential rate, even though the base is quite low.
It's something like 1.01 per block, but still, it's an exponential increase.
And this is to defend against people continually spamming the network.
And it actually worked in this case.
I think at the peak, the gas price spiked 20x.
But obviously, this is not great for the users who are still trying to use the network at the time of congestion.
Many people reported that they spent way more than expected on their transactions and so on.
And unfortunately, another thing is that I don't think any of the wallets are aware of this mechanism.
And I don't think the gas price is really displayed anywhere.
So maybe people didn't notice.
And I know that a lot of people complained about this as well.
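For readers who want to see the shape of this mechanism, here is a minimal sketch of per-block gas price adjustment, assuming an illustrative 1% adjustment rate and a half-full block threshold; the real formula and constants live in nearcore.

```typescript
// Sketch of exponential gas price adjustment under sustained congestion.
// ADJUSTMENT_RATE and MIN_GAS_PRICE are illustrative assumptions.
const ADJUSTMENT_RATE = 0.01;      // ~1.01x per fully loaded block
const MIN_GAS_PRICE = 100_000_000; // yoctoNEAR, assumed floor

function nextGasPrice(gasPrice: number, gasUsed: number, gasLimit: number): number {
  // Blocks more than half full push the price up; emptier blocks pull it down.
  const fullness = gasUsed / gasLimit; // 0..1
  const adjusted = gasPrice * (1 + ADJUSTMENT_RATE * (2 * fullness - 1));
  return Math.max(adjusted, MIN_GAS_PRICE);
}

// Sustained full blocks compound: ~1.01^n after n blocks. At one block per
// second, a 20x spike takes roughly log(20)/log(1.01), about 300 blocks (~5 min).
let price = MIN_GAS_PRICE;
for (let i = 0; i < 300; i++) price = nextGasPrice(price, 1_000, 1_000);
console.log((price / MIN_GAS_PRICE).toFixed(1)); // ≈ 20x
```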
Yeah. Ilya, do you want to?
Yeah, maybe just to add a little bit more context.
So NEAR, although it looks like a single blockchain, underneath actually has a so-called sharding model, which allows it to parallelize the state and the compute of transactions.
In a way, there are multiple blockchains, but it's all abstracted away so that users, and to some extent developers, don't have to think about it.
And so as something like this was happening, as Bowen said, the inscription smart contract was only on one of the shards.
So some other shards actually continued processing fine, but because there are different contracts on different shards, and some of those were being used as well, it did affect things more broadly when those shards were utilized.
But the idea is that when transactions are being processed, they are put in a queue once the cross-shard communication starts, and that queue is processed as the blockchain produces blocks.
And that on its own is a pretty different model from what some other blockchains use, where transactions sit in a mempool and compete for inclusion.
Eric, do you want to take over?
So, again, this was a huge spike in transactions on the NEAR protocol.
That in itself is something we want: we are hoping to see more and more transactions on the NEAR protocol.
Maybe not as big a spike as we saw, but this has happened on blockchains before.
I mean, the Bitcoin chain really started filling up for the first time back in, I want to say 2016, 2017, maybe a little earlier than that.
The Ethereum blockchain started filling up and gas prices started increasing around the 2018, 2019 timeframe.
And in each of those cases, we saw some of the same effects that we saw on NEAR over the last week, where the core protocols, obviously built from the ground up to scale and super well tested, generally held up fine and implemented their anti-spam gas price increases.
What we saw in those situations previously, and on NEAR in the last week, was that some of the services built in and around the protocol, certainly including wallets, applications, and some of the underlying services they use, weren't necessarily built to handle the same scale as the underlying protocol.
In addition to some of them falling over just for lack of computing resources, some of them aren't yet built to properly handle what the user experience should be if there's a big gas price spike.
Now, for those of you who use Bitcoin or Ethereum, you're probably well aware of having to play those fee games and figure out exactly what fee you're going to pay for a transaction, or perhaps rebroadcast with a different fee.
That's something you're not generally used to doing on NEAR; you're not generally used to thinking about the gas price, and to be clear, we want to keep it that way. We want NEAR to be a chain where how you're going to get a transaction into the network doesn't need to be top of mind, and we can certainly go into more detail about how we're going to keep it that way as the network scales.
But that's still something that the wallet software you're using, and the applications you're using, are going to have to understand how to handle.
And so that's a process we're basically kicking off as of last week: ensuring that all the underlying infrastructure that backs the applications you're using, whether that's your wallets, your end user applications, or the layers that sit under those, like indexers and RPCs, is built to handle scale, and built to handle what happens when the network does something a little funky.
Even though that's very rare, we want to make sure that the software continues to hold up.
Bowen, should we dive into a little bit of what's coming up in the protocol?
Yeah, I think we can certainly talk about that.
So I think there are people who expressed concerns about the capacity the protocol is able to handle before it gets congested.
There are definitely a number of initiatives down the line that will increase the capacity of the protocol overall.
But I first want to say that, for this NEAT minting specifically, one of the reasons the capacity may not have been as high as people were expecting is that
we essentially artificially capped the capacity of the protocol, because one of the gas costs is much higher than it should be.
The reason this wasn't addressed before is that we were thinking in a more defensive mode, using it as a way to counteract any undercharging that there may be,
or that we may not have thought of, on the network.
So in this NEAT minting case specifically, the network could have processed a lot more transactions than it did.
It was limited just because of this mechanism that hadn't been addressed.
Basically, there's a function call gas cost that's much higher than it should be.
So the capacity may not have been as high as what people had hoped.
And then kind of going forward, definitely we will address that issue.
But there are, I think, two major initiatives down the line that will help increase the capacity of the network overall.
One is that in the next release, we're actually going to split one of the shards, shard 3, into two shards.
This is meant to test the resharding mechanism that we have built,
which is a general purpose mechanism that allows the protocol to split any shard into two.
But it will also increase the capacity of the network by allowing the network to have one more shard.
This is going to be included in the next release of nearcore, which is going to happen in a few weeks.
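As a side note on how a shard split can work mechanically: NEAR maps accounts to shards by alphabetically ordered boundary accounts, so splitting a shard amounts to inserting one more boundary. Here is a hedged sketch; the boundary accounts shown are illustrative, not necessarily the exact mainnet layout.

```typescript
// Accounts are assigned to shards by alphabetical boundary accounts;
// the boundaries below are an assumption for illustration (4 shards).
const boundaries = ["aurora", "aurora-0", "kkuuue2akv_1630967379.near"];

function shardFor(accountId: string, bounds: string[]): number {
  // An account belongs to the first shard whose boundary is greater than it.
  let shard = 0;
  while (shard < bounds.length && accountId >= bounds[shard]) shard++;
  return shard;
}

console.log(shardFor("inscription.near", boundaries)); // 2, matching the discussion

// Splitting shard 3 into two is just appending one more boundary
// (the account name here is hypothetical):
const afterSplit = [...boundaries, "some-new-boundary.near"]; // now 5 shards
```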
And then separately, we're working on stateless validation, which, for those who don't know, is what I talked about at NEARCON.
This is a new design for phase two of sharding, and it will also dramatically improve the performance of each single shard,
in addition to accomplishing the original phase two of sharding goal.
And for this one, we are actually thinking about opening up a kind of public testing network before the launch,
and incentivizing people to load test it with transactions and see how the network performs.
We should actually see much better throughput than what we see today, even from every single shard's point of view.
And we're now expecting this stateless validation release to go out potentially at the end of Q1 or the beginning of Q2,
after we do this public testing with the community.
Hopefully my sound is better. Yes.
Yep, coming through loud and clear.
All right. Yeah, so a couple of things to add here.
I think you really covered increasing the general capacity of the network.
But it's also important to note that stateless validation also allows opening up more shards,
because it removes the current limitation where some validators still track all the shards for security purposes.
And so generally speaking, this means NEAR can continue adding shards as capacity needs are expanded by the various applications on the network.
Yep. As a sort of actively managed and actively developed blockchain, we feel we've demonstrated an amazing amount of capacity for bringing on more transactions.
We're just going to keep increasing it from there, always trying to stay ahead of what's coming through on the system, whether that's NEAT, which was a big spike in transactions for a few days, or the next thing, which I'm sure will be a hugely sustained amount of transaction activity happening on NEAR as all the world's activity moves over onto our blockchain.
Exactly. So I think the other thing to mention: NEAT is an interesting example of a single application that leads to congestion.
And this is something we've been thinking through for a while: what are the ways we can make sure that every other application that coexists on the same network doesn't get majorly affected when there's something hot and new coming out that becomes really heavily transacted, while all the other applications
and users have their daily needs and probably don't want to pay higher fees in that moment of time.
Generally, as Eric mentioned, we want to keep the fees low for everyone, in such a way that users don't need to think about it,
and application developers who are covering user transaction fees don't need to think about it, while things like inscriptions, or some other mints, or other explosive events that are happening may justify higher fees for those specific applications.
There's been research in this area around creating local fee markets, where specific applications or zones of the blockchain have a different fee structure.
And obviously there are interesting challenges in the sharded space, because the block space is not monolithic and we have messages being sent between different shards.
But, yeah, I think this is one of the areas where we're going to invest in research: how to actually get to a place where, even if there's an active event like this, where some application has really high usage, everything else, even if it's on the same shard, can continue working.
And ideally with the same fees that they usually enjoy.
Bowen, I know you have more to add.
Yeah, for sure. One of the complaints people had, which I think is a very fair point, is that even if they want to, there's no way for them to prioritize their transaction during a congestion period.
And as a result, especially for some DeFi users or DeFi use cases, they really want to get their transaction in, because maybe there's an arbitrage opportunity or a liquidation or whatever it may be, and they're willing to pay higher fees, but they weren't actually able to do so.
And they suffered equally with everybody else.
So this is something we have been thinking about but hadn't put into motion; we will actually start working on it in the near future:
introducing some kind of transaction priority mechanism, so that even if something like this happens, people have a way to prioritize their transaction if they want.
And also, another thing people pointed out is that one contract essentially caused the gas price on the entire network to spike.
We will look into how to localize that to specific shards, or specific contracts, making sure that, for example, this inscription minting on shard 2 won't affect the gas price on, say, shard 3, where there's not a lot of activity.
So I think those things are what we will start working on in the near future as well.
Yeah, I think a good example of what was happening in this case was that Aurora, which has its own shard, had everything flowing through fine.
And given that they cover transaction fees for their users, their users actually didn't even notice the spike.
And I think Alex Shevchenko was posting that Aurora Labs' infrastructure was all green during this time, because they have implemented some of the scaling mechanisms there.
And also their shard specifically wasn't affected.
So that's a good example of this already working when there's enough virtualization happening on top.
But we can obviously make it more robust across the stack for all applications on NEAR.
I guess maybe that's a good segue for you to cover some of the pieces of the stack above the protocol.
Another reason to highlight Aurora's performance over the last week is that they operate a lot of their own infrastructure.
They operate their own near nodes.
They operate their own indexers, their own applications on top of that.
And that's important for decentralization.
Anytime you're building an application or if you're building a wallet,
you have to make these decisions about whether you're going to use hosted services run by another organization
or whether you're going to run the open source software that's put out by NEAR or by Pagoda and do it yourself.
Now, obviously, there's pros and cons to each.
One of the pros to using, for instance, a Pagoda hosted service is that you can get up and running a lot faster.
You can get your application up and running.
Also, a lot of the Pagoda services are free.
So that's helpful for your wallet.
A couple of the cons: first of all, it can result in a sort of different kind of centralization.
We want to have a decentralized ecosystem where no ecosystem participant, including Pagoda, including Bowen and Ilya,
has outright control of what's happening on the network.
And so the more that transactions are funneled through, say, a Pagoda service, the worse that is for the overall health of the ecosystem.
So we do want to encourage more and more application developers and users to rely on services
either that they run themselves, if they have the technical capability and infrastructure to do so, or services from other providers.
As such, what you'll see over the course of the next few months is that some of the services offered by Pagoda, which are generally offered for free
but without any service level agreements, will probably get some free usage limits; we'll offer some paid services, and we'll have performance guarantees for those paid services.
So you know what you're going to get, and you're also incentivized to explore other options in the marketplace and see what's going to work best for your application.
This can overall help decentralization. It can help everybody not rely on one organization.
It can help everybody in the ecosystem learn how to scale up together as the network scales.
Sounds good. Yeah. And for sure, there are a number of RPC providers in the ecosystem that people can use.
And there's also work on partnering around decentralized RPC provider connectivity,
which would pretty much allow an application to choose and subscribe to one of these RPC providers, pay for it, offer it to their users, and then be able to switch providers when they need to; a marketplace of RPC providers in one place, through Lava Network.
So I think that's also an important evolution in the ecosystem.
I guess a few more things that I want to mention. For sure, the timeouts around transaction sending, when the transaction actually was in the queue and was eventually processed,
led to some of the confusion about whether things went out. That's something that needs to be addressed, to ensure there's a way
for wallets and applications to monitor the status of a transaction a little more deeply than they currently do.
And I think there's actually already been work on the nearcore side to offer much deeper introspection into the status of a transaction.
Yeah, that work is actually, I believe, already done, but unfortunately it didn't make it into the most recent release.
So it's still not available to the tools and applications built on top.
But basically, one big problem with today's broadcast_tx_commit, which is the RPC that everyone uses to send transactions,
is that it's a blocking call: it essentially waits for the transaction to complete.
And it waits for all the receipts the transaction spawned during execution to complete, which also includes things like refunds, which is unnecessary in some cases.
So we have implemented more granular control for people.
For example, you can specify that you only want to wait for the transaction to be included in some chunk before returning to users.
Or you can say you only want the transaction to be executed, without waiting for all the receipts to be executed.
Or you can say you want to wait for the transaction to actually be finalized.
So there are different options people can configure as an argument to the RPC call.
And we believe this will help application developers as well as end users get a better understanding of the status of a transaction, and it may also improve things a lot on the UI side, because you don't actually need to wait for everything to be finalized.
It reduces the overall latency when you're transacting, if you don't need to wait for all of the things to go through on chain once you know the transaction is already included.
And this also helps with the problem when there's some delay in execution.
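To make this concrete, here is a hedged sketch of calling the more granular RPC from a wallet or app; the method name (send_tx) and the wait_until values follow the design described here, but treat the exact names as assumptions and check the nearcore RPC docs for the shipped API.

```typescript
// Granular waiting levels for transaction submission (names are assumptions).
type WaitUntil = "NONE" | "INCLUDED" | "INCLUDED_FINAL" | "EXECUTED" | "FINAL";

async function sendTx(rpcUrl: string, signedTxBase64: string, waitUntil: WaitUntil) {
  const res = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "1",
      method: "send_tx",
      params: { signed_tx_base64: signedTxBase64, wait_until: waitUntil },
    }),
  });
  return res.json();
}

// A wallet that only cares about inclusion (not every spawned receipt and
// refund) can return to the user as soon as the transaction lands in a chunk:
// await sendTx("https://rpc.mainnet.near.org", tx, "INCLUDED");
```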
Yeah, so I think that would be especially helpful in this case. For example, if you're a user sending a transaction from shard 0 and the minting contract is on shard 2, the reason it took a very long time to complete is that after the transaction is processed on shard 0, where there's basically no congestion, a receipt is sent to shard 2.
There it gets put into the delayed receipt queue, which actually took a long time to clear.
But if we have this option for you to configure the call to return once the transaction is included,
then it's much, much faster for people to know that, yes, their transaction was included.
And sometimes that's what they actually care about.
Of course, we want transactions on NEAR to continue to be fast.
A lot of wallets and applications don't have to have these progress bars showing you exactly how your transaction is winding its way through the network.
The reason we're doing this is because we think it's important to have that capability in those applications and wallets, even though ideally it'll end up being used only very rarely, or never, when the network does get congested again.
All right, so I guess to recap: we had a very interesting event on NEAR that created a lot of excitement in the community.
At the same time, it led to massive, in a way, spamming of the network, which in turn had the very consequential result that applications, and many of the users who use the network for daily transactions, were facing issues.
There's a set of mitigations to make sure those issues don't happen in the future. As well, as Bowen covered, we're adding more capacity to the network, and we're ensuring that the network can continue growing its capacity, as the original NEAR plan has always been,
so that we can actually scale with overall demand growth on the network and in Web3 overall.
And at the same time, there's going to be work on mitigations specifically for fees, where ideally, when there is one application experiencing some kind of event like this,
the fees are isolated to that application and its users, not to the whole ecosystem.
Now, that's going to be a research project that will run for a while, and we'll obviously see some of the results of that over time.
And then there are a number of initiatives to increase capacity or improve some of the introspection and tooling, some already in flight and more starting as a result of this, as Eric mentioned.
So maybe it's a good time to open up for some questions, and we can also cover any other topics as well. I'm sure people are excited too;
at least I'm really excited for the phase two test, to see how much capacity we can actually get with stateless validation.
Awesome, awesome, I really appreciate that discussion and yeah, like Ilya said, if anybody has any questions or feedback or any input at all around the topics today,
go ahead and request to speak. I'm happy to get you up here.
All right, so we have a couple of requests. We have one from, let's see, we have some coming in for sure.
So I'm going to go ahead and let Fidget come up to speak. You should get access in about a second.
All right, and you should be good to go.
Yeah, we can hear you loud and clear.
Just so you know, there's a lag in coming up on the space, so you can't hear for about 30 seconds.
But that's cool. I'm actually at Art Basel right now, so sorry for the delay. I'm remembering
that I met who I think is now your CEO. I think he was like 22 at the time, if I'm not mistaken.
And I asked him then, and I'll ask again: can you briefly explain the consensus mechanism and core protocol functionality of NEAR,
besides being eco-friendly?
Sorry, i.e. the benefits of building on it if I so chose, which I'm thinking about.
Sure. I guess there are probably different levels of depth we can go into, but at a high level,
what NEAR is trying to deliver is a really easy to use and build on developer platform
that has layer one capabilities, and that has various account abstraction
and other developer instruments to build applications which can target everyday consumers,
and at the same time, make sure that it can scale and grow with the growth of Web3, broadly speaking.
To do that at the consensus and protocol architecture level,
there's a sharding model implemented.
There's a consensus called Doomslug, which allows for one-second blocks:
the next block gives Doomslug finality,
and the block after that gets BFT finality, when everything operates correctly.
And so that allows you to have fast finality on transactions.
Sharding allows you to continue scaling capacity.
So as we talked about here, first of all, there's one more shard getting added,
and with phase two, we're going to continue growing the number of shards,
which will allow capacity to continue growing.
And it's a WebAssembly blockchain, which means you can write smart contracts in a lot of different languages,
but the best supported one is Rust, and you have JavaScript support as well.
And NEAR is probably the place where, because of this, you can build the most complicated smart contracts,
Aurora being one of them: a whole other virtual machine running as a smart contract.
So you guys aren't EVM compatible?
Aurora is the EVM, running on top of NEAR.
Okay, and sorry for the nerdiness.
So you're telling me I can write in Rust in an EVM ecosystem?
You can write in Rust on NEAR, and you can call into EVM smart contracts in Aurora.
Cool. Thanks, guys. I appreciate it.
Awesome. Thank you so much for coming up here.
Very good question, and definitely enjoy your time at Art Basel.
I'm going to go ahead and invite up LearnNEARClub, who also requested to come up and ask a question.
You should be able to speak in just a second.
All right. Looks like you're connected, LearnNEARClub.
Hey, good morning. I have a couple of questions.
The first is regarding the weekend incident.
Is a fee market in the pipeline for NEAR?
And the second one is probably more practical.
Can Pagoda create some kind of an example of an affordable RPC node that every project can use?
By affordable, I mean less than a hundred dollars per month, maybe even less.
So we can rely on our own mini infrastructure to serve our users better.
I mean, yeah, to the first question: yes.
It's something we will start working on, but I think the challenge is that designing a sound mechanism here is quite difficult.
We need to look at all the different possible attack angles and make sure that the mechanism cannot be abused.
In our past discussions and research, we found that there are a lot of challenges associated with that, especially in a sharded blockchain, where you can call from one shard into another shard, and if you have different fee markets on different shards, there's the question of how that is managed, and so on.
So there are quite a few challenges ahead of us.
But yeah, that's definitely something we will be working on.
We'll start working on it in the near future.
And I think for the second question, yeah, Eric, do you want to talk?
I mean, what I can say is that we are working on improving the RPC architecture, especially with Read RPC on the horizon, which should lower at least the cost of operating an RPC.
But for individuals who want to operate an RPC node, I think that is something we probably need a separate optimization for.
Building on that, one of the things that, Bowen, you might want to correct me on this, is either out or coming relatively soon: the ability to have a node that tracks just the account or accounts that you care about.
That can somewhat lower the resource utilization required for operating the nodes that back your own local RPC.
Another option worth considering, something that's pretty widely used in the world: if the cost of running your own RPC nodes is too high, consider using a few different hosted providers, maybe a primary and a backup, or load balancing between different ones.
That can keep your costs a little lower, while still making sure that you're not over-reliant on one underlying provider.
So if something goes down, you can move that traffic somewhere else.
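As a minimal sketch of that primary-and-backup pattern (the provider URLs here are placeholders, not endorsements):

```typescript
// Fail over between RPC providers instead of relying on a single host.
const PROVIDERS = [
  "https://rpc.mainnet.near.org",         // primary (placeholder)
  "https://near.backup-provider.example", // backup (hypothetical)
];

async function rpcCall(method: string, params: unknown): Promise<unknown> {
  let lastError: unknown;
  for (const url of PROVIDERS) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ jsonrpc: "2.0", id: "1", method, params }),
      });
      if (res.ok) return res.json();
      lastError = new Error(`${url} returned ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: try the next provider
    }
  }
  throw lastError ?? new Error("all RPC providers failed");
}
```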
Yeah, building on top of what Eric said: with the stateless validation release coming up on the horizon, it'll be much cheaper to run a node that tracks one shard, because we have the state witness distributed across the network, and you don't actually need a lot of data stored locally to apply the chunks and get the results.
So hopefully, once that is done, an RPC node that tracks one shard will be much cheaper to operate as well.
And it sounds like a win.
That was one of the other topics people raised questions about: whether validator nodes also grew in size when there was a lot of transactions coming through the network. It sounds like there's a large overhead on the validator nodes.
So maybe you can cover that.
Yeah, for sure.
That's definitely not ideal.
And I would say that's a lack of optimization, due to legacy reasons.
Basically, a validator node is not supposed to operate exactly like an RPC node.
An RPC node actually indexes a lot more data, because it needs to answer the JSON RPC queries.
But a validator node does not.
Yet today, they're actually using the exact same mechanism to store the data in their local database.
So validators are essentially storing a lot of data that they don't actually need to store.
And we are aware of this suboptimality, and we will work on pruning that.
As a concrete example, validator nodes don't actually need to store some additional indexes that essentially replicate data such as transactions, receipts, and some other chunk information, which actually take up quite a lot of space, especially when there's a high transaction volume.
So that is something we'll also start to work on as well.
So lower requirements for validators coming soon.
Awesome. Amazing question.
Thank you so much for coming up.
We have enough time for maybe one, maybe two, depending on time.
Just be cognizant in case anybody else wants to ask a question.
Jay Mack has had their hand up for quite a while.
I'll go ahead and bring Jay Mack up to see what questions they have.
We can hear you loud and clear.
Yeah, I just have a question: in your design, is the capacity of one shard enough?
Because I understood that you had a very high spike in usage,
but I think it's, what, about 13 million transactions per day,
so approximately 100 transactions per second.
So, yeah, I'm just wondering whether, in your design, the capacity of the shards is enough.
Of course, I'm not technical.
But I just did a quick calculation.
So, that's kind of what I was talking about earlier.
Essentially, the capacity was, in a way, artificially capped lower,
especially for the NEAT minting case, because we have a function call cost that's much higher than it needs to be.
Basically, every NEAT mint is a function call to the smart contract.
And because the function call cost is much higher than what it could be,
it artificially limits the number of transactions you can fit into every single chunk.
And the reason for that is mostly a defensive, or maybe overly cautious, mechanism
to counteract potential undercharging that we had observed before.
What we had observed is that some transactions
actually take longer to execute than what we charge gas for.
And, obviously, that's also not great, because it could lead to instability of the network.
So we didn't actually lower the function call cost,
basically as a way to counteract this effect.
And now, with the stateless validation release on the horizon,
a lot of the storage undercharging problem won't be a concern.
So we will also work on lowering the function call base cost,
which I believe is at least 10x higher than it should be.
So, yeah, from a pure capacity point of view,
we should be able to process probably 20x to 40x more transactions on shard 2
than what we were able to process on chain during that period of time.
Okay, thank you very much.
But in case you have an application that needs, I don't know, let's say, 1,000 transactions per second,
what would be the solution for that?
Yeah, I think, as I said, the stateless validation release will improve the throughput of each single shard dramatically,
probably by 10x or something, because the state is then held in memory.
And in addition to that, for a smart contract that has a lot of users,
there's also the option to actually shard the smart contract across different shards,
so that it takes advantage of NEAR's sharding model.
Today, pretty much all smart contracts reside on a single shard,
and at least I'm not aware of anyone actually sharding their smart contract to deploy it on different shards.
So I think that's something that some applications will eventually need to do,
because, yes, we will work on performance improvements for each single shard,
but there will be a limit to how much each shard by itself can scale.
So, let's say you have some application with hundreds of millions of users;
then it's likely that you will actually need to shard your smart contract.
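For a sense of what "sharding a smart contract" could look like in practice, here is a hedged sketch: deploy N copies of the contract under accounts that land on different shards, and route each user to one copy deterministically. The account names are hypothetical; in reality you would pick names that fall into different shards' alphabetical ranges under the current layout.

```typescript
import { createHash } from "crypto";

// Hypothetical contract copies (placeholders; real deployments would choose
// account names that map to different shards).
const INSTANCES = [
  "app-0.example.near",
  "app-1.example.near",
  "app-2.example.near",
  "app-3.example.near",
];

function instanceFor(userId: string): string {
  // Stable hash: the same user always hits the same contract copy,
  // so that user's state stays on one shard.
  const h = createHash("sha256").update(userId).digest();
  return INSTANCES[h[0] % INSTANCES.length];
}

// The app sends each user's calls to instanceFor(user), spreading load across
// shards instead of piling everything onto one shard's delayed receipt queue.
```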
Okay. Thank you very much.
Awesome. Thank you so much.
Oh, sorry. Go ahead, Ilya.
To add to Bowen's point: given the current parameters,
a contract like we've seen with NEAT,
which actually had a lot of function calls
but did not have anything to process, right,
because it only recorded a log message
and didn't actually have any execution inside that function call,
is a specific example where this limitation was hitting the most.
And so, the way to avoid this now,
before the stateless validation release,
is to actually batch your transactions.
Sweatcoin is a great example,
which had requirements of around 300 to 1,000 mints per second.
And what they do is they just batch all those calls into one transaction.
So there are ways to operate now
and achieve that higher throughput even on a single shard.
It just requires a little bit of adjustment
for the current parameters.
The idea is to continue improving on that,
as well as adjusting that parameter
as we move into stateless validation.
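A hedged sketch of that batching pattern, using near-api-js-style action batches; the contract account, method name, and per-call gas are hypothetical, and exact signatures vary by near-api-js version.

```typescript
import { transactions, Account } from "near-api-js";

// Pack many mints into one transaction's action list so per-transaction
// overhead (including the high function call base cost) is paid once.
async function batchMint(account: Account, recipients: string[]) {
  const GAS_PER_CALL = BigInt("5000000000000"); // 5 Tgas each, an assumption;
  // a transaction tops out at 300 Tgas total, so roughly 60 calls per batch here.
  const actions = recipients.map((to) =>
    transactions.functionCall("mint", { receiver_id: to }, GAS_PER_CALL, BigInt(0))
  );
  // One transaction, many actions, all executing on the contract's shard.
  return account.signAndSendTransaction({
    receiverId: "token.example.near", // hypothetical contract
    actions,
  });
}
```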
Okay. So, just so I understand correctly:
if everything goes well, by the end of Q1,
it should not be a problem anymore.
Okay. Thank you very much.
And it actually looks like you might have
answered everybody's questions.
There was a lot of demand,
a lot of hands from people who wanted to come up,
but it looks like the list trickled down.
So we don't really have anybody else coming up to ask questions.
So, I will say thank you all for coming today.
It was actually a really amazing turnout.
We had a crazy amount of listeners.
Hopefully, you extracted a lot of value,
and we're super excited for what's next.
Do you have any final remarks for the audience today?
I think what we're doing is really cool.
It's fortuitous when we have something like NEAT on the network
and we already have work in progress
that could help us meet those challenges.
These are the sorts of things that,
even though they can be fun in the moment,
and they can be challenging in the moment,
strengthen the blockchain
and strengthen the ecosystem going forward,
in terms of how we can build on top of
everything that's happened last week.
Yeah, I would just say that,
because of the challenges,
there have definitely been projects
and people who were affected,
and obviously that has not been the best experience.
As an ecosystem, as a platform,
it's definitely not something that we want,
where people are sending transactions
and some people needed to send things twice.
But, to double down on Eric's point,
this showed both the importance
of some of the existing efforts,
as well as pointing
to some of the other efforts
to make the network stronger.
We appreciate you guys joining,
and we're looking to do more of these.
So, we'll see you at the next one.