for everybody. It's kinda nice.
Yeah, I honestly don't know how to turn that off.
I like you. Don't turn it off.
Too late, sir. Too late, sir.
Well, I'll give it a couple of minutes. Let some more people come on in.
Thanks, everyone, for coming back. We know it's been quite a while, but we are back.
We'll be at a monthly cadence, I believe, every first Wednesday of the month.
And then we'll wait for Sunny to hop in. Good to see you, Will.
Good to see you, Mad Cat Winfred. See lots of friendly faces in the crowd.
Who's Opa? Hey, Opa. Makina.
Yeah, so the Dymension launch drove quite the activity on Osmosis today.
What is this today, like 11 million in volume in about 24 hours?
Let's check it. And we had deposits turned off.
Yeah, I think those are still turned off, but they need to be turned back on soon here.
Yeah, I think we're at 12 million in volume for 24 hours.
Give it another two minutes and then we'll get rolling once Sunny's added as a speaker.
All right, we'll give it one more minute here.
Agenda today: Sunny is going to come in with a few quick little updates. Roman's going to speak next and give us some big updates on the router changes.
Some significant improvements have been made, essentially a gigantic, nice, juicy overhaul.
And then Adam is going to come on and talk about the epic chain performance upgrades that he's been diligently working on.
So, yeah, I think we should go ahead and start with Sunny here on the little mini updates.
Yeah, I mean, I just wanted to hop in and say hey, everyone. It's been a while since the last update from the lab. You know, how many months has it been, like three or four months?
I don't know. I guess the biggest update is we're back, baby.
Osmosis is back and kicking. And yeah, you know, we've just been seeing a lot of, like, great volume over the last few days, weeks, months at this point.
But, you know, Celestia, TIA, has been growing.
It's overtaken ATOM in daily volume, which is notable because, you know, even at its peak,
I don't think LUNA ever overtook ATOM in volume on Osmosis. So that's pretty interesting to see.
But then on top of that, we have, you know, new token launches.
We have, like, obviously, Dymension launched yesterday, and that brought in a lot of new volume.
And there are just so many projects launching, like, the stuff I'm excited for, you know, Namada, Babylon.
A couple of, like, major, major ones working on IBC integration, like, you know, top-10 market cap projects working on IBC integration, and we're working on getting them to, like, use Osmosis as the go-to-market on their interchain strategy.
So there's just a lot of fun stuff happening right now.
Yeah, so I mean, I'll kick it over right now to start with.
Actually, I had a question with the Dymension launch. Maybe this is better for Adam to touch on, or Roman, with the new implementations, like, the chain performance upgrades.
How did Dymension compare to TIA?
I don't know if the numbers were comparable, but, you know, were they? And I guess, what did we notice from the upgrades that we did versus how the chain performed with Dymension?
So maybe Adam or Roman can talk about it. But one thing I noticed is that in the Celestia launch, the Osmosis chain itself, you know, our mempool was getting hit like crazy and was kind of the problem.
This one, while there were IBC issues with channel clearing and stuff, was actually mostly on the Dymension chain side.
The Osmosis chain seemed fine; from what I could tell, our new mempool work was working pretty seamlessly.
So I guess now what we have to do is take the mempool work that we've done on Osmosis and, like, export it to more Cosmos chains.
I'm looking at the 24-hour chart here, and there were, I think, like, four blocks that were full in the last twenty-four hours, and the mempool didn't flinch once.
So, yeah, if we had the same logic as we had during the TIA launch, we definitely would have felt it.
So how much bigger was the TIA launch, like, in terms of, I guess, you know, the mempool and congestion?
I actually can't answer that. All I can look at is the charts. You know, I know that people were spamming Osmosis trying to get deposits in.
So there definitely would have been pain with the previous implementations.
Something also to add for the future is that we are working closely with the skip team on block SDK integration, which would be pretty interesting to allow like custom lanes for certain kinds of transactions.
And that would mitigate some of the congestion even further going forward.
Do we know the timeline on the block SDK with skip?
We are currently in test mode, where we are rerunning some of the integration and tests.
We are expecting it maybe within the next two major upgrades. Right now we're working on v23.
So either in v24 or v25, we're likely to focus on integrating that.
So this is like in our very near future. Exactly. Awesome. Awesome.
And will other chains be able to benefit from this as well?
You know, assuming, you know, osmosis leads the way and then they just copy and paste. Does it work like that?
Yeah, I'll talk about it during my updates, but, like, anything that's extendable to the Cosmos, we're upstreaming, like, one hundred percent.
It's kind of a pet peeve of mine, slash ours, that when people do things on their own chain and can't be bothered to upstream it, it's kind of the wrong move.
So we are growing the pie, right? Oh, 100 percent. Yeah. Awesome.
OK, cool. Let's hop over to Roman and get the router conversation going.
So, Roman, can you tell us, one, going back to before the new router changes took place, what was the problem back then?
Why did it exist?
And, I guess after that, the new one, and how did it solve so many of the pain points?
Yes, for sure. I'll dive into that. I'll just preface that I think for a very long time, our front-end software was, like, best in class in terms of performance and UX.
If we go to our page and compare, like, router performance, it would be in the milliseconds, compared to some of the competition where it would take maybe multiple seconds to get a quote.
So would this competition be like Uniswap? Exactly. Right now, like, it was just confirmed to me recently that they are catching up.
I think in terms of the architecture, they're likely to move in similar direction with us.
But I will get more into the details as we go along.
But basically, the reason why we had this incredible performance is because our router was implemented by Tony and John directly on the client side, where all of the computations and swap estimates would be computed directly in the browser.
So there is no need to actually run any network requests, interact with smart contracts, or query the chain, which is always the major cause of delay due to the network.
And that let us enjoy this increased performance.
But then, you folks might remember, this summer we had this huge concentrated liquidity launch.
And with the new pool types, the concentrated liquidity pool types, the complexity on the front-end side drastically increased.
So even if we think in terms of how much data is needed to compute a quote or a swap estimate route, like, the core of the problem is running swap estimates.
And with the old classic pool types, the only piece of data that we care about is the reserves in the pool, like how much of one or the other token we have there.
Whereas with concentrated liquidity, you might be familiar with the concept of ticks and being able to set, like, custom positions.
We need so much more data to be able to estimate and compute that.
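To make that concrete: here's a minimal sketch, in Go with made-up numbers (the actual front end is TypeScript, and this is not Osmosis's production math), of why a classic constant-product pool is so cheap to quote. The two reserves are the only state you need.

```go
package main

import "fmt"

// quoteConstantProduct sketches a classic x*y=k pool quote: given only the
// two reserves, solve (x + dx)(y - dy) = x*y for dy. swapFee is a fraction,
// e.g. 0.002 for 0.2%. Illustrative only.
func quoteConstantProduct(reserveIn, reserveOut, amountIn, swapFee float64) float64 {
	amountInAfterFee := amountIn * (1 - swapFee)
	return reserveOut * amountInAfterFee / (reserveIn + amountInAfterFee)
}

func main() {
	// Hypothetical reserves: 1,000,000 of the in-token, 500,000 of the out-token.
	out := quoteConstantProduct(1_000_000, 500_000, 100, 0.002)
	fmt.Printf("100 in -> %.4f out\n", out)
}
```

A concentrated liquidity quote, by contrast, has to walk tick by tick through initialized liquidity positions, which is why the data requirement balloons.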
So with the CL launch, the complexity of data processing and data handling on our front-end side increased.
And, like, that design choice that preferred performance at the start started catching up with us in terms of some of the difficulties managing that code base.
And we started observing some production incidents around fall. Like, our product and front-end team always aims to deliver products at the highest quality and as fast as possible.
So the velocity has been really incredible, but working with that complexity kind of led to some of the swap router issues, which we then started thinking more about addressing.
I'll just pause here to see if there are any questions before I go into the actual solution.
I had some questions. You said that the router used to be on the client side, back when Tony and John implemented it, and now it's on the browser side.
Are there any benefits to the client side or is it all benefits on the browser side?
So when I say, like, client or browser, I mean essentially the same thing. The computation, or, like, the logic that estimates the swaps, is basically running on the user's machine, that is, in their browser.
And what we've changed right now is that we started supporting various implementations, so that the next iteration of the router actually lives on the back-end side, which I will go deeper into next.
Okay, good information. One benefit of the client side is that it's, like, a little bit more decentralized, because you're not relying on a server to give you the quotes. While the point of what we've done is we've built an open-source one where anyone can run the server.
So there's a number of teams running it and eventually I imagine that we'll move to a world where like every osmosis validator will be running it.
But at the moment, the client side one does have that benefit that is a little bit more decentralized because you're just like running it yourself, which is nice.
Gotcha. And Roman, can you talk about how big of a lift this was? I know it's, like, incredibly complex and it took multiple months, but what went into this lift? How was it planned out and strategized? Yeah.
Yeah, for sure. So we ended up investigating and developing a completely separate off-chain service from scratch.
So we explored various different designs, but landed on having a separate off-chain service that we run close to the actual full node.
And during the architecture phase, we tried to account for various constraints, such as: how can we build it in an extensible way so that we can onboard more data going forward;
decentralization in the end, as Sunny mentioned, so that we can eventually have the majority of the validators running it; as well as the general performance, to account for the most immediate issue of solving our swap router problem.
And the architecture that we landed on is where, okay, we have several full nodes running in different geographical locations, and every such node has a sidecar process running.
At the end of every height, or block, the full node pipelines the full data into this off-chain service.
So essentially, we do not persist any data in this new service; it gets overwritten every block. And then, when a user quote request arrives from the application, or the browser, we start interacting with this off-chain web server
that operates on the data that is updated every block. And that leads to a number of improvements, such as allowing us to cache a lot of the data, optimize it, and provide many of the performance benefits, while also ensuring strict quality, since this service is separate
and we can test it extensively as its own component.
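For illustration, a rough sketch of that ingest pattern with hypothetical types (not the actual SQS code): the node pushes a fresh snapshot at the end of every height, and the store simply overwrites whatever the previous block wrote.

```go
package sqs

import "sync"

// PoolSnapshot is a hypothetical, simplified view of one pool's state; the
// real sidecar ingests much richer data (tick state for CL pools, etc.).
type PoolSnapshot struct {
	ID       uint64
	Reserves map[string]uint64
}

// InMemoryStore holds only the latest block's state: nothing is persisted,
// each height overwrites the previous one, as described above.
type InMemoryStore struct {
	mu     sync.RWMutex
	height int64
	pools  map[uint64]PoolSnapshot
}

// OnBlockEnd is the hook the full node would call after committing each
// height, pipelining fresh state into the sidecar.
func (s *InMemoryStore) OnBlockEnd(height int64, pools []PoolSnapshot) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.height = height
	s.pools = make(map[uint64]PoolSnapshot, len(pools))
	for _, p := range pools {
		s.pools[p.ID] = p
	}
}
```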
Given how complex it is, how robust is it? You know, I know the previous router had some issues, and, like, some users were reporting, you know, swaps not going through, or, like, there was the Squid and dYdX forum proposal, and this wasn't, like, our fault, of course, but, you know, sometimes these things slip through.
Or also, like, random users would be chatting with the OSL, reporting, like, hey, you know, this doesn't add up. Does this protect against a lot of these issues that people saw?
Yeah, so we actually currently support various implementations of the router in the application. And the way it works is that the app uses three different types: our legacy one; our new off-chain server, which is called SQS,
which stands for Sidecar Query Server; and we also have an integration with TFM. So the app ends up picking the best quote, but also, what we have right now is observability into the system.
What that means is that we know which router is picked the most and provides the highest-quality quotes, and we can compare and contrast them across the various implementations.
So our new Sidecar is currently the winner and has been since the integration at the end of December.
Is it the winner on, like, 100% of quotes, or just, like, 99% of quotes?
In terms of how many quotes it's processing, it would probably be around 70% of all of them. So 30% still goes to the other router implementations. But given, like, the complexity of this problem, this is a huge improvement.
And also, the way that our router works is that it picks the best quote within a certain timeframe. So it is possible that our Sidecar would be able to provide a better quote given, like, a little bit more time.
But the overall architecture that we landed on, where we select the best quote out of three implementations, is all done so that, like, our users can enjoy the highest quality independent of the underlying route provider.
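As a sketch of that selection logic (hypothetical types and names, not the app's actual code): fan out to every router, wait at most the quote window, and keep the best result that arrives in time.

```go
package router

import (
	"context"
	"time"
)

// Quote is a hypothetical result; Source names the backing router
// ("legacy", "sqs", "tfm").
type Quote struct {
	Source    string
	AmountOut uint64
}

// QuoteFn abstracts one router implementation.
type QuoteFn func(ctx context.Context) (Quote, error)

// BestQuote races all routers and returns the highest AmountOut seen within
// the window; routers that respond too late are simply dropped.
func BestQuote(ctx context.Context, routers []QuoteFn, window time.Duration) (Quote, bool) {
	ctx, cancel := context.WithTimeout(ctx, window)
	defer cancel()

	results := make(chan Quote, len(routers)) // buffered so late senders never block
	for _, r := range routers {
		go func(r QuoteFn) {
			if q, err := r(ctx); err == nil {
				results <- q
			}
		}(r)
	}

	var best Quote
	found := false
	for i := 0; i < len(routers); i++ {
		select {
		case q := <-results:
			if !found || q.AmountOut > best.AmountOut {
				best, found = q, true
			}
		case <-ctx.Done():
			return best, found
		}
	}
	return best, found
}
```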
You mentioned TFM. Why, like, for maybe those that don't understand, why do they have a router? You know, they're another front end. Like, this is a pretty big lift. So I guess, what motivated them to create their router?
And then, I guess, is their router better than the legacy router, or is the legacy router better than TFM's current router?
So, TFM, they have their own product suite centered around swaps, as well as, like, cross-chain swaps. So they have their own application and have also developed a pro-trade UI for Osmosis where they primarily utilize their router.
So during the time when we had incidents with our legacy one in the fall, the most immediate problem that we were trying to solve was: how can we ensure that we get working, high-quality quotes to our users?
And we decided that while we were getting our own service off the ground, which might take a few months (it ended up taking us two), we would partner up with the TFM team, who were very helpful and offered their API endpoint for us to integrate.
So John, on the client side, on the front end, ended up generalizing the router so that now it is possible to interact not only with the legacy one, but also integrate TFM in the meantime.
And to answer the question about performance: in terms of route quality, TFM is much superior to our legacy one. It's due to the kind of algorithm that it uses for the route search.
Basically, if I am to get into the details of the algorithm, you can search the routes by breadth, or you can search them by depth. So our legacy router tries to go, like, deep into a sequence of pools to find how to get from token A to token B.
Whereas TFM and our new one do this in a breadth-first manner, which ends up leading to higher quality. But then there is a trade-off.
With TFM, we are observing some latency delays.
And our client ends up providing much faster quotes. So the logic that we have in our application right now is that we provide the best quote we can find within the fastest timeframe.
But then, if within a certain time period, let's say two seconds, a better quote arrives, either from TFM or our Sidecar Query Server, then we immediately replace it with the superior route.
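To illustrate the breadth-versus-depth point, here is a toy breadth-first route search over a pool graph, with hypothetical types and none of the production ranking heuristics: it examines all one-hop routes before any two-hop route, so it finds a shortest path first.

```go
package router

// Pool is a hypothetical edge in the routing graph: swapping through pool ID
// connects DenomA and DenomB.
type Pool struct {
	ID             uint64
	DenomA, DenomB string
}

// FindRouteBFS returns one shortest route (a sequence of pool IDs) from
// tokenIn to tokenOut, exploring by breadth rather than depth.
func FindRouteBFS(pools []Pool, tokenIn, tokenOut string) []uint64 {
	type node struct {
		denom string
		route []uint64
	}
	visited := map[string]bool{tokenIn: true}
	queue := []node{{denom: tokenIn}}

	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		if cur.denom == tokenOut {
			return cur.route
		}
		for _, p := range pools {
			var next string
			switch cur.denom {
			case p.DenomA:
				next = p.DenomB
			case p.DenomB:
				next = p.DenomA
			default:
				continue
			}
			if visited[next] {
				continue
			}
			visited[next] = true
			route := append(append([]uint64{}, cur.route...), p.ID)
			queue = append(queue, node{denom: next, route: route})
		}
	}
	return nil // no route found
}
```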
And does that show on the front end? Because I don't know if there needs to be a refresh or something, but say someone's put in their swap and they haven't executed it yet.
I guess, how would that work if someone is, for some reason, just spam-clicking and they get to it before the two seconds? Do they just miss out on that better quote?
The way it functions is that, assuming the time differences were that high, yes, such a user would end up with the worse quote, but it all happens near instantaneously.
So when I'm using the app, sometimes I can see something, like, change in the blink of an eye, but most of the time that difference is negligible to the human eye.
Going from here, how does the router improve further, or is this already maxed out?
There are many exciting improvements to be made. Routing is actually, like, an NP-hard problem; to implement it correctly, there is this convex optimization problem, and implementing optimal routing is really hard.
I don't actually know of any production implementation that uses this convex optimization. So there is a lot to be done in that direction, and there's a lot of research that outlines how to do this.
And our router is currently very opinionated, so it uses certain heuristics to rank pools, but we could always work more on improving these heuristics, or actually focus on this convex optimization problem.
What we are also noticing is that, having moved this router off-chain, we basically created a new microservice. And I previously mentioned that this service is geographically distributed in three areas, so US, Europe and Asia, where we also have
redundancy of those servers running, so that if one goes down, we can easily rely on a second one.
And what we are observing is that the operational work is quite challenging, but at the same time interesting. And we are going to be focusing on improving our off-chain service operations.
Another side point to add here is that we are observing there are a lot of use cases that we could start moving off-chain for better performance, such as incentive distribution, for example.
So we are going to evaluate how we can create a service that could shorten our epoch time, which is basically bottlenecked by the incentive logic.
And this is another interesting use case that we think all these off-chain services will enable.
Also to add here: we are forming a team right now around all this interesting off-chain and platform work, to help us continue exposing high-quality APIs to the application and our excellent front-end team, while at the same time thinking about how to move
some of the complexity into all these new services. We are also going to be focusing on improving observability into our chain stack. Essentially, this is going to be a platform and data pipelines engineering team.
So if you are interested in problems like these, there is a lot that we can think about and solve, and a lot of value to deliver for our users. So please reach out to me if you are interested.
Awesome Roman, awesome. Maybe we should open it up to the floor for some questions on the router, if anyone has router questions.
And if there are no router questions, we'll hand it over to Adam.
Well, I'll ask something that I already know the answer to, but I'll phrase it as a question.
Astroport is launching their pools within the next probably two to three weeks. Will the router split over those Astroport pools as well?
So, with the new Astroport pools, the way it is implemented is that this is a CosmWasm pool. A CosmWasm pool is implemented as a smart contract, and we are enabling various types of CosmWasm pools to be supported.
For example, besides Astroport, there is also the Transmuter one. Transmuter allows one-to-one, no-slippage swaps between assets.
But the Astroport one is a bit more complex, requiring, not very difficult, but, like, a custom implementation.
So what we opted for currently is to query the smart contract directly to estimate swaps for CosmWasm smart contract pool types.
And what that leads to is a higher number of network requests, which results in decreased performance.
So the split routing that you folks often see on our front end is very computationally intensive, because we try to see what the various proportions are and what the most optimal split is.
And if we were to do this with a CosmWasm pool such as Astroport's, it would result in an increased delay. As a result, for the time being, the Astroport pools are excluded from split routes, but we have full support for that already tested on testnet, where Astroport direct pool routes will be enabled once it is live on mainnet.
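For reference, querying a CosmWasm pool contract for an estimate looks roughly like this. The endpoint, contract address, and query message shape below are placeholders (each contract, e.g. an Astroport pool, defines its own schema); the point is the extra network round-trip Roman describes.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Placeholder gRPC endpoint.
	conn, err := grpc.Dial("grpc.osmosis.example:9090",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Hypothetical query message; the real schema is contract-specific.
	queryMsg, _ := json.Marshal(map[string]any{
		"simulate_swap": map[string]any{
			"token_in": map[string]string{"denom": "uosmo", "amount": "1000000"},
		},
	})

	client := wasmtypes.NewQueryClient(conn)
	res, err := client.SmartContractState(context.Background(),
		&wasmtypes.QuerySmartContractStateRequest{
			Address:   "osmo1...", // placeholder pool contract address
			QueryData: queryMsg,
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(res.Data)) // contract-defined JSON estimate
}
```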
Awesome, awesome. Thank you. Adam? Hey.
Tell us, let's dive into the old state of the chain and the old state of performance, and maybe a little breakdown of what was such an issue with the TIA launch and performance there compared to other chains.
It's my understanding that all chains would have experienced this; it's just that other chains do not get the traction and the activity that Osmosis gets. So yeah, let's dive into that.
Yeah. So first, let's talk about why performance is important. I mean, there are some obvious reasons, but the less obvious reason is that Osmosis is moving towards stuff like perps and native bridging.
And when the chain is down every day for five minutes for the epoch, and then if the chain gets spammed and you can't get transactions in, things like perps just don't work.
Aside from this, there's the obvious thing of, like, users being mad about the chain not working well, as well as integrators.
A lot of the devs are reading the comments on the Telegram, and I always get very sad when people are mad, but also shout out to that one guy who always asks about WEXP weekend on Osmosis.
It kills me every time. So the question is, what were the problems? And I'll go into that now. So we implemented EIP-1559.
And if you were to notice, it actually worked really well, and then all of a sudden it just stopped working well. So I want to go into why that happened.
We were kind of experimenting with different values. We realized that, you know, maybe we wanted to recover the base fee quicker, etc.
So we made this state-compatible change, and, like, half the validators moved to it and half didn't.
And because of that, the base fee some validators saw was different from the base fee the other validators saw.
And what that did was cause a lot of blocks to have, like, zero transactions, because, you know, a validator would say: hey, I see the base fee is this.
The transactions coming in are lower than this base fee, so no transactions get in. So that was an easy fix, like, you know, just get everybody on the same logic.
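For intuition, here's the shape of that logic as a sketch with illustrative constants (not Osmosis's exact parameters): an EIP-1559-style rule nudges the base fee up when blocks run fuller than target and down when they run emptier. The failure mode was simply that validators ran two different versions of a rule like this, so they disagreed on the current fee floor.

```go
package feemarket

// NextBaseFee sketches an EIP-1559-style update. Every validator must run
// an identical, deterministic rule; if half the set uses a different
// maxChangeRate or recovery speed, they compute different floors and start
// rejecting each other's otherwise-valid transactions.
func NextBaseFee(baseFee, gasUsed, targetGas float64) float64 {
	const maxChangeRate = 0.125 // illustrative: at most 12.5% movement per block
	const minBaseFee = 0.0025   // illustrative floor

	delta := maxChangeRate * (gasUsed - targetGas) / targetGas
	next := baseFee * (1 + delta)
	if next < minBaseFee {
		next = minBaseFee
	}
	return next
}
```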
The next thing is, you know, the block size: we increased it to hold more transactions, and we also reduced the amount of gas that each transaction takes up.
So this one is kind of my bad: in v21, I introduced this, like, per-block tracking of transaction fees.
Basically, what we wanted was, you know, that really cool graph where we see each day what we're getting in terms of protocol revenue and what we're getting in terms of swap fees, and all of that needed to be tracked.
But I kind of made the mistake of tracking the transaction fees in a way that was very intensive, because there were so many things to pull and iterate over.
So we just got rid of that, because it's very easy to just pull from events anyway. So that took a large load off of, you know, each block.
And then we got blocks to below five-second speed. And I'll get into kind of why we don't go faster yet.
What are blocks at right now? Like 4.9? Yeah, something like that. And then, stuff that users care about, stuff that integrators care about: we now have this, you know, instant testnet creation that we upstreamed as well.
And so, you know, if you talk to other chains, what you should be asking them is, you know, when you do your testnets, how do you do them?
A lot of people, you know, they might make a testnet that's just, kind of, a local testnet. This is kind of the wrong answer.
What we have experienced in the past is that when you have mainnet state, it adds so much entropy that it just, you know, really highlights issues.
And so what we used to do was this, like, state-export testnet, where you exported all this data into a new genesis file and created a new testnet off of it.
And when we moved to SDK 0.47, there was this issue where it essentially required, you know, a NASA computer, 256 gigabytes of RAM, and, like, six hours of time to actually export the data.
And so what we did was we created this, like, in-place testnet creation, and we upstreamed it, so that pretty much every Cosmos chain is now going to be able to create these testnets, like, essentially instantly, and be able to, you know, test with state that mirrors mainnet.
Additionally, there was this, like, patch issue. I don't want to go into too much detail; I'm not sure, you know, how much I should talk about for this.
But, like, essentially, there was a data race problem where, when nodes were queried heavily, there would be different results, and it would cause a node to just crash.
And so that was fixed, and other teams were having this problem as well, so we, you know, distributed the fix to the other chains so that they don't have this problem either.
And so that's the stuff that we fixed so far. And I'll just really quickly go over what we're going to be working on in the next few weeks, as well as months.
So first off is faster epochs. How are we going to do this? We're probably going to start spreading the logic over multiple blocks. Right now we just do it all in one.
There's no real reason to do it all in one. There are a lot of things that we could spread over multiple blocks,
as well as, as Roman talked about, moving the incentive logic off-chain.
So then we could do some like really interesting things. Like right now, we take a lot of time to figure out what the best, you know, most performant way to do incentives is.
But, you know, when we run this off chain, we can do some very interesting, more complex deals of how we determine that.
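A sketch of the spreading idea with hypothetical names (not the actual module code): keep the epoch's pending work in a queue and drain a fixed budget of it each block, so no single block pays the whole epoch cost.

```go
package epochs

// DistributionQueue holds work left over from the epoch boundary,
// e.g. gauge or pool IDs still awaiting incentive processing (hypothetical).
type DistributionQueue struct {
	pending []uint64
}

const itemsPerBlock = 50 // tuning knob: per-block work budget (illustrative)

// ProcessNext drains up to itemsPerBlock items; called once per block until
// the queue is empty, spreading the epoch's cost over many blocks.
func (q *DistributionQueue) ProcessNext(process func(id uint64)) {
	n := itemsPerBlock
	if n > len(q.pending) {
		n = len(q.pending)
	}
	for _, id := range q.pending[:n] {
		process(id)
	}
	q.pending = q.pending[n:]
}
```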
So for faster sync speed, we're moving over to IAVL v1. This is another thing that we were able to upstream fixes for: when chains move over to IAVL v1, there's essentially this, like, four-hour waiting period where a lot of things are getting pruned, and, you know, we can't have an upgrade where we're just down for four hours.
So what we ended up doing, and I say we, I think it was mostly some people from the IAVL team, as well as Dave, was to basically prune these orphaned nodes asynchronously instead of having to wait for it.
So that will improve our sync speed, as well as, you know, just analyzing profiles to figure out where we can shave off more time.
And so once we do that, we can do faster blocks. Like I said, it's very simple to make blocks faster; it's just a parameter change. But the problem is, okay, let's say we make blocks some faster speed.
And then we have all of this traffic: you know, when it comes time for someone to sync a node, they're not going to be able to catch up to the head in time, and it just becomes very problematic.
So first, we need to make the sync speed faster. And once that's done, then we can talk about making blocks faster and faster.
Next thing is the Block SDK. As Roman talked about, we're already testing this. We're going to have some free lanes, so, like, claiming rewards will not cost any fees, and we're also looking at, you know, having relayer transactions be free as well.
So we can probably, you know, whitelist some of the highly known relayers, and instead of them having to pay all these fees to relay transactions, they can just relay for free.
A couple more things here: fee markets. If you look at, like, you know, the block explorer, you'll notice a lot of bot transactions that just purposely fail constantly; they pay a very low price for it,
and sometimes they have a higher payoff, despite all of the failed transactions. So what we're going to do is make these arb transactions cost more money out of the gate,
so they don't have this incentive to just spam as many transactions as possible.
And what would this cost?
We're looking at, I think it was Roman's computation, I think it was like 0.5 OSMO per transaction.
Yeah, because right now they're able to spend, like, 0.001 OSMO. And if you look at the transactions, it's just nonsense; it's not, like, helpful to the chain.
How do you determine if it's an arb versus a regular transaction? Well, I guess it's a recurrence?
Yeah, all it is, is if you look at the input denom and the output denom; if it's the same denom, it's essentially an arb transaction.
And so we're just trying to, you know, make these mempools smarter and filter these things better.
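A sketch of that detection rule with hypothetical types: a swap whose route ends in the same denom it started with is a cycle, i.e. an arb, and can be charged the higher flat fee.

```go
package mempool

// SwapRoute is a hypothetical route hop: swap through PoolID, receiving
// TokenOutDenom.
type SwapRoute struct {
	PoolID        uint64
	TokenOutDenom string
}

// IsArb reports whether a swap is cyclic: the denom coming out at the end
// of the route equals the denom that went in.
func IsArb(tokenInDenom string, routes []SwapRoute) bool {
	if len(routes) == 0 {
		return false
	}
	return routes[len(routes)-1].TokenOutDenom == tokenInDenom
}
```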
And there's quite a few more things I could go into. I'm not going to.
Those are the main things that like, you know, users actually care about.
You know, I'm happy to answer any questions that anyone in the audience has about, you know, if we're going to focus on other things performance wise.
But yeah, this is kind of the blocker for us before we start really diving deep into exciting features like perps and native bridging.
Because, you know, what matters is, how good are perps and native bridging if the chain, you know, starts clogging up at any sign of volume?
And you notice that, like, I'm not sure if you have noticed it, because once it works well, you don't really notice it.
But, you know, if you look at the mempool, it has not been clogged once since the v22 upgrade.
So the things we've already done is great. And the things that we have coming is going to make it even better.
And to highlight this is that we're also going to be sharing this with other Cosmos chains.
We're not just hoarding it for ourselves; we want to, you know, grow the pie, as you say.
But, like, you know, just make Cosmos chains safer in general.
Right. So with all the efforts that got patched up and improved upon over the last, you know, one to two months:
let's say there was 100% that you wanted to fix.
What percent of that was fixed over the last two months?
And then what percent remains that you want to address, like, the bulk of it?
Well, like, if we're talking about each item as one unit, we've addressed like 20% of the problems.
But the items that we've addressed, like, you have to weigh the items, right?
Like, there are high-priority items, medium and low.
If we're talking about strictly high-priority items, we've probably addressed like 45 to 50%.
The remaining 50% will be addressed, but not in this coming upgrade.
In this one, we're addressing a small issue with incentive distribution for the, you know, very high precision pools.
We just thought we would get that out ASAP.
But after that, the remaining high-priority items will be addressed.
You know, each one of these high-priority items takes like a week of just solid work.
It's been a serious grind this past month; you know, these problems aren't very fun to solve.
It's a lot more fun to do novel things like concentrated liquidity and perps and native bridging.
It's more fun to solve novel problems.
But, you know, it's just kind of the closet that we need to clean up in order for users to actually have a good experience with Osmosis.
So you think in the next one or two upgrades, like, pretty much every major item will be hit?
Maybe some, like, small things that over time will get fixed.
But overall, all the high-priority items should be completed by... are we on twenty-one or twenty-two right now?
We're on twenty-two. So by twenty-four, I'd say all the high-priority items should be knocked out.
And if I may add something: some of these improvements that Adam is describing are, at times, even independent of the chain upgrade.
For example, one of the quick wins that we have achieved is setting a transaction TTL, or time to live, in our front-end app.
The idea here is simple: we estimate a height at which the transaction would time out, so that in times of congestion,
users will be able to get feedback about what happened with their transactions.
The reason the feedback is not there without this field is the way that Cosmos SDK mempool eviction works: after the transaction initially gets into the mempool, we run a certain set of checks to determine its validity.
And then there's the ordering of transactions: some other swap, for example, may make the current swap invalid.
And then we would have to rerun the sequence of checks to see if the transaction we submitted is still valid and able to get on-chain.
If it's invalid, there is no way to signal that feedback in the app.
So this timeout height was a small change, but it ended up unlocking this huge pain point.
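On the Go side, the Cosmos SDK's TxBuilder already exposes this field; a minimal sketch of stamping a TTL when building a transaction (the ttlBlocks value is illustrative, and the actual change described here lives in the TypeScript front end):

```go
package app

import "github.com/cosmos/cosmos-sdk/client"

// buildTxWithTTL stamps an outgoing tx with a timeout height a few blocks
// ahead, so if it is evicted or never included, it expires deterministically
// and the front end can surface clear feedback.
func buildTxWithTTL(txConfig client.TxConfig, currentHeight uint64) client.TxBuilder {
	const ttlBlocks = 30 // illustrative: roughly 2.5 minutes at ~5s blocks
	builder := txConfig.NewTxBuilder()
	builder.SetTimeoutHeight(currentHeight + ttlBlocks)
	// ...set msgs, fees, and signatures as usual before broadcasting.
	return builder
}
```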
There are many other medium to low priority items that we will focus on concurrently with the chain scope of things.
Awesome. Very, very in depth.
Do we have anyone from the crowd that has questions for Adam or Roman on the chain performance side?
All right, I guess that's a no.
Sunny, anything else you want to dive into?
One thing I forgot to mention earlier was WBTC.
So I know it's been a long time coming, but native WBTC on Osmosis is finally live.
There's a stable swap pool that's been created between the native one and the Axelar version.
And so, yeah, you can swap over if you're holding WBTC from Axelar.
The Axelar version, it's now been renamed, to WBTC.axl.
And so, yeah, there'll be, you know, new WBTC pools being created with the new native version.
And I think the idea is that we're going to propose that governance shift all the incentives away from the Axelar WBTC pools and into the new native WBTC pools.
So, you know, you'll be able to hold WBTC on Osmosis without any Axelar bridge risk.
So that's quite exciting.
Yeah. So then on top of that, let's see, there are a couple of market makers who are going to be, like, bootstrapping some of the liquidity on those pools.
You know, just the number of market makers that have been onboarding on Osmosis in the last couple of weeks is insane.
And some of them have been contracted by the OGP as part of the governance proposal that Osmosis approved a couple of weeks ago.
But then honestly, most of them have just been very organic as well.
So OGP will be issuing their report giving an update on the whole status of the market makers that they've contracted.
So keep an eye out for that.
Speaking of market makers, I mentioned this earlier, but Astroport will be deploying on Osmosis in the coming weeks.
Expect to see their proposal probably go up on the forum, maybe by, you know, sometime early next week.
All the code has been tested on testnet, and we're looking good to go.
And I believe the idea is that they're going to start with a couple of pools, primarily, you know, Neutron-USDC, Luna-USDC, Sei-USDC and Injective-USDC, and of course Astro-USDC.
Yeah, looking forward to having them live and putting incentives on those pools.
Beyond that, you know, the new smart accounts work is coming along quite well.
Session keys are going to be the first usage of it, which basically means the whole one-click trading experience, but with a lot of safety features built in, to make sure that if someone takes control of your browser, or you happen to install a malicious browser extension, all your money doesn't poof away.
There are a lot of crypto projects out there that have, like, one-click trading; you know, you have it on dYdX and on Aevo and a couple of others. But, like, you know, on almost all of those, if you have a malicious browser extension, it can basically steal your whole account.
And we're trying to, like, you know, take a very security-conscious approach to a lot of the smart accounts work that we're doing.
And finally, probably something people have been waiting on for a long time, probably one of the top requested things in, like, the history of Osmosis: we have finally begun work on a mobile app.
So we'll have a native Osmosis mobile app that people will be able to use.
You know, where it started, I mean, like, literally we, like, you know, ran the create-new-React-Native-app command, so it'll definitely be a few months before we see anything, you know, like a beta version live on, like, TestFlight or anything like that. But it is coming and on the way. And, you know,
the foundation has hired developers specifically for the mobile app development, so keep your eyes peeled for that.
Yeah, I think those are the main updates I have.
Oh, one other thing, on the Osmosis side: if anyone's looking to get involved with some Osmosis contribution,
we have bounties now on the Osmosis front-end repo. So, you know, there are small bounties ranging from, like, 100 USDC to, some of them, like, even 1,000 USDC. These are just, like, small quality-of-life improvements,
like, you know, things that have been requested that, you know, our front-end team doesn't have the capacity to do right now. So we're just giving out these grants from the foundation, like, micro-grants. You know, some of these take like an hour; if you're, you know, a React developer, maybe it'll take
like an hour of work to just, like, knock them out, and oh nice, that's a nice bounty for an hour of work. So, you know, who knows where that leads; I believe Adam, you know, started by just doing open source contributions to Osmosis, and now he is one of the core developers.
So, yeah, we did have a question come in about mesh security, and I believe mesh security right now has a fully working testnet that has four chains on it.
So they'll be, you know, adding support for unstaking and unbonding, and everything seems to work right now. We'll get more updates out on mesh security as the mesh team provides more updates, but that was the update as of yesterday. So, moving on, it's in its final stretch.
Do we have any questions for Sunny? Looks like we do. Dai Nub, please go ahead.
Are you guys able to hear me? Yep. Yes. Awesome. So the arb transactions: was it a 0.5 OSMO fee for failed transactions, or would that also be a fee on successful transactions that are detected as arbitrage?
So, any arbitrage transaction. This number, 0.5, can change; it's just the current value we have. I think we set it to, like, you know, 3x the value, but 3x of 0.001, you know, is still not sufficient to stop the spam.
If you have, you know, valid concerns, because arbitrage is still important, I acknowledge this, maybe 0.5 is too large of a number.
So we could probably cut this down, as I assume you are implying.
But yeah, we just need something in place to stop it. If you look at Mintscan and look at each block, you'll see these arb transactions, and it wouldn't be a big deal if, you know, they were just scattered and sprinkled, but what they're doing is, like,
setting max gas to, like, a million when they only need 100k. And they're doing this because the cost of using, like, a million gas for such a transaction is so low that they just do it anyway.
Yeah, I'm aware. I monitor the blockchain quite regularly. I'm just wondering how it would impact the successful transactions for people that are not spamming.
Yeah, yeah, no, it's, yeah, if you guys have recommendations, please let us know. And this will be a parameter change, so whatever we do, if it's too much of an overshoot, we can always dial it down a bit.
Alright, I've followed you on Twitter in case something else comes up in the future.
Awesome. Thank you so much.
Something that I would like to also add on this point, that I think is going to be interesting, is the Block SDK integration, which would help us detect the arbs and direct them into custom mempool lanes.
So with this customization, we will be able to either encourage the arbs to go through a top-of-block auction and compete for the top-of-block space, or participate in a localized fee market where the fee would grow only for those arbs specifically.
So once that integration is live, I think it's going to be, like, very, very interesting, and it will make operations and transaction flow easier for everyone else.
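A toy sketch of the lane idea (hypothetical types; the real integration is Skip's Block SDK): each lane keeps its own fee floor, so arb congestion raises the price only inside the arb lane instead of for everyone.

```go
package lanes

// Tx is a hypothetical view of a transaction for lane routing.
type Tx struct {
	IsArb bool    // e.g. detected via the denom-cycle check sketched earlier
	Fee   float64 // fee offered by the transaction
}

// Lane carries a localized fee floor that can grow with that lane's own
// congestion, independent of other lanes.
type Lane struct {
	Name   string
	MinFee float64
}

// Admit maps a tx to its lane and checks it against that lane's floor only.
func Admit(tx Tx, defaultLane, arbLane Lane) (Lane, bool) {
	lane := defaultLane
	if tx.IsArb {
		lane = arbLane
	}
	return lane, tx.Fee >= lane.MinFee
}
```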
Yeah, the solution I explained is, like, this band-aid approach. Definitely what Roman's explaining is the ideal world to live in.
Yeah, excited for that for sure. And Skip Team, as always, working super close with us and testing things.
We probably could have launched the Block SDK a couple, you know, upgrades ago, but the Skip team is being great about just being completely safe with it, which I totally respect.
And I'm sure everyone agrees we should be safe.
Awesome. Do we have any other questions for anyone up here?
Okay, that's probably good for today, our return of updates from the lab.
We'll be back once a month, every first Wednesday of the month.
Thanks guys for tuning in. Thank you, Sunny. Adam, Roman.
I'm not sure if Will had a question or if this just bugged out.
Yeah, I think it's just bugged out.
I'm trying to like add Will.
Oh wait, right before I drop off.
There's, I think Will had something. Oh, there he is. Speaker, there we go. Will, welcome back.
Hey, yeah, and everyone's stepping out now, and that's fine.
I just wanted to say, I've been asking for, like, months about when you guys are all coming back, and just want to say I'm super grateful that these are happening now.
And I have the monthly on my calendar now, so you might have seen my tweet.
Just applauding you guys; Osmosis continues to ship faster than just about anybody in the space.
And there are, you know, just as you've rattled off today, like, dozens of things all happening at once.
And I feel like this, like, unified space, just to get everyone on the same page, is so invaluable, because if you're just following Discord and Twitter and all these other things, it's just a blur.
Yeah, we're gonna have a recap of the Twitter Spaces as well. I don't know what the timeline is, maybe a few days, but we'll get recaps out every month as well, so that way there's a place to just read a quick little five-minute summary, or probably
less than five minutes, honestly.
Awesome. Thanks. That's all for now.