Good afternoon, everybody. Welcome to another Radix recap. My name is Adam, and we're going to be diving into what's been going on in Radix on the foundation side, in the ecosystem, and what is coming up.
Joining me today is none other than Tiemann, the interim Hyperscale lead.
How are you doing today, Tiemann?
I'm good. Thank you for having me.
More than welcome. Unfortunately, Andy can't join us today. He's tied up with some other stuff, but it's going to be a fun one nonetheless. So I'm going to start off, if that's all right with you, Tiemann, with a couple of the other bits of foundation news. It needs to be a bit of a quicker one today because I've got some calls to jump onto shortly, so we're going to speed-run this slightly. But of course, the first bit of big
news that came out this week from the Radix side is that phase two of multi-factor smart accounts in the wallet is now live for testing on Stokenet. And this is a really cool one. Phase one has been live for a while now, and it has been testing successfully: setting up your account security shields, name TBC. Please, anyone who's got some suggestions for names, feel free to throw them out. Phase two is bringing in the
ability to update those shields. So the two big things here are, A, obviously just being able to change which factors you have securing each of your accounts and personas. You can have different factors for different accounts, and you can swap out, say, an Arculus card for a Ledger, or if you get a new Ledger, your old Ledger for the new one. But the other big one is something called timed recovery.
So timed recovery is like the ultimate backup move. Typically, when you need to change your security factors or regain access to your account, there are three roles: primary, recovery, and confirmation. If you've only got one of those, and specifically the recovery key, you can use that to initiate a timed recovery. What that does is say: hey, if I've got my recovery key and I've lost my other two, I can use just that one. And after an amount of time that you set as a user, that will allow me to use just that one key to set up new security factors. So it's a bit of a safeguard against some other challenges. And essentially, this is a way to get the MFA that's already live, already running with access controllers on the Radix network, into the wallet in a retail- and consumer-friendly way.
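The timed-recovery rule described above can be sketched roughly as follows. This is a hypothetical Python illustration, not Radix's actual access-controller code: the three role names come from the discussion, while the class and method names are invented for the example.

```python
import time

class AccessController:
    """Toy model of a shield with primary, recovery, and confirmation roles."""

    def __init__(self, recovery_delay_seconds: float):
        # The delay is chosen by the user when setting up their shield.
        self.recovery_delay = recovery_delay_seconds
        self.pending_recovery = None  # (proposed_factors, start_time)

    def quick_confirm(self, proposed_factors, roles_signing):
        """Normal path: two roles sign, and new factors apply immediately."""
        if "recovery" in roles_signing and (
            "primary" in roles_signing or "confirmation" in roles_signing
        ):
            return proposed_factors
        raise PermissionError("quick confirm needs two roles")

    def initiate_timed_recovery(self, proposed_factors, roles_signing):
        """Backup path: the recovery role alone starts the timer."""
        if "recovery" not in roles_signing:
            raise PermissionError("only the recovery role can start timed recovery")
        self.pending_recovery = (proposed_factors, time.monotonic())

    def complete_timed_recovery(self):
        """After the user-chosen delay, the single key finishes the swap."""
        if self.pending_recovery is None:
            raise RuntimeError("no recovery in progress")
        factors, started = self.pending_recovery
        if time.monotonic() - started < self.recovery_delay:
            raise RuntimeError("recovery delay has not elapsed yet")
        self.pending_recovery = None
        return factors
```

The key design point is the one made in the conversation: the lone recovery key can never change factors instantly, only start a clock, which gives the other factor holders a window to object.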
And not only is that really good for retail. I was talking in Telegram a little bit earlier: I did a demo of this to a very, very large bank and their blockchain team pretty recently, actually, around this kind of concept of semi-self-custody wallets, where they're really keen on being able to have users self-custody their wallets. There's a bunch of regulatory and other reasons for doing that, beyond just the ideology of it and the cost of custodying assets. But their big concern is that most users can't be trusted with self-custody. And the bank wasn't comfortable with this concept that, well,
our end users are going to lose their keys. And how are they then going to access their funds?
They're going to call us up and be like, hey, I've lost my phone. I've lost my key. I can't
access my accounts. Help. And they're going to be like, oh, that's a problem. And that wouldn't
be a good day for the bank. So with MFA, you can do some really cool things here where if
someone you've KYC'd and put through your normal bank security has a self-custody crypto account, and if they lose their factors, they can ring up their bank and say: hey, I've lost these, and you're my timed recovery factor. They go through the bank's normal security, like you would for your current bank account, and then get access back to their self-custody wallet, without the bank having to be a custodian. So there are some really powerful traditional-finance applications for this as well that I think are sometimes overlooked by the community.
Tiemann, I don't know if you had anything to add to that.
Well, I've been wondering for a while, and now I'm on the call so I can actually ask you directly: given that this has already been implemented in the network when Babylon launched, and this is an update to the wallet as far as I know, does this mean it will work for any existing account as well? So say I have an account on my Ledger and I now decide that wasn't that useful, I would like to make it a hot wallet. I can actually swap in my seed phrase and swap out my Ledger accounts, or keep them both? That's super powerful.
Wow. So because all accounts on
Radix from Babylon onwards are smart accounts already, they already have an access controller. It's just that the access controller was a one-of-one access controller.
So the other cool thing is that even though this is the wallet implementation, using the example of the big bank I was explaining this to, in their use case they could go and build that tomorrow if they wanted to. They could have their own white-labeled Radix wallet as part of their banking app or something like that, have self-custody built within there, and interact with the access controller. There are a couple of projects that I already know of, which I won't name because I don't know if they've made this public, but they use the access controller for various reasons. So from a dev perspective, smart accounts are already
fully accessible on-ledger.
Yeah, really cool. I've been using them back in the day for HAG, for, what was it, Tidbot, where we had shared accounts. Yeah, it worked in the same way. It's super interesting how this works on Radix. It's so powerful.
And I fully understand why a bank would want to be able to recover accounts for their customers, because it always hurts so much when you see someone come into the Radix chat and say, hey, I lost my seed phrase. And I'm like, oh no, poor guy or poor girl, you really lost your funds. So as a bank, I totally understand their interest.
And the other really cool thing about this: an obvious objection is that there are other wallets in the space with social recovery or other recovery systems, so a bank could already do this. But the point you actually made, Tiemann, is precisely right: that requires you to make a new account. And one of the other things that ties into TradFi regulation and open banking is easy switching of the bank you're working with, for example. So if you had one bank and you wanted to move to another, you could actually change which bank is the recovery factor for your self-custody funds, without having to make a whole new account and everything else. Which, again,
we're going a long way down the line of kind of mainstream financial systems adopting DLTs.
But this is where building on the right architecture allows these things to work
without having to abstract away many different layers of complexity that really big enterprises don't want to do.
One of the big reasons they like DLTs is that the complexity of duct-taping together different siloed systems, and abstracting away that complexity, is already eating into a lot of their profits. It's the big reason why every large asset manager wants RWAs: they can cut out a bunch of costs associated with their traditional assets if everything is on-chain. That ultimately means either more profit for them, or a more competitive market offering for their funds to their end consumers, which gives them more assets under management and therefore more profit.
I see I've blown Austin's mind as well on this; he's instantly jumped into Telegram being like, what, that's cool. So, in terms of phase three, because I know people are going to be asking when phase three is coming: I'm not giving any spoilers yet. Gannadi will give an update on that. There's a big chunk to do on phase three, so that's something that needs to come in. And then we need
to make sure that's all working really well. Because the hard part of doing this in the wallet
is not actually a case of how do you make sure this works? Because obviously, it's all working
as an access controller on the ledger right now. It is: how do you set this up, with UI flows, such that the end user can't get themselves into a bad state, or into a state that isn't secure for their funds? And that is a more challenging task. It even goes down to simple things like when you're signing transactions. And that nicely takes me on to my second point
of things that we've been working on. Any eagle-eyed devs may have noticed that we have open-sourced the season one Radix Rewards distribution smart contracts. This is a work in progress, but we wanted to get them out as soon as possible so people can start looking at them. They're quite simple. There's going to be some content coming
out in the very near future for community discussion around how we propose that those rewards are distributed, when they're distributed, and the mechanism for distribution, plus some important discussions I think we need to have as a community: when do we say season one is finished? What do we do for subsequent seasons? How do they look? What goals do we want to get out of them? And do we want to keep pushing in the same direction we have been, or extend season one? That's one of the reasons why we didn't give a firm end date to season one: across the industry, there's everything from seasons and campaigns like this running for a couple of months, all the way up to six months, and in some cases twelve months. So there's some flexibility there. It's important for both
me personally, and I also think for any crypto project to have community input on these things.
So we'll be kicking off that process in the very near future. But we wanted to get the code out there so we can get some feedback, and get some of the great devs in the community taking a look at it, making sure that anything that needs checking gets checked before we go and have it checked by other experts.
And the final bit on Radix Rewards: if you were active from the 1st of September 2024 until roughly now, and would like to get a bonus for Radix Rewards from season zero, you must link all eligible accounts to the main Radix Rewards season one program before the, oh, it's hit me now, the 17th. That's the date. So ideally do it this weekend if you haven't already. If you miss the 17th cutoff date and you link an account after then, it will not be counted in the calculation for season zero.
So please make sure that you do that.
Otherwise, you're just going to miss out on some extra bonuses
that come out of a different reward pool just for you guys.
So make sure you do that.
Yeah, and if you're connected with all the accounts that you already use or have used in the past, there's nothing you have to do, right? So if I participate with all my main accounts, there's not another button that I have to press.
Correct. All you've got to do before the 17th is have all the accounts that you want checked for season zero eligibility linked on the dashboard.
Cool. So the final bit of foundation update, of course, is the Hyperscale update, Tiemann.
And we can have a quick discussion on that. And then I think we'll probably wrap it up
because I do have a time constraint today.
So, for people who've been following, Hyperscale progress on testing has hit a couple of challenges this week. It's easy to be transparent when everything is working and going well, so I think it's also important to share when you hit some challenges and some speed bumps.
So I don't have any good news to share. Luckily, I also don't have any bad news to share. It's been a tough week. That's, I think, the TL;DR. Our hosting provider froze our accounts because they weren't liking the things we were doing very much.
Not every hosting provider, apparently, is too happy with us spinning up 100, 200, 400 nodes and then destroying them again a few hours later, because we're not using them and we don't want to pay for anything we're not using. So we moved over to AWS, who are so large that I think they won't even notice that we spin up 400 nodes. That's working really well, and thanks to the Foundation DevOps team the migration was done in less than a day. But the performance of these nodes is just not the same as the old ones, so I have to re-find the equilibrium, or the balance, that I had during previous tests, and I haven't found it yet.
I also noticed that the test script that Dan wrote is hitting its limits with this number of nodes. I've done some research into that and tried to improve it, so it becomes easier, and more reproducible, to do these tests at scale. Well, at hyperscale, because 250,000 TPS is already scale. And that's what I've been working on for the last seven, eight days.
So, one of the things I know is often mentioned: I think it's so easy, when we're working on these things internally day in, day out, to forget about some of the jargon we throw around. When you're talking about the spam nodes or the swap nodes and things like that, ELI5 to people: what do you mean? What is the difference between these and a standard node? Why is that the thing that is causing a challenge?
I try not to use any terms that are too technical, but that doesn't mean the concepts all of a sudden become easy to understand. It's surprisingly hard to send 250,000 transactions per second. Dan told me that once and I laughed about it, never gave it a second thought, until I started running these tests myself. It's actually really hard to send out 250,000 transactions per second, and you can't do it with only one server, or my MacBook on Wi-Fi on battery. The interesting thing is that it seems easier to actually process the transactions than to generate them and send them onto the network.
So what we've done is, if I run a test of, say, the one with 250,000 transactions, let me quickly recalculate, I think we use around 180 nodes. Six nodes per shard, yeah, around 180 nodes to process all the transactions. Some of them are used to send transactions, and also to validate them if they are the one validating in that round. The others are only processing the incoming transactions. So when I'm talking about spam nodes, they are normal Hyperscale validators, just like the others. They just do some extra things, and that is: generate transactions and submit them to the network. And the spam script, or test script, that I'm using is a piece of Java code inside the Hyperscale node that first creates a set of wallets and a set of pools, and is then able to generate, at a variable rate, swaps between different tokens. Off the top of my head, I think we use six or eight different tokens that we swap back and forth over these pools, in different amounts, to try and generate real-world swaps. And that is what that swap script, or test script, is doing. Is that ELI5 enough?
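The harness Tiemann describes, pre-created wallets and pools plus swaps generated at a variable target rate, can be sketched roughly like this. The real script is Java code inside the Hyperscale node; this Python sketch only illustrates the shape of it, and all names and numbers here are illustrative assumptions.

```python
import itertools
import random

# Illustrative token set: the conversation mentions "six or eight" tokens.
TOKENS = ["XRD", "TOK_A", "TOK_B", "TOK_C", "TOK_D", "TOK_E"]

def make_pools(tokens):
    """Pre-create one pool per token pair, like the test setup's pool set."""
    return [frozenset(pair) for pair in itertools.combinations(tokens, 2)]

def generate_swaps(wallet_count, pools, rate_tps, duration_s, rng=None):
    """Yield (wallet, pool, amount) swap intents at roughly rate_tps.

    A spam node would submit each intent to its local validator; here we
    just produce the intents so the shape of the load is visible.
    """
    rng = rng or random.Random(0)
    total = int(rate_tps * duration_s)  # total intents for the test window
    for _ in range(total):
        wallet = rng.randrange(wallet_count)   # random sender wallet
        pool = rng.choice(pools)               # random token pair to swap over
        amount = rng.uniform(1, 100)           # varying amounts, per the tests
        yield wallet, pool, amount
```

In the real setup each spam node runs a loop like this in parallel, which is why generating the load across enough machines is itself the hard part.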
It is definitely ELI5 enough. I'm trying to think of a good way to continue the restaurant and recipe analogy, Tiemann. It's one thing having enough chefs to produce all the meals; it's another thing having enough customers reading the menu, asking for what they want, and the wait staff carrying orders back and forth.
Yeah. Or "it's easier to eat all the food than to actually cook it" is another one I was thinking of. Although, actually, it's the inverse: it's easier to cook all of the food than to eat all of it. Which also makes sense, because if you had a busy restaurant and told the chef he had to eat all the meals in one sitting, that's a lot harder than having a full restaurant with many, many people.
Yes, exactly. So that's also what I've been struggling with over the past few days. It is something I will solve; it's just a different strategy, a different mindset I have to get into: better understand what exactly Dan wrote for the test script, and make sure that it's fair. Because we don't want to run a test like, I don't know, there was a test from a competitor where they sent one token to 100,000 accounts and called it 100,000 TPS. Those are not the tests that we want to run. It needs to be fair, it needs to be balanced, it needs to be reproducible. And that's what I'm currently struggling a bit with. All went fine up to 250,000 on our old hosting provider; then things changed, and that takes a little bit of time.
And I think it's also fair to say that most of Hyperscale was very well documented from a code perspective: actually validating, and how the system works on that side. Whereas Dan personally ran a lot of the spin-up and the creation of the universe (that's the technical term for creating the network from genesis) and the spam nodes, and was altering those a lot. So there's a lot more to go through to understand what was doing what, and what the different configurations were, while Dan was also running a bunch of parallel research tracks on the validator side. So there's a bit of detective work
going on there.
Yeah, correct. And Dan was also running these tests partly by hand, not fully from the test scripts that I have access to, and that just makes it a little bit harder. It's like the secret sauce is in the chef's mind, and has always been in the chef's mind, and unfortunately I have to figure out what that secret ingredient is to really make it click.
I think that's a nice analogy.
So, because I've got a hard stop coming up: is there anything else from your side that you would like to put out there? Anything on your mind, Tiemann?
We're working on getting the Hyperlane assets onto CoinGecko. It was a request from the TC community channel, because the DEX volume on CoinGecko was lower than it should be: the wrapped assets weren't being counted by CoinGecko. So I've been working on that for the last few days. It's filling in a lot of forms to get them listed, but luckily they were really reactive as well; they immediately said, yeah, please do. So hUSDC has been approved; hUSDT, hETH, and hWBTC are submitted, so hopefully in a few days those will go live. And then I'll have a look at hSOL and hBNB as well, and I think then we have them all, but the last two are way lower volume. So we should see better DEX volumes on CoinGecko soon.
That is awesome. And the CoinGecko team have always been really helpful as well. I can't give them first place, though; first place among data provider teams goes to DeFi Llama. I'm happy to give lots of credit to many people, there are many awesome people in the space, but DeFi Llama are fantastic to work with, and I've heard that from many other people too. But yeah, thanks for doing that, Tiemann.
You're welcome.
From my side, I'm going to give a little hint as well: there is going to be a new route on Hyperlane coming up shortly.
It will probably be live next week and announced.
And that is going to give some additional access to XRD. I'll leave it at that as a tantalizing tidbit of information.
Well, thank you very much, everyone, for tuning in. Sorry it's a bit of a shorter show than normal. It has been a pleasure, and speak to you all very soon. Have a nice weekend, everyone.