Hello, so we're going to start this space today.
We have a great lineup of speakers today for this one.
The space title is "TVL: A Poor Metric for Predicting Crypto Returns?"
Let's discuss. And we have the authors of this research paper with us today. So I'll introduce the speakers one by one, and then Simon will be taking over the space today. First, we have Simon,
head of data analytics at Algorand Foundation. Hello guys, how are you? Great to be here.
Thank you. And next to Simon on Simon's team, we have Mark L.
Hello guys, great to be here. Hopefully we can have a great chat discussing the paper.
Amazing. We also have Michele.
And finally, we also have Matt.
Happy to be here and talking to you all.
Thank you. And over to you, Simon.
So, why did we see the need for this analysis?
So, as you all know, in Web3 there are ongoing debates from analysts, analytics platforms, investors, and VCs about TVL, its methodology, and the importance of this metric.
For example, what should be counted as TVL and what should not.
For example, as many of you know, RWAs and native staking pools are not counted as TVL.
So, since many strategic decisions are based on this metric, we try to come up with this analysis, with this challenge.
More TVL, is it equal to higher returns?
Because this will help us, obviously, in our strategic decisions.
And to be honest, the results that we got from this paper were not what we were expecting,
especially for those of us who are heavily into Web3.
As we all know, for example, TVL has some flaws in it. The famous one is
the double counting of certain tokens. When you put a token in a dApp, you receive a
wrapped token, and then you put that token in another dApp, and that's counted twice.
And some blockchains even gamify this TVL with temporary liquidity boosts.
Also, there aren't standards for verifiable TVL measurement on-chain; not everything is there.
So that is what kick-started this analysis. And let me ask a question to Michele. Michele, could the over-emphasis
on TVL as a metric distort incentives when protocols are designing these incentives,
or even mislead investors?
Simon, thanks for the question. I would start with the good things related to TVL and then
move on to the possible criticism. So let's say that at the beginning of the DeFi era, let's say 2022,
TVL for me was a good proxy to gauge, maybe in a rough way, the relative size between
L1 protocols, because it's a quantity that tells you how much capital is locked
in a particular class of DeFi, which is the
automated market maker. In this case the TVL is a very good metric, because it's strictly
related to the liquidity of this AMM, and so it is a measure of the efficiency of the AMM,
and in particular, it's inversely proportional to the price impact for the user.
So people could compare different AMMs by their TVL
and decide whether to go with one or the other,
because it could affect the performance of the transaction.
So in that particular case, which I wouldn't say is a corner case, but it is a limited case,
TVL is a really interesting metric.
Then I would say another positive aspect, maybe for behavioral reasons, is due to the intrinsic nature of the crypto world.
So it could be a good metric for an investor, because in crypto, what is lacking as of today is a robust and quantitative assessment of the value of a blockchain. So in some sense, the community and the investors relied on so-called herding behavior.
So people tend to mimic what's there and what the mass is doing.
And I tend to see it as a flocking mechanism: it's a way to defend yourself
from possible risk, because the more people there are
in these pools, the lower the risk of being exposed
to hacks, since more investors are there and more eyes are monitoring the situation.
So from a behavioral perspective, TVL was cool,
because it was a measure for making not rational,
but good decisions at the beginning.
But then, with the maturity of the DeFi space, it maybe turned out to be a one-measure-fits-all approach, which is limiting.
Because, for example, TVL is not able at all to measure capital efficiency: it is the quantity of capital located in an L1 or in
a protocol, but it tells you nothing about how the assets are used, what the
profit is, and the general gain for the overall L1. Moreover, it doesn't account
for real user activity, because it's just capital
sitting there; it's not related to real activity like volume or other relevant
quantities. And it does not tell you anything about the sustainability of a particular project. So if I can make an example, in our case, in Algorand,
we had to resort, in governance, to different metrics.
Because if you remember, the TDR, the Targeted DeFi Rewards,
were distributed at the beginning proportionally to the TVL of the projects in Algorand.
At some point we realized, and not only we at the Foundation, but the whole DeFi community,
that the TVL alone was not a good metric, and we moved to a blended metric for every project:
a 60% weight on the TVL, plus 20% on the fees generated by the protocol, plus 20% on the active users of the protocol.
So even in this, let's say, simplified case, which is deciding how to split the cake among the different
DeFi protocols in governance,
we realized that TVL had a limitation
and we had to move to a more complete metric.
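A minimal sketch of such a blended score (the 60/20/20 weights come from the blend described above; the share-based normalization and all the numbers are illustrative assumptions, not the Foundation's actual formula):

```python
def blended_score(tvl, fees, active_users, totals):
    """60/20/20 blend of a project's share of TVL, fees, and active users.

    `totals` holds ecosystem-wide sums used to normalize each metric into a
    share. The weights follow the blend described in the discussion; the
    normalization and example numbers below are purely illustrative.
    """
    return (0.60 * tvl / totals["tvl"]
            + 0.20 * fees / totals["fees"]
            + 0.20 * active_users / totals["active_users"])

# Hypothetical ecosystem totals and two example projects.
totals = {"tvl": 100_000_000, "fees": 1_000_000, "active_users": 50_000}
score_a = blended_score(40_000_000, 100_000, 5_000, totals)   # TVL-heavy project
score_b = blended_score(10_000_000, 400_000, 20_000, totals)  # activity-heavy project
```

The point of the blend is visible even in toy numbers: a project with a large but idle TVL no longer automatically dominates one with real fee and user activity.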
Let me finish with the following observation.
The problem is that, as we said, in crypto,
since it is very hard to evaluate the intrinsic
value, what happens if the whole community sticks to a particular metric? Then the VCs, if they
want to understand what is the best project to invest in, will look at the same metric.
Users will look at the same metric, and then there is a kind
of self-fulfilling prophecy: everyone is looking at this TVL, so
projects that are focusing on, maybe chasing too much, the TVL will emerge as
the winners, even if the underlying technology is not the best and the most efficient.
Thank you very much, Michele, that was super insightful. And I like the way you
reminded us a bit of the evolution of the DeFi rewards we had, and of the limitations and the good points of TVL. So thanks again.
And now I'm going to go to Matt and ask him a bit to explain, give us an overview of the paper,
the research paper which was conducted, the methodology used to measure the statistical relation between TVL and crypto
returns, and some other considerations on the paper. So it's up to you, Matt.
Oh, thank you, Simon. So maybe I'll talk a little bit about a broad overview of
the idea of the approach, of how we tested this, and then drop down into
a little bit of the data and the specific methods, how it was implemented.
So the overview of the method is to say that if TVL provides some information that's unique
to the market, that is not just redundant, then we should be able to construct portfolios
based on TVL that earn alpha.
And what alpha means here is these portfolios should
earn some return that is not just a linear function of other known market factors,
right? So the idea here is we're going to construct these portfolios. And then we're going to look at
the return on these portfolios and say, are these returns explainable by factors that we already
know? Among the factors that we already know, the
main one is crypto market returns. We also have small-minus-big crypto portfolio returns
and momentum returns. So this is a standard methodology in empirical finance. It goes back
to Fama and French in 1996, who used this methodology to test stock market anomalies,
to see if some anomalies that people found in stock markets were truly anomalies, were truly,
truly generated alpha, or were they just linear combinations of known market factors.
This methodology has also recently been used on cryptocurrency markets in a Journal of Finance
article in 2022. So it's a well-known approach. And I'll use this
analogy quite a bit. This is why we know you shouldn't construct stock portfolios on price
to earnings ratios, right? So good. So that's a little bit of an overview. I'll touch on a little
bit of the data set and then the specific method of how we implemented this in our paper.
So we constructed a data set, 335 cryptocurrencies.
Over 2023 and 2024, the idea is we couldn't go back further than 2023 because we need enough TVL data to be able to construct portfolios. So 2023, 2024, we used 335. So
these are any cryptocurrency that has been in the top 100 over that period. We didn't want to go
further and include more cryptocurrencies because of microstructure issues and
being confident in the price discovery process for,
you know, very small cryptocurrencies. We obviously excluded Bitcoin and stablecoins.
So now, the specific method, what we did. I should mention, you know, there are a lot of
robustness checks in here, so we use multiple measures of TVL: total TVL, TVL without double counting,
and then the most strict TVL, without borrowing, staking, and so forth. And we constructed
portfolios both on the level of TVL to market cap, I should mention, obviously,
that's TVL scaled by market cap, and also on the change in TVL to market cap. So basically, what we do in any given
week is we rank all the cryptocurrencies on our TVL measure, and then we form long-short
portfolios. These are zero-cost portfolios, so portfolios that shouldn't cost you anything to construct: we rank all these cryptocurrencies,
and then we buy the top 25% and short the bottom 25%. That will be the high-minus-low
long-short portfolio. And then we also construct portfolios on quartiles,
so this is borrow at the risk-free rate and buy the first-quartile portfolio, the second-quartile portfolio, and so on.
And so what we do is we construct that portfolio this week and then we look at the returns
over the next week, right?
And that's the return on our
TVL-sorted portfolio. So we rank the cryptocurrencies this week,
choose the portfolio, observe the portfolio return over the following week, and then we
rebalance weekly. We do this every week over the sample period, and this gives us our TVL
portfolio returns. We then look for any
portfolio returns which are significant. And if they're significant, we regress these returns on
a three-factor model, right? So crypto market, small-minus-big, and momentum portfolios.
But in general, all the TVL portfolios were explainable by the crypto market. So in sum, what we could say
is that any of these TVL portfolios are just basically leveraged positions
in the crypto market, right? So they're all explainable. There's no alpha once we regress these TVL portfolio returns on the crypto market,
which means there's no unique information being provided by TVL. I'll mention just kind of in
conclusion here that this is consistent with a lot of research that's coming out now. You'll see some papers on arXiv on improving the TVL measure. So, not to go too
far into implications here, but these results are consistent with
the idea that we need to improve on this measure.
It's not to say that TVL is irrelevant.
Definitely not saying that.
But what it is saying is that it's not providing unique information, right?
So you should not construct portfolios based on TVL.
I think I'll leave it there and then see if there's any follow-up questions.
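A rough sketch of the procedure just described: each week, sort on TVL, go long the top quartile and short the bottom quartile, then test whether the resulting return series has alpha against market factors. Function names, data shapes, and the plain-OLS t-statistic are illustrative assumptions, not the authors' actual code:

```python
import numpy as np

def long_short_return(tvl_measure, next_week_returns):
    """High-minus-low return: long the top TVL quartile, short the bottom.

    tvl_measure: this week's TVL (or TVL/market-cap) value, one per coin.
    next_week_returns: each coin's return over the following week.
    """
    order = np.argsort(tvl_measure)
    q = len(order) // 4
    low, high = order[:q], order[-q:]
    # Zero-cost portfolio: equal-weight long the high quartile, short the low.
    return next_week_returns[high].mean() - next_week_returns[low].mean()

def alpha_tstat(portfolio_rets, factors):
    """Regress portfolio returns on factor returns plus an intercept.

    Returns (alpha, t-stat of alpha). An insignificant alpha means the
    portfolio is explainable by the factors, i.e. TVL adds no unique
    information beyond them.
    """
    X = np.column_stack([np.ones(len(portfolio_rets)), factors])
    beta, *_ = np.linalg.lstsq(X, portfolio_rets, rcond=None)
    resid = portfolio_rets - X @ beta
    dof = len(portfolio_rets) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0], beta[0] / se
```

In the paper's result, once the crypto-market factor was included, the estimated alpha of every TVL-sorted portfolio was statistically indistinguishable from zero.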
Thank you very much, that was great, guys. What we will be doing is, I will be
taking another couple of questions and answers with Matt,
Mark, and Michele to explain everything and the roadmap we have,
and then we'll open up for questions from anyone in the community.
When you were doing the analysis, did you analyze L1s specifically as well?
Or only the full set of 335 cryptocurrencies together?
So we ran this analysis on a subset of L1 cryptocurrencies
and then a broader set. So absolutely. In terms of robustness, we're looking to see,
you know, can we classify cryptocurrencies based on L1 or the type of cryptocurrencies and see if
TVL is useful there, right? So our results specifically for L1 were broadly similar
to our results for the entire set of cryptocurrencies,
that TVL-sorted, L1-only portfolios
are a linear multiple of the crypto market return.
Simon, you're on mute if you are talking.
Okay. Thanks, Matt, for your reply.
Now, I pass to Mark Loparto.
Was it easy to get the data sets that we needed for this analysis?
What data cleanups were needed?
What categorization and labeling was needed?
And from which sources did you get this data?
Well, thank you for your question, Simon.
To get this data, when we started the analysis,
we had to set up the basics, like which set we wanted to analyze.
So we ended up taking the top 100 currencies from CoinMarketCap from 2020 to 2024, basically, to avoid having survivorship bias in the set, even though the analysis is done
on the years 2023 to 2024, basically because of TVL data availability. Market data was mainly pulled from CoinMarketCap and CoinGecko. And there we had to do some label restructuring,
because there are some projects
that were at the top that have been affected since.
Also, these data providers work with different labels,
so we had to do a kind of standardization to be able to pull
all this data together. As for TVL data, we basically pulled from DefiLlama all the different
TVL measures that are available there, as Matt mentioned before: with double counting, without double counting,
or the most strict, simplest TVL available.
And then we applied some labels to run
these individual group analyses, having L1s only,
or maybe more DeFi-oriented projects,
to spot different patterns between those groups.
And all those labels were basically guided
by the biggest providers,
such as CoinGecko and CoinMarketCap.
So that would be a little bit
of the whole process we've done with the data.
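As a small illustration (hypothetical data shapes, not the team's actual pipeline), the survivorship-bias-free universe Mark describes — every coin that appeared in the top 100 at any point over the window, not just the survivors at the end — amounts to a set union over periodic snapshots:

```python
def build_universe(top100_snapshots):
    """top100_snapshots: list of lists, each holding one period's top-100 symbols.

    Taking the union over the whole window (rather than just the final
    snapshot) keeps coins that later fell out of the top 100, which avoids
    survivorship bias in the backtest universe.
    """
    universe = set()
    for snapshot in top100_snapshots:
        universe.update(snapshot)
    return sorted(universe)

# Toy example: 'DEAD' was top-100 early on, then dropped out of the rankings.
snapshots = [["BTC", "ETH", "DEAD"], ["BTC", "ETH", "SOL"]]
universe = build_universe(snapshots)  # 'DEAD' is kept in the universe
```

Conditioning only on the final top 100 would silently drop the coins that performed worst, biasing any return study upward.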
Thank you very much, Mark, for the great explanation.
Now, from an analytics perspective,
if you follow analytics platforms like
Messari, Artemis, Token Terminal, and others, we're seeing that these platforms are also challenging TVL.
And in fact, some of them are no longer treating it as a key metric, but rather as a secondary metric.
Others, for example, like Blockworks, have introduced the concept of real economic value, which they say is much better than TVL.
Dune and L2Beat, for example, leave it to the users; they provide a decentralized TVL interpretation.
On Dune, for example, you decide what you want to show and construct.
And L2Beat, which focuses on layer 2s, has come up with the concept of total value secured rather than TVL.
Nansen, for example, analyzes TVL marginally but focuses more on smart wallet movements. Flipside Crypto are not using TVL much anymore,
and they are focusing more on user retention and user growth.
So the research that Algorand Foundation did
is in line with what's happening in the industry.
And as Matt mentioned, there are some other journals and papers which are coming out with these analyses as well.
Matt, before I move on to more questions, I wanted to ask you something.
In your analysis, did you take care of the double counting and other flaws which we had mentioned about TVL?
Yes, absolutely. So the idea is we use multiple measures of TVL, knowing the research that has come out that has pointed out the double-counting problem and tried to improve TVL. Knowing
that, we used as many measures of TVL as we could.
And if I could just add on here, it makes sense, you know, what you're mentioning,
what Blockworks, Nansen, Flipside, and so forth are doing,
they're sort of moving away from the TVL measure.
You know, that absolutely makes sense.
I do go back a little bit to the price-to-earnings analogy: what we're saying here is similar to research that was done in the 90s on stocks, that you shouldn't construct stock portfolios on the price-to-earnings ratio.
Yet for any stock, one of the first metrics you're going to see posted is the price-to-earnings ratio. So absolutely, if I were posting
metrics, I would try to move away from TVL. But to the extent that they are posted, there's nothing
wrong with it. It's just that just like in stock, we all know don't construct a portfolio on price
to earnings. If you ran a hedge fund and said you were going to construct portfolios on price to
earnings, people would think that was strange, right. So similarly, it's fine to look at the measure and quote it,
but I wouldn't construct portfolios on it.
And it certainly makes sense to look for other measures,
which like you say, which we did in ours,
tried to use multiple measures of TVL.
Thank you very much, Matt. Very informative. Now, Matt,
so if TVL is not that important, what are the next steps in our research journey? What
metrics are we going to look at?
Is that for me or Michele? So I can touch on it a little bit.
Yeah, so we're going to start looking at some wallet-level activity measures.
Can I go a little bit into, I guess, maybe the next part of the analysis?
Yes, yes. Oh, okay. Yeah, absolutely. So, you know, some of this
is driven by some of the feedback we got on X and LinkedIn
about the paper, from some of the people I think that are here, saying, well, you know, maybe TVL might not be
useful in and of itself. However, if you use TVL in conjunction with some other measures, then that might be informative. That might generate alpha. So absolutely, we took that feedback and
we're going to use, we're creating a data set of some other measures, again, wallet-level activity
and so forth. And we're going to create some multi-sort portfolios, meaning we're going to
sort on TVL and, you know,
let's say active addresses and see if TVL in conjunction
with some of these other measures
will generate some unique information
that's not already sort of priced in the market.
Yeah, so that's kind of the next approach
is to use some additional measures.
And I'll let Michele go into some measures
that we're looking into.
Sure. But first of all, I would like to stress, I would like to abstract a little bit, because I think it's very important to observe that this research is
like a crossroads of, let's say, three realms.
That is: the scientific method, because building a portfolio to extract value
is a reproducible and accountable way to assess
whether a quantity is relevant for economic activity.
Then it is all related to traditional finance and traditional econometrics, as Matt works in that field: he's leveraging all the mathematical tools that
have been developed so far in traditional finance, but looking at the present slash
future, which is, let's say, crypto finance. It's important to stress that here,
at a high level, we try to assess quantitatively the value of the blockchain,
of our blockchain, but also of other blockchains. Why? I would like to use the metaphor that Matt used when discussing this paper: that trying
to find the value of an L1 recalls the famous Black-Scholes formula, which
revolutionized traditional finance. So I'll let Matt explain a little bit the role of Black-Scholes
and what it could imply for the crypto landscape in general,
not only for Algorand, but for all of crypto.
Sure. Yeah, absolutely. Thank you, Michele.
So we had a discussion a little bit on the history of the Black-Scholes model. Prior to Black-Scholes in '73, people tried to value options, but you had to know the expected return on the stock, and you don't know that. So we had no real way to confidently value
an option, and hence options were not allowed to trade on U.S. exchanges. Black-Scholes,
you know, came along with their, you know, very innovative approach where we could value an option.
Basically, they created a hedged portfolio and valued the hedged portfolio. And the
brilliance of this is that it subtracts out the expected return on the stock: you would otherwise need to know the expected return, but it drops out.
So you can value an option without knowing the expected return.
So long story short, this gave us a coherent method to value options, and that mattered for regulators.
Once you could say, this is the value of the option, the Chicago Board Options Exchange, you know, was
approved and allowed to trade options the following year. So the idea here is to the
extent that we can get a good, coherent method to value cryptocurrencies, that matters a lot
to regulators. Once we have that, it opens up the field.
In other words, it satisfies regulators and makes it easier to create exchanges and so forth. So
there's a little bit of an analogy between valuing, sort of trying to get a coherent value
of these cryptocurrencies, and how a coherent value of options allowed a market to exist where one didn't exist before.
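For reference, the valuation Matt describes is the standard Black-Scholes call price; note that the stock's expected return appears nowhere, only the risk-free rate and the volatility:

```latex
C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad
d_1 = \frac{\ln(S/K) + \left(r + \sigma^2/2\right)T}{\sigma\sqrt{T}}, \qquad
d_2 = d_1 - \sigma\sqrt{T},
```

where $S$ is the spot price, $K$ the strike, $T$ the time to expiry, $r$ the risk-free rate, $\sigma$ the volatility, and $N$ the standard normal CDF.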
Thanks, Matt, really enlightening. I will add a view, still from my point of view as a physicist:
if we look at what we discussed before, the usual crypto investor
is an investor who moves with the herd; we
could define them as momentum traders. There are interesting studies in agent-based modeling
which clearly find that in a population of all momentum traders,
that is, traders who are following the trend,
there are, let's say, intrinsic instabilities in the market.
But as soon as you introduce a decent amount of fundamentalists,
who are traders trying to trade according to
the fundamental value of, in that case, the stocks,
but we can extend the analogy to crypto,
we find that the market is more stable.
And as we said, being more stable, having more, let's say,
certainty on the valuation, is a way for regulators to allow
unsophisticated investors to also enter the market,
and that would mean a huge uptick in the overall crypto system.
So it's like a movement that we would like to start that is beneficial for the whole community.
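The flavor of those agent-based results can be shown with a deliberately toy simulation. All coefficients and the demand rules here are illustrative assumptions of my own, not the studies Michele cites:

```python
import numpy as np

def simulate(n_steps=500, frac_fundamentalists=0.0, seed=0):
    """Toy agent-based price model.

    Momentum traders chase the last return; fundamentalists push the log price
    back toward a fixed fundamental value. Returns the series of log-price
    deviations from that value. Parameters are illustrative, not calibrated.
    """
    rng = np.random.default_rng(seed)
    p, prev_ret = 0.0, 0.0                        # log-price deviation, last return
    path = []
    for _ in range(n_steps):
        momentum_demand = 0.9 * prev_ret          # trend followers
        fundamental_demand = -0.5 * p             # mean reversion toward value
        demand = ((1 - frac_fundamentalists) * momentum_demand
                  + frac_fundamentalists * fundamental_demand)
        ret = demand + rng.normal(0, 0.01)        # price impact plus noise
        p += ret
        prev_ret = ret
        path.append(p)
    return np.array(path)

# With only momentum traders, the price wanders far from fundamental value;
# mixing in fundamentalists keeps it anchored.
dev_momentum = np.std(simulate(frac_fundamentalists=0.0))
dev_mixed = np.std(simulate(frac_fundamentalists=0.5))
```

With everyone following the trend, returns feed on themselves and the price drifts arbitrarily far from value; the fundamentalists' mean-reverting demand makes the dynamics stable, which is the stabilization effect Michele describes.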
Thank you very much, Matt and Michele.
That was very insightful.
Now, I'll ask you the last questions, Michele and Matt,
and then we'll open discussions for the community.
At Algorand and many other blockchains,
everyone is studying economic sustainability. Transaction
fees, MEV, inflation, priority fees, fees to validators, fees to nodes, what's the best model?
Michele, can you enlighten us a bit on what's happening at Algorand in that area and where we're moving?
And this is why we're doing this research, guys.
It helps us build an analytic framework
for these interesting upcoming decisions.
Yes, so essentially this study is an important study,
but it's the first step of a wider project related to what we could call our long-term economic sustainability.
That is, essentially speaking, how can we improve the design of our protocol and, let's say, the other processes around the protocol in order to favor self-sustainability?
We all know that we started in January remunerating consensus
with, let's say, the 10 Algos at the beginning,
which are now decreasing, plus 50% of the collected fees.
This had a huge impact on the security of the network,
because, if you remember, we were on the verge of less than 1 billion of online stake,
and now we are sitting around 2 billion securing the network, of which 400 million belong
to the Foundation for, let's say, strategic reasons.
But the idea is to gradually decrease our presence
in the online stake as soon as organic validators enter in the game.
So we could say that it has been a success,
but we have also to admit that it is intrinsically a temporary measure for assessing the self-sustainability.
Because the 10 Algos, which are now 9.5 or something, are coming from the Foundation treasury.
And there is, as of today, a commitment for a pretty finite number of months to be there and fund the
fee sink, which is in the end funding the disbursement. But we all know that at some point the
Foundation treasury will run out of Algos, and so we are thinking about ways to move from this temporary measure,
or better, rather than move, to introduce new ways
of being sustainable in this particular aspect.
So the big question is how we can remunerate the block producers,
and what are the current and, let's say, usual
ways to do it. And for this we are trying to benchmark against
other protocols, and in this aspect we have to find ways to really compare apples with apples,
because some things, for example, the TVL,
let's take the TVL as an example.
If you go to DefiLlama,
you see that the first protocol is Aave,
and then we have Lido, and then Binance staking, and so on.
So except for the first one, the other top projects are staking protocols, right?
So you realize that a huge part of the TVL, which is measured, let's say, in this naive way,
belongs to the category of staking.
In our case, out of the two billion,
we could say, I don't remember by heart,
let's say 400 million of these two billion are staking.
But due to our technology, the remainder,
let's say 1.4 billion, are solo stakers,
who are able to participate in the protocol
without an abstraction layer.
So in this case, we have to understand,
we have to make a quantitative and robust analysis
of these quantities, but comparing different architectures
and trying to understand what differs from what.
Okay, so this is a first study, but we are going to explore, let's say, different ways of being sustainable.
Essentially, the idea is to learn from what Ethereum, and I would say more Solana than Ethereum, are doing in order to resolve this problem.
And we can say essentially that we have identified three ways of addressing this sustainability question.
First, collecting fees, and we are there.
We are collecting fees, but we know that our current volume is not sustainable, because that's the
point: fees alone are currently not enough to sufficiently fund block
production. But then we also see that inflation is present in all the
relevant blockchains. And moreover, what to me personally is very interesting is the MEV,
let's say, process, and in particular MEV profit redistribution, that is, finding ways to
use MEV as an incentive for people to stake their tokens and participate in consensus.
So, summarizing, we would like to explore this direction together, by doing something similar to what we have done now,
with all the careful scientific aspects
taken into account,
but also trying to gauge what other projects are doing
and what discussion is currently ongoing,
because there is not, let's say, a generally accepted way of tackling this problem, right?
We have seen a huge discussion in Solana about the last proposal,
SIMD-228, which was rejected, by a hair, let's say,
and we are seeing in Near a re-discussion of the inflation model.
There is a huge amount of discussion about what
the next directions are in these teams. So it's not a matter of just identifying a solution,
but really a matter of discussing possible solutions, which could be
implemented with different blends among each other.
Wow. Thank you very much, Michele.
That was very, very insightful.
I don't know, Matt, if you want to add something on this topic
or on the paper, which we didn't cover.
Yeah, I can, you know, of course,
second everything Michele said,
but I'll add briefly that sustainability, economic sustainability, is always at the forefront of our minds in what we're doing.
And what we would like to do is bring a sort of robust empirical method to sustainability, to understanding what will improve sustainability. So at the core of
what our TVL paper did is it sort of sets a challenge. It says: okay, tell me how to construct
a portfolio based on TVL that will generate alpha. And we construct that portfolio
and we test it, see if it'll generate alpha. And then anyone can give us feedback and say, well, I think you should construct the portfolio this way, right?
So construct a portfolio that way, see if it generates alpha, test it, right?
So what this is, is just a robust empirical testing procedure where we can come up with a hypothesis,
you know, TVL in conjunction with this factor, and then we test it. And it's that sort of methodology, that sort of careful empirical approach,
that I'd also like to take to sustainability, to make sure that we understand
the levers that we're pulling,
that there are relationships between these variables.
And so that's kind of a first step in that endeavor.
Thank you very much, Matt. Now I would like to open questions from the community
on what we discussed and hear your feedback.
Anyone?
Yeah, we'll open the floor for a few minutes. So if you have a question, just request and we'll bring you on stage. Yep, Governor Hutt.
Yeah, I didn't know if... I didn't hear any noise. Yes, I had three brief questions. So
I think that most people would recognize that TVL, without anything behind it, can be gamed.
If nobody's trading, if there's no utilization, it doesn't matter.
And it seems as if there's, you know,
a fair bit of work put into trying to figure out
something most people would know intuitively.
So my first question is:
why wasn't this done sooner, before putting
a bunch of money into DeFi governance and TDR?
The second question is about things being framed in the negative versus the positive.
My personal opinion is I don't think many people care
when they're told, hey, this thing doesn't really matter.
I think people want to be told what actually matters and, you know,
how that's being acted upon. Is the goal here to eventually find things that matter
to the price, and then focus on them? And if so,
can you share what that looks like? And then the third thing is:
are you looking at the effect that generous staking rewards have on DeFi utilization?
Because part of the problem that you all had
for years was rewards going to governance and that cannibalizing DeFi activity,
which I would argue cannibalizes trade volume, which drives TVL.
Is there any effort by the economics team to look into that, to see if it's having that same effect, and to address it?
Sorry, but I didn't understand the first and the third questions. Maybe, Matt, you want to address the second?
I could, actually. I got the first question, and then I didn't catch the third, so maybe repeat that. Let me jump in and just talk about the first: why wasn't this done sooner? Fundamentally, it's a data issue. It is only in 2023 and 2024 and later that we had significant amounts of TVL, in other words, enough to construct portfolios and test this. So what's coming out in research now, in some papers like the ones I mentioned on arXiv, is that there are theoretical problems with TVL, right? People are noting it from a theoretical angle, and we're jumping in with empirical analysis confirming it. And again, it couldn't have been done sooner simply because the data didn't exist.
Now, on the second question, one thing I just want to reiterate, in terms of saying TVL doesn't matter: I go back to the analogy. I'm definitely not saying TVL just doesn't matter generally. What we're saying is that it doesn't affect returns. The idea here is that TVL in and of itself doesn't provide unique information about future returns beyond that provided by the crypto market generally. And why would we do this? We do want to understand what affects cryptocurrency returns, and to do that we have to go through the space of factors and see which things do and which don't. TVL is a very prominent measure, so we want to make sure that when we're pricing, we understand its effect on returns.
And then, yeah, I'm happy to jump to the third one too.

It was: is there any analysis being done into the effect that staking rewards above and beyond what would naturally occur from chain usage have on DeFi utilization? Because there was a long time when people were paid handsomely just for being in governance, and I would argue that severely cannibalized DeFi utilization. There were programs to pump up TVL, but what they produced was non-meaningful TVL, in that it isn't being utilized. My concern is that there wasn't forethought put into the amounts, into how the incentivized staking and the focus on it would impact DeFi. Is there analysis being done into how those incentives, at various rates, affect behavior as it relates to trading, and into tweaking them so that the same sort of problem that existed in the past doesn't recur?

So yeah, I'll take that briefly and then turn it over to Michele.
Absolutely. It sounds like you're talking about relative interest rates between cryptocurrencies. So absolutely, we're looking at that. This is a framing similar to international finance and relative real interest rates between currencies, a sort of relative pricing approach. I don't want to get too far into it because we are actively doing research on this, and I won't quote any results at this point, but it's a very good point. Yeah, it's something we're looking into.
Yeah, I think you're maybe thinking of it as comparative across chains. What I'm talking about is that people will seek out the highest risk-adjusted return they can, and if you incentivize one thing, you push people away from another. I'm wondering if there's any analysis being done as to whether the things we're pushing on are having the effects we want.
If I may add, I tried to see your point, and I think I have understood. You are talking essentially about evaluating the possible impact of the TDR on our DeFi, if I understand correctly. Is it related to Algorand in particular, or is it an overall question?

The bonus incentivized staking amounts, and the effect that has on the wider ecosystem.

But for Algorand in particular, or in an abstract way?

You can do it in particular, or across chains to the extent you have comparables, but I would be, you know, I think that...
Yeah, okay. I mean, it's a super complicated matter, but essentially the idea is to understand what is the desirable APR that a block validator should have in order to be competitive with peers in the validation space. That is the first level of possible analysis. Now we are at around 5.5% APR, given, let's say, 2 billion of online stake and the current reward level of 9.5 Algos. Is it enough? Is it not enough?
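As a rough sanity check on the figures just quoted, assuming (hypothetically) that the 9.5 Algos is a per-block reward and an average block time of roughly 2.8 seconds, the quoted APR follows from simple arithmetic:

```python
# Back-of-the-envelope APR check. All three inputs are assumptions
# taken from the discussion, not confirmed protocol parameters.
block_reward = 9.5          # Algos per block (assumed)
block_time = 2.8            # average seconds per block (assumed)
online_stake = 2e9          # ~2 billion Algos online (as quoted)

blocks_per_year = 365 * 24 * 3600 / block_time
annual_rewards = blocks_per_year * block_reward
apr = annual_rewards / online_stake
print(f"APR ~= {apr:.2%}")  # lands near the quoted ~5.5%
```

Under these assumptions the result comes out around 5.3%, consistent with the ~5.5% figure mentioned.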
This becomes a harder question because, as you correctly pointed out, you have to take into account the risk, but also the cost of participating. And in this case things get quite complicated, because you are right. For example, risk-adjusted: what's the risk of running a block validator in Algorand with respect to running one in Ethereum or in Solana?

I think you're missing my point. It's not just comparing this APR versus that APR between chains. It's about what happens within an ecosystem. There's also a secondary component that I think is not being looked at, which is: within an ecosystem, if you incentivize one thing, it's not a natural APR, you're incentivizing it, right? And when you incentivize that thing, how does that impact, for instance, trade volume, utilization of platforms, and the like? Because every time you have an incentive, it has knock-on effects elsewhere.
OK. So you are referring to, let's say, the period in which the governance rewards were incentivizing DeFi, because as of today that's not present anymore, right?

Correct. And there's a happy medium in there somewhere, but I'm curious if anybody's looking for it.
No, it's a fair question, but we never explored this aspect quantitatively. We have looked at evaluating the increase in users and in volume with respect to this, but it would be a fair and interesting study. I don't know if we have enough data points to extrapolate and draw conclusions. But the rationale behind putting the DeFi rewards on top was that a risk-free rate, which was the plain governance rewards, was in some sense in conflict with DeFi, because people had to decide whether to stay in general governance or to be in the DeFi space through gALGO or through other ways. So, as you correctly pointed out, there was the need to remunerate the excess risk with respect to general governance. That was the rationale behind that approach.
Is there any other question?
Yeah, I think you wanted to speak, so you can ask your question now if you want.

Yeah, my question is regarding the sustainability of the chain. I heard that MEV is being considered as a way to extract value and share it with the node runners, but maybe it actually reduces the user experience of the chain itself. So, between making the chain sustainable and preserving the user experience, what is your take? Will it work out?
It's a super interesting question. Let me say, I mean, it's super complicated and would need a dedicated session; maybe in the future we are going to discuss it. First of all, MEV as of today is not extractable, because the algod client is not able to reorder transactions, so a proprietary, let's say, evolution of the client would be needed. As of today, we are not seeing MEV extracted on Algorand. But the broader question is: supposing that MEV is extractable, would it be a crucial ingredient for making Algorand as a whole sustainable?
If we ask this question about Solana, and you look at the numbers, and Simon cited Blockworks, which I think is super interesting for looking into the data, we see that almost 30 to 40% of validator profits come from MEV extraction, in particular through the Jito abstraction layer. So in Solana's case, it is a crucial ingredient for sustainability.
But you're right that MEV introduces possible, let's say, bad scenarios for the users. Once again, we have to understand whether we can afford not to have MEV and still be sustainable at the same time. On one side, it's a matter of understanding whether the options we have on the table are enough to make the chain sustainable. On the other side, MEV as a whole is not a universally negative experience. Some particular cases of MEV can be detrimental for the user, in particular sandwiching and front-running. But, for example, MEV extraction that reduces or removes arbitrage gaps across DEXs is a healthy activity and is useful for the overall community, because it reduces friction and, at the end of the day, reduces the price impact of transactions. So it needs a thorough discussion. But personally speaking, I would say we have to play with the cards that are currently available. Maybe in the future there will be other interesting ways to be sustainable, but as of today, if we look outside MEV, inflation or fees collected at the protocol level are the only ways to be sustainable. So I wouldn't discard MEV just because it could create friction for users.
But my question is: in Solana, maybe they are mostly trading meme coins, but when we look into the future, where there will be on-chain trading of stocks, bonds, and so on, at that time on-chain trading will be compared with traditional finance trading. We have to make sure that our MEV will have a positive impact, making trading on-chain a better experience rather than a worse one. That is where my thought process was coming from.
I agree. We have only a few minutes left, but let me say this. I personally think that if MEV emerges like in the Jito, let's say, paradigm, it's something that is in the end redistributed among the validators. But if you think about it, MEV extraction in some sense is already present in another way, for example on Algorand: people are spamming transactions in order to be first in a block, to extract the value of being first. So in the end, I would say, personally speaking, that MEV is somewhat unavoidable. There are opaque ways to extract MEV, for example co-locating with relay nodes in order to be the first to get a transaction in, or spamming the entire network, once again, in order to be the first transaction received. But in those cases we are not letting the value of MEV emerge; it stays in the hands of those who are tech-savvy, who are very good at designing architectures to be faster and more efficient. So my personal view is that at some point MEV is always relevant; we have to decide whether we want to let it emerge and redistribute it, or leave it opaque.
Yeah, thank you, thank you for that answer. That's all from my side.

Thank you. Thank you very much, guys. We have to end it here, it's time. I thank Michele, Matt, Mark, and all of you who participated with your interesting questions. Thanks very much, guys.

Thank you, Simon. Thank you.