Hi everyone, thanks for tuning in.
We'll give folks another couple of minutes to join and then we'll get started.
All right, I think we can get started as it's five past the hour.
All right, everyone, welcome to today's AMA.
My name is Eric, social media manager and a core contributor here at NodeOps.
Today, I'm stoked to be joined by Virtual Bacon,
who is the founder of Momentum 6,
and Harry, who is the Docs lead at NodeOps
and also a core contributor at NodeOps Network.
Considering the magnitude of today's news
surrounding our token going live on Monday,
and having just released our tokenomics,
we thought it would be suitable to host this AMA
to dive a little deeper into some of the key principles
underpinning our tokenomics model,
some of the thought processes and decisions
that drove this design, and much more.
So what better way to do it than with our two guests today?
I'm truly, truly excited for this one today.
The audience will be able to ask both myself and our two guests today
questions in the last 10 minutes of the AMA.
So make sure to hang around until the very end
for a chance to ask away and hear answers to your questions.
That said, thanks for joining us today, Harry and Virtual.
Why don't we start with some brief introductions
before we jump into the discussion.
Virtual, if you want to start.
Hey everyone, I'm Virtual Bacon. I create a lot of videos, and I've been doing angel and VC investing for
about six years with a firm called Momentum 6. We've been working with the NodeOps guys
since pretty close to the beginning, and they have been one of the fastest builders and shippers, especially in DePIN, with all the infra they've created
around hosting DePIN nodes, making it easy for everyday retail investors
to not only get their hands on the nodes,
but also run them with no friction.
I myself run a lot of nodes through them.
So yeah, I'm excited to talk about their own approach
to the tokenomics and to decentralization
and everything you guys might have questions about.
Thank you very much. Harry, if you want to provide a little primer on your background
and then we'll get started.
Hey, guys. So I've been Docs lead here at NodeOps since January this year. Big fan of
Virtual Bacon's channel; I keep an eye on it, and yeah, it's been exciting times.
Those of you who followed us from the beginning will be fully familiar with
the Node-as-a-Service product that Virtual Bacon was discussing there. And of course, if you've
been close to the community recently, you know that we've expanded that offering into Security Hub and Staking Hub.
And now, of course, the Verified Compute service is very close to mainnet with our announcement that went out today.
And Virtual, you mentioned something there, considering your position as an investor in the team from early on.
So before we get into the specifics of today's development around the tokenomics, I wanted to ask:
would you be able to share some insight into what drove your decision
to trust and back NodeOps?
Was this something that you noticed about the team?
Was it just something about the solution and the vision that immediately resonated with you?
It will be, I think, interesting for the audience to hear your insight on the matter.
Yeah, I think, especially at an angel level and very early, like pre-product,
pre-mainnet anything level, it's really based on the founder.
So when we first chatted, another partner of ours introduced us to NodeOps.
And back then, DePIN was just starting off a bit.
There was core technology expertise, I would say, from your team,
but it wasn't focused on DePIN yet.
It was initially more like a node operation.
I think it probably is not a coincidence because your team's background comes from infra,
comes from actually hosting nodes for staking for validators, et cetera.
But that is kind of a crowded space and it requires a lot of capital.
Whereas with these upcoming DePIN retail nodes, it's more about friction and user experience,
how people can maintain their nodes without having their computer on 24/7
or needing technical expertise. So you guys pivoted literally within the week of recognizing
this potential. And really that was when I was pretty convinced on backing you guys, because
wherever you see an opportunity in solving problems for end users, you guys are first to do it.
And I remember this was when my community was participating in the Xai nodes, and you were the only available third party to stake with,
to run the nodes with, et cetera,
while everyone else was literally keeping their machines on
and having to deal with gas fees and all of that.
So yeah, after that it was just constant, practical iteration
on whatever people were having trouble dealing with,
and you just simplified that as a SaaS model.
And now, you know, excited to hear about what your plans are next.
Appreciate the summary; that was super interesting.
Now, to get into the ins and outs of the discussion at hand,
Harry, why don't we start with the problem statement
that drove us to look for further innovation,
for something more dynamic, and that's not a random word.
How have different networks suffered from fixed inflation models?
Yeah, sure, I'll take that.
So when mint follows a predefined schedule,
which is pretty typical in many DePINs,
that means only the burn is dynamic.
The negative outcomes of that often include inflation and price volatility.
If you think about it, if demand for the network's services surges, this can raise the price of the token,
but often that's short term. And then if demand declines, there's an oversupply relative to
demand. That oversupply means each token is less scarce, and until demand catches up, the purchasing power of each token declines.
And then there's a compounding factor to this price volatility, which is that if the token price is not pegged to any external factor, then the service costs and rewards also swing widely. What that means is the supply
side might be overpaid or underpaid during volatile times, so there isn't
actually an ROI, a return on investment, that's guaranteed for the very people providing
the network's service. Does that make sense?
Totally, totally, 100%. The way you explained it is very clear. And Virtual, given the systemic
exposure and view that you have from your position in the space, I'm sure
a lot of projects come across your desk on a weekly basis,
so you have visibility from a bird's-eye perspective.
How do you think older models such as fixed mint schedules or scheduled halvings
fall short today, given the sort of market that we're currently in?
So I think for DePIN projects,
you really have to have that balance of, again, demand and supply.
So the demand comes from actual users of the network.
If it's a GPU network, they use the GPUs for rendering, for AI.
If it's Helium, which we'll talk about in a bit as a good example, they come on the network to use the mobile service or whatever hotspot service they have.
And then that theoretically should match up with how much supply is generated in new tokens minted.
But usually, it's very simple.
The inflation is usually fixed,
whereas the demand is just free-flowing.
So what ends up happening is it doesn't really adjust.
And sometimes, in a bull market, it's okay
because people don't really care about the demand as much.
They just buy the token anyways because of the narrative.
But then you see dramatic drawdowns in a bear market, where the constant
inflation is still coming out.
Even at those high bull market price levels,
there are too many rewards being paid out while the service
providers are not serving nearly as many users.
So then they just basically print free money,
and that leads to a lot of sell pressure, which ultimately hurts the
project long term. So yeah, that's the basic summary there.
I think we've all seen this in the back of our minds;
it's just a pretty common pattern now.
All the models we've seen from last cycle
don't really work past the first bull-bear phase.
And Harry, I think I'll put these two questions together; maybe you could
use the first part of the question as an assist for the second part. I wanted to
ask what pushed NodeOps to abandon static models when we were looking at the
available options, and how the thought process behind that decision
led to us integrating the dynamic mint and burn mechanism. And yeah, if you could
provide a quick primer on this important mechanism that pretty much governs and underpins our entire tokenomics design and approach, please.
Of course, can do. So we recognised, as Virtual Bacon just said, that static models make
the long-term economic sustainability of a project suffer,
and that they're just too blunt an instrument for a service as important as cloud compute.
There's basically no point in fixing the pain points in the current hyperscaler cloud model, only to go and introduce risks at the actual coordination layer.
That risk of runaway inflation or insufficient marketplace incentives is simply intolerable,
especially when you consider that demand for compute
resources can fluctuate dramatically with the lock-in-free model that our cloud marketplace
offers. So NodeOps chose to avoid these limitations in favour of an adaptive, dynamic approach that can
respond to market conditions and ensure sustainable growth.
So what that means is, as we said before, many DePINs apply a static mint, or they might apply a fixed mint decay rate where the mint starts high and goes low,
in recognition that they're trying to bootstrap the supply side
and attempt to control for inflation long term.
And of course, in those normal models, burn is then dynamic based on the demand side.
Where we differ is that we have dynamic mint and dynamic burn.
This is an algorithm where both the token issuance and its destruction are adjusted in real time based on network conditions such as
demand and usage. So unlike the static models, dynamic mint and dynamic burn allows the
protocol to adapt to changing market dynamics, and what our modelling shows us is that this should ensure that token supply and incentives remain balanced and sustainable.
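To make that concrete, here is a minimal Python sketch of what a dynamic mint-and-burn update could look like. Every function name, parameter, and formula here is an illustrative assumption, not the protocol's actual implementation, which is only described at a high level in this conversation.

```python
# Minimal sketch of a dynamic mint-and-burn epoch update (illustrative only;
# names, parameters and formulas are assumptions, not the NodeOps implementation).

def epoch_update(demand_usd, usage_ratio, token_price_usd,
                 base_mint=100_000, target_burn_mint=0.2):
    """Adjust both mint and burn for one epoch from observed network signals."""
    # Demand side: tokens burned scale with paid usage, valued in USD.
    burned = demand_usd / token_price_usd

    # Supply side: mint scales with how much provisioned capacity is actually
    # used, instead of following a fixed emission schedule.
    minted = base_mint * usage_ratio

    # Keep issuance from drifting too far below the target burn/mint ratio.
    if minted > 0 and burned / minted < target_burn_mint:
        minted = burned / target_burn_mint

    return minted, burned

# Example: $50k of services bought, 60% of capacity in use, token at $0.50.
print(epoch_update(50_000, 0.6, 0.5))
```

The point is simply that both sides, issuance and destruction, move with observed demand and usage rather than following a fixed schedule.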
I mean, I've heard you explain this even during the podcast that we did with Vault Capital,
and it's always refreshing to hear this answer because it keeps becoming clearer.
I'm sold, but of course I'm pretty biased. And you mentioned Helium earlier, Virtual:
how do you think the dynamic mint and burn approach that Harry
just so eloquently put forward compares to Helium's regulated mint and dynamic burn mechanism?
Yeah, so I think most of the larger DePIN projects
do recognize this issue, but it's a little bit too late,
because it depends on how the initial tokenomics are designed,
part of it on-chain, part of it in writing,
and then how the governance runs.
So Helium is a good example.
Render is a good example.
And then even some of the gaming projects,
like Gala, for example, have very dynamic mint and burn
based on pure governance.
What ends up happening is the effect comes in too late.
So for example, initially there's a bootstrap period,
and people are okay with that.
You have to give a certain amount of boosted rewards
for early infra providers to even come in
before there's enough supply for users.
But then you should have more defined stages for when you tune down those rewards,
so that there isn't unnecessary inflation anymore, because there's already enough compute or whatever resource you have in the network. On Helium, I believe it's all based on governance and not directly on, what
do you call it, the ratio between how many data credits are being used by users on Helium and how many rewards in HNT tokens are being minted.
So it's basically oversight after the fact: if they see that this quarter they minted too many tokens, the next quarter they
create a proposal to turn down those rewards. And maybe later on, I think they're
trying to do this as well with one of the newer proposals after so many years, to use a halving
schedule. But in my opinion, all of this should have been implemented day one, because you
could have simulated it beforehand. You know that you need to bootstrap, but you also know
that at some point, once your network reaches a certain maturity,
new supply issuance automatically gets halved, and then you have a certain ratio between how many tokens are being minted versus
how many are being burnt from the demand side. All of that should be predefined instead of being a
post-launch governance process, and I think that's the key difference.
Totally aligned. And in our tokenomics piece, which everyone can readily
access on our blog, and which is also part of the tokenomics
thread that went out a couple of hours ago and is pinned on our
profile, we have the burn ratio mechanics laid out very clearly, right?
It's initially set at 0.2,
and we also have an example schedule, if I'm not mistaken,
that goes out to Q3 of 2026.
So, Harry, no better person to hear this from.
Would you be able to sort of walk us through the ratio mechanics
and shed some light on some of these numbers, if possible, please?
Yeah, of course. So what we're doing is starting off with an initial burn-to-mint ratio of 0.2.
To give you some perspective on how that compares to the ecosystem, that's about five times tighter than the early DePIN models.
What this means is that for every unit of economic activity (the tokens burned),
a corresponding amount of new tokens is minted. That ratio is set quite generously early on,
because we're bootstrapping, to ensure that suppliers and stakers are fairly rewarded.
But we're attempting to do that without the risk of oversupply, which means that
over time the ratio will change: as the network matures, the ratio tightens,
reducing the rate of new token issuance to maintain scarcity and value over time. And the interesting thing is
the dynamic side: the exact ratio is not fixed, it's dynamically adjusted based on real-time
network signals. Of course we've been modelling this, and our Monte Carlo models predict that equilibrium should occur at around a 0.7 burn-to-mint ratio. Given
that model's parameters, what we're saying is that for every one unit minted, about 0.72 units
are burned, creating a stable equilibrium around year one or two.
And of course, we don't just have levers controlling the mint. Those emitted tokens also have a kind of protective layer, in that there's a 180-day controlled emission schedule for the new tokens:
a 90-day lock followed by 90-day linear vesting. And there's another kind of cooling mechanism for the system as well, which is a daily cap of just over 186,000 NODE.
So even if the demand side is quite high, there's a limit to how many tokens can be emitted every day.
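For readers who want those guard-rails in one place, here is a small sketch of a daily emission cap plus the 90-day lock and 90-day linear vest described above. The numbers mirror what's said in the conversation; the function names and structure are assumptions for illustration only.

```python
# Sketch of the emission guard-rails described above: a daily cap on new tokens
# plus a 90-day lock followed by a 90-day linear vest. Illustrative assumptions,
# not the actual contract logic.

DAILY_CAP = 186_000          # approximate daily emission ceiling mentioned above
LOCK_DAYS = 90
VEST_DAYS = 90

def capped_emission(requested_tokens):
    """Clamp a day's reward emission to the protocol-wide cap."""
    return min(requested_tokens, DAILY_CAP)

def claimable(amount, days_since_emission):
    """How much of an emitted reward is claimable after the lock plus linear vest."""
    if days_since_emission <= LOCK_DAYS:
        return 0.0
    vested_days = min(days_since_emission - LOCK_DAYS, VEST_DAYS)
    return amount * vested_days / VEST_DAYS

print(capped_emission(250_000))   # -> 186000: a demand spike cannot blow out daily supply
print(claimable(1_000, 135))      # -> 500.0: halfway through the linear vest
```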
And with that answer, I think you've addressed a lot of the questions our users
had in mind, because I was quickly going through some of the main recurring
ones in the comments below our Spaces post.
Thank you very much, Harry.
Virtual, this one is for you. I don't know if you touched on this a little previously,
but maybe you can laser in and focus specifically on it now:
where do you see demand-driven supply controls being most impactful
in practice going forward when it comes to tokenomics design?
So there's the optics of it. Even from the article you shared with me yesterday,
the burn and mint ratio and the expected timeline over which these ratios will be implemented are all laid out.
This gives quite a bit of transparency for the community, so they can actively
track it. In whatever explorer or dashboard page you have post-launch, in the first
quarter people will see, okay, the total amount of tokens minted for rewards is this and this, right?
And the total amount burnt should have been at least 20% of that.
Then the next quarter, they come back to the same page
and see, okay, that ratio has tightened, so 32% should have been burnt for each 100% minted.
And the next quarter, they check back again.
So they can really see this, instead of following
some vague tokenomics governance proposal saying,
oh, we want to benefit our community,
without any way of tracking it
other than literally following Etherscan
and seeing what the top holders are doing.
This is ecosystem-wide;
it's how you're trying to control things with a central gauge.
So I think it lays things out by timeline
and also fits what your product roadmap might be:
maybe you have a bootstrap phase,
then in the tapering phase
you start to introduce some other features.
So that's one side. And the other is obviously just the sustainability of the timeline.
Let's say right now the market is just okay,
so you might require more bootstrapping incentives.
But then later on, say the cycle really heats up next quarter,
you can significantly tighten this if you want, because you have those levers.
I'm not sure if it's completely automated or if it's somewhat
tunable, but you can tighten it further if prices are really
skyrocketing across the board because of the cycle
and you don't need that many rewards.
Then you can work with those numbers pretty freely, compared to other DePIN networks from last cycle that have no choice
except to keep using those fixed schedules.
Otherwise, they risk their early investors
essentially being mad and FUDding the project,
even though the schedules don't really benefit the project.
So I think it's those two sides.
For retail, for your community holders and members,
they see what the rough timelines should be.
And for how many tokens are actually out there,
you have that flexibility
based on how the cycle actually plays out,
so you can run the incentives longer
or you can taper a bit quicker. It's all up to you.
Awesome, thank you very much, sir. Wow, there's a lot there, and I wish there wasn't so much
to address, otherwise I would delve into each point at a more granular level,
but there's an agenda to follow, and given the magnitude of the topic at hand, we need to proceed.
The Vault Capital podcast that we did with Mustafa,
who is a research engineer at Vault Capital, and Harry
was very illuminating, at least for me,
and insightful for understanding at a more granular level
the logic behind optimal control theory
and how it's pertinent to DePIN networks. I'm going to reference it so you can go
and check it out; it's already on our YouTube page and also on our Twitter, so please do
check it out if you're deeply interested in the topic at hand. But for the time being, Harry, would you be able to elaborate on how optimal control theory is applied in our tokenomics design?
The high-level overview is that optimal control theory is what we're using in the tokenomics to steer the system towards an ideal state of equilibrium.
The theory is that you can balance growth and scarcity by adjusting certain levers that affect the economy.
And this is why we've chosen this dynamic adjustment, the minting and burning,
because it's responding to external and internal signals.
You can think of it like a thermostat,
maintaining a stable temperature by responding to changes in the environment.
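The thermostat analogy can be sketched as a toy feedback controller: nudge issuance each epoch so the observed burn-to-mint ratio drifts toward the target equilibrium (the roughly 0.7 figure mentioned earlier). This is only an illustrative proportional controller, not the actual optimal-control formulation the team uses.

```python
# Toy thermostat: a proportional controller that steers the mint rate toward a
# target burn/mint ratio. A sketch under assumed names and gains, not the real model.

def adjust_mint(mint_rate, burned, minted, target_ratio=0.7, gain=0.5,
                floor=0.1, ceiling=10.0):
    """Shrink mint when burn lags the target ratio, expand it when burn leads."""
    observed = burned / minted if minted else target_ratio
    error = observed - target_ratio            # positive => demand outpacing issuance
    new_rate = mint_rate * (1 + gain * error)
    return max(floor, min(ceiling, new_rate))  # clamp so the controller can't run away

rate = 1.0
for burned, minted in [(20, 100), (40, 100), (90, 100)]:   # demand ramping up
    rate = adjust_mint(rate, burned, minted)
    print(round(rate, 3))
```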
Very well put. And Virtual, this is for you as a systems thinker, given once again
what I was mentioning earlier about your position and the systemic overview that you have
across the different DePINs. We also covered in
the Vault Capital podcast that there are so many different verticals within DePIN itself,
the ones that apply themselves to connectivity,
energy grids, or compute, such as ourselves.
How do you see tokenomics evolving to behave like real economic thermostats?
If DePIN networks do further evolve in the future, the tokenomics designs will also have to follow suit, right?
Yeah, so I think we need a pretty big iteration coming up. At first it was really quite decoupled.
I think Render was probably the first DePIN network to actually have a token,
and for the longest time they had a big, big imbalance
where the mining rewards were just kind of mining rewards,
whereas the users were completely decoupled.
In the early days, I don't know if you guys were tracking this,
they were literally treated as a kind of proof of work, a more usable proof of work, where people just put their GPUs to mine this token instead of joining a network as infra providers,
and whether people used their power or not didn't really matter.
Then, when the broader concept of DePIN became popular, people realized
it should be based on demand,
and we had a kind of soft linkage between the two.
Hello, Pedro, are you there? Do you hear me?
Yeah, now I can hear you. Sorry, I sort of cut out about five seconds ago.
Okay, yeah. So I think after the first cycle of Render came this whole concept of DePIN, maybe four years ago, when
people realized it's not just about mining, and we shouldn't design these DePIN token
rewards purely based on mining and issuance; instead, they should be based on usage.
Then it became kind of a middle unit layer. So for example on Helium, you have these data credits, which are kind of compute credits.
That's your unit of account, which represents how much user demand there is for the network's compute,
or in this case data.
And there's a ratio between the credit unit and the token,
the Helium token itself.
Going forward, I think we should see these two concepts getting even more closely linked.
I'm not sure if data credits are completely based on stablecoin
or USD prices, because there are some fluctuations there.
But ultimately, the price of the token being minted,
whatever point system per unit of compute you have
on the network, and how much demand you have should all be one system. You shouldn't have
a data credit point, then a Helium price in USD, and then maybe data credit points per USD if
you have an on/off ramp for people to just pay with a credit card. That's too complex,
three conversions all together, and then how do you balance all of that? You have to have a kind of central conversion system.
I think it should be much simpler, where you have a kind of central gauge that sees how the price of your token is doing and what the rough USD
value of a unit of compute is on your network. If your network is really meant for AI,
maybe that's a number of tokens; if it's for rendering, maybe that's
some other unit. But there are some rough numbers you can use just by going on
Amazon and looking at their prices, so you know roughly what the demand is.
Then it's a simple conversion of your token price versus the market rate,
with a phase-based ratio: if you want to make it more incentivized, you have a looser
ratio; if you want it very tapered, because you have a lot of rewards out already,
then you have a tighter ratio.
It has to be very closely related.
Otherwise, these tokens still end up being governance tokens, and it kind of defeats the purpose.
So yeah, that's what I think.
I just think the whole system should be a lot simpler.
All of this conversion back and forth
is just creating more delay.
And especially in our market, in the Web3 market,
like even if your proposal and your adjustments
are one month late, it becomes too much of an issue
because that one month can have a lot of issuance
of your token. If your token price is up 10x in a month in a bull market,
it's very difficult to come back from that.
Fantastic, thank you very much, sir. And Harry, this is for you. What I wanted to ask,
and I've seen it was also one of the most recent questions below the post,
so I can join the two:
during anomalous events, Black Swan events such as a sudden spike in compute usage or a significant drop in token price, how does our model adjust?
What are the signals we have at our disposal to ensure
that the tokenomics design we deploy is able to adjust dynamically and accordingly?
Yeah, good question. So there are two main signals that we're using for the dynamics side of things.
The first is revenue: the rate at which network revenue is generated.
That's measured on-chain because we're, of course, looking at the amount of credits purchased.
And to Virtual's point, that means users can come in and pay fiat or use their favourite stables or tokens, because our payment partner accepts all of those.
They don't have to go and buy NODE to purchase services.
It's very simplified from the user experience side.
Some people will want to consume this compute without ever thinking about the fact that Web3 exists.
They don't need to know that.
They're just purchasing services and products that they need, because those have utility to them.
And then the other measured side, the second dynamic signal, is staking. There are two ways you can stake within
our tokenomics: you've got people who are token holders
who want to stake and earn,
and you've also got what we're calling a bond,
to differentiate the two, from the actual suppliers.
So we've got these measures of commitment
by the community, by the supply side
and the token-holding side, and they are both
driving this algorithm to support this understanding of network usage and to control
emissions. And then on the question about that spike or plummet in token price, the way we're trying to protect the community there
is that the compute units are valued in dollars.
So what this means is there's always a link back
to the value of that compute,
whether you're purchasing it or providing it.
There's a pegging between the token and that value, so you don't end up
in a position where it's very expensive to purchase the same compute you would have
purchased three weeks before at a different price.
It's pegged to a dollar value,
and the number of tokens that are burned and minted in response to the purchase
is then adjusted to the USD value.
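A tiny sketch of that dollar peg, purely illustrative: because compute is priced in USD, the number of tokens burned for the same purchase floats with the token's market price. The function and variable names below are assumptions for illustration.

```python
# Sketch of the USD peg described above: compute is priced in dollars, so the
# token amount burned for a purchase moves inversely with the token price.

def tokens_burned_for_purchase(compute_usd, token_price_usd):
    """Same $100 of compute costs 200 tokens at $0.50 and 50 tokens at $2.00."""
    return compute_usd / token_price_usd

for price in (0.50, 1.00, 2.00):
    print(price, tokens_burned_for_purchase(100, price))
```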
There you go, I think no better answer. I've even replied below to the person who
dropped the comment with that question, so thank you very much, Harry. I'm sure
others also had this in mind, in terms of how it would dynamically adjust based on the volatility of the
market, given the sector we work in. And thank you very much;
I think you also answered the next one I wanted to ask, which is around staking and the importance of the term bond
for compute providers who want to become an integral part of the network
and help power the infra underpinning the NodeOps Network.
So now, Virtual, for you: how do you think DePIN protocols balance,
or rather should balance,
short-term rewards with long-term sustainability?
It's quite often an unanswered conundrum,
not that there's a silver bullet solution, just to be realistic.
I think just to be realistic.
So just like a typical early farming reward system in DeFi, you need LP.
You need early providers to even come in and then test the network and provide enough liquidity so people can trade.
So you need to incentivize quite a bit.
like a few hundred percent aprs on relatively stable pools so it's the same right you have a
boost driving period because you need a certain number of machines and info providers and people
are okay with that right but just be transparent about it we do this for one month three months
you know six months whatever and
then when the time comes you turn that down and then in just like in defy terms you know you go
from 300 apr to 50 apr on a stable pair and uh but the system has been running smooth for six months
so people will still farm there and people can still trade there so then your tax is stable so once you're past that testing period of whatever you you defined
the rewards are tuned down and your long-term users and long-term providers will stay because
they see the risks are a bit lower by pooling the resources in your network. And that's totally fair.
Yeah, I think more DePIN projects
need to think about the core economics from the beginning,
instead of treating it kind of like mining
and hiding behind an eventually-we'll-do-a-halving type of schedule.
And there are so many DePIN communities anyway;
if you're deep in the space, you know these models don't work anymore.
I think you've just got to take your community,
respect their intelligence, and actually be transparent about it.
Totally, thank you very much. That was super important.
Everything we do is around transparency.
That's why all our traction and traffic data metrics
are publicly and openly accessible by anyone.
And if anyone has any kind of question,
we're here at your disposal, any member of the team,
to address any doubt or question you might have.
We wanted to retain the same approach when it came to the token launch and
the tokenomics: every number we have, we'll put it forward and share with the community,
so there's literally nothing to hide.
And I want to take this moment to also answer a couple of questions I kept
seeing over the last two or three hours since the news came out.
When you check your eligibility on the checker, the number
you see is your total allocation, essentially, with no
distinction between wave one and wave two. You can think of it as a
summary of all the contributions you've made on the NodeOps Network to date, regardless of what
they may be, whether that's deploying or completing quests. So once
the claim comes, there won't be a distinction between wave one and wave two. And for anyone out there who is either looking to stack up more GNODE points
or to essentially enter the race,
all the farming opportunities are still available,
and there is a guide that we shared a few days back;
I'll also include it in the answers about farming.
But I just wanted to say that you still
have time to get in on the action, so there's no reason to panic. That said, thanks for the answer
on short-term rewards versus long-term sustainability. Harry, is there anything you'd
like to add? At a conceptual level, how does our token design support both early rewards and long-term equilibrium?
I think you've answered this through the more technical questions, but how do you think the design we've gone with achieves both of these conditions, in your opinion?
Sure, if I zoom out and keep it a bit more high level, maybe. This token design
is intended to support early rewards by setting an initial, more generous burn-to-mint ratio,
which incentivizes early adopters and infrastructure providers. It's the bootstrapping phase, if you will.
As the network grows, that ratio is dynamically tightened,
promoting more deflationary economics,
which should increase token scarcity and therefore value.
So the intent is that this approach ensures
that early participants are rewarded while maintaining sustainable growth and, eventually, equilibrium over time.
It's been quite an interactive session, because of course there are a lot of questions that
people have with regards to the token launch and everything around it, and some of
the decisions the team has made around the tokenomics, so we've answered a few
here and there. But this is more of a thought-provoking question for you, VB,
Virtual: do you think dynamic tokenomics designs will become the
standard for all DePIN projects and networks going forward? For example, if and when,
hopefully, it goes well for us, our case could be studied as a pilot in
six or twelve months' time for other DePIN networks that will be looking to launch their token in that time frame.
I think it certainly could. It's different, because nowadays, even if you have the capability of integrating your token at the core utility level because it's DePIN, most
projects don't do it. They still just latch onto the narrative and
launch a token with a pretty low float and just follow a regular schedule, and even though it
says, like, 50% for community, it's multiple buckets that they ultimately still control.
At this point there are so many of them that decently savvy investors know what they're looking at. So it depends.
On one hand, we're all here to make money,
so people will be decently happy if they get a sizable airdrop
and just sell, like on those other projects.
But that doesn't really change anything;
they'll just look for the next similar opportunity.
But the market needs to shift, right?
We see these fairer launches on DEX launchpads, et cetera.
So if your model can really start a bit lower and have this ramp-up period where people see constant growth, and especially for DePIN, if you have a positive trajectory instead of a negative one,
then that could serve as an entry point for members to actually come in and study
what you guys are doing differently versus other DePIN projects.
Ultimately, people need some excitement first to even visit your project six months past TGE, to see, okay,
they have something here, this could be more sustainable, and this is something I can
hold on to at least for the cycle, instead of just always rotating to the next major
launch and trying to get some airdrop for free money up front. So that's what I think.
People are smart. If you have a good performance, people will take notice.
Totally agreed. And I think one last one, maybe if you could keep it concise,
Harry, and then we'll open it up to the floor.
You've answered this also during the Vault Capital podcast,
but it's a pretty hot topic,
so we've been covering it quite thoroughly and extensively.
What innovations are you most excited to launch in the coming months,
from our perspective, without dropping too much?
Something that's cooking that you're particularly excited about?
Sure, I won't release too much alpha; I'll actually stay quite tight-lipped. The
next step is that we need to get our historical revenue on-chain. Remember, the Node-as-a-Service product
was successful from month one; NodeOps responded to demand with product-market fit, and that drove revenue.
So it's a huge step that this is a revenue-first DePIN
and that revenue now goes on-chain.
And then the next step is that we get to see
our dynamic mint and burn mechanism in action,
so that we can see how NODE adapts to usage and demand.
And then finally, we get to offer NODE holders
the opportunity to stake and actually earn a share
of the network's revenue.
If you take a look at the tokenomics breakdown,
you'll see that a portion of all the demand-side payments
actually returns to stakers.
So all of these factors should come together
to support token stability
and early price discovery that isn't dictated by folks who are only interested in a quick flip.
There you go, folks. That's your answer. So now we've covered everything, or at least most of what I wanted to touch on.
We could go on literally for hours,
but if someone has a question,
please try and keep it as original as possible,
in the sense that if what you're thinking of asking
has been addressed in the AMA so far,
don't use this slot, please, because it's a recorded session,
so you'll be able to go back and review some of the answers,
some of the extensive and thorough answers provided by Virtual and Harry.
So please use this opportunity to ask questions
that haven't been covered and addressed by the panel in this AMA so far.
The process is simple:
just raise your hand or request to come up as a speaker.
I think we have a few minutes, so we'll probably be able to do
three to four questions if you keep them concise.
So please send your request and we'll go ahead.
The first one is up.
The floor is yours; ask away and we'll do our best to answer your question if possible. I think he sent a request earlier, so he might not be there anymore.
I'll pick a couple of questions from the list I can find below the AMA and ask them to both Harry and Virtual,
and that will pretty much simulate the same thing.
So, either of you, maybe Harry specifically.
This question is from Chai Boy about other tokenomics:
essentially, did we look at any other projects
and how they approached tokenomics as a model to follow, as an example, or maybe a project whose launch you particularly enjoyed and whose tokenomics approach and design you found rather unique?
Yeah, I can't, actually, because we wanted to do things differently. I'm not sure
I know of anybody who's launched a DePIN with the level of dynamic
feedback that we're putting in. So I would actually say, if you're looking for inspiration, take a look
at the Vault Capital article. Mustafa and I discussed their article a little bit in
that previous podcast; that's where they go into the theory. What he's essentially done is taken the
academic area of optimal control theory, drawn it out, and made it accessible
for us Web3 folk. That was actually the inspiration point for us, because we were
looking to find a model that fitted, and they break it down really well.
Awesome, thank you very much, Harry. We do have a request now from oxy zero xd. I'm approving your
request; please keep it concise as we only have a few minutes left, sir.
The floor is yours, hopefully this works.
It's always an uncertainty with Spaces.
You can unmute yourself and ask away, please.
So, okay, we'll do a rapid fire and handle it this way, similar to how I did with the last one;
I'll review the questions live.
There's a bunch of questions about wave one:
the number you see on the eligibility checker is essentially an aggregation
of all your contributions to date.
Maybe this one is once again for you,
Harry; it's from Gatto Verde.
Essentially, it asks whether the long-term vision
for the NodeOps Network is also,
what can I say, reflected within the tokenomics design from today. I think you answered
this earlier, but the idea is that the current choices aren't just guided by current priorities;
they were also made with a long-term vision in mind. If you'd like to briefly add to that, if possible.
Yeah, so one of the challenges is that what we're trying to do is ensure that very different participants with
very different motivations are all rewarded by this protocol. As I mentioned earlier,
you can essentially see the compute service as a Web2 model: people can come
and use the compute without ever thinking about the underlying Web3, whereas you've also got people
who are interested in the token side and the Web3 side. So we're trying to
balance the needs of people who just want to consume compute with those who want to provide compute and earn in crypto,
and those who want to hold the token and perhaps engage in the protocol further by providing a
watcher service, because of course we have to have verified compute. So there are roles within the
ecosystem that are coming soon whereby you can ensure that a compute provider is maintaining the service they've
promised, and that's where their bonding becomes so important, because that bond is up for
slashing if there's malicious behaviour or an inability to keep that compute available.
These tokenomics are designed to try and bring all those participants together and ensure that everybody is engaged and receives
the level of reward they need. And of course, what we've done, and this is one of the reasons why
the NODE token contract has changed, is allow such a level of control at the governance layer.
So if we can see that the economy is slipping and not
rewarding certain participants, it's possible to step in at the governance level and adjust those
settings to ensure that we can return to our equilibrium path. I hope that answers the question.
It does indeed. And yeah, there was also another one from Stegros asking about submissions. Don't you worry: if any members of the community encountered any issues during the verification process and you've filled out the appeal form, the team is reviewing them. What I keep saying in the comments
and to the community is that if you're a genuine user
who has genuinely contributed to the network,
we'll find a solution and you will be rewarded accordingly.
And guys, this is where we hit the top of the hour.
We fully appreciate how busy you are,
so we don't want to extend it too long.
Is there anything you'd like to add
about the magnitude of today's event?
It's been beyond a pleasure to have you on
and to learn from some of the insights
you've provided.
Yeah, I mean, just happy to be here
and looking forward to literally just running more DePIN
and trying out all the other compute resources
you might have coming up.
I think your product has been one of the most
hands-on things I've used,
even from your NODE airdrop and checker.
There are so many things to do.
So yeah, just excited to keep moving on this.
Oh yeah, thank you very much, sir.
One last call to action that I'd like to leave the audience with:
now that you know the end date, you still have two full days,
depending on where you are on the globe.
For example, I'm in New York now because I was here for Permissionless,
so you have almost two and a half days to get in on the action.
The farming opportunities are still all available:
rent-a-node, deploy, and all the other ones that I can't remember off the top of my head,
given how hectic this moment is.
But you still have time to get in on the action, right?
Start contributing and your efforts will be recognized.
Harry, is there anything else you'd like to add as we close things up? And of course, thanks to both of you for joining today.
No, just excited to welcome so many new community members with
the airdrop. We're growing, and yeah, come build on NodeOps.
Oh, yeah, no better note to end it on.
Guys, thank you very much.
T-minus two days to D-day for Monday, June 30th.
And yeah, if you have any kind of questions,
either open a ticket on Discord if you have technical issues
or comment on any of the announcements;
the team will make sure to address
any questions or doubts that you have.
And yeah, NODE season is here, fam.
GNODE, and thanks, Harry and Virtual Bacon.
It's been a pleasure. Thank you.