GM GM guys, welcome to our AMA today, and thanks for tuning in. We're joined today by our guest from DIA, Paul, and from Neon we have our very own Shruti and Pascal. We're going to kick things off by talking about DIA, Neon EVM, their cross-chain oracles, and what they mean for Neon. So let's start it off. Hey guys, could you tell us a little bit about yourselves, a bit about your history in Web3, and how you got here?
Hi there, happy to kick it off. Thanks so much for having me, and excited to share a bit about DIA and what we're doing with you guys. Quick intro for myself: my name is Paul, one of the co-founders of DIA, and I'm in charge of business development. I have a background in management and finance, and I moved to building full-time in Web3 in 2018, which was also around the time we founded DIA. So I have the pleasure of working with a bunch of cool dapps and ecosystems such as Neon, understanding their needs and aligning our product and tech to build the tools needed by the next generation of ecosystems and dapps.
Cool, thank you Paul, maybe I'll just go next. Thanks for kicking it off, and hi everyone, super happy to be here today. Thank you too, Paul, for joining us. A little bit about my background, even before Neon: I also studied finance and worked in different positions in the traditional finance segment before I ventured into FinTech and co-founded a neobank. During the process of founding that company we went deeper down the rabbit hole of crypto and eventually ended up founding our own crypto product around two and a half years ago. That gave us a lot of exposure to the ecosystem, and now I'm leading, you could say heading, the Neon Foundation side, giving out grants and so on and so forth. I'm super pumped to be here, especially because I did a lot of research beforehand and know a fair bit about the oracle side.
Hi everyone, this is Shruti. I'm an integration engineer and developer at Neon Labs. First of all, thanks for joining, and happy to be here. I started my journey in 2019, and I just found the concept overall really fascinating. I gained my expertise in EVM chains and worked with a few startups related to NFTs, DeFi, and exchange platforms, and streamed a bit on different platforms to build my audience, so it's really great to connect with different people in the space. And I'm really psyched about oracles, because it's a revolutionizing technology: combining information from off-chain and on-chain is an incredible concept. So yeah, happy to talk about that today.
Awesome. And finally, I do socials at Neon and have been here for about a year and a half, so it's nice to meet everyone here today. Okay, now that we have introductions out of the way, let's talk a bit about DIA. Could you tell us what DIA is, for the audience listening in from Neon and anyone new, what it will bring to Neon, and what's important about it?
Yeah, sure thing, thanks. So in 2018, I think it was, it was pretty early, pre-DeFi; almost nobody actually needed an oracle for anything in production, at least at a broader scale. But back then we already had the feeling, compared to other projects building similar infrastructure, that we saw a different way to approach the oracle problem. Obviously, one way is to focus on creating infrastructure which comes to a consensus and is decentralized, and this works well. But we felt a degree of transparency was missing, which would be needed, especially under our assumption back then that there would be more than just Ethereum out there: different chains which need to somehow be connected, and applications not living on just one chain. Hence we took a different approach, where it's important for us to be natively integrated into the different markets, because there will be different markets living on different ecosystems, and in order to bring this data together you need a very granular view of where the data actually comes from. This is basically what we did, and it's also a huge differentiator compared to a lot of other very cool oracle projects out there which take a different approach: we can pull data natively from these markets, no matter whether it's some DEX on Neon, which is super relevant for you as a venue, or a centralized exchange. Applications can really choose which markets shall be included and which methodology gets applied, so we're providing much more than just a market price; we can be extremely specific. That's definitely a huge differentiator. At the same time, as you said, we're a cross-chain oracle. I think it's fair to say we're probably still in a multi-chain ecosystem, but we're seeing the first applications building across different chains, and the ability for us to build ecosystem-specific or cross-ecosystem price feeds is really one of the big differentiators. There are a bunch of other things too, like how we built DIA up: we set this up as a DAO in 2018, which was relatively novel, and the way we used a lot of collaborators, and Gitcoin heavily back then, to build up DIA is a bit of a different approach than other oracles take. But not to expand here too much, I think that's the gist of it; happy to expand later on.
Yeah, really great point. I actually really like what you just mentioned: right now in this space we have different chains and different ecosystems which also have different needs when it comes to the oracle data they need to deploy their products. That definitely makes a lot of sense, because I've experienced it personally. For instance, in the insurance market, especially for disaster protection for farmers, you actually need to choose a chain that is really fast on transactions with very low transaction costs. So we really do have different use cases for different chains; I'm glad you touched upon that point.

Yeah, and I think that's also something to really highlight about what you guys are building at Neon: having this combination of high throughput and low cost by basically leveraging Solana, but at the same time having the ease of use of the EVM, with builders able to use existing tooling, blueprints, and so on. I think it's a super exciting combination. Just before this talk I scrolled through our Telegram history, and I think it was back in 2021 when we started to chat. That's what we love: working together with cool ecosystems and seeing how we can build out our infrastructure to empower the use cases they're specializing in.
Awesome. Shruti, over to you.

Yeah, so it's actually a really cool platform, and every ecosystem needs an oracle for its services. You mentioned that it works both off-chain and on-chain; it supports price feeds from both, right?
Yeah, that's correct. And I think it makes a ton of sense not to have just us as an oracle, but for apps to have a selection, possibly with different consensus layers. We take the approach of being able to build market price feeds based on centralized as well as decentralized markets, with the decentralized markets spread across different chains. But centralized markets also have lots of advantages from a data-feed perspective: they simply produce many more ticks. So how do you build a price feed? You look at a certain time window, let's say 120 seconds, and within those 120 seconds you include all the trades. Unless you have a very highly traded asset on various DEXes, you don't get a lot of price information in that window. Centralized exchanges therefore enhance certain elements of a price feed, which is why we like to include them where it makes sense. If you want an ecosystem-specific price feed, for example to ensure liquidations are based on that DEX's prices, then it might not make sense. But yeah, any price feed is affected a lot by the volume of the transactions behind it, agreed.
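As an aside for builders listening in, the windowed approach Paul describes can be sketched in a few lines. This is an illustrative model only, assuming trades arrive as (price, volume, timestamp) records; it is not DIA's actual methodology.

```python
import time
from dataclasses import dataclass

@dataclass
class Trade:
    price: float      # trade price in the quote currency
    volume: float     # trade size in the base currency
    timestamp: float  # unix seconds

def windowed_vwap(trades, window_s=120, now=None):
    """Volume-weighted average price over the last `window_s` seconds.

    Returns None when no trades fall inside the window, which illustrates
    why thinly traded assets yield little price information per window.
    """
    now = time.time() if now is None else now
    recent = [t for t in trades if now - t.timestamp <= window_s]
    total_volume = sum(t.volume for t in recent)
    if total_volume == 0:
        return None
    return sum(t.price * t.volume for t in recent) / total_volume
```

With few trades in the window, a single print swings the result, which is exactly why Paul argues for mixing in higher-tick venues where it makes sense.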
Maybe to build on Shruti's question: one thing I was always curious about, since we were just talking about on-chain and off-chain sources, and an interesting question I got maybe a year ago, is how do you actually ensure that the off-chain data you bring on-chain is trustworthy? Just putting data on-chain doesn't make it trustworthy; you also need to choose the right providers to get the data from. How do you assess these off-chain sources?

Absolutely, and I think that's kind of a trade-off. For decentralized exchanges it is relatively easy, because you obviously have the history on-chain. But for centralized exchanges, unless they provide a signature, and even if they provide a signature, you can only say, okay, it's from them. And I guess it's not a secret that if you look at certain exchanges you've never heard the name of which report higher volumes than Binance, you might be doubtful whether the actual orders happened, or how they happened. In the end it doesn't really matter; what's important is that the more information you have within a certain window, the easier it becomes to exclude, through a methodology, data which does not represent a market price. So, to get back to the question: there are ways to fetch that data in a distributed fashion and come to a consensus on it with a relatively high degree of certainty about what the exchange reported, but it's pretty hard, and centralized exchanges so far don't seem to really like working on trust assumptions proving that these trades actually happened. From a pure operational perspective, though, the more data you get, the easier it is to find outliers and exclude them, and in the end this leads to the resilient price feed you want for DeFi applications.
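The outlier-exclusion idea can be illustrated with a simple median-deviation filter over price points collected from multiple sources in the same window. The 5% threshold and the method itself are illustrative assumptions, not DIA's published methodology.

```python
import statistics

def filter_outliers(prices, max_deviation=0.05):
    """Drop price points deviating more than `max_deviation` (a fraction)
    from the cross-source median.

    The more data points you collect, the more stable the median and the
    easier a manipulated or stale quote is to spot and exclude.
    """
    if not prices:
        return []
    med = statistics.median(prices)
    return [p for p in prices if abs(p - med) / med <= max_deviation]
```

For example, `filter_outliers([100.0, 101.0, 99.0, 150.0])` keeps the three clustered prices and discards the 150.0 print.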
Yeah, makes sense. I especially like the point you just made about centralized exchanges, that some of them are not even interested in showing real data. I think we've all seen projects in this space trying to show traction and a user base, claiming ten million monthly active users or, as you mentioned, more trading volume than Binance. It's kind of crazy to think about. Sorry, go ahead.

I can hold that thought. With exchanges, most of the time when they have just launched they don't have much volume, so they pad the reported trades with fake ones, so yeah, it's definitely not that trustworthy.

Yeah, and I think you can always see with some indicators, like the depth of liquidity, how likely it is that these trades are actually market-based or just internal volume. But again, the more trades you have, the more resilient your data feed probably becomes.
Very interesting. So I have a question: how do you differentiate from other oracle providers, like Chainlink, the biggest provider out there, Pyth, and some others? How do you guys differentiate yourselves from them?
Yeah, so while Chainlink's architecture is probably pretty well known, they're pulling data from so-called premium data providers, third parties who aggregate price data, and then they have a nice mechanism and architecture for how people can spin up nodes and bring these price points on-chain, with a median or some similar mechanism for agreeing on a given price. While this has proven operationally stable in the past and works well, it lacks a degree of transparency, because you don't actually know where the data comes from. It's not easy to understand which sources those nodes fetch the data from and how the data is actually calculated. That's not super important if you just need the ETH price; there are so many markets and such deep liquidity that you end up more or less always at the same price. But it does become an issue, we believe, if you need to be very specific. Again, say you want to create a market price based on DEXes on a specific chain, or you want to go beyond just a market price feed. For example, we provide fair value: here we're looking at contract states, so what is inside a vault, or how much collateral backs a liquid-staking LST, basically creating a market price feed on the underlying asset while looking at the collateralization state. All these things go beyond what the premium data providers have access to, and they become increasingly difficult with that architecture. So again, the big differentiation is that we pull the data directly from markets or from contract states, hence enabling applications to specify methodologies and logic which go beyond just a price feed. That's really the differentiator in how we provide price feeds. On the governance level, we're structured as a DAO, and from the early days we have been building out the infrastructure with various contributors; I think we were one of the top Gitcoin users back then. This is something we continue to do: trying to onboard as many stakeholders as possible who can contribute to the DAO. It's a pretty open approach where we rely on a larger ecosystem.
So for pulling data out of the contracts you mentioned, that refers to the contracts of different sources, like DEXes, lending, and other protocols?

Yeah. For DEXes, we have a bunch of DEX scrapers on different chains pulling trade data. But we're also looking at, for example, building fair-value feeds for liquid-staked tokens, or derivatives, however you want to call them. Here we look at the LST contract: how much, let's say, ETH has been provided as collateral, and how many LSTs representing that ETH have been minted. For something like stETH you have pretty decent liquidity on different DEXes, so you can build a resilient market price feed on top of that trade data. But there are a bunch of other protocols launching every day which want to be utilized in different DeFi applications, and here we can basically use the underlying asset's market price and provide the collateralization ratio. Then you can build logic around that to say, okay, don't accept the collateral if the collateralization ratio is below 95% or so. This is basically additional tooling to mitigate risks with non-blue-chip assets, which are nonetheless super relevant in these ecosystems.
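The collateralization check Paul outlines can be sketched as follows, assuming you already have the on-chain contract state (collateral locked, LST supply) and the underlying asset's market price. The function names and the 95% threshold mirror his example but are otherwise illustrative, not DIA's actual interface.

```python
def lst_fair_value(underlying_price, collateral_amount, lst_supply):
    """Fair value per LST token: collateral backing divided by minted
    supply, priced at the underlying asset's market price."""
    return underlying_price * collateral_amount / lst_supply

def accept_as_collateral(collateral_amount, lst_supply, min_ratio=0.95):
    """Reject the LST as collateral when the collateralization ratio
    (backing per minted token) falls below `min_ratio`."""
    ratio = collateral_amount / lst_supply
    return ratio >= min_ratio
```

So with ETH at $2,000, 950 ETH backing 1,000 minted tokens gives a fair value of $1,900 per token, and a lending dapp using a 95% floor would still accept it, while anything less collateralized would be rejected.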
Yeah, I also went through the documentation, and I read somewhere, correct me if I'm wrong, that the price feeds don't depend on third-party data, so a token doesn't necessarily have to be listed on a DEX or centralized exchange to be accessible by the price feeds. How does that work exactly?

So, the token needs to be traded somewhere, because otherwise you can't create a price; or at least the underlying asset does. Say it's a representation like stETH, and imagine stETH were traded nowhere: then we could plug a feed together using the underlying ETH, even though there's a depeg risk against ETH because there's an unbonding period, an unstaking period. But no, you always need a market. What we provide is the flexibility for the applications or builders who commission these price feeds to select the markets included in a given feed.

Okay, so you allow whoever is listing the token for the price feeds to select the markets that, in the end, make up the price?

Exactly. Usually, when builders or dapps come to us and request certain feeds, we help them; based on our knowledge we obviously recommend which markets to include or not. Usually it's the more the better, with the caveat that you don't want to include a lot of low-liquidity pools. So the most liquid DEX pools and exchanges with a decent reputation are always something we would include, unless there are specific reasons to use only DEXes on the chain where the dapp is deployed, for example because of liquidation mechanics.
Yeah, it makes sense. Okay, moving on: how does DIA's oracle provide a cross-chain experience? You mentioned it's cross-chain. How does it work for dapps and developers, and what infrastructure do you have in place to make this actually work?

So I think we're kind of uniquely positioned. We always say we're deployed on 35 chains, though we just recently deployed on two new ones, so well above 35 different ecosystems by now. Most of them, to be fair, are EVM ecosystems, but also WASM-based ones, and we deploy on Solana and other ecosystems which are not EVM-based. This is really the foundation for cross-chain applications: being able to receive price feeds across different ecosystems. But I think it is more than that, because what we just briefly talked about, being able to create ecosystem-specific price feeds, is also needed here. An asset's price is not always the same; it might differ due to liquidity issues across different chains, and cross-chain applications will possibly want to benefit from these inefficiencies. So being able to provide very specific price feeds on different ecosystems is, in our belief, the foundational need for building cross-chain applications. At the same time, in the last twelve months we mostly saw multi-chain, while we now see the first people actually building cross-chain applications. And sorry, just to define it: multi-chain, in our view, is just applications doing the same thing on more than one chain, while cross-chain is actually utilizing the benefits of different chains and building logic which executes across different chains.
Since you're deploying on so many different ecosystems, I believe you usually get a lot of requests from different dapps: they want this price feed, that price feed. Was there ever a moment where you received a request for a feed, or for a data source, that you did not expect? Did you ever get, let's say, an insurance protocol reaching out and saying, I want some feed for damages, earthquakes maybe, or something like that? Did you ever have a crazy request like this?
Yeah, so to be fair, we're pretty good when it comes to price feeds. Financial data is kind of our focus, bringing that across different chains, and again not just market price data, but augmenting that information with chain states or protocol states. But we do get some interesting requests which go beyond that, and from time to time we venture outside our focus because we really like the use case. There was, for example, one team that inquired, and often it's just a small lift and we try to help out: they basically needed an oracle for GitHub pull requests, to pay out bounties when a pull request was accepted. It's easy to imagine how this can be nicely used, and if it's a small lift we're happy to do it. Here our structure really helps: we can enable these unique requests without them slowing us down in the areas we're focusing on, because we can just create a bounty for the specific request, somebody from the community picks it up and builds it out, and then we validate it and add it to the stack. That's the nice thing: we can stay very focused on our core value proposition, but with this organizational structure and the DAO we can also accommodate requests which are not perfectly aligned with our focus area.
That's actually a really nice value proposition for a DAO, what you just mentioned: even though you're focused on the price feed side, you can just go into your DAO and submit, let's say, a ticket for a project, and anyone from your community can pick it up and work on it. That's really cool. And you never know where you might shift into, right?

Exactly. It was the same with liquid-staked tokens: we spoke with some dapps who were interested in how to utilize them better, then you build out the first one, then the second one, and then you see how this becomes a major value proposition and a growth driver for you. So being able to be super exploratory and help out, while not losing focus on what the core contributors currently see as the core value proposition, is quite valuable.
Oh, was that someone's phone? Yeah, someone was calling. Sorry, I just muted it.
Okay, so let's move on a little bit. Speaking of the data and the oracles capturing it from smart contracts in different places, can you provide some insight into the steps involved in capturing all the data from contributors? How does that work, and how do you verify the source of truth?
Yeah, thanks, great question. I think this helps clarify what I just shared about being organized as a DAO and having a broad set of contributors, from day one really, though by now in a more systematic way. So when somebody comes and says, okay, I need this market, maybe the latest DEX being launched on Neon, I need the data because we're going to incentivize a lot of liquidity there and it's super important for the price feed, then it is really easy: the requesting party can create a bounty and fund it, somebody from the community picks it up, it's first reviewed by us internally, and then it goes through a governance process called a Community Approval Request, where the source gets added. You can use it even before it has been verified through the governance process, but the process casts additional eyes on actually validating that source. What we're currently adding are test environments, so developers have a very easy time testing new data sources against the environment we're using. The same applies to contract logic you might want to add, if you need some collateral ratios or whatnot: create a bounty, fund it, somebody picks it up, and sometimes core contributors from the requesting project will work on it themselves. So yeah, that's the process. If you hit up our docs, there's an explanation of how you can contribute these different data sources, and if you have any questions you can just join our Discord server; there's a bunch of people eagerly helping new members who are willing to contribute.
Awesome. Yeah, everyone, just head to their docs or Discord if you have any questions; if you're building a dapp on Neon and you want to use DIA's oracles, head over to their Discord and they'll help you out. One thing for all the listeners here today: we're going to be taking questions from you right at the end, so just wait a little bit longer, keep your hands up, and we'll bring you up. Now, one of the most important things with oracles, because they play such a key role in DEXes and all these financial instruments, how do you guys,
what's involved in the off-chain computation of all this, and what security measures do you have in place to stop manipulation from happening?

Yeah, a super relevant question. We're convinced that the biggest risk oracles pose is, as of now, on the operational side: the selection of price sources and the methodology used to provide a feed. So we run a lot of different monitoring services which check that scrapers are running, we have a high degree of redundancy for these different, segregated components, and we're constantly monitoring that the data inflow and the computation are actually working properly. This is really important once you go beyond a blue chip like ETH/USD. For blue chips, Chainlink's process, for example, brings a lot of resilience, having different sources and different nodes coming to an on-chain consensus. But when you're looking at different assets on different chains, different pools, you really need to understand how these sources change over time. So this off-chain computation is, for the time being, essential to be able to monitor all these different sources: for example, understanding how pools are changing, how the TVL of a pool develops and whether it should be included or excluded as a source, whether the API of the exchange is still reporting, whether we get the same information from different scrapers for that source. All this currently requires a certain level of centralization on the computation side. The fetching is distributed, yes, but we are obviously working on decentralizing further parts of that architecture. So while we can ensure, I think, an unmatched level of monitoring and redundancy in that infrastructure, we're still working on further decentralizing each and every component of it. For now, I'd say we can tick off the caveat we believe other oracles don't address sufficiently: transparency about the sources, plus monitoring that these sources and the methodologies chosen can provide a resilient price feed at all times. And we're really excited about the next infrastructural upgrade, where we can put certain services here into the hands of the community to run.
That is really cool. So I have a question regarding the cross-chain price feeds. Within a single-chain network the consensus would be quite obvious, but how does it get affected when you're dealing with multiple chains?

It really depends on the use case. For example, if you want a price feed of a DEX from chain A on chain B, there's some delay: we would pull that data, compute it in a given time window, and then bring that price point to chain B. So for bringing market prices over, pulling the underlying trade data, calculating a price point, and delivering it to a different chain, this asynchronicity between chains is not an issue. But when we're providing additional contract-state data, for example for a dapp where we're not only providing the market price of bridged collateral but also information on whether the underlying asset actually exists on the origin chain, then you need to define some minimum requirement of block finality, which is chain-specific.

I see. And does it affect gas fees much when updating these cross-chain price feeds?
That also depends. Fetching the data doesn't affect gas costs, but propagating it does: you basically pay gas costs for every update on that chain. If you're a dapp that needs a very specific price feed at high frequency on various chains, then yes, this might become expensive depending on the actual gas costs on those chains. But at the same time, and I guess everybody sees this, from the Ethereum perspective things are moving to L2s or to more efficient chains, ecosystems such as Solana, including leveraging the EVM stack with you guys. So there's obviously a move toward lowering gas fees overall, and maybe even dapp-specific rollups on these new ecosystems, which I think will drive down gas costs even with an increasing level of block-space demand.

Yeah, definitely.
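Since every on-chain update costs gas, pushed oracle feeds commonly bound spend with a deviation-plus-heartbeat policy: publish only when the price has moved beyond a threshold, or when a maximum staleness has elapsed. This is a generic sketch of that pattern, not a description of DIA's updater; all names and parameters are illustrative.

```python
import time

class UpdatePolicy:
    """Push an on-chain update only when the price moved more than
    `deviation` (a fraction) since the last push, or when `heartbeat_s`
    seconds have elapsed, bounding per-chain gas spend."""

    def __init__(self, deviation=0.005, heartbeat_s=3600):
        self.deviation = deviation
        self.heartbeat_s = heartbeat_s
        self.last_price = None
        self.last_push = None

    def should_push(self, price, now=None):
        now = time.time() if now is None else now
        if self.last_price is None:
            return True  # nothing published yet
        moved = abs(price - self.last_price) / self.last_price
        stale = now - self.last_push >= self.heartbeat_s
        return moved >= self.deviation or stale

    def record_push(self, price, now=None):
        self.last_price = price
        self.last_push = time.time() if now is None else now
```

Tightening `deviation` or shortening `heartbeat_s` buys freshness at the cost of more on-chain transactions, which is exactly the trade-off Paul describes for high-frequency feeds on expensive chains.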
So if someone wanted to submit their scraper to DIA, how would they go about it?

You basically go to the documentation; there's a guide on how to submit a new scraper. It gets reviewed and added, and then it goes through the Community Approval Request where it is checked. And again, I'm quite excited that we'll be able to provide these test environments where you can really test out scrapers in an environment which replicates our current stack. This will make it much easier for you to debug and test it, and, to be honest, for others to review it. So the degree to which the DAO completely takes over that task is going to increase even further.
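For a sense of what a scraper contribution involves: a scraper fetches an exchange's raw trade data and normalizes it into a common record shape the feed builders can consume. The endpoint, payload field names, and function signatures below are hypothetical; the actual required interface is defined in DIA's docs.

```python
import json
import urllib.request

# Hypothetical exchange payload fields: "p" price, "q" quantity, "T" timestamp.
def normalize_trades(raw):
    """Map a raw exchange payload into common trade records."""
    return [
        {"price": float(t["p"]), "volume": float(t["q"]), "timestamp": int(t["T"])}
        for t in raw
    ]

def scrape_trades(pair="ETH-USD"):
    """Fetch and normalize recent trades for `pair` from a made-up endpoint."""
    url = f"https://api.example-exchange.com/trades?pair={pair}"
    with urllib.request.urlopen(url) as resp:
        return normalize_trades(json.load(resp))
```

Keeping the normalization separate from the fetch makes the scraper easy to test against recorded payloads, which is the kind of review and debugging the test environments Paul mentions would support.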
Yeah, I think that's a really good push forward. It goes with the ethos and keeps it strong, keeps it in the hands of the people.

Yeah, and it's nice that, again, we don't need to choose what to do; it is really decided by the ecosystem and their needs, and there's an almost unlimited supply of smart people able to contribute and be rewarded fairly for it.
Yeah, true. It kind of creates its own economy that way as well. Speaking of people, let's bring up our audience. I know at least one person has been waiting for quite a bit of time to ask a question. FleetingTime, I'm bringing you up. Please go ahead, ask your question.

I think he's having connection issues. If anyone else in the audience listening today wants to ask anything, please go ahead, just raise your hand and we'll bring you up on stage. Hey, FleetingTime, you're on, please go ahead.
Okay, seems like he doesn't have a question at the moment. So I guess we'll give our audience a minute or two to decide whether they have any questions. While we wait, could you share a hot take, some alpha, something you wanted to share with the community?

So, we're quite excited about what's next. Right now, when requesting a feed, you basically need to hit up our forum or our Discord and discuss with the team what you need, and I think you'll soon see a much higher degree of self-servicing, where people can basically get whatever they need. Without saying too much here, we're really trying to improve the developer experience and the ease of use: not creating any bottleneck of having to speak with the BD or tech team, but combining this with the ease of deploying oracles you would expect from a Web2 environment.
That would be very cool, testing things out early. And Pascal, would you like to drop something from our end? Shruti, maybe? Any alpha you guys have to share?

You mean just in general, like about governance, or focused on oracles?

No, I mean like Neon, what we have going on.
Okay, from my end: thank you guys. You'll see a lot more in the coming weeks regarding dapps that we're bringing into our ecosystem through the early builders program as well. We actually have some really exciting projects in the pipeline that are going to build on our chain, so we'll update you very shortly across all the channels on Twitter, and we'll have more Spaces with them as well. I think we have some really, really exciting months ahead of us.
Awesome. I guess we'll wrap it up here; I don't think we have any questions from our audience. So thank you for coming on today. Thank you Paul, Shruti, Pascal.

It was a pleasure. Thanks so much for having me, and thanks to everybody who listened in. Just a last word from myself: I really encourage everyone, especially all the builders in the Neon ecosystem, to hit up the DIA Discord; we're more than happy to help with any questions you might have. Thanks again for having me, it's amazing to be collaborating with you guys.

Thank you. Yeah, thanks all of you for joining and having me. It was a fun session.