Firedancer Friday 🔥💃

Recorded: Dec. 9, 2022 Duration: 0:51:19

Snippets

Hello everyone, as the room starts filling in, I'll bring the speakers up and we'll get started shortly. Thank you for joining us. This is Firedancer Friday.
Alright, welcome in everybody. I'm going to go ahead and say a couple things before we get started, and we might bring one more person up to the stage in just one moment. Please note that the opinions coming from myself and our guests do not reflect the opinions of Jump or its affiliates, and they are for informational, educational, and entertainment purposes only. Under no circumstances should the opinions expressed here be considered financial advice, investment advice, trading advice, or any other type of advice. You should not make any decisions based on the information presented here without undertaking your own due diligence and consulting with a financial advisor. This recording may be used in the future, and for more information, please head to jumpcrypto.com/NFA.
What is Firedancer? Firedancer is a new Solana validator under development.
[inaudible]
There are a lot of these systems, and you have to get them all right.
Alright, we had a little music intro there, but now let's go ahead and get started with a developer introduction. Here with me today I have software developer Richard Patel, CEO and code monkey of Jito Labs Buffalu, and Jump Crypto security engineer Emanuele Cesena. And eventually on the stage we hope to have the executive director of the Solana Foundation, Dan Albert.
We've got some fantastic guests today, just sitting in the mempool right now. So without further ado, let's go ahead with a round of introductions. First, we have Richard Patel, a software developer at Jump Crypto working closely on Firedancer. Why don't you start us off by giving us a little background on you and how you find yourself working on and building a new Solana validator client. Richard? Thank you. Well, I'm not joining from the mempool, I'm actually joining from South Germany, bringing more of a degen background to this space. I joined crypto full-time after dropping out of college when I was 16. I did about five years of blockchain infrastructure at Trust Wallet and Blockdaemon, and meanwhile I always had a personal interest in node development and software reverse engineering. The story of how I actually got into client development is really funny: I basically got nerd-sniped into oblivion while I was working on the Solana protocol at Blockdaemon. I was super happy at the place, but eventually I just thought, I want to build my own validator. So I linked up with the author of the Go SDK for Solana and started implementing one.
Alright, thank you, Richard, appreciate it. Next we're going to go ahead and throw it over to Buffalu, who heads up Jito Labs. Buffalu, I wanted to take a moment and let the audience know what Jito Labs does, if they're not already aware, and hear a brief description of why you are excited about Firedancer. Thanks. Yeah, thanks for having me. So Jito Labs builds high performance MEV infrastructure for Solana, to help Solana scale and kind of help the network run a little more efficiently. I'm super, super excited about Firedancer. I think the systems that they're building are going to be super fast and super close to the metal, and all the testing that they're doing and all the lessons that they've learned are going to be super cool and are going to make a huge impact on the performance, reliability, and scalability of the Solana network. So I'm super excited for Firedancer, and thanks for having me on this Twitter space.
Yeah, of course, thanks for joining. You're one of the more engaged people on crypto Twitter with some of the stuff we've been putting out, so appreciate you coming. Next up, we have Jump Crypto security engineer Emanuele Cesena. Emanuele, I wanted you to tell us a bit about what you do and what you're interested in pertaining to Firedancer. Hey everyone, thank you. So I'm Emanuele, I work at Jump Crypto on the security team, mostly on cryptography, various areas of cryptography. I do custody for Jump, so protecting our private keys, and I've been working on zero-knowledge proofs and specifically hardware acceleration. The fun fact is that this is how I started connecting with the Firedancer team, so my interest in Firedancer is more in the very low level implementation of the cryptography, making it fast and implementing it in hardware so we can get the best out of it.
All right, great. We might have Dan Albert up here a little bit later on, but why don't we go ahead and get the 101 stuff out of the way. Richard, could you give us a brief description of just what is Firedancer?
Of course. So Firedancer is mainly two things. Number one, it's a ground-up rewrite of the Solana validator, and it aims to increase decentralization through software diversity. The second thing is it's also a research and development effort, where we try to explore the high performance computing space, which in my opinion is a greatly underappreciated paradigm for building fast blockchains, complementing ZK of course. And that also entails operating at a level of reliability and performance that you really only see in TradFi, but you can actually do that in blockchain without giving up on any of the decentralization. So I guess that would be my description in a nutshell. That is a great description. We did end up getting Dan Albert up here. I'll give a brief introduction: he is the executive director of the Solana Foundation. Thanks for joining us. Perhaps you can give us your rendition of the introductions that some of the other folks gave
beforehand. Hey, yeah, thanks everybody for joining. My name is Dan, I lead a lot of the network growth initiatives and network support on behalf of the Solana Foundation. I've been working on the Solana project for close to four years now. I was an early employee at Solana Labs and was actually involved in hand-building some of the first bare-metal validators that we deployed for the Solana network before we launched mainnet. So seeing the network evolve and grow, and our validator community and our developer community expanding and evolving in this way, particularly as it relates to Firedancer and all the MEV work, has just been really exciting, and something I've been tracking closely, near and dear to my heart, for many years now. So it's just super exciting to see all of these technologies coming together.
So I'll go ahead and just ask a bunch of questions for anyone on stage. It's not like a strict Q&A, so if you want to add some thoughts to what someone else is saying, please go ahead; it's going to be a nice back and forth. I'll throw it right back to you, Richard, to start. Just who is building this client? We've heard a bit about Kevin. Is it really just one guy in a dark room? Well, now it's a few people, from the more Jump classic side. A lot of our team is actually not crypto native, with the exception of me, but it's actually a fairly large team at Jump who is building this client full-time.
Okay, beautiful. And then Buffalu, I know you've been pretty engaged in the Solana community for a while, and Richard, you had spoken to the community effort in building this validator. Perhaps you two could say a bit more about that.
Sorry, a bit more about what, the community effort?
Yeah, just the community effort surrounding, like, communications and another Solana validator being built. Richard, maybe you want to take the lead, and then Buffalu can add on.
Yeah, I think what's really interesting is that the Solana client and the protocol have mainly been developed by Solana Labs. You had a bit of external contributors here and there, but this is radically changing. We now have teams like Jito of course, now Jump, also Mango and Elusiv, and we're all getting together; we're starting to have core contributor calls. I guess it's a really interesting development, where no single entity owns the Solana protocol anymore and no one on their own can decide how the protocol gets developed and which features get added. So yeah, definitely changing things up.
Yeah, you're starting to see the foundation step in here, and it's super cool to see the contributions that Richard and others are starting to make with these Solana Improvement Documents. If you go to the GitHub repo, solana-foundation/solana-improvement-documents, we've got a number of improvement documents, and there are going to be some more network-level improvements posted there that people can chime in on, as our team, Solana Labs, and Jump work together on some of these improvements for the various clients. You're also seeing an official spec that Richard has been putting a lot of effort into as well, outlining how the validators talk to each other and how to do consensus and things like that, really diving deep into that type of thing so that all the different clients are aligned on how they talk to each other.
Yeah, something I would add to that, which I think is pretty interesting, is a sort of open problem that we're feeling around the solution space for right now: how do the validators talk to each other and come to consensus on the network, but also the humans, right, the various different teams. Part of the effort in setting up some of this stuff on the foundation GitHub is to figure out: what are best practices in the community? We can take some learnings from the Ethereum community, which has a long-standing Ethereum improvement process, and ask what works, what doesn't, and how we can create a similar level of community discourse and the ability for engaged community participants, core developers, and validator teams to get their ideas out there. Then we can go through the messy social-consensus debate period of figuring out the right way to move the code forward, and formalize it into a bit more of a standard document. So there are a whole lot of steps here, and we're not trying to bludgeon things to death with excess process. But in this time, where in just the last three to six months we've seen Firedancer come out of the gate and the Jito team release their client, all of a sudden there's this incredibly increased demand, now that it's not just Solana Labs primarily making contributions, for letting validator development and protocol development be truly community-led. So it's going to be a little messy, and it's an exciting process. I appreciate everyone who has made contributions to the repos, but also just to the community discourse as we figure this out.
Yeah, that's good to hear. I guess along the lines of that, I'm wondering what the communication between all the different devs, the teams working on separate validators in separate spaces, looks like. And if anyone wants to learn a bit more, is there some sort of all-core-devs call or something similar where they can go to get that info? Yeah, we actually have just been talking about that this week. I don't think we have a date finalized yet, but we are planning to kick off something like a core devs call, very similar to how Ethereum does their core developer calls. That would be open for public listening, and we're planning to try that out, see if people find the format useful, and not be afraid to experiment, right: if something doesn't work, we'll ditch it and try something else. But generally, most technical communication happens either on the Solana Tech Discord or, for changes to the existing validator client, a lot of it is all in GitHub, in issues and PRs and debates in the comment threads, as well as in the somewhat more formal repos that we're spinning up on the foundation side. But this is still an area where we're trying to figure out the right way to get information and feedback effectively to and from the community.
Nice. Switching gears a little bit, I guess I want to talk about why. Sure, it's very cool that this hard thing is being built, but why is it being built? Perhaps, Richard, you can lead, and then Emanuele, maybe you can expand on why Firedancer is being built when an existing validator already exists. Go ahead.
Sure. Well, the existing validator implementation right now is the Solana Labs implementation. I cannot fully speak for Jump on why they are building this, since I joined a bit later, but it's safe to say that we deeply care about the ecosystem. We have this great engineering team that has done decades of software optimization, the Solana Labs team has been particularly open to collaboration, and Solana has a software stack that fits the kind of software and systems design Jump is already doing very well, so it kind of naturally aligned. The other part of it is the global Jump Trading infrastructure, and you can hear more about this in Kevin's talk: it's actually not that far off from a globally decentralized blockchain, because there are points of presence in all of these various places all over the world, and it kind of needs to act as one coordinated system. So, long story short, the engineering challenges aren't new, and we can make a big difference by bringing some good software engineering to the space.
Yeah, and to add to that, definitely one of the main goals is decentralization. Why? When a blockchain goes past a few users, and Solana is definitely way beyond that, you cannot safely rely on a single development team and a single validator implementation. We are seeing it in Ethereum, where there are multiple implementations, and we are seeing it in Solana. This is also the reason why, and we'll probably discuss this later, many choices have been made to be completely independent. Not because there is a competition, but exactly because you want to make sure that, for example, if there is an attack on one implementation, most likely the other implementation is going to be completely immune. And yeah, to echo what Richard just said, Jump has a very, very large network that is kind of a private network, so nobody knows a lot about it, but we have teams of experts in network communication, in FPGAs, in hardware acceleration, and we're bringing all these minds together. Solana seems like the best blockchain in this regard, because one of the goals of Solana is to be as fast as the network permits. So I guess this is where the two teams come together.
Nice, you spoke a little bit about how, if there's some bug on one validator, it's unlikely to be on the other. Is there a specific reason for that? I know that they're written in different languages; perhaps you could speak a little to that. Yeah, for example, one of the choices Jump has made has been to build Firedancer in C, completely from zero, not using anything from the existing Solana validator. And maybe some people will say, but why in 2022 are you using C when there are so many modern languages, including Rust? If we had to start from zero, we would probably have chosen Rust as well. But since the existing implementation is in Rust, we don't want to risk copying too much. And one important part is the dependencies: any software you build is probably relying on dozens of dependencies from open source, and if you end up using the same language, most likely you're also going to use the same dependencies. When you are in that situation, a bug in a critical dependency, for example in a cryptography library, could become a very serious vulnerability in both implementations. In our case, the decision has been to use C and C++. And again, we have teams of experts that run very large systems at a very large production scale. So you shouldn't think of, you know, a student building in C and coming up with random simple bugs. No, this is a team of experts who are very well trained on the quirks of the language, the risks, and the possible vulnerabilities, and who work around them to make sure that the code is safe.
Richard, I know this is one of your favorite subjects to talk about. Do you want to build off that at all? Oh yeah, I could talk about the language choice for a long time. Let me just add that we are definitely cognizant of the risks of building in C, but as Emanuele pointed out, we don't just push code straight to main. We actually have a quite lengthy review process: we cannot check in a single line of code without having at least someone else review it in depth, and that's enforced in the code review tool that we use. We are also working with security auditing firms, and we try to get formal verification for as much of the code as possible, but that just takes a very long time. We cannot formally verify everything in the Solana protocol, but we can at least make steps toward that, and we also have reasonable safety margins, or failure domains, where when one component behaves incorrectly, we can ensure that that component will not bring down the whole validator. To give an example, if there's a critical bug in the virtual machine, we can kill the virtual machine without having the other systems, like key management and the database and so on, get impacted by that. You can read more about that in our latest blog post, called "Firedancer Reliability Efforts." Now, there's also another super interesting reason for C versus Rust. It doesn't only extend to using two different tools; it's mainly about high performance computing, and when I say that, it's not just taking a particular algorithm and making it faster, it's actually a completely different way to build software. I think at Jump the engineers think of that as the default way to build things, but coming from an outsider's perspective, if I went to the Firedancer codebase and just looked at the code, it would look kind of like black magic. For example, first of all, we don't do any heap allocations at all, whereas heap allocation is a very basic thing that Rust code almost inevitably does regardless of what you use. Even if you use a simple hash map, there will be loads of heap allocations in the background.
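As an illustration of the no-heap style being described (this is a toy sketch, not Firedancer's actual code; the names `arena_t`, `arena_alloc`, and so on are invented for this example), one common pattern is a bump arena: all memory is sized and reserved once at startup, and nothing is individually freed afterwards.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy bump arena: one fixed buffer sized up front, no malloc/free after
   startup. Illustrative only; a production allocator is far more
   sophisticated (object pools, per-core workspaces, etc.). */
typedef struct {
  uint8_t *base; /* start of the preallocated region */
  size_t   cap;  /* total capacity in bytes */
  size_t   used; /* bump-pointer offset */
} arena_t;

static void arena_init(arena_t *a, void *buf, size_t cap) {
  a->base = (uint8_t *)buf;
  a->cap  = cap;
  a->used = 0;
}

/* Return `sz` bytes aligned to `align` (a power of two), or NULL if the
   arena is exhausted. Individual objects are never freed. */
static void *arena_alloc(arena_t *a, size_t sz, size_t align) {
  size_t off = (a->used + align - 1) & ~(align - 1);
  if (off + sz > a->cap) return NULL;
  a->used = off + sz;
  return a->base + off;
}

/* Reset the whole arena at a phase boundary instead of freeing objects. */
static void arena_reset(arena_t *a) { a->used = 0; }
```

The point is that after initialization the allocator never touches the system heap, so allocation is a couple of arithmetic instructions and the memory layout is fully under the programmer's control.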
The main thing is, heap allocations go hand in hand with virtual memory. Virtual memory is basically the operating system lying to you: if you access any particular number in some random array that you've allocated, that array might not even have a physical location in memory yet. What the operating system will do is suspend your current execution, do some magic in the background (they call this a page fault, or demand paging), and then give you control back. But meanwhile, while the kernel is doing all of this, the CPU has spent a bunch of time not executing your code. And it gets even worse, because the hardware does this as well, Intel CPUs and of course AMD CPUs; this is not vendor specific, this is quite literally just computer architecture. So, to end my rant, the way we build Firedancer is we make everything very aware of the specific locality of data. That means when we allocate memory, it actually has a physical location on a physical DIMM of RAM, and we also know which CPU core it's closest to. So on multicore systems, we get seamless scalability, where data actually takes the most efficient path, for example not traveling across cores. So this kind of NUMA-aware
system engineering is quite hard to do in Rust because Rust has many focus on a different areas of computing and while it's a really great tool and we appreciate in other places, if we used Rust for this kind of system type of performance computing use case we would basically start from zero we would use
a lot of unsafe code, which is kind of what Raster's designed to prevent. So, you know, considering all of this and then also avoiding shared dependencies, C is actually a really attractive choice. And another reason is we have the GCC compiler, and this compiler is particularly good at everything, high performance code,
And the counterpart to that, I'll be honest, that is what Russ currently uses. So the nice thing is like, even generating the whole code, there is basically no shared component between the Solana, Leps, Validates and Firelands. And even though that might seem a bit overkill at first, it's a financial system. We have to be careful and very careful.
responsible in designing it so they're actually really deep considerations into why CU has a thrust and so on and that's not to say that we'll never use rust I think the design is modular enough that we'll see it develop over time and we're considering we are evaluating all our considerations okay sorry for the rant I hope this was helpful
If you'd like to hear more of Richard's rant, I just pinned the blog post he mentioned, the reliability efforts blog. If you're looking for a high level overview, I suggest reading the thread. If you want to dive a little bit deeper, go ahead and click the link in the first tweet to read the full thing. Dan or Buffalu, would you like to expand on what Richard just said in any way? If not, I do have a couple more questions.
One thing I'll add. I'm not the language expert, so I can't expand on the C versus Rust debate, but going back to your earlier question of why Firedancer: Emanuele did a great job talking about some of the security benefits. There's also sort of a secondary benefit here, in that Firedancer is being built off of the reference implementation built by Solana Labs, and they're building it from scratch in C, which necessitates that you have a second team of folks who are tearing through basically every line of the Solana Labs codebase. Solana Labs has built this insane, incredible product, and where it stands right now, this is production software. Every time that Labs has to make a change, this is something that we're thinking about: okay, how is this going to impact mainnet, what is the rollout going to look like? Whereas Firedancer sort of has this benefit right now that the code isn't yet in production. So they're able to, in addition to all of the high performance work that Richard was talking about, take a much more critical look at perhaps some of the implementation decisions that Solana Labs has made. Maybe there's a piece of protocol or implementation design where they just decide, okay, we're going to rip that out and rebuild it from scratch, not just in C, but with a fresh angle. And possibly, because they're not necessarily inheriting any technical debt, the Firedancer team could also push feedback or code changes back to the Solana Labs validator in Rust, which would inherently raise the quality of both products while still maintaining protocol compatibility.
Thank you for expanding on that a bit. Buffalu, one second: I see Toly is in the audience; if you'd like to come on up, feel free to request. If not, that is cool too. Buffalu, take it away. Yeah, I think the things that I'm most excited about with this are the things that Dan was just talking about. So Solana is shipping incredibly fast, and it's amazing what they have accomplished, but like any startup, they've been building the plane while it's in the air: you just need to make sure that it continues to run and doesn't crash. Having another team be able to take a 30,000-foot view, see how everything's hooked together, what the different functionalities are, and which pieces of the system are needed, it's going to be really cool to see Jump modularize that. Another thing that I'm super excited about is their concept of tiles, making the system multi-process. For me, it's about being able to iterate super fast: tear down pieces of the system, make some changes, spin it back up, see how it does. I think that's going to be super cool and allow very fast iteration when you're inevitably testing in prod.
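The multi-process idea behind tiles can be illustrated with a toy supervisor (purely a sketch; Firedancer's real tile runtime is far more involved, and the names `spawn_tile`, `tile_ok`, etc. are invented here): each stage runs in its own process, so a crash in one stage kills only that process, and a supervisor can notice and respawn it without taking the whole system down.

```c
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run `work` in a child process ("tile"). If the child crashes, only
   that process dies; the parent keeps running and can respawn it. */
static pid_t spawn_tile(void (*work)(void)) {
  pid_t pid = fork();
  if (pid == 0) {
    work();    /* child runs the stage... */
    _exit(0);  /* ...and exits cleanly when done */
  }
  return pid;  /* parent keeps the pid to monitor the tile */
}

/* Wait for one tile and report whether it exited cleanly. */
static int tile_ok(pid_t pid) {
  int status;
  if (waitpid(pid, &status, 0) != pid) return 0;
  return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

static void good_work(void) { /* a well-behaved stage */ }
static void bad_work(void)  { raise(SIGSEGV); /* simulated crash */ }
```

A supervisor loop would call `spawn_tile` again whenever `tile_ok` reports a failure, which is one simple way to get the tear-down-and-respawn iteration being described, and the failure-domain isolation Richard mentioned earlier.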
All right, thank you, Buffalu. Toly, thanks for joining us up on the stage. Go ahead and give us your thoughts on what we've heard so far. I actually started building Solana in C; that was what I used the very first day I started coding it. And I switched to Rust because I didn't have the resources to build everything from scratch, even though I knew that it would be faster. It's awesome to see you guys take that on, because this is basically how I would have built it if I had the time and I wasn't also trying to raise money and hire people and do everything else at the same time. If you're an engineer who has just started programming: back when I was in school, we had to run these experiments to really show the difference between direct access to physical memory and paging. You can run a bunch of benchmarks: just create a very large array, and then measure the performance of touching every integer in sequence or randomly. Once the array gets bigger than your memory, you start getting page faults, and you can really tell the difference. You can also tell the difference between what the prefetcher does when you're accessing things in sequence, where it's able to pipeline those requests because it knows you're coming to the page boundary, and when you're accessing randomly. These things become an instinctive part of your development process if you've had to debug these performance issues for years. You just look at the code and you know immediately: okay, this thing is going to thrash the cache, it's going to thrash pages, as soon as this hash table grows past a certain size. So you start building different data structures that are generally not in the common standard libraries of any modern language, and you do need C for these things. These are the kinds of optimizations that are very hard to do later, because these performance bottlenecks become insidious: you end up with a soup of code that is just constantly thrashing the cache and doing the wrong thing, and optimizing just one part doesn't have any impact, per Amdahl's law. But if you start from the beginning, and you actually design your code from the very start to be aware of how these data structures and memory accesses work, then you can actually start seeing the benefit of all of these performance optimizations, and they add up to hundreds-of-X improvements.
Nice, nice. We are getting a ton of questions rolling in now. One of them, from Mr. Y down in the audience, asks: was rBPF an important factor in selecting Rust? Maybe you could expand a bit on that.
The BPF virtual machine was built in Rust. We would have built our own; BPF is luckily a very simple and small VM that you could re-implement. So we were just happy that there was one that already existed, built in Rust, and that accelerated some of our development. And we've, I think, upstreamed fixes that we found. But generally, BPF is one of the easiest VMs you can build. It's a pretty good toy example if you're writing a compiler or just want to learn how to build your own virtual machine.
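To make Toly's point concrete, here is a hypothetical toy register VM in the same spirit (the opcodes and encoding below are invented for illustration; real BPF/sBPF has its own instruction set and a verifier): a handful of 64-bit registers, fixed-size instructions, and a straightforward fetch-decode-execute loop.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Invented opcodes for a BPF-flavored toy VM. */
enum { OP_MOV_IMM, OP_ADD_REG, OP_MUL_IMM, OP_EXIT };

/* One fixed-size instruction: opcode, destination and source registers,
   and a 32-bit immediate. */
typedef struct {
  uint8_t op, dst, src;
  int32_t imm;
} insn_t;

/* Execute at most `n` instructions; the result is register 0. */
static int64_t vm_run(const insn_t *prog, size_t n) {
  int64_t reg[4] = {0};
  for (size_t pc = 0; pc < n; pc++) {
    const insn_t *i = &prog[pc];
    switch (i->op) {
      case OP_MOV_IMM: reg[i->dst]  = i->imm;      break;
      case OP_ADD_REG: reg[i->dst] += reg[i->src]; break;
      case OP_MUL_IMM: reg[i->dst] *= i->imm;      break;
      case OP_EXIT:    return reg[0];
    }
  }
  return reg[0];
}
```

A real implementation adds bounds-checked memory access, a verifier, and usually a JIT, but the core of a BPF-class VM really is this small, which is why it makes such a good learning project.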
I want to add a shameless plug here. I started to research the BPF virtual machine a bit more, and there's actually a second implementation right now in the radiance repo. We haven't talked about it a lot, but this was kind of my initial attempt at writing a second validator. So if you're curious and you want to tinker with it, I think this is a great place to start. When I get questions like, how should I get started in systems programming, I also sometimes respond with: build your own operating system, or do some low-level virtual machine work. It sounds a bit crazy at first, but it teaches you really a lot of things. I think the Solana VM's performance, in comparison to Wasm and the EVM and so on, is super underappreciated, and it's a very cool module that makes Solana fast.
Well, we're standing on the shoulders of giants there. The Linux community built this thing, right? And they've done, I think, an amazing job keeping it simple and fast and not letting it grow into an octopus like most VMs do.
Yeah, and the beautiful part, I guess, is that Solana has kind of started working with the Linux kernel community. So I do hope eventually Solana will be its own operating system target; that might be a long shot, though. Alright, we've got a couple more questions
down in the audience, this one from Seven Layer: what component of the Solana Labs codebase do you see as the toughest part to replicate? Sounds like that's for Richard. That's a subjective question, because there are things I suck at in development that other people find really easy, but I would probably say any of the database stuff, like the accounts database, because you're building a multi-version concurrency control engine. Basically, that means Solana is constantly evaluating multiple blockchain forks at once, and you don't have enough system resources to have a separate database for each of these. So what you do is save everything in one database and then create multiple views, where you can basically access the database at multiple versions in time. As black magic as it seems, you have to design it in a way that is very aware of the intricacies of NVMe SSDs and so on. That's something we've largely avoided for now with Firedancer, because we're building it in a very modular way: we can basically just plug in the old accounts database that Solana uses. But it will take quite a while. I think usually when you do this kind of work, you can basically write a research paper about it, mainly because of the way Solana does it: you don't just have a database where you request and retrieve stuff, you're actually doing computations in the middle of it, so you have to factor that in and schedule them too.
Dan, if you want to expand on any of this, feel free to. I know we've sort of stumbled into a Q&A section, but go ahead, or we'll keep going in the Q&A. We've got another one from Mr. Y. Mr. Y asks: would a Solana client be possible, or a fork of an existing one, which doesn't map all the accounts into RAM and instead relies on fetching from SSD? So it already does that. Both AccountsDB and the accounts index don't need to be all in RAM.
It would be awesome if some validators started experimenting with using multiple SSDs to stripe those accesses, and figured out the minimum RAM size you can run with on mainnet safely.
That's a great question. That is, by the way, another one of our major goals for Firedancer: to reduce the amount of memory required. We're not saying it's easy, but I think it can be done. Right now it's quite significant, and the bandwidth between the CPU and the NVMes seems to be enough.
Toly, a question for you: how did this project get started? Did someone come up and approach you and say, "we want to build this, let's make Solana a bit more decentralized"? And what was your reaction in the moment? So, this is an idea that I guess came from me, and it just seemed like
the Jump team was just the perfect team to do it, because of their experience in high-frequency trading and building custom hardware with weird protocols. So that was really where the idea came from: the opportunity of working with the Jump folks and their engineering team. It's really critical for the safety of the network, and by safety I mean consistency failures, where you have some catastrophic bug that whacks account state, stuff like that. Those kinds of failures are just scary, and it's what keeps me up at night. Having a separate team that can verify the exact same state transitions with a different code path and come to the same result would make a very substantive difference to security on the network. It's something that I think every decentralized network needs: at least two different clients that can do the exact same state transitions and verify each other.
Sorry, yeah, the other part is that there's always the question: well, what if, let's say, Solana doesn't reach a network majority, like the network goes down by 80%? I think that's very unlikely to happen given the recent run-up of security efforts, but even in that case Firedancer can help, because the current implementation just stops if finalization doesn't occur anymore. Finalization, for the context of the audience, basically means two thirds of validators, in terms of stake, being online. But the actual truth is you don't have to stop: Firedancer can happily continue producing blocks with whatever percentage of stake there is. That basically means that, regardless of how much stake Firedancer holds, it drastically reduces the chance of any block production outage.
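The two-thirds finalization threshold mentioned here can be written down directly. This is a hedged sketch: the stake figures are made up, and real consensus tracks votes per fork rather than one global counter.

```python
from fractions import Fraction


def supermajority_reached(voted_stake: int, total_stake: int) -> bool:
    # Exact rational comparison avoids float rounding right at the 2/3 boundary.
    return Fraction(voted_stake, total_stake) >= Fraction(2, 3)


TOTAL = 300_000_000  # made-up total active stake, arbitrary units

print(supermajority_reached(200_000_000, TOTAL))  # True: exactly 2/3
print(supermajority_reached(199_999_999, TOTAL))  # False: one unit short
```

The behavior Richard describes is what happens when that check returns False: the Labs client halts, while Firedancer's goal is to keep producing blocks even below the threshold.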
That's one of the major goals that we really pursue. It's very likely that the top validators will run multiple clients and just make sure both of them are always in agreement; that's what I would expect, simply for safety. The benefit to safety is that a catastrophic failure, where an invalid state transition prints infinite SOL or something like you saw with the integer overflow in Bitcoin, turns into a halt. And it's much easier to deal with a halt than with state corruption.
I've just gone ahead and pinned a question from 006. It was a little longer, so I wanted to pin it so people could read it; I'll also read it aloud: "With an order-of-magnitude perf improvement, what's the incentive to run the Rust client if the C version is more performant and can exploit the hardware much better, potentially doing more with less? We'd like to hear a bit about how we can retain client diversity."
So, having two clients: once we know what makes Firedancer fast, it's actually much easier to optimize the Rust code, because you can just compare and benchmark. The Firedancer client is going to be faster, and then hopefully the Rust client catches up. There may also be different hardware needed for different nodes. But worst case, let's say Firedancer is just so awesome, you can still run Firedancer as the main client and have the Rust client run just the execution path to verify the state transitions, kind of like running ledger-tool in the background. That gives you, at least for safety, that second implementation that keeps verifying the state transitions. And you only need at least a third of the network to run both, right? So you don't need a lot of stake, but having that second client does make a huge difference.

All right, nice. Yeah, I like that: you just gain so much knowledge when you see somebody else prove out a system that does the same thing, just much faster. As an engineer, you're kind of given free R&D, and you know exactly what changes you need to make in your code to get there. So it's a much faster iteration. Great. Thanks for coming up to the stage, everyone.
For those listening, I wanted to give our speakers a moment for final thoughts, in case there was anything they wanted to say but there wasn't quite the right moment during the Space today. Starting with you, Richard. Sure, yeah, I took some notes of things I really want to share with the community. By the way, I really appreciate the audience coming on; it always makes me super happy to share our recent developments, when previously I'd just sit in my basement and code for myself and nobody actually looked, you know. So this is really cool. First of all, I think the most important announcement is that the block packing pipeline is nearing completion. That's about, I don't know, 10% or 20% of the validator. It basically allows us to build blocks that the network will accept and execute. This is part of our "Frankendancer" effort, an obvious Frankenstein reference: we're not fully there yet, so we stay largely married to the Solana Labs client and build some sort of compatibility layer, but that does allow us to test the components we have so far in a real validator. And I think that's really cool, right? Because usually you're just building for a year and then you eventually get to test, but this is an iterative process where we get instant feedback. And as Toly pointed out, it seems like Solana Labs was kind of exploring what to build as they were building it, while we have the complete picture and can just pick components one by one and replace them. So from the R&D angle, that's been a really nice
experience. We also implemented hardware-accelerated SHA-256 and SHA-512, and in part ed25519; these are all pieces of Solana's signature verification. So the question might be: why accelerate those? They're not even the parts that take the most performance, right? There's the Amdahl's Law reference of "you need to optimize the bottlenecks," but that's not actually true: any part you optimize frees up computing power and in turn gives you more time to execute everything else.
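The Amdahl's Law reference can be made concrete. The formula gives the overall speedup when a fraction p of the work is accelerated by a factor s; the 20% figure below is invented purely to show that accelerating even a non-bottleneck still buys real headroom.

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the work gets s-times faster."""
    return 1.0 / ((1.0 - p) + p / s)


# Hypothetical: signature hashing is 20% of the pipeline, accelerated 10x.
print(amdahl_speedup(0.20, 10.0))   # roughly 1.22x overall
# The freed time isn't wasted: it goes to replaying more forks in parallel.
```

The point Richard makes is the flip side of the usual reading: the law says the *ceiling* comes from the bottleneck, but every accelerated component still contributes its slice of freed time.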
That in turn makes your validator a little more reliable, because you can process more forks in parallel. And if you're building a really fast client that only ever runs at like 5% of its capacity, then the performance you get is also very predictable. Also, kind of a legal note: all these numbers I'm throwing around,
like 5%, are, I don't know, figures I made up on the spot; it's kind of hard to cite specific statistics. But we're also working on weekly updates where we can share individual synthetic benchmarks and individual improvements. So if I say 100k TPS, please don't be too focused on that; I think it will take a while, a long while, until Solana runs at that throughput. And yeah, final thing: you might have seen, in the dev stage presentation of the Firedancer talks we gave at Breakpoint, that we also improved the block packing algorithm quite a bit. We used a quantitative research approach to find a better algorithm for estimating how many compute units, which are kind of Solana's equivalent of gas, individual transactions are going to take. What that allows you to do is execute more in a single block, and in turn earn the validator more fee rewards.
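A toy version of the packing idea: estimate each transaction's compute units (CUs), then fill the block's CU budget preferring the best fee per estimated CU. The greedy policy and every number here are invented for illustration; the real Firedancer and Labs schedulers are far more involved.

```python
from typing import NamedTuple


class Tx(NamedTuple):
    fee: int      # lamports the validator earns if the transaction lands
    est_cus: int  # estimated compute units (Solana's analogue of gas)


def pack_block(txs, cu_budget):
    chosen, spent, fees = [], 0, 0
    # Highest fee per estimated CU first.
    for tx in sorted(txs, key=lambda t: t.fee / t.est_cus, reverse=True):
        if spent + tx.est_cus <= cu_budget:
            chosen.append(tx)
            spent += tx.est_cus
            fees += tx.fee
    return chosen, fees


mempool = [
    Tx(fee=5_000, est_cus=200_000),
    Tx(fee=100,   est_cus=1_000),
    Tx(fee=9_000, est_cus=1_200_000),  # big fee, but too many CUs to fit
]
block, total_fees = pack_block(mempool, cu_budget=1_000_000)
print(len(block), total_fees)  # 2 5100
```

Tighter CU estimates move transactions from "might not fit" to "definitely fits," which is exactly why the estimation research translates into fee revenue.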
I think the very cool thing is that Jito's research goes hand in hand with this. We're also trying to build in such a modular way that, for example, MEV searchers will be able to take out the block packing component and install their own proprietary or open MEV searching engines. So everyone gets the maximum possible performance without having to spend a lot of time on systems engineering.
I want to give everyone one last chance to share some closing thoughts. If not, thank you all for joining us up on the stage today. It does look like Buffalo dropped a little bit earlier but is back in; I just saw Richard gave a little shout-out there to Jito and Buffalo, so I wanted to flag that. Any closing thoughts from anybody?
Just very quickly: I'm really bullish on this approach of essentially outsourcing R&D to a different team. I feel that when you take two great teams of developers and you put in the middle a little bit, a tiny little bit, of positive competition on who can go faster, that's a great approach, and we'll see Firedancer uncover, hopefully, some good things that will flow back into the Labs implementation. And by the way, we have been talking a lot about technical details, and maybe someone in the community is not so technical, so this is just to share that the Labs implementation is not slow. This is still one of the fastest blockchains that exists; here we're really trying to go over the top, but it's already really great.
Nice, we're coming up on the 50-minute mark, so I want to thank everyone for talking with us this afternoon. If you want more information, you can go directly to firedancer.io; it has all the information you'd want about this project. Feel free to follow everyone up on the stage who spoke tonight. Appreciate it, you guys. Have a good weekend. Thanks a lot, everybody. Take care.