Good afternoon, Dan. Let me quickly read a disclaimer and then we'll go ahead and get into this. Please note that all opinions coming from myself and our guests do not reflect the opinions of Jump and/or its affiliates, and are for informational, educational, and entertainment purposes only; please consult your financial advisor. This recording may be used in the future. Please head to jumpcrypto.com/NFA for more information. So thank you guys for taking some time out of your Friday afternoon to talk to us about Firedancer. Here with me today I've got Jump Crypto software developer Richard Patel, Jump Crypto research and development architect Philip Taffet, and the executive director of the Solana Foundation, Dan Albert.
I've pinned a few relevant tweets up at the top of the space. Feel free to look over those as we chat today. And if you feel so inclined, go ahead and retweet them out or comment something that you find interesting underneath it. Feel free to share this space on your feed as well.
You'll only be able to see those on mobile, so if you are on desktop, go right to the Jump Firedancer page and you should be able to see them. I wanted to give you guys a reminder that we'll be doing a Q&A section at the end of this; we have a couple of pre-planned topics to cover first. So go ahead and comment questions underneath. Later on I'll pin those up at the top, and we should be good to go to answer as many as we can within the hour. We've got some fantastic guests, so I'll throw it over to Richie for a little introduction so everyone knows who we are, and we can move through these somewhat fast. Richie, why don't you go ahead and introduce yourself.
Hey, I'm Richie. I'm a software developer at Jump Crypto. I've been doing Solana development for about a year now, and it's great to work full time building Firedancer.
Throw it on over to Dan now. Dan, are you there?
Hey guys, yeah, thanks for putting this together. My name's Dan. I run a bunch of stuff over at the Solana Foundation. I've been working on Solana for close to four years now. I originally was one of the early devs and team members at Solana Labs, and I've been heading up a lot of our network technology, growth, and maturity efforts on behalf of the Foundation for the last year and a half or so.
Yeah, thank you for joining us here today, Dan. And then you're going to hear two voices from the Firedancer account: one is me, and the other is Philip. Philip, do you want to go ahead and give a little intro? Sure. Hi, I'm Philip Taffet. I'm a research and development architect at Jump. I actually started on Solana about a year and a half ago as well, working on some on-chain programs. I worked on Pyth a bit, and now I'm working on Firedancer.
Yeah, thank you for joining us today. You'll hear both Richard and Philip in the video we posted today, pinned up top, of the demo these guys did earlier this week showing progress on Firedancer so far. We're going to have a varying degree of expertise in the audience here. We'd like to get fairly technical a little later on, but let's start the first five minutes with a high-level overview. Richie, maybe that's a good spot for you to explain what Firedancer is and what this milestone is, if you can dive into that a little bit.

Yeah, of course. So for those who are new to this space: Firedancer is a project by Jump Crypto to completely re-implement the Solana validator. It's not just a generic rewrite; it applies the lessons the company has learned building HFT architecture, with a relentless focus on performance. We do that in an incremental process, replacing components of the Solana validator one by one until we've pieced together a full implementation. That has three main areas. The first, our current focus, is networking improvements. The nice thing about this component is that we can tie it into the Solana Labs validator, in an architecture we call Frankendancer, and that gives immediate improvements without having to finish the whole thing, which will take quite a while; it's a long-term project. The next is the runtime, which is everything that's got to do with smart contracts, pretty much the meat of executing and validating the blockchain. And finally consensus, which has some, we hope, really nice improvements to security and reliability on the network. So one of the biggest improvements that Firedancer is going to be able to ship is drastically decreasing the severity of what would previously have been critical bugs. You might imagine some bug that affects every node running a particular implementation. That would be severe if everything runs the same implementation, but it's very unlikely that the same bug would also affect Firedancer, because it's a complete rewrite: every component is audited and rewritten from scratch. It's a complete re-implementation.
Nice, that's a pretty good overview. Dan, I don't know if you wanted to expand a little bit on what it will mean to have a second validator in addition to the Solana Labs one. Yeah, absolutely. Richie gave a great overview of the main technical chunks that the Firedancer team is targeting. This actually has a lot of implications for the existing Solana Labs validator codebase, as well as for network composition and how we think of the validator structure, not just in terms of the code but also the operator population and how the network is composed. So Firedancer writing this code and creating all these incredible performance improvements in their own codebase is an impressive feat as it stands. What's really interesting, though, is that in order for them to be successful, it requires the existing Labs validator to become quite a bit more modular. As the Firedancer team has been working on a lot of the networking protocol pieces, the transaction ingest and things that we'll dive into later, there aren't, at least in the current form, well-defined functional boundaries within the existing Labs validator. That is scaffolding and code rework that needs to be put in place so that Firedancer, or maybe even another team down the road, can just hot-swap components in and out, making a little bit more of a composable validator stack. So a lot of re-architecting is required for this initiative to succeed, but it also means that more teams and more people in the future, as they get greater insight into the Solana network architecture, might be able to do their own research, make their own improvements, or build custom modules a little more easily. It can really make the codebase and the protocol, while extremely high-performance and complex, available to an even broader population over the next couple of years as this process matures.
Nice. Thank you, Dan. I want to go right to updates since our last Space. We're doing these somewhat regularly, monthly or bimonthly, or whenever a significant milestone is hit. Richie, and then maybe after Richie, Philip, if you want to expand on the latest milestone, transaction ingest, and what the difference is between Frankendancer and Firedancer and what that means. Richie?

Yeah, of course. We're excited to share that the first of a bunch of milestones has been completed. That's got to do, as he said, with transaction ingestion, so let me describe what this component is tasked to do. For those who are familiar with other blockchains, it's roughly the equivalent of the mempool. Solana doesn't quite have a mempool, but there is a stream of unconfirmed transactions that need to be validated, basically checking whether the signatures are valid, and then pre-packed into a smaller set that the block producer can assemble into a block. It does all that in a streaming manner, and that already gives the Solana network really good odds at achieving high performance; we already see it quite surpassing any other chain. But we realized this can be pushed much further simply by optimizing the first part of the path a transaction takes. When you go to your wallet and submit a new transaction, that transaction travels over a protocol called TPU, which in turn uses a transport called QUIC. When this stream of transactions arrives at the validator, usually during peak loads, it's going to exceed the available block space, even on Solana. You can cause that just by running a ton of spammers, but the fact of the matter is that most of these transactions are junk or duplicated. So this transaction ingest stage is actually quite critical at cutting down the incoming transaction rate in a way that preserves the best quality of service for real users. That's the bread and butter of what Jump does in traditional markets: you have a massive flood of updates during peak trading hours or exceptional events, and you need to be prepared for all of it. So we realized we could improve quite a bit here. We've used a modular, horizontally scalable architecture in the validator, where you can throw as many cores as you have at this transaction stream, and Firedancer has made some really nice improvements using quantitative and heuristic algorithms to schedule these transactions so that the banking stage, the part that executes them, is constantly fed in an optimal way and wastes the least amount of time waiting on contention or locks, which can happen when, for example, multiple transactions try to access the same piece of state at the same time. This also ties in with the local fee markets that Solana has, and it basically makes it almost intuitive to achieve a good amount of performance across the board without any particular hotspot bringing down the whole chain. So before I get into rambling, taking a step back on what we've done in this particular milestone:
as we said, all of these milestones are supposed to ship improvements, actual real improvements, to the network and to users. This milestone can be run today, even though the whole validator is not completed. And the way we do that, as I previously mentioned, is the Frankendancer architecture. So what we've done: the Solana Labs validator that's currently running the network is internally already quite modular, and we've identified a nice spot where we can rip out an existing Solana Labs component and put in our own Firedancer pipeline. Of course, we speak different protocols internally, so we've also come up with a quick solution for that, called a shim. We then use the shim to get this incoming transaction stream over to Firedancer, and Firedancer does its part of the transaction ingest. The first stage is sig verify, making sure that all the cryptographic signatures are valid. This architecture is built in a way where you could use FPGA hardware acceleration in the future to accelerate that considerably more. I don't have the exact numbers, but I hope we'll do a future Space where some of Jump's hardware R&D engineers will join.
Then we have the dedup stage, the dedup tile as we call it. That's a pretty important one. As you might recall, in the earlier life of the Solana network, when there was some arbitrage opportunity or some profit to be made, people would just duplicate the same transaction, literally hundreds of thousands of times. I've seen ones that were no-op transactions; they just copied the same thing. So what Firedancer has done is build a really optimized dedup tile. We don't just take a standard-library hash map and throw it in there, because that doesn't get high performance. It's optimized for the CPU's L1 cache and for avoiding TLB misses, really all the nice approaches you need to use in high-performance computing, applied here. As a result, we can dedupe transactions at an insane rate, and we're able to scale sig verify quite a bit. Then the final component: after this deduped stream, we throw it into the pack tile, as we call it (maybe it's a pre-pack tile), to output an optimal, pre-ordered stream that can then get passed downstream to the block producer. So that was a lot at once. There's more material where you can look into this in detail: the Firedancer account has tweeted a link to the readme, which has a nice architecture overview, and there's also the demo video that was posted. But I really want to focus a bit more on this final pack tile, which I think is the most interesting improvement we've achieved in this milestone. That's all Philip's work, so I'll let him take it away.
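As an editor's aside, the dedup idea Richie describes can be modeled with a fixed-capacity, pre-allocated table. This is an illustrative Python sketch only; the real Firedancer tile is C code laid out for L1 cache and TLB behavior, and every name here is hypothetical:

```python
class DedupTable:
    """Fixed-capacity, open-addressing dedup filter (illustrative sketch).

    Mirrors only the logic described in the Space: all memory is
    allocated up front, and lookups probe a power-of-two slot array,
    so nothing is allocated per packet.
    """

    def __init__(self, capacity_bits: int = 16):
        self.mask = (1 << capacity_bits) - 1
        self.slots = [None] * (1 << capacity_bits)  # pre-allocated once

    def seen_before(self, signature: bytes) -> bool:
        """Return True if this signature was already recorded; else record it."""
        idx = hash(signature) & self.mask
        # Linear probing: a short, bounded scan instead of chasing heap pointers.
        for _ in range(8):
            slot = self.slots[idx]
            if slot == signature:
                return True          # duplicate transaction, drop it
            if slot is None:
                self.slots[idx] = signature
                return False         # first sighting, keep it
            idx = (idx + 1) & self.mask
        # Probe chain exhausted: overwrite (a real tile would evict deliberately).
        self.slots[idx] = signature
        return False

table = DedupTable()
stream = [b"tx-a", b"tx-b", b"tx-a", b"tx-a", b"tx-c"]
kept = [tx for tx in stream if not table.seen_before(tx)]
# Duplicates of tx-a are filtered out of the stream.
```

The point of the design is that no memory is allocated per packet and each lookup touches a small, bounded number of slots, which is what keeps it friendly to the cache.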
Sure, thanks Richie. I can definitely talk about the pack tile, and maybe first of all, as Richie was mentioning: what is it for, and why do we have it? If you saw Kevin's presentation at Breakpoint, he had a picture on one of the slides that sticks in my mind, of a red sports car stuck in the middle of a traffic jam. Right now we're still working on clearing out all that traffic, building out all the components of the validator to be super high performance. At the moment, even though we have this fast ingest pipeline, block space is limited; it can't keep up. So we have this bottleneck where we have to drop or select some of the transactions. The goal of the pack tile is to ingest transactions at a really high rate, but then output them at a rate that's suitable for the block space and for the rest of the validator. That's 12 to 48 million CUs per 400 milliseconds, or somewhere in that neighborhood, which you'll notice is measured not in transactions per second but in compute units per second. The block packing, though, doesn't want to take just some random selection, every hundredth transaction or whatever. We want to pick the best ones, the most lucrative ones for the validators, so that they can maximize the fees they're getting paid. So when a packet comes in, the pack tile takes a look at the fees it'll pay, estimates the compute units it's going to use (using some of the quantitative work that I presented at Breakpoint), and works out the accounts that it reads and writes, and therefore which other transactions it's going to conflict with. Then, when the tile is ready to schedule another transaction, to keep on this 12-to-48-million-CU-per-400-millisecond pace, it looks at all the transactions it has received so far and picks the best one, based on an adjusted fee per CU at that time, given all the accounts that are potentially in use and all the other transactions that have already been scheduled, so as to avoid any conflicting writes. Then it takes these transactions and, as Richard was mentioning, since we have Frankendancer, our code knit together with the Labs code, the pack tile outputs them back to the Labs validator over the shims that we built.
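To make the selection rule concrete, here is a hypothetical sketch of the policy Philip describes: order candidates by fee per CU, skip transactions whose writable accounts conflict with already-scheduled ones, and stop at the block's CU budget. Field names and numbers are invented for illustration; the real pack tile works incrementally on a live stream rather than sorting a batch.

```python
def pack_block(pending, cu_budget):
    """Greedy block-packing sketch: highest fee per CU first, skipping
    write-lock conflicts, under a fixed compute-unit budget.

    Each entry in `pending` is a dict with invented fields: id,
    fee (lamports), cus (estimated compute units), writes (set of
    writable account keys).
    """
    scheduled, locked, used = [], set(), 0
    for tx in sorted(pending, key=lambda t: t["fee"] / t["cus"], reverse=True):
        if used + tx["cus"] > cu_budget:
            continue                  # would overflow the block's CU limit
        if locked & tx["writes"]:
            continue                  # write-lock conflict, defer this one
        scheduled.append(tx["id"])
        locked |= tx["writes"]
        used += tx["cus"]
    return scheduled, used

pending = [
    {"id": "a", "fee": 5000, "cus": 1000, "writes": {"amm"}},
    {"id": "b", "fee": 9000, "cus": 1000, "writes": {"amm"}},    # best fee density
    {"id": "c", "fee": 2000, "cus": 2000, "writes": {"nft"}},
    {"id": "d", "fee": 100,  "cus": 200_000, "writes": {"sys"}}, # exceeds budget
]
order, used = pack_block(pending, cu_budget=48_000)
# "b" wins the contested "amm" account over "a"; "d" never fits the budget.
```

Note how the budget is denominated in CUs, not a transaction count, matching the 12-to-48-million-CUs-per-400-ms framing above.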
Richie, do you want to talk some about the shims and how we knit the two validators together? Yeah, of course. The internal messaging libraries that individual components use to pass data around differ between the Labs validator and the Firedancer validator, unfortunately. The Labs validator (I keep saying Solana, but I mean the Labs validator) uses a Rust crate called crossbeam, which is quite common in the Rust ecosystem, but we don't think it achieves optimal performance in transaction processing, because it's a generic solution. So Firedancer has come up with its own message-passing library, called Tango, which derives directly from some of the work Jump has been doing in the trad-fi space. To put it shortly, you can't really get much faster than Tango. We didn't want to alter the way crossbeam works at all, so we needed some kind of translation layer that allows us to pass data to and from the crossbeam world in the Labs validator and the Tango world in the Firedancer validator. Another colleague, Wayne, who is not on the Space, came up with a really nice solution for that: a shared-memory protocol designed for simplicity and interoperability. We've implemented it both in Rust and in C, so it lets us just write data into that shim as an intermediary solution. We know it's not optimal performance; the ideal thing would probably be to re-implement all of the Tango libraries inside the Labs validator. But when we actually tested this on a machine and saturated the entire network interface, at the theoretical line rate you might have on a real network interface, the shim wasn't the limiting factor. So that concern goes away. It's not the most interesting part, but I thought it was an interesting anecdote of how you can use an intermediary solution to make Rust and C work nicely together in that context.
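For intuition about what such a shim looks like, here is a toy, single-threaded sketch of a length-prefixed frame buffer, with a bytearray standing in for the shared-memory mapping that both the Rust and C sides would map. The real protocol's layout, wrapping, and synchronization are not shown; everything here is an assumption for illustration.

```python
class ShimRing:
    """Toy single-threaded model of a length-prefixed frame buffer.

    In the real system the buffer is shared memory visible to both the
    crossbeam (Rust) side and the Tango (C) side; here a bytearray
    stands in for it, and all names are hypothetical.
    """

    def __init__(self, size: int = 4096):
        self.buf = bytearray(size)   # stands in for an mmap'd region
        self.head = 0                # producer write offset
        self.tail = 0                # consumer read offset

    def push(self, frame: bytes) -> bool:
        need = 4 + len(frame)        # 4-byte little-endian length prefix
        if self.head + need > len(self.buf):
            return False             # full (a real ring would wrap around)
        self.buf[self.head:self.head + 4] = len(frame).to_bytes(4, "little")
        self.buf[self.head + 4:self.head + need] = frame
        self.head += need
        return True

    def pop(self):
        if self.tail == self.head:
            return None              # empty
        n = int.from_bytes(self.buf[self.tail:self.tail + 4], "little")
        frame = bytes(self.buf[self.tail + 4:self.tail + 4 + n])
        self.tail += 4 + n
        return frame

ring = ShimRing()
ring.push(b"tx1")
ring.push(b"tx2")
first, second = ring.pop(), ring.pop()
```

Because both sides only read and write a dead-simple byte layout, each language can implement its half independently, which is the interoperability property Richie highlights.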
Nice, thank you guys for covering that. We pride ourselves on giving you pretty technical content, so maybe we can take it one step back while everyone in the audience has a moment to digest that. If you have any questions, feel free to comment them under the Twitter Space tweet that goes out when the Space starts; it'll be the first one on our profile. We're going to roll into the Q&A section in about 10 minutes, after a couple more curated questions here. Anyone can ask anything, but we will curate which questions get chosen. Dan, if we can throw to you: we've just talked about how building this benefits the Solana network in general. Can you take it one step back from how technical we just were, if that's okay with you? Yeah, for sure. I'll leave it to Richie and Philip as the technical experts to dig into the details.
But yeah, something that came to mind when you guys were talking: a lot of the performance challenges that Solana as a network was suffering under last summer, when things were getting really hot, when we had these crazy NFT mints and bots spamming the network at 100,000 TPS. We were seeing some validators getting hit with traffic at or exceeding the line rate of the fiber actually coming into the machine, or saturating the NICs. People were getting 20 gigabits per second, up to 100 gigabits per second, of incoming spam transactions. At a certain point that was really overwhelming the hardware itself, but there were also a lot of challenges and bottlenecks in the software, particularly before QUIC was implemented a number of months ago, that were driving some of the network performance issues we ran into. The software itself simply couldn't keep up with this massive volume of transactions, and it caused all sorts of cascading issues further down the software stack. So to see what these guys are building, basically a tool that can handle orders of magnitude more inbound transactions (I don't know whether the demo was on one-gigabit or ten-gigabit fiber lines), with this re-architecture handling them super smoothly, with super fast processing, just spitting it right out to the blocks without breaking a sweat, really speaks to a future where, when we have these massive transaction-load events, we're going to have both software and hardware that can continue to handle them without running into some of the issues we've seen in the past. It's a great step forward in the maturity of the protocol. One other thing I want to mention, and I think these guys will probably talk about it, or maybe you'll link to the demo video from a couple days ago: correct me if I'm wrong, but I believe this was running on basically the same level of hardware that production validators are using today. So while Firedancer is architected to take advantage of FPGAs, custom ASICs, and all this really fancy hardware, you can basically run it today on a bare-metal server that an existing validator is using to run the Solana Labs stack. You don't need a 10x jump in hardware to take advantage of the software improvements, which is super awesome.
Sweet. We've got one more topic to cover and then we'll roll into some questions, which are flowing in nicely now. That is: what is the next milestone, and what does it mean for Firedancer? Richie, that one is for you. Yes, the next milestone will be QUIC. This is actually the component that precedes transaction ingest. QUIC is, as Dan mentioned, the upgrade to the transaction ingestion protocol, with the nice option to rate-limit senders: you can basically tell them what their quota for sending transactions is. That's not only nice for congestion control and keeping the amount of incoming traffic under control; there's also the nice detail that pretty much every data center, and even the transit providers in between, has adopted the QUIC protocol. It's a real internet standard, standardized by the IETF just as TCP and UDP are. So that means if someone spins up a lot of malicious QUIC sessions, the internet infrastructure will actually take care of it for Solana.
That's one of the reasons that, in large part, motivated me to come to Solana: Solana is really good at taking existing infrastructure, eBPF, QUIC, and various other technologies, and amplifying their use for a blockchain network, whereas a lot of other protocols (not to shame them) try to reinvent the wheel, which has other trade-offs. You can already do a lot with existing internet infrastructure. In terms of the actual goals of this milestone: we looked around at the different QUIC libraries that are out there as stock code, but most of them don't align well with the high-performance-computing patterns we use to develop the Firedancer client. A big first turn-off was that pretty much everything does heap allocations, and that's one of the things you really want to avoid in HFT or high-performance computing, because it brings a lot of unpredictability into latency; it's just nicer if you pre-allocate everything at the very beginning. QUIC is also actually a fairly complex protocol, and we were quite surprised at how deep it goes: variable-length integer encodings all over the place, and it also inherits TLS, the technology used in HTTPS, in a modified way. A lot of implementations ship a ton of features that we didn't need, and that's just not the best thing to deploy when you're in this kind of security-critical position. One more thing we noticed as we were preparing for this demo: when we used a transaction spammer (Solana has various denial-of-service testing tools in the monorepo, which we used to push the software to its limits), the existing QUIC implementation was the limiting factor. We weren't able to push the transaction ingest stage to its maximum performance because the Labs code was just too slow. So this shows there is a bottleneck that we can definitely widen, and get more TPS through, if we re-implement QUIC from scratch. I'm very excited about it, and this is not only blockchain-specific: we are arguably going to have one of the fastest, if not the fastest, QUIC implementations out there, which has a wide array of use cases, including regular internet infrastructure.
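On the "variable-length integer encodings all over the place": QUIC varints (RFC 9000, Section 16) pack the length into the top two bits of the first byte, so a value occupies 1, 2, 4, or 8 bytes. A minimal sketch:

```python
def encode_varint(v: int) -> bytes:
    """Encode a QUIC variable-length integer (RFC 9000, Section 16).

    The two most significant bits of the first byte select the total
    length: 00 -> 1 byte, 01 -> 2, 10 -> 4, 11 -> 8.
    """
    for prefix, length in ((0b00, 1), (0b01, 2), (0b10, 4), (0b11, 8)):
        if v < 1 << (8 * length - 2):
            out = v.to_bytes(length, "big")
            return bytes([out[0] | (prefix << 6)]) + out[1:]
    raise ValueError("value exceeds 62 bits")

def decode_varint(data: bytes):
    """Return (value, bytes consumed) for a varint at the start of data."""
    length = 1 << (data[0] >> 6)            # 1, 2, 4, or 8 bytes
    mask = (1 << (8 * length - 2)) - 1      # drop the two prefix bits
    return int.from_bytes(data[:length], "big") & mask, length

# The RFC's own worked example: 15293 encodes as the two bytes 0x7b 0xbd.
wire = encode_varint(15293)
```

One consequence for HPC-style code: a parser must branch on the first byte before it knows how much to read, which is part of why QUIC sits awkwardly with pre-allocated, branch-lean pipelines.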
All right, I'll give every speaker one more chance to say anything we might have missed today, and then we'll roll into the Q&A section. I have those questions pinned up at the top, and we're going to cover a couple. That doesn't mean anyone in the audience shouldn't continue to write questions down below; we might still choose them. We've still got about half an hour left here. Anything you guys want to cover before we roll into the Q&A? We'll be doing one of these with Kevin Bowers soon; there are a lot of people asking when. Yeah, that's a hot topic. We'll have to ask him. He's heads down at the moment, but at some point, if people keep asking enough, I'm sure we can get him up here. He won't be joining today, but keep bothering us and maybe. Philip said he had one more topic he wanted to cover, and then we'll go into the Q&A. Yeah, as you saw in the demo, we ran a test validator with Frankendancer, so we're still using the Labs code for most of the transaction execution and a bunch of the other components, and we can actually execute transactions. But we want to be clear that we're still in the process of getting everything audited and secured, and there are definitely some issues that we know about right now. So just a reminder to not run this on mainnet yet.
Yeah, that is a great disclaimer. And with that, we'll go ahead and roll into some questions. This one is from Devon Bandera: how can web3 cloud providers and node providers best align themselves to support Jump Firedancer, in terms of optimal OS and hardware setup? Richie, do you want to take that one?
Yes. So, you won't need to change anything in terms of existing hardware. We build Firedancer to be a drop-in replacement, so while we cannot promise anything, by all means it looks like the existing mainnet hardware configuration that you use is going to work out. You won't need any special network cards or anything like that. In terms of the operating system, we stay on fairly recent kernel APIs, so don't try to deploy this with Linux kernel version 4; anything that's come out in the last year or two would be great. And there's a good bunch of interesting system configuration. One of the awful secrets of basically any sort of general-purpose development is that your CPU spends a ton of time just looking up where memory is; when that lookup misses, it's called a TLB miss. In virtual memory, all the addresses where various pieces of application memory start have to be resolved to a physical address on an actual piece of memory in your system. The way Firedancer avoids the performance penalty of those page-table walks, as they're called, is by using large pages, or huge pages. This is a processor feature that maps larger blocks of physical memory into virtual memory, and it requires some special system configuration: you'll need to configure a bunch of sysctls and ulimits, and the setup requires a fair number of steps to initialize the validator before you run it. Some of this you already do with the existing Solana Labs validator (I think it's a binary called sys-tuner), but the process now has a lot more steps. Maybe some of this can be solved with some kind of wizard or nice scripts that do it for you, but you'll probably want to take a look at the existing readmes and deploy setups. And of course, always feel free to shoot us a question; we might also open up our Discord so people can ask specific deployment questions. If you're trying to deploy this in Kubernetes, it would also be a good time to look into a few more advanced system-tuning APIs. That's all, thanks.
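A back-of-envelope illustration of why huge pages matter, assuming a hypothetical 32 GiB working set (the number is invented for illustration): the count of page-table entries, and with it the TLB pressure, shrinks by orders of magnitude as the page size grows.

```python
def pages_needed(workload_bytes: int, page_bytes: int) -> int:
    """Number of pages (and thus page-table entries) needed to map a region."""
    return -(-workload_bytes // page_bytes)   # ceiling division

GIB = 1 << 30
workload = 32 * GIB                           # hypothetical working set

standard = pages_needed(workload, 4 * 1024)         # 4 KiB standard pages
huge     = pages_needed(workload, 2 * 1024 * 1024)  # 2 MiB huge pages
gigantic = pages_needed(workload, 1 * GIB)          # 1 GiB gigantic pages

# 8,388,608 entries vs 16,384 vs 32: far fewer mappings for the TLB to cache.
```

With only tens or thousands of entries instead of millions, the TLB can cover the whole working set, which is exactly the page-walk penalty described above.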
All right, thanks for the answer, Richie. The next question is from Brian Long; we'll have Philip take this one. The question is: you mentioned ordering transactions by fee. The current priority fees have unpredictable success at landing a transaction, and success seems to be sensitive to the order in which competing transactions arrive. How long will you queue transactions to choose the highest fees?
Yeah, thanks, Brian. So the exact question of how long we'll queue transactions I think we haven't determined, and that's not a part of consensus, so right now that's a validator's choice. But in general, having looked at the Labs validator code for this, I think Firedancer is much more aggressive about choosing the highest-paying transactions, sorted by fee per CU, compared to the current validator. There is some priority for transactions that arrive earlier, but that's only at the very early stages of the block. After that, once you have a big enough pool of transactions, it goes by fee. So then, from a developer perspective, what can you do to get your transactions to land at a higher rate? Well, then we have the right incentives: make your transactions consume fewer CUs, and pay a higher fee. Those are both things we want to promote. I would just add, I think
there is, at least at present, a little bit of inefficiency in the priority-fee market, just because we haven't seen every wallet or every dapp necessarily adopt priority fees in ways that an end user can intuitively understand and that actually deliver the result, like: if I just increase the fee, I'll have an x percent higher chance that my transaction lands, or it will land that much faster. Something that another team is actually working on, separate from Firedancer, but we've been having a lot of discussions around priority fees more broadly recently, is baking a transaction acknowledgement into part of the QUIC protocol. There are a few bits, or a few bytes, available for providing some feedback, because right now, when a client or wallet or dapp sends a transaction with a certain priority fee to the block producer and that transaction doesn't land, you don't know why, right? Did the transaction not make it? Was the block already full? Did it just get forwarded to the following block producer, and get forwarded around? Was the priority fee not high enough? There isn't a great closed feedback loop right now for people to figure out how to make this market more efficient. So feedback built into the QUIC layer, telling you why your transaction did not get included or why it got forwarded, is also going to give people some interesting insight at the network level: if I send to this block producer versus that block producer, do I have a different likelihood of the same transaction at the same fee landing in the block? That can basically translate into which validators are the better block producers, which validators are maybe either running better hardware or have tuned their validator in a particular way that makes them a higher-success block producer. That's good insight for everybody, and also an excellent motivator for the validator operators themselves to learn how to pack blocks better, which is ultimately going to be a net positive for everybody.
Nice, that rolls into the next question we have, from Wayne, which is: would Firedancer lower the hardware requirement for a mainnet validator, now that it can achieve higher performance with the same level of hardware? It's a common question. Richard or Philip, do you want to take it away?
I actually wanted to answer this question earlier, but I didn't get to typing it out. But thanks for the question. So as I mentioned earlier, the hardware requirements very likely won't increase. And we're also seeing right now that, under the current rates you see on Solana, on our mainnet, even with the theoretical worst-case QUIC spam, we can handle that with a minimal amount of CPU power. It feels kind of natural that if it takes fewer resources to process something, then you can probably also downsize your hardware. However, there's also the whole runtime milestone, which we haven't really started working on yet, and the runtime is one of the most complex pieces of the Solana protocol. The execution layer is certainly also one of the biggest improvements that Solana brings to the blockchain space. So it really depends here on whether we can
make that just as efficient. I hope you'll all continue to be validators, of course.
I'm assuming that's going to be the case until Firedancer is ready. Of course, we are also always trying to make the client more economical to run for users, and memory remains a pretty expensive piece of hardware. In terms of CPU, we could downsize if we can really improve some of the eBPF code. There are really nice improvements, such as runtime v2, ABIv2, and typed bytecode, that the team at Solana Labs is working on; that's one of my favorite teams there, they're amazing. I wish they'd be a bit more active on Twitter so we could all see what they're working on, but they're working on quite a lot of things to make the Solana runtime itself more efficient to implement.
That will directly benefit validators as well. With all that said, I really kind of don't want to have weaker validators, because if you have all of that additional performance, well, we strive to be the fastest execution layer and the best way to synchronize state across the entire globe. You know, why not keep that 24-core box to do that?
All right, we have another question from Devon Bendera: can you unpack how FPGAs can provide greater performance for node providers rolling out greenfield support for Solana?
Do you want to touch on that? Sure. So we'll have a lot more on this coming soon, but Kaveh already presented some of it at Breakpoint, and since then we've continued to work on this. There are a lot of operations, especially in the transaction ingress pipeline, but then also even in the eBPF execution, where you can design hardware to be much better at those specific special-purpose operations. I'm thinking of things like SHA-256, SHA-512, and Ed25519 signature verification. You can build or design hardware that's much higher performance than a normal general-purpose CPU. And, you know, we've already seen this in the current Solana Labs validator with the GPU-accelerated aspects. There are certain parts of the code that are a better fit for architectures other than a general-purpose CPU. So that's what we're working on, especially accelerating sigverify, but then also looking at other parts of the validator that we can put into hardware.
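For a sense of the kind of work being offloaded, here is a CPU-side reference for just the hashing stage: each serialized transaction message gets a SHA-256 digest as part of signature verification, and it's exactly this sort of independent, batchable operation that maps well onto parallel FPGA pipelines. The function name and batching shape here are illustrative, not Firedancer's actual API:

```python
import hashlib

def sha256_batch(messages: list[bytes]) -> list[bytes]:
    """Hash each serialized transaction message with SHA-256.

    On a general-purpose CPU these digests are computed one after
    another; dedicated hardware can instantiate many hash cores and
    run them in parallel, which is what makes stages like sigverify
    attractive offload targets.
    """
    return [hashlib.sha256(m).digest() for m in messages]

# Example: hash a small batch of dummy "transaction messages".
digests = sha256_batch([b"tx-0", b"tx-1", b"tx-2"])
```

Because every digest in the batch is independent, throughput scales with however many hash pipelines the hardware provides, with no coordination between them.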
That's great. That's the end of the questions we have in the comments. I'll go ahead and give our speakers a moment for one final thought. In the meantime, if there are any further questions out there, just comment them below. We've still got a couple of minutes if there are any questions.
So, Richie, Philip, if there's anything else you guys want to chat about, now is the time to just go for it. I would just call out really quick: I think it's really awesome that the Firedancer team is running everything they've been talking about having built so far on validators on testnet right now. I believe you have a doc or something; maybe we can link it for folks. If other validators on testnet want to try dropping in your transaction ingest protocol instead of using the Labs one, what might folks need to do for that? And again, disclaimer: the Firedancer code is not fully audited yet, so please don't run this on mainnet. But I just wanted to shout out that, you know, it's quite the accomplishment. I know we've been jumping up and down and talking about Firedancer really excitedly for a long time, and this isn't just tech talk, right? It's built, it works today. There's still a lot of work to be done, but it's out there.
Right, that's exactly right. You can find the README on the Firedancer account; it was tweeted a bit earlier. So what you need to do is compile Firedancer and a fork of the Solana Labs client. Of course, we'll upstream all of these changes. That's pretty much all you need to do. Just follow the Firedancer setup steps, restart your testnet validator, and you'll be good to go. We have a little monitoring tool where you can kind of see the transactions being picked up, so you know they are actually going through Firedancer.
Yeah, if anyone in the space scrolls all the way to the right in the most recent thread we put out, there's a link to the GitHub where you can find some of this information that Richie is talking about.
Looks like we're going to wrap up this space here. Thank you to all the speakers and everyone out there listening in today, spending your Friday afternoon with us. Really appreciate it. Have a nice weekend, and a long weekend if you're in the States.
Thanks everybody. Thanks so much for tuning in.