Compressed NFTs: Metaplex & Solana Labs/Foundation

Recorded: Jan. 24, 2023 Duration: 1:10:49


Okay, hey everybody can you all hear me? Yo
- All right. - Cool. - Yeah, am I coming through loud and clear? Can I see some thumbs up? We were having some audio issues. We couldn't turn the music off.
Okay, cool, I'm seeing a bunch of thumbs up. All right, I think we've got a pretty good crowd here, and we're a few minutes in, so maybe we just get started. Thanks for joining, everybody. I'm Chris, the CEO at Dialect.
We're doing Web3-native messaging. One of our most exciting features is smart messaging. We're not talking about it on this space, but it's cool new tech making messaging in Web3 interesting and differentiated: what can you do in Web3 that you can't do in Web2?
But today, the topic of conversation is compressed NFTs. And the question is, why is this interesting to us at Dialect? The reason is, as some of you may have seen on Twitter recently and in Dialect the app, which is on iOS today and coming to Android and Saga very soon:
we're doing this thing called NFT chat stickers. I'm sure a lot of you in the audience are big fans of chat stickers in whatever your favorite messaging apps are: Telegram, WhatsApp, WeChat, etc. Not that anyone needs to hear this, but chat stickers are little pieces of expressive, animated content
that you can use to express yourself in messaging. And at Dialect we've been having a lot of fun making NFT chat stickers. These are chat stickers that are NFTs like any other: if you have them in your wallet, you can use them in chat. Owning your voice, being more expressive, owning your identity,
all the fun and exciting things about crypto and Web3 that we love. We think of chat stickers as a new way to own and express your identity in Web3. I say they're NFTs like any other, but that's not exactly true: we're using the compressed NFT tech that has been pioneered by Solana Labs and
Metaplex. And why is that interesting to us? Because over the next couple of months we are going to mint probably a million NFT chat stickers. That's a scale and a level of abundance that is, in our minds, new to crypto.
And we are insanely excited about it. This is across a bunch of different collections. On one end, you've got sticker packs that users get right when they download Dialect, just for signing up; those are going to be extremely abundant. On the other end, we've got very exclusive, rare chat stickers that you might only
get because you own an NFT from a PFP collection. That kind of variety and abundance is what gets us really excited about the future of crypto. The big thing for us is: how do we onboard the next billion users? And a big piece of that is cost. Solana is already one of the fastest blockchains and one of the least expensive.
I think today the transaction fee on Solana is about a hundredth of a penny, but rent is still really high, and to mint an NFT you're looking at spending on the order of tens of cents to a dollar. We think that to onboard the next billion users, we need to transition into the subsidizable scale.
With Solana introducing compression tech, this dramatically opens up the sheer volume we can do, and the set of users who can try things out for free because a business can perform the mint for them. So we're insanely excited about this. But you've heard enough of my blabbering, and NFT chat stickers are just one little thing;
there are other incredibly big and exciting compressed NFT efforts going on at other projects, such as the Helium Foundation. But how about this: let's introduce the guests joining us for this Twitter space. We have Jordan Sexton from Solana Labs,
Jon Wong from Solana Labs as well. We have Noah, and Noah, I don't think we've ever talked about how you pronounce your last name, but Gundotra, is that right? I think I got that correct. Cool, I'm getting a thumbs up from him. Also from Solana Labs. And we have Nhan; I don't think we've met before, so please correct me if I'm saying your name incorrectly.
Those are, I think, the main speakers. We may also have Noah Prince from the Helium Foundation, who's leading their efforts on compressed NFT tech. So welcome, everybody; I'd love to hand it off to you. Maybe we do a round of introductions. Jordan, do you want to lead?
Sure, I'm Jordan Sexton. I've worked on Solana Pay and wallet stuff, and I've been following along with the work on compressed NFTs because I think it will unlock some really transformative use cases, which Dialect is pioneering now.
Hello, good to see you. My name is Jon. I actually work at the Solana Foundation, as the tech lead on the NFT team, and I've been working on compressed NFTs since before I started working at Solana.
I remember someone just kind of mapping it out on a board, like, "this is how it works," and I was like, cool, I didn't get that, but I'll figure it out. So I've been working on some form of this project for the last year or so.
And next up, Nhan, you are?
That's how you say my name. I haven't met some of you folks in person, but hey, everyone. I'm the CTO of Metaplex, and we are all things NFT for
Solana, and we're super excited about the tech that we're delivering in conjunction with Solana on these compressed NFTs.
Fantastic. Go ahead. Yep. Go ahead.
Oh, I was just going to say hello. Hi, I'm Noah. I've worked with a bunch of people from Metaplex and Solana Labs, and with Jarry too, on the compression tech behind compressed NFTs.
Fantastic. I don't currently see Noah Prince from Helium as a speaker, but maybe we'll come back to that if we see him show up. Okay, so Jon, you hinted at this a little bit, but I'd love to hear about what motivated the development
of compressed NFTs. Tell us about the problem, what was on folks' minds, and a little of the history: here's the problem, and here's how the solution developed. - Yeah. So as a high-level primer on Metaplex NFTs: they're comprised of four accounts. You have your mint, you have
your ATA, you have token metadata, and a master edition. They all have different functionality and they're all very useful. The downside, though, is when you think about how much those NFTs end up costing on chain: it's something like 0.012 SOL. That's not that much, but in the context of PFPs
and other things that get minted, it's the minter that usually pays that cost, so the creator doesn't have to internalize it. When we were thinking about how to handle NFTs at scale, we had a bit of a problem. We were thinking about use cases like large enterprises facilitating mints on behalf of creators,
or games that want to issue tons of assets as NFTs to allow for transactions and things like that. If you think about 0.012 SOL at the scale of hundreds of thousands or millions of NFTs, it's just
economically infeasible for a company to handle, and in a lot of cases they wanted to abstract that cost away from the end user. So we had to look for a mechanism to store these NFTs on chain, and allow them to be operated on in contracts and so forth, but do it
in a much more cost-effective way. That's where the development of account compression came around in the context of NFTs. Very concretely, there are two pieces to this: account compression is a generic primitive, part of the SPL, that handles the compression of accounts,
and compressed NFTs were just the most obvious use case for compressing those accounts. - That's really good context, and time permitting, I'd love to talk about what you all see on the horizon for generic account compression. That's great. Does anyone else want to comment a little on when
this conversation started? Jon, you mentioned about a year ago, is that right? - Yeah, I think about a year ago. You can fill in the blanks here on large enterprises that might want to be creating tons of NFTs. And obviously Metaplex was a key part of that, right? Because NFTs are going to end up
representing real-world assets, virtual assets in games, all that sort of stuff. So this felt like a great alignment of a couple of different efforts: the business side as well as the technology side, as well as the generic primitive. But in the context of NFTs specifically,
this is something various folks have been working on for the last year or so. - Very cool. - Yeah, just chiming in here real quick. The way we see NFTs at Metaplex is actually, to Jon's point, a lot broader than
just a bunch of JPEGs. We see them as really a database, where each NFT is a row; the difference is that it shows who owns that row, and the content of
that row can be any type of data. Today we use it for JPEGs and PFPs, and some pretty interesting use cases are emerging, but it could be anything. It could be code, it could be something executable, it could be ownership of deeds and legal documents; it could
be anything, right? And if people remember the last Solana summer, this is before FTX, before the macroeconomic crash, Solana NFTs were just mooning into space, and everyone was trying to figure out an economically
viable way of minting a ton of NFTs. The Solana price was like a hundred or two hundred bucks, so these things turned out to cost like one or two dollars to mint. And when you compare that to other chains, right,
they were minting for free or something like that. So there was this big incentive for us as a Solana ecosystem to deliver a mechanism that significantly reduces minting costs, so that we could get some of those next billion users into Web3, onto Solana,
or at least a large portion of them. - Yeah, I love that. And back to the point you just made, and what Jon was saying earlier: as bigger brands come in, as enterprising entrepreneurs come up with creative new things, whether in gaming or
otherwise, the ability to subsidize if you want to is incredibly cool. That's definitely where we arrived at Dialect, just given the sheer volume of what we want to do. And you're right, today a lot of it is
just a bunch of JPEGs; we're moving the needle a hair's width, to maybe just a bunch of GIFs. But I'm incredibly excited about all the other use cases you brought up. Noah, I'd love to ask you a question or two,
just open-ended, on the history of this. I'm based in New York, Noah, and we intersected a lot at Empire DAO, the former Web3 coworking space. I remember starting to hear about this from you. Do you want to tell a little of the story of how you got involved?
Yeah, it's kind of crazy, but almost exactly a year ago I had just started working at Labs. I think my first day was the first day at the New York Hacker House.
At that time Jarry Xiao, one of the co-founders behind Ellipsis Labs, was still with Solana, and he introduced me to this project, which eventually became account compression.
It was in the very early stages of him doing research to figure out ways of accumulating a bunch of data on chain and playing around with proof sizes.
As you all know, one of the biggest issues with Solana transactions is just that they're small: you can't send two kilobytes of proof data. So a lot of the algorithmic and cryptographic data accumulators that Jarry
and others had been researching just would not work on Solana. Merkle trees would work, but there were some key limitations with how they process data serially that would have made it infeasible. So basically,
my first week after joining was this really chaotic mix of picking up the context for why this was needed, and staying up late with Jarry in a hotel room, just playing around with how to get around
this issue that Merkle trees are edited serially when we need them to be editable multiple times in a block. Solving that is how we got to where we are now.
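To put rough numbers on the transaction-size constraint Noah describes, here is a quick illustrative sketch (not from the conversation): 1232 bytes is the commonly cited packet budget for a whole serialized Solana transaction, and each Merkle proof node is a 32-byte hash.

```ts
// Why raw Merkle proofs strain Solana transactions: a tree with 2^depth
// leaves needs `depth` sibling hashes per proof, 32 bytes each.
const PACKET_LIMIT_BYTES = 1232; // rough budget for an entire serialized transaction

for (const depth of [14, 20, 24, 30]) {
  const proofBytes = depth * 32;
  const share = Math.round((proofBytes / PACKET_LIMIT_BYTES) * 100);
  console.log(
    `depth ${depth}: ~${(2 ** depth).toLocaleString()} leaves, ` +
    `proof = ${proofBytes} bytes (${share}% of the packet)`
  );
}
// A depth-30 tree (~1B leaves) needs 960 bytes of proof before accounts,
// signatures, and instruction data, hence the canopy and the proof
// fast-forwarding discussed later in this conversation.
```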
That's really cool, and a good segue. Actually, maybe I'll just ask one last quick question first. Jordan, I'd love to hear how you've intersected with this, because I'm definitely familiar with the big body of work Noah and folks from the Metaplex team have done. Do you want to give a little context on how you've been involved?
Yeah, I'll say I haven't been involved a whole lot; I wasn't involved at all in the development of the spec. It's mostly that I found it really interesting. I think it unlocks a bunch of really interesting utility. In our original conversation about it, I remember talking with you, Chris, about
being able to use account compression for Dialect messages, so that you can actually store them in the ledger without using account state. I also think that, because the amount of data you can store
is larger, it allows us to potentially do things involving privacy and encrypted data on chain that would otherwise be quite expensive if we were storing encrypted data in account state. So I've been following along with the
development, and figuring out, from the perspective of wallet support, what we need to do, and perhaps just as much, when we need to start talking with wallets about how they can integrate this technology.
That's great. I remember some of those conversations, and there's no question we'll be exploring other ways we can use account compression generally at Dialect. Cool. And the question of wallet support, adoption, etc.: hopefully we have time for that at
the end of the call. I was going to say, Noah, the thing you brought up about serially updating Merkle trees is probably a good segue to talk a little about the architecture. We want this to be a technical space; I hope folks are excited about that, I am. So let's dive into the various pieces of the tech. One of the more interesting things that stood out to me
was this two-team effort between Solana Labs and Metaplex. Noah, you mentioned the challenges you faced with Merkle trees; maybe that's where we start. My aim is to think in terms of repos: I know there's Gummyroll, which I think is around that problem you were talking about, Noah.
There's Bubblegum itself, which I think is on the Metaplex side. There's the Digital Asset Standard API, which I think is going to be running on a bunch of RPC providers and indexers. Maybe we first talk through the high-level architecture. I like to think about that in terms of user flows; maybe we walk through,
first, I'm putting up a collection on chain, then I'm going to have users mint, and then they're going to do things like trade, buy, sell, transfer, et cetera. Do you want to talk us through the architecture a little? Anyone feel free to take that one.
I'm more than happy to talk about the technical stuff, but in terms of user walkthroughs, user flows, I think Nhan or Jon is probably better positioned. I'm not as polished on explaining that, but happy to otherwise. - Cool, yeah. Nhan or Jon?
Go for it, Jon.
What was the thing you had trouble explaining? - User flows. Like, if you mint a compressed NFT, how does that trickle through the system, and how does it come into the compression tech stack? - Yeah. So, to start with a regular
Metaplex NFT with a master edition: you're typically creating accounts, right? You're saying, here are the three or four accounts that we need, you're going to configure them, you're going to make calls to the different programs that own those accounts, and they'll make different constraint checks: hey, making sure who holds the freeze authority, making sure that the mint only
has a supply of one, etc., etc. The key difference with account compression is that there is no account per NFT. There is a storage account, and that's the tree. In the context of a user flow going through the minting process, they do the same thing: they issue
a mint instruction against a program called Bubblegum. Bubblegum is a program written and maintained by Metaplex that will, under the hood, call out to the account compression smart contract. The combination of the two
means that the minting process is just the same as you might expect: you send an instruction that says mint, it has some metadata, a URI that points to an off-chain piece of JSON, and the result is, when you call your RPC to return the NFTs owned by a particular wallet,
it's going to return that information. Structurally, it's identical to an existing Metaplex NFT with a master edition, so from a UI development standpoint, a front-end perspective, it's going to look the same. The key difference is not on minting, but rather on reading and writing.
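Purely as an illustration of that mint payload: the field names below mirror familiar token-metadata fields but are assumptions, not the exact Bubblegum types. The point is that the metadata rides inside the instruction, and no per-NFT accounts get created.

```ts
// Illustrative only: a rough shape for the metadata "message" a compressed
// mint carries. Treat every field name here as hypothetical.
interface CompressedMintArgs {
  name: string;
  symbol: string;
  uri: string; // points at an off-chain piece of JSON, exactly as with regular NFTs
  sellerFeeBasisPoints: number;
  creators: { address: string; share: number; verified: boolean }[];
}

const exampleSticker: CompressedMintArgs = {
  name: "Chat Sticker #1",
  symbol: "STKR",
  uri: "https://example.com/sticker-1.json", // hypothetical URI
  sellerFeeBasisPoints: 0,
  creators: [],
};
// One mint instruction against Bubblegum carries this message and appends a
// leaf to the tree; no mint, ATA, metadata, or master-edition accounts are
// allocated per NFT.
console.log(`would mint: ${exampleSticker.name} -> ${exampleSticker.uri}`);
```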
So from a read perspective, because this information is no longer in an account, you have to have been watching the ledger, and that's a pretty big departure from how NFTs are typically read today. Instead of calling the RPC for individual account information and deriving PDAs,
you would typically use an API, or an RPC, that has been watching for this information over time. Metaplex has something called the Read API, which is a really solid interface for doing this. When you call into the Read API, it returns you a list of all the NFTs that you own, both
compressed and uncompressed, and it has all the abstractions, all the metadata fields, everything you're typically used to for NFTs. So the read side is a little more complex. - And is that the Digital Asset Standard API, the DAS API? - They're different things.
Yeah, this is a specification for an API, and under the hood the infrastructure is basically watching the ledger, saying: for every new mint instruction that happens against this tree, I'm going to keep track of the NFT metadata that got issued via that particular instruction. Then when the API
returns this information, it's returning the latest state of that particular tree, because it's been monitoring the tree over time, rather than looking an account up at the current moment in time. - Got it. - And just to answer that specific point: it
used to be called the DAS API, but we've focused it a lot more on just reading, at least at this point, so we've renamed it the Read API, with perhaps some expansions back toward DAS in the future. We've also added a bunch of different
features to the Read API for usability reasons, for example the ability to filter, search, and sort, something RPCs would love to provide to their users. This is also a decentralized way of running the Read API, because your
own favorite RPC provider will be running this; most of the major ones, GenesysGo, Triton, Alchemy I think, and others, are running it. And it's got all of the standard back-end infra,
I guess best practices, for data pipelines. For those that have worked in big data or data pipelines, it's got things like backfills, filling in missed blocks and missed transactions. And it's got a liveness, at least as
of our first mainnet deploy, of something like less than 100 milliseconds from when something happens on chain to when it's queryable: pretty blazing performance. And all of the credit for the Read API goes to Austin on the Metaplex team.
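For a concrete picture of what calling a Read API-enabled RPC can look like, here is a minimal sketch using plain JSON-RPC over fetch. The getAssetsByOwner method follows the Read API specification as described here; the endpoint URL and exact parameter shape are assumptions to check against your provider's documentation.

```ts
// Hedged sketch: ask a Read API-enabled RPC for the NFTs a wallet owns,
// compressed and uncompressed alike. Requires Node 18+ (global fetch).
const READ_API_URL = "https://your-read-api-enabled-rpc.example.com"; // placeholder

async function getAssetsByOwner(ownerAddress: string) {
  const response = await fetch(READ_API_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "1",
      method: "getAssetsByOwner", // per the Read API spec, as I understand it
      params: { ownerAddress, page: 1, limit: 100 },
    }),
  });
  const { result } = await response.json();
  // Each item carries the familiar metadata abstractions (name, uri, ...)
  // regardless of whether the underlying NFT is compressed.
  return result.items;
}
```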
I think he was excited to have you speak on behalf of Metaplex. He's in a Slack channel we share, where the lead engineer on our end, Nick, who's probably in the audience,
and who I told I may force to come up here at some point, has been interacting with all of you, but especially with Austin. He's been incredible, an amazing resource. So okay, let me try to play this back; I'm almost certainly going to get this wrong. Jon or Nhan, tell me if
this sounds right to you. In general, with compressed NFTs, the full source of truth is still decentralized, but it's distributed across an account that stores only compressed Merkle information, and then transactions: when a user goes to
perform, let's say, a mint or a transfer, that information is encoded in transactions as part of the Bubblegum program. And then the infrastructure running on various RPC providers,
and this comes back to the liveness question, is monitoring for these changes happening on chain and turning them into indexed data served through this Read API. So it is still a fully decentralized technology, in that the full source of truth can be reproduced from the Merkle roots and the transaction history. Is
that right? - Yeah, that's correct, and that's a big value prop. We wanted to be sure: again, the enterprise conversations here were basically that any RPC should be able to pick up this information, and no one should be able to rug all of these NFTs.
So there are some constraints here that really lean into the decentralized nature of the chain, key pieces that we designed into the whole architecture. - Very cool. - There's one other piece that I think we missed here: the Read API also serves up the proofs,
so that you can take those proofs and submit them back to Bubblegum or Gummyroll in order to make changes to the Merkle tree. - That makes sense. - Yeah, and that's another piece that's different from a user-flow perspective:
when you're making transfers or doing sales and things like that, you do need the latest proofs to be able to prove: hey, this piece of data I'm operating on actually does live inside this Merkle tree, and I'm the owner or the delegate, therefore I can make this change or make this sale. - That makes a lot of sense, and
it seems fairly consistent with the general paradigm shift of crypto, and this is true of Solana in many ways, especially with the account model, that there's an onus on the client to provide certain kinds of proof information, in this
case the Merkle proof. That's the beauty of cryptography: you can hand that responsibility off to the client, and the mathematics says whether something is true. So it feels very much in the spirit of how a lot of modern blockchain tech works.
I wanted to hop in real quick on one of the things that I think matters: the way I think about the difference between NFTs and compressed NFTs.
When you're building a website to actually mint NFTs; okay, backing up: when you think about minting an NFT, you're usually going to some website, they're creating a transaction, and they're sending it for you.
Most websites are sending the transaction, with your data that you've signed, to some arbitrary RPC provider, some URL, and that data is getting read back off the chain.
The trust model is that you're trusting this website to perform an action for you, and to verify it, you can go to a trusted URL, like one of the public Solana RPC endpoints, and verify that your NFT is there. With compressed NFTs, it works exactly the same.
From a user standpoint, you're going to a website and performing some transaction, which is being sent to an RPC, which may, for example, inject the Merkle tree proof from the Metaplex Read API. And then you can go and verify that at a trusted URL.
The main difference is that when you're looking for trusted URLs, you want RPC providers that have knowledge of the Merkle tree proofs. And there are some providers, like Jon and Nhan have said, who are running these indexing services already. So you can go to the Metaplex Read API, if you trust
Metaplex, and get that data from there, and that data can be confirmed on chain trustlessly. That's the point of the Merkle tree: nobody can lie to you about whether or not your compressed NFT actually exists. You can verify what the indexer tells you.
So I hope that helps. I just wanted to say that, from a trust-model perspective, nothing has changed. - That's right, that makes sense. Despite this meaningfully more complex architecture, fundamentally there's been no compromise on trust.
Especially, obviously, with the liveness of under 100 milliseconds. I know that with a lot of scaling technology in crypto there's a question of time to settlement, et cetera, and liveness isn't exactly the same thing, but that's really interesting. I had one last question on this, and now I'm forgetting it.
So maybe we move on to the next topic. Or maybe one last comment: I'm personally excited to see that the most foundational organizations,
Solana Labs and the Solana Foundation plus Metaplex, are behind this. And to that last point, RPC providers, given the decentralized nature of this, are going to opt in to run these. So the establishment of a standard, its adoption, etc.: it's just really great to have the three juggernauts in the Solana ecosystem pushing this forward.
Maybe this is the wrong thread, but just on that note with RPC providers: it's actually very beneficial for them to run this, because currently, if anyone has developed with GPA to find NFTs, it's just incredibly slow, and it
requires RPCs to index all of token metadata, which is growing day by day and never shrinking. So this provides a mechanism, or an option, for RPCs to not allow GPA calls for all of token metadata, or all NFTs, and therefore they don't have to
store it all in RAM; they don't need these massive multi-terabyte RAM boxes that are just going to grow forever. So it's also quite an elegant and cost-effective solution for RPCs. - The acronym in there, Nhan, is GPA, which is getProgramAccounts, is that correct?
Which is the bane of many: people often say try, try, try not to use getProgramAccounts in your clients, because it's a huge performance issue. So it's great to hear there's also a pretty meaningful incentive from a performance perspective. That makes perfect sense. Cool.
The other thing I wanted to ask about was the relationship between, and we talked about this a little earlier: so, Bubblegum is the program written by Metaplex that allows users to perform actions. And my very naive understanding of Gummyroll, because I haven't gone very deep on it, is that
it's the Solana program around this concurrency issue with writing to, making edits to, a Merkle tree. In the interest of time, let's assume most folks have a pretty good understanding of how Merkle trees work, but I'd love to ask about this problem of concurrency in writing to a Merkle tree.
Noah, in talking at the top of the call, you brought up the history behind this: a very interesting problem around how Merkle trees are written, or edited, in series. I'd love to hear a little more about that problem. I know you all have an interesting paper on it, which maybe we can link to after the talk.
Yeah, also a quick note: I think we've renamed this program a bunch of times. Originally it was called Gummyroll; currently it's called account compression, and it's stored under the Solana Program Library. We're probably going to rename it again going forward,
to something around provable actions. - But yeah, I want to add a little bit there, because I think that's really important. One of the leading questions I get is: okay, now that account compression exists, can I put my 3D images on chain? And it's like, actually, no, this is not that kind of compression. We're compressing
the accounts. Then it's: okay, so are these NFTs lower quality? No, no, that's not it either. So we're trying to work through some of the semantics there. One of the things, as you already talked about, Chris, is that Bubblegum uses account compression to issue actions.
And as Noah was saying, provable actions is kind of the nomenclature that feels best here, because that's what's actually happening: we're issuing actions against the ledger via these transactions, and we're able to prove that they came through a particular program, in this case Bubblegum.
Cool, so Noah, just so I have my language right: for Gummyroll it's account compression, and for Bubblegum it's provable actions, is that right?
Bubblegum is still Bubblegum, and that's its own thing, by the way. - Cool. - Yeah, we've had some issues naming things, but we'll get to a stable point with good names soon. - No, this is the right sequencing:
ship credible stuff fast, and as it becomes useful, it settles and solidifies into an established naming system. To me, that's a sign of progress; we're out on the bleeding edge here. - That's why we name everything candy names, so we don't have to deal with any of these
issues. - Makes sense: codenames, I guess, but open source in public. So I guess the question still stands: I'd love to hear a little about this concurrency with Merkle trees that you all were working on.
Yeah, for sure. So the point of the Merkle tree is basically to provide the minimal amount of data on chain needed to confirm that the data provided by the Metaplex Read API is correct.
Ideally, what we'd like to store on chain is just the root of a Merkle tree, which in this case is literally a 32-byte array. Just 32 bytes, that's it. And what we do is pass a proof, which can be verified along with your compressed NFT,
in order to check whether we can reproduce that same root that's on chain.
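A minimal sketch of that check, assuming (as in the SPL JavaScript tooling) keccak-256 node hashes, with the bits of the leaf index deciding left/right ordering at each level; js-sha3 is a real package, and the rest is illustrative.

```ts
import { keccak_256 } from "js-sha3";

// Recompute the root from a leaf and its proof (sibling hashes, leaf -> root).
// Bit `level` of leafIndex says whether our node is the left or right child,
// i.e. how to order the pair before hashing.
function recomputeRoot(leaf: Uint8Array, proof: Uint8Array[], leafIndex: number): Uint8Array {
  let node = leaf;
  for (let level = 0; level < proof.length; level++) {
    const sibling = proof[level];
    const [left, right] =
      ((leafIndex >> level) & 1) === 0 ? [node, sibling] : [sibling, node];
    const joined = new Uint8Array(left.length + right.length);
    joined.set(left, 0);
    joined.set(right, left.length);
    node = new Uint8Array(keccak_256.arrayBuffer(joined));
  }
  return node; // compare these 32 bytes against the root stored on chain
}
```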
That's the idea behind verifying stuff with Merkle trees. The issue is that you cannot update that Merkle tree root more than one time in a single block.
The basic picture of why you'd want to: say you want to mint a new NFT, and then you want to transfer it in the same block.
The way this would work with a normal Merkle tree is that you pass in the NFT that you want to mint.
It mints it, and now we update the root of the Merkle tree: we go from one 32-byte array to a new 32-byte array.
When we want to transfer the compressed NFT, we have to pass a proof along with the updated data for who we want to send the NFT to.
And now the issue is that when we try to combine the new data with the proof, it doesn't match the original root, because the root has been changed on chain.
And there's no way for us to update that proof, or reconcile the difference, on chain. That's just because Merkle trees process information in sequence; they process things serially.
So that's the basic problem. Does that make sense? Did I explain that adequately? - I think so. And maybe I'll say something here that's way oversimplified, but this wouldn't be an issue if every single client, and like we talked about before, clients are end users out in the world, or maybe some SDK
that abstracts it away for a developer (that was actually my question, which I'll ignore for a moment): there's a button on a website, you press the button, it does a thing that touches this data structure. If clients were always around and ready to wait and retry until they were the first one in, this wouldn't be an issue. It's really that you want to ship off a proof
and the action you want to take, and Solana, the program, wants to reconcile and provide a sort of concurrency. It's about making sure that even though the root might have already changed, the program can, very efficiently on chain, use the proof I got from the client and see that it's still correct. Is that right?
I'm getting a thumbs up from Jon.
- Right: otherwise you'd be retrying block after block, and only if you're the first transaction touching that tree in a block does your transaction succeed. Is that what you're asking about? - Yeah, and that's basically unusable: bad for the network, bad for users. - Yep.
Yeah, exactly. And in particular with NFTs, I'm sure Nhan can tell you, there are just a lot of NFT transactions. So when we imagine compressed NFTs interacting with the rest of the network, we want it to be as seamless as possible.
So what we had to solve for was making multiple updates to a Merkle tree in a single block. And that's basically the way this goes; oh no, go for it. - It's this sort of: now that the root has changed, taking the old proof against an older root
and, as efficiently as possible, showing that it's still valid. Is that right? - Yeah, exactly. - Cool. - Yeah, the flow of having to recalculate that proof, not knowing that there was a simultaneous update from somewhere else in the same block, just
sounded awful. - Yes. So this allows even stale data that comes out of an RPC to be stale up to a point, and it gives a lot more flexibility for clients and RPCs to serve data, so that, especially for NFTs owned by a single person, they can make adjustments to
their own stuff without having to worry about all the other things that are happening. - Very cool. - Yeah, from a technical perspective, the way we got around the serial issue was by caching. It turns out that by caching the last 64,
or 256, or however many updates that have happened to the tree, we're actually able to update outdated proofs. So, like Jon said, if an RPC is serving you stale data, so long as that stale data is
only 64 updates old, we're able to fast-forward that proof to make it valid for the current state of the tree. And that's really the innovation: now, if 64 actions are submitted in a block, they can all be processed
and committed into that block, because the proofs can all be fast-forwarded one after another. That's the difference between a normal Merkle tree and what we've called concurrent Merkle trees. And that's the magic, basically, behind account compression.
And my understanding, I'm a visual person and I was looking at your paper, is that it's about finding where the paths through the tree intersect. Is that correct?
Yeah, exactly. So the basic algorithm: a Merkle proof is a little hard to describe in words, but
when you think about a Merkle tree, you think about paths: a path from a leaf all the way up to the root. The proof is all of the sibling nodes along that path.
So if you think about two nodes next to each other: the leaf is your Metaplex Bubblegum NFT, the sibling node is the first node in the proof, and as you go up the tree, it's all of the sibling nodes.
When you think about fast-forwarding updates using the cache, what we rely on is a very basic property that says there's only one update needed:
if we have stale data, there's only one node in the proof that we have to update per item in the cache. And so we do a very simple intersection. I say simple, but the bit math is a little annoying to look at, and it's difficult to describe the nodes.
Basically, it's really quick to calculate the intersection between an outdated proof and the node we need to perform surgery on in that proof, against an item in the cache, and we do that over and over again. - Is this
the binary representation of the leaf? So if you have N leaves, the Nth leaf, if you write its index in binary, it expresses its trajectory through the tree, and then you find where two paths first collide. I remember you saying that in your paper; it's a really nice trick. You present it without proof, you just say,
oh, it's this, and then you kind of squint at it. - Yeah, that is right. It's very elegant. - That's really fun, really interesting. So when you say one node, you really mean the only thing that's needed is where they collide, because post-collision the paths are shared; it's not a collision again. It's just where they first collide as you follow the tree to the root.
Yeah, that's correct. I'd have to think about that, but yes.
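Here is a hedged sketch of that surgery, following the description above and one reading of the concurrent-Merkle-tree design; the changelog layout is an assumption, and the real on-chain structures differ in detail.

```ts
// Each changelog entry records which leaf changed and the updated nodes along
// that leaf's path, path[level] being the node at `level` (0 = leaf).
interface ChangeLogEntry {
  index: number;       // leaf index that was modified
  path: Uint8Array[];  // updated nodes along the changed leaf's path
}

// Our stale proof and the changed path are siblings at exactly one level: the
// level of the highest bit where the two leaf indices differ. Everything
// above it is shared, everything below is disjoint, so one substitution per
// changelog entry fast-forwards the proof. Entries are replayed oldest-first.
function fastForwardProof(
  proof: Uint8Array[],
  leafIndex: number,
  changeLog: ChangeLogEntry[],
): Uint8Array[] {
  const updated = [...proof];
  for (const entry of changeLog) {
    if (entry.index === leafIndex) {
      // Someone else rewrote our leaf in the meantime: a genuine conflict.
      throw new Error("concurrent write to the same leaf");
    }
    // Highest differing bit = level where the two paths first collide.
    const criticalLevel = 31 - Math.clz32(entry.index ^ leafIndex);
    updated[criticalLevel] = entry.path[criticalLevel];
  }
  return updated;
}
```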
Cool. So yeah, you do that over and over again, and you get this sick result, which is that if you have a gigantic buffer for your cache, like 1,024 items, you can actually
work with really old data, or run a really slow indexer, even one written in TypeScript that's super far behind the network. That slow indexer can run on anyone's computer, and you can still index
all of the updates to your tree, which contain the updates to your NFT. - Wow. - So that's one of the nice things about the buffer size: it gives slower indexers a chance to keep up with what's happening. - Got it.
So everything we just talked about with the Merkle tree is part of this account compression tech from, I know the boundaries between orgs are very porous, but roughly speaking, Solana Labs and the Solana Foundation. Coming back to the top of the call
and all the other possible use cases: okay, that's for NFTs, and there are many very interesting things we can do for NFTs, but there's also generic account compression beyond NFTs. Can we think of Bubblegum as the NFT consumer
of that account compression? Is that the right abstraction boundary, where now anyone else can write their own program to consume this account compression, Merkle tree tech? I'm getting thumbs up; that is a yes. Cool, very cool. It's
exciting to see that that interface has already been created while we're barely hitting production in the first place. Does anyone have any interesting comments on that? - Just a bit of trivia, and maybe Noah can speak on this more, but back in the glass-chewing days
of Bubblegum and Gummyroll, there was another program called Candyland, which was a mishmash of all of this tech in random locations, and it was actually quite a bit of work to separate it into clearly defined Solana-owned pieces and Metaplex-owned pieces.
Yeah, that's where the name Gummyroll came from: it made a lot of sense to have Gummyroll, which supported Bubblegum. And then when we had to break things out, when we actually had to be professional, I guess, about how these repos are separated, that's when Gummyroll got renamed
to account compression. But one of the cool little factoids as well: as a tech layer, what account compression allows you to do is move your data from Solana accounts
into the Solana transaction ledger. And that's really what you need the indexers for: to identify the exact locations where your data was updated, and to serve you a proof that they're giving you correct data.
I think that's the cool part, because this allows you to basically index your own programs. Like Nhan said, it takes
a huge load off the RPCs, since they no longer have to support getProgramAccounts; you're basically implementing it yourself by writing an indexer. And additionally,
the indexer lets you confirm that it's up to date with the chain using Merkle tree proofs. So that's, I think, part of this, but it's a lot of glass chewing to get there.
And the indexer here is playing the role of the Metaplex Read API; Austin and Nhan and team have done an amazing job with that. - Amazing. Now that we've mentioned him a few times, I really do want to see if we can get Noah Prince up here.
I don't know if he's going to be able to do that, though. In the meantime, is
there anything else you all want to talk about on the tech itself? Obviously interfaces are really interesting to me, because they define the responsibilities, etc. That was a big piece for me, understanding the relationship between these various pieces of tech. Any other comments before we segue?
So, Austin Federa has noted that this basically gives you a strongly objective approach to indexing your programs, because the Merkle trees not only prove information about
your accounts; they also allow you to prove that certain actions were taken. And so it's possible that this becomes a more generic primitive for reasoning about programs, in exchange for serial execution.
From a broader perspective, this is basically just to say that it allows you to reason more clearly about your programs. But yeah, that's pretty much it from my end.
Any other thoughts?
I did want to talk about how this is going to affect developers, and the rollout, and stuff like that; I don't know if that's what you were going to say. - That's a great one. I'm actually dying to ask about the cost breakdown first, and then what you can do today would be a great discussion. So just to ask briefly, conceptually:
you all have a table in your documentation with the cost when you create a tree of a certain number of leaves, a certain size, and I'm seeing 10,000, 100,000, 100 million, and a billion. And for total cost, we're looking at as little as a few
SOL for 10,000, and about 500 SOL for a billion. The scaling is extremely non-linear, which is incredibly exciting; that's the power here. But you have this interesting breakdown of tree rent versus tree transaction cost. I'd love to hear more about where this cost comes from and who bears it.
Yeah. So when you're configuring your tree, what we're doing is allocating the space for those Merkle roots, a buffer of those roots. You basically say how big this tree is going to be, and how much, not exactly concurrency, but kind-of concurrency, you want.
That allocation is a typical account allocation against the concurrent Merkle tree program, the account compression program, and it gets pre-allocated ahead of time, before you do any minting. What's nice about this mechanism is that because we're storing so little information on chain, the actual cost
of operating a tree like this is entirely dominated by transaction cost. So when we think about a billion NFTs, it's the number of transactions needed to fill the tree that ultimately dominates the calculation. That being said, it is a lot of
transactions, and again, Noah Prince would have a better take on this, because Helium is going to be sending a boatload, something like hundreds of thousands of transactions, to do this. But one of the things we've also thought about is: can you batch mint?
So to answer your question about who bears the cost: the creator is going to bear the cost of the storage, which is going to be something on the order of five to 20 SOL for a tree. And the minter is the one sending the transaction, which they would be sending anyway, to mint the
NFT. So the transaction cost is sort of amortized across all of the minters. Although in the case of an airdrop, that cost goes back to the creator. - And actually that last point, or the whole point, is that the actual rent on chain is extremely small,
and the vast majority is transactions. The rough numbers here, just coming back to a billion: we said there's around 500 SOL total cost for a tree of, let's say, a billion NFTs in this case. Seven SOL, I'm just looking at your table, seven SOL is the actual rent,
and roughly 500 is the transaction cost. And that's really to get to a full mint, right? Like if a business wants to mint an airdrop, like you said. So it's still highly distributable: in a world where a product's users perform the minting action themselves, that cost is distributed across a very large group.
Is that right? - Yeah, totally. And we're exploring ways to reduce that transaction cost using different strategies, but yes, it's a very clear area of optimization. - And I think this is well into the world of the subsidizable, which,
to your point at the beginning, Jon, means businesses can give users a full Web3 experience, with true ownership of true on-chain assets, at a scale where they can eat the cost. That, to me, is the threshold here; that's what's so exciting. I feel like a lot of crypto
research on compression is really about getting to where a blockchain is usable at all. What's exciting about this is that it's subsidizable at scale. I know I'm repeating myself at this point, but to me that's a huge unlock for mainstream adoption. Incredibly excited about it.
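To make the arithmetic concrete, here's a back-of-envelope sketch with explicitly hypothetical inputs: the 5,000-lamport fee per signature is the standard figure today, the rent number is just the rough value quoted in the conversation, and matching the ~500 SOL figure appears to require batching several mints per transaction.

```ts
// Back-of-envelope compressed-NFT cost model. All inputs are assumptions for
// illustration; the real docs table accounts for exact sizes and fees.
const LAMPORTS_PER_SOL = 1_000_000_000;
const BASE_FEE_LAMPORTS = 5_000; // standard fee per signature today

const treeRentSol = 7;            // rough one-time rent for a billion-leaf tree (from the conversation)
const totalMints = 1_000_000_000;
const mintsPerTx = 10;            // hypothetical batching factor

const txFeeSol = (totalMints / mintsPerTx) * BASE_FEE_LAMPORTS / LAMPORTS_PER_SOL;
console.log(`rent ≈ ${treeRentSol} SOL (creator pays once, up front)`);
console.log(`fees ≈ ${txFeeSol} SOL (amortized across whoever sends the mints)`);
// With mintsPerTx = 10 this lands near the ~500 SOL figure quoted above; at
// 1 mint per transaction it would be ~5,000 SOL. Either way, rent is a
// rounding error next to transaction fees, which is the point being made.
```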
Actually, that was my only question on the cost breakdown; very interesting. Jon, you brought up another great topic that I wanted to cover, so I'll hand it off to you: tell us about developer experience, rollout, etc., anything that's on your mind there.
Sure. As we've talked about a couple of times here, the RPCs are doing a lot of the heavy lifting, because they're the ones watching the ledger, getting all this information, storing it in a database, and serving it through the API that the client ultimately ends
up using. So that's the first step in this whole journey: Metaplex has a reference implementation and is working with all the RPC providers to stand up similar implementations to support compressed NFTs. The second step is for clients like Dialect and Helium, places where they have control
over the wallet experience, to make use of the information coming through those RPCs. In cases where you don't need to transfer or trade those NFTs for now, and they can stay within that wallet ecosystem, it's a great bet to get started today.
Otherwise, you do need wallet support. There's an open PR in Backpack that I've woefully neglected, but Backpack will have support for compressed NFTs, and other wallets like Phantom too. As soon as we can get RPC providers to support this, it should be dead simple for wallets to integrate.
Once wallets can display these as assets that look and feel just like the regular NFTs people already love, it'll be a lot more tangible. The last step is around smart contracts: providing the capability to operate on these NFTs in the same ways you might expect,
like trading, transferring, renting, all the other things, because you do need to reason about them a little differently. So those are roughly the three or four steps I think are needed to get this out to as many people as possible. - My takeaway on what you just described,
and from our conversations internally at Dialect together with Metaplex, Solana, etc., and with the RPC providers, is that everything, these major programs, the way you interact with compressed NFTs to mint, etc., is live and deployed on mainnet. Is that correct?
Yeah, you can mint all of this stuff today using the reference implementations. The examples for smart contracts are a little rarer, and that's where we're going to be trying to get more example code for folks to build
against. But again, we're going through this in series, working with different teams. Unfortunately, I do have to get going to another call, but it's been wonderful chatting with you. - Fantastic. Thank you, Jon, thank you for joining. I'm going to dump these questions on the rest of you; just a couple more.
- Same for me, Chris. Thanks for hosting, and thanks everyone. Hopefully people find amazing use cases for this new tech. - Excellent, fantastic, thank you for joining, Nhan. Last couple of questions: Jordan and Noah, do you have a few minutes? Otherwise we'll wrap up here.
Yeah, I can hang on, and I have a question for Noah too. - My understanding is that although the programs are live on chain right now, the last little step toward performing mints and, like Jon said, being able to enjoy this in your wallet,
etc., is the RPC providers implementing these indexing solutions, the Read APIs. Is that correct? My understanding is that's where we are right now: working closely with RPC providers as they get that tech into production.
Sorry, I'm confused as to what the question is. - That right now, if you really wanted to build this into your product as a developer, there's still a little more work to be done, and that work is for the RPC providers to fully bring on the last piece, the data they monitor and provide through Read APIs.
Yeah, okay, so 100% that's true. But I think it's probably fair to say that most people will feel comfortable using the Metaplex Read API through its official URL,
which is probably enough of a trust model. Obviously that's not true for everyone, but that's where I would go. And the Metaplex Read API
is really stable; Austin and team have done a great job with it.
The one gotcha that we didn't mention adequately is that there's an additional parameter for these trees called the canopy. The canopy basically says:
if you want to cache even more data than the buffer, say the top 10 or top 14 levels of the tree, you can increase the account space of the tree, and it will automatically cache those top levels for you.
What that means is that your proof sizes can be smaller, which allows more programs to compose with your compressed NFTs. So I just wanted to throw that out there for developers who feel like proof sizes are getting in the way of composition.
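The arithmetic behind that trade-off, as a tiny sketch: 32 bytes per proof node, with the depths chosen arbitrarily for illustration.

```ts
// Caching the top `canopyDepth` levels on chain means clients only submit the
// bottom (maxDepth - canopyDepth) sibling hashes with each instruction.
function proofBytes(maxDepth: number, canopyDepth: number): number {
  return (maxDepth - canopyDepth) * 32;
}

console.log(proofBytes(24, 0));  // 768 bytes: crowds out composed instructions
console.log(proofBytes(24, 10)); // 448 bytes
console.log(proofBytes(24, 14)); // 320 bytes: much more room to compose
```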
That's great. Jordan, you said you had a question for Noah.
Oh yeah, I'd asked this in Slack before, but I was essentially wondering what trust assumptions clients take on if they rely on RPCs and indexers to serve proof data, and data from the ledger
that isn't in accounts; like, what do clients have to do to avoid blindly trusting the data coming from an RPC?
It's difficult to answer that, because there are two or three questions in there. One is: what's the trust model for wallets,
when should they trust the Metaplex Read API versus running the Metaplex code themselves on their own RPC? I would suggest they always run their own RPCs and indexers, because that's going to give them the most flexibility.
But there's a deeper question there, which is: how do we get indexing adopted into RPCs at scale?
That question is probably the more important one, because the trust model for
compressed NFTs is really simple: if the RPC or indexer serves you a lie about which NFTs you own,
that proof will be invalid and your transaction will fail.
So really the question is how we get truthful indexers adopted at scale into RPCs, and how we make it easier for RPCs to pick up even more of these extensions over time, as new programs
get developed on top of compression. I think that at Labs we can probably do a better job of designing a new RPC standard, which is probably a very big project. But
in the interim, I know that Helius and Ironforge, and possibly others, are getting pipelines in place to support compression for arbitrary programs: setting up RPC indexing for arbitrary compressed programs.
I hope that answers your question; I feel like I kept shifting focus. - No, no, I think that's okay. Thank you.
That's everything I've got. Jordan or Noah, is there anything we didn't cover that you want to share with the audience?
More program examples are coming soon. We have a really sick project that someone from Solana developer relations is leading. I don't want to reveal it yet, but it's super cool, and it's going to make developing on
top of compression super easy and natural. It's coming soon. Things are in the works, and they're just getting started, but there's a lot of exciting stuff. We'll continue to post updates, and using this will get easier and easier over time.
In the meantime, feel free to harass me or others on Twitter or Discord if you have questions. - Is Twitter the best way to reach you? Given that you're signed in here on Twitter, maybe that's the best call to action for the audience.
Yep, I think that's the best. - Right. And wrapping up with a TL;DR here: we dove into some of the nuts and bolts of the tech, and
I heard this a few times and want to re-emphasize it; Noah, please correct me if I'm wrong, especially on this last point you made. Generally, this tech is getting rolled out. There's going to be broad adoption across major wallets and other projects like exchanges, etc.,
and the developer experience really should end up being comparable, in that I import some tools, and if I'm a developer building some site that wants to consume NFTs, I can eventually have, as far as the code I
write, the same interface for compressed and uncompressed, roughly speaking. I get to engage with and use this code without a lot of conceptual overhead. You're basically trying to make these NFTs like any other. Is that a good TL;DR?
In the long run, 100%. In the short term, it's still going to be a lot of glass chewing; it's just difficult to build on. And I think that's how we create great interfaces: chew the glass, find out what works and what doesn't, as opposed to over-engineering up front. - That's very much the Solana spirit that I love, of
just getting deep in it. So this was fantastic. Jordan and Noah, thank you for coming, and obviously our guests who had to leave a few minutes ago as well. Given that we're a little over time, we're going to skip audience questions.
I'm sure if anybody in the audience has questions, they can forward them to us at Dialect: message our account, tweet at us, or DM us. And Noah, I hope I'm not signing you up for too much here, but folks, if you want to reach out to Noah, he's made himself
available. This was fantastic; I really, really enjoyed it. We are incredibly excited about this new piece of tech, especially for early developers, as this comes online and we all get our heads around what this thing is, how it works,
and what it actually does for our products and our developers. While that interface isn't purely abstracted yet, for me personally it's really helpful to understand a little more about what's going on under the hood. So thank you, Jordan, thank you, Noah; really appreciate you coming on and telling us about compressed NFTs.
And to the audience: thank you, everybody. Yeah, thank you for coming,
and thank you all for staying. - All right, have a good Tuesday, everybody.