So guys, just give us a few seconds, or a minute, we'll get
everyone that needs to be on stage on stage. Here we go. And
then I will be adding Phala as a host to this, so let's get them on stage. Thank you. I think we're having some technical difficulties here,
so just bear with us as we kind of sort this out.
All right, cool, cool, cool.
Awesome. Then let's just get Phala Network up. Thank you. Okay, cool. Can you guys hear me?
Cool. What's up, John? How are you?
Doing pretty good. How are you?
Good, good. I'm in San Diego, just near the bay. There's a gigantic battleship next to me, so I have a pretty cool view.
Kind of jealous. I'm in New York City, so I'm staring at a bunch of brick apartment buildings
outside my window. And, you know, love that, obviously, but just a little bit different.
Your view sounds better. My view sounds better, but yours is more nostalgic.
Fair. Did you used to live in New York?
Well, I mean, no, but I've always wished I could spend at least like a couple of months out there just to kind of explore and be around.
But I don't know, like it feels like it would be too chaotic.
I've never actually lived inside of a city before.
Fair, I got it. Yeah, I mean, if you're like a coastal kind of person, I don't know if you'd like it that much, but I could certainly see maybe you'd like it.
I've heard New York is pretty big with DeFi.
There's a lot of different... just a ton of DeFi companies stacked into New York City for some reason.
I mean, pretty much everything's here, right?
Probably because... I mean, the DeFi in
particular, probably because of the finance, like Wall Street situation, but who knows?
Is your guys' office based out of New York too? Yeah, yeah, yeah. So I live kind of downtown and
we're kind of in the midtown area.
Is that where everything is hosted as well?
Not really, we're pretty spread out. You mean like where all the sort of crypto companies are?
Well, no, I guess like where most of the verification action takes place. Is it centered in New York, just because there's a bunch of companies utilizing it based out of New York?
You're saying for ZK Verify? Yeah. Oh, no, I mean, our dev team is primarily located in Milan. We just kind of happen to have a New York City office, and there's only a couple of us here. It's not a ton.
Well, Anthony, should we dive in here? Do we have everything sorted out, or do you want us to keep going on small talk?
I think we are good to go.
Again, if you're just coming into this X-Space,
we have a great one today,
Phala Network and ZK Verify:
from attestation to on-chain.
This is going to be a nice deep dive
into the Phala and ZK Verify partnership. I'll be sharing some tweets, or posts, whatever we call them nowadays, to this X Space. That way you can be up to speed on everything that has happened with this partnership and what lies ahead.
But with that, John, excited to get this started. So we'll happily kick it over to you.
And thanks, everybody, for joining.
Definitely really excited for this space today.
And we'll get it started.
Maybe I'll just do a quick intro of myself, because I don't know if everybody knows me. I lead the product marketing team here at Horizen Labs.
And my sort of big focus for the past year and a half or so has been ZK Verify, which
is an L1 blockchain that's built to verify zero-knowledge proofs super efficiently. And, you know, we're talking with Phala today about how we're verifying TEE attestations on ZK Verify.
So I'll pass it off to you quickly, Dylan,
maybe to give a bit of an intro on your side,
and then we can dive into more of a conversation about the
partnership here. For sure. And just for everyone that's listening in the audience,
if you have the green heart emoji, if you want to throw that up, like the Phala green, and just start spamming that so I know who's alive in here, I just would love to see your green hearts. And yeah, green hearts, I see one. Okay. So, hi everyone
who doesn't know me. I'm Dylan Kowalik. I'm the head of developer relations for Phala Network, also known as Phala Cloud. I also do some TEE research and zero-knowledge proof research.
My background actually stems all the way back to the early days in crypto. Since 2017, I had done a lot of engineering, principal engineering for Invo Technologies. I was also the head of evangelism with Binance back in 2022, 2023, where I led Greenfield and the ZKOPMB roll-up right before CZ went to prison. So I have all sorts of dark stories
in the crypto days you would not believe. And I also did a lot of ZK stuff with Discreet Labs
where I was doing the Findora project. So I love ZK. I love ZK Verify. I love what you guys are
doing. And yeah, I think there's a huge wave of TEE exploration that a lot of developers are doing right now, just getting their foot in the water, testing it out, seeing what it's doing, seeing what's possible.
I actually, I literally just had an AMA, sorry, no, a podcast with, well, I'm also the host of the Black Box Podcast, which is sponsored by Phala Network, where we actually break down all of the technicals.
So yeah, John, I can get super technical.
If you want to ask really hard questions about what goes down use case wise,
I also know everything about ZK Verify
thanks to Armand, your developer relations specialist.
So that was a really cool conversation I had this morning.
And it's a really good preparation
for what we can talk about now
because I think there's a lot of really interesting use cases that I figured out during that call.
And some things that I sort of want to be able to break down for any developer that is in the audience.
So, yeah, that's me in a nutshell. I'm happy to have this conversation and happy to be here.
Cool. I'm glad, you know, you can handle the technical side because I'll just kind of put it out there to start.
I'm not super, super technical. I'm technical enough to do some damage. So we'll see where
this whole conversation goes, given your high level of technical acumen and my not so high
level of technical acumen. See, it's about having the right question, not having the right answer.
We have good questions and we're Gucci. Exactly.
So I think I have a really good question to kick off here.
And this actually comes from someone on the team.
He wanted me specifically to ask you if you take milk with your tea or not.
I assume you probably heard that one before.
Oh God, no, actually that's the first time. No, I do honey. Sorry. No milk. I do honey.
I promised him I'd ask that to kick it off.
Well, listen, it's red rose tea for everyone who wants to know. At least I do black tea.
Okay. So it's the simulation to the black box. So it is black Red Rose tea. You can get it at Walmart. By the way, it's the first ever tea that actually had an NFT inside of every box. So if you don't know what Red Rose tea is, there's actually eight collectibles in the world. And if you collect all eight, Red Rose actually ends up sending you a free box of Red Rose tea, which is sort of the first NFT tea, T-E-A, on the market since 1997.
Looks like we might have a sponsor for this, a retroactive sponsor for this AMA too.
So maybe, Dylan, you can start by, I'm going to go a little bit off script here, maybe you can start by giving us a little bit of your perspective on why you see value here, or maybe explain even a little bit of what an attestation is and why it might need a zero-knowledge proof verified on chain for it.
Yeah. I mean, to start from a high-level sort of overview: what Phala does is it's a provider of not just, you know, TEE support, but also artificial intelligence. So Phala, on a very, very high level, is that we actually support the GPU and the CPU for artificial intelligence. So you can run like a DeepSeek or a Qwen, like an AI inference, right? And I actually, I see somebody
in the audience that we just spoke with yesterday, SYNT Protocol, which is like an artificial
intelligence that wants to start doing confidential AI. So I love that we have, you know, people from
yesterday coming back in to kind of hear more. So, but like, what is a remote attestation, right?
So that's a part of the TEE. These are effectively like proofs that happen at the operating system level, on the hardware. So you have a computer, and then you have the processing unit, and that processing unit can then create what's known as a remote attestation, which is like a proof.
It's effectively, it's a quote that basically verifies, hey, this is in fact the operating
system you're looking for, right?
These are the droids, right?
So this is the operating system.
This is the HTTPS that you're looking for.
You know, this is the KMS, the key management system
that does not actually come from a central vault
like most of the cloud providers.
So what Dstack does differently, which is the TEE framework that Phala runs, is that it's an open-source framework. It's an open collaboration between us, Near, Flashbots, Nethermind, and so on and so on. And also potentially even the Linux Foundation, which we're pushing quite hard now. So I advise any GitHub maxi to go out there to GitHub, Dstack-TEE, to go learn more, go star it, please.
It is basically the first ever audited framework that has actually proven at the operating system
level that it's actually a secure zero trust framework for remote attestations utilizing
Intel TDX processors, which are known as Xeon processors.
It's actually not just for Zion.
It could be for actually any TEE.
So it's an open framework so that we can start sort of actualizing and making TEEs more accessible and developer-friendly.
So the really nice thing about Phala, and it's actually the really cool thing about what Dstack allows any developer to do, is they can just package their code, like Python and Rust and whatever, and they can put it inside of a Docker container, which is actually quite easy. And then it creates a Docker Compose file, known as a YAML. And then you basically deploy it straight to the TEE. And the awesome thing is that you don't have to do any crazy figuring out of all the remote measurement registers, which is actually quite complex.
These are known as like RTMR1, RTMR2, RTMR3.
And basically just trying to configure all of this
back then using Intel SGX
was the most cumbersome thing in the world.
And so this framework was designed so that deriving the remote attestation proofs, the operating system, the KMS, the application's image from the Docker container, that is all there. It's all running. It's the thing that you know is running. And so that's what actually makes it super developer-friendly. So now we can actually start proving things like zero-knowledge proof programs inside of the TEE. And this is what I'm excited for with ZK Verify, because we can basically prove the prover, in a sense.
So I know that you guys are running like SP1.
I know that you guys are doing like RISC Zero. And so these are really great provers, or zkVMs, right?
Zero knowledge virtual machines.
And so you can run the zero-knowledge virtual machine inside of the TEE. And that's really great, because then you want to prove
that the prover actually was the thing that submitted
not only the zero-knowledge proof record,
but also you want to have an attestation that proves that,
hey, this particular machine in the world that's in Europe,
and this is hosted on Phala Cloud,
this is the operating system running this prover,
running on this HTTPS link.
This is the exact computer running this AI and we have a zero knowledge proof
of potentially the model's training weights, right?
So I'm excited for these use cases.
We can break all of that down,
but that is, I think this is the future.
It's finally time for developers
who had once been segregated, right?
ZK developers and TEE developers. Like, there is an amazing path to combine the two of them, and this is effectively that relationship here.
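To make the deployment flow Dylan describes a bit more concrete, here is a minimal sketch. The image name, port, and environment variable are hypothetical placeholders, not Phala's actual conventions; the point is just that the artifact you hand to the TEE platform is an ordinary Docker Compose file.

```python
# Hypothetical sketch of the packaging step described above: your app goes into a
# Docker image, and a small Compose file is what gets deployed to the TEE.
# Image name, port, and env var below are placeholders, not Phala's real conventions.
from pathlib import Path

COMPOSE_YAML = """\
services:
  prover:
    image: yourorg/sp1-prover:latest    # placeholder image with your Rust/Python prover inside
    ports:
      - "8080:8080"                     # endpoint later covered by the RA-TLS certificate
    environment:
      - EXPECTED_PROGRAM=placeholder    # placeholder identifier for the guest program
"""

def write_compose(path: str = "docker-compose.yml") -> None:
    # The Compose file is the whole deployment artifact; the platform measures the
    # resulting image into the runtime measurement registers (RTMRs) for you.
    Path(path).write_text(COMPOSE_YAML)

if __name__ == "__main__":
    write_compose()
    print(Path("docker-compose.yml").read_text())
```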
So let me make sure I get this straight.
Are you almost saying that you could run, you could generate a proof inside of a trusted execution environment, and then also create a proof of the attestation ensuring that that hardware actually ran the code, kind of all the things that you just said? Or am I misunderstanding a little bit?
No, yeah, yeah, exactly. So we have, there's two specific APIs that are available on Dstack. So one is known as remote attestation of the TLS endpoint, and then another one is the remote attestation of the RPC endpoint. And so for the TLS endpoint, you can
basically do a zero trust HTTPS, which is basically every single, you know, like on the internet,
you go to HTTP, and then at the end, it's S, right?
That means that it's signed by a certificate authority.
So that TLS certificate basically comes with the hardware backed proof of what the actual app and the code is really running.
So you go to a domain, but you don't necessarily know what code is actually running.
You get a remote attestation that proves
that the program is what it says that it is.
But now if you want to run a prover inside of that,
this is basically prove the prover.
So this is ZKPs inside of a TEE.
So you can run the remote attestation of a TLS
so that you can ensure the ZK verifier
only accepts proofs from the attested virtual machine
running that approved prover code.
So, you know, this is akin to sort of like a private smart contract, except instead of smart contracts, you can just do it with Python.
So the Python code snippet or a script
could effectively act as like,
hey, I wanna prove that this program specifically ran
on this domain, and I only
accept proofs that are attested from this virtual machine. Another way of thinking about it is that you can seal the proofs. So this is known as RA-RPC. So you can have an attestation or a proof that
proves the API. For example, you can have an application that calls like a simple endpoint, right? And then
inside of the confidential virtual machine, you can prove the API's overall state. And so this
gives you sort of like an on-demand hardware quote that the container itself is actually
fetching fresh Intel TDX attestations that actually bind to the cryptographic keys at any time.
And it can also prove that the keys actually sealed the code itself.
So you can derive signature keys that only work for the measured version of your application.
So say if you have an application on GitHub and you signed it and created it once,
but you want to open source the actual code,
you can basically fetch the git commit hash as a remote attestation and prove to the world that the code has never changed. Because in the remote attestation of this domain that is on the HTTPS link, the remote attestation comes back and says, hey, whatever app that you're running right now in the web browser, you don't have to go through the rigmarole of basically verifying the front end. Here's an attestation that proves that the actual code that's running
in the prover that might be doing private proving, for example, like a zero knowledge
proof, is in fact not exposing your data, right? And you can prove that it's private.
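As a rough illustration of that gating idea, a minimal sketch follows, assuming a made-up attestation endpoint and JSON field name; this is not Phala's or ZK Verify's real API, just the shape of "only accept proofs from the attested VM running the approved prover code."

```python
# Minimal sketch, not a real API: the endpoint path and the JSON field name
# ("measurement") are made up for illustration. The point is the gating logic:
# only hand a proof to the verifier if it comes from a VM whose attestation
# measurement matches the prover code you approved.
import hmac
import json
from urllib.request import urlopen

EXPECTED_MEASUREMENT = "deadbeef..."  # placeholder: hash of the approved prover image

def attested_measurement(base_url: str) -> str:
    # Hypothetical RA-TLS-style endpoint returning the attestation report as JSON.
    with urlopen(f"{base_url}/attestation") as resp:
        report = json.load(resp)
    return report["measurement"]

def accept_proof(base_url: str, proof_bytes: bytes) -> bool:
    measurement = attested_measurement(base_url)
    # Constant-time comparison; reject anything not produced by the approved code.
    if not hmac.compare_digest(measurement, EXPECTED_MEASUREMENT):
        return False
    # Only now would the proof be forwarded to the verifier (submission not shown).
    return len(proof_bytes) > 0
```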
Super, super cool. I was actually talking to somebody on the way back from lunch yesterday
about something similar, like how could you generate a proof potentially that
a website that you're visiting is actually the correct website. It's not like a spoofed version of it and not going to necessarily expose all of your data, which sounds kind of similar to what
you just described. And so essentially what you're saying is like, if I wanted to, I could create a private DeFi strategy where maybe I want to trade off some specific strategy that I think, I'm doing this based on, you know, I guess, like, valid, legal, all sorts of other kind of thing, all sorts of other, you know, regulatory compliant logic, you know, by using this, this sort of ZK VM inside a trusted execution environment, potentially.
execution environment potentially. Yeah. I mean, and just to dispel this, like TEEs are notorious,
like they are the normal culture of defense in depth strategies for most governments and also
for like compliance rails for, especially for fintech. So like the DTTC, like they do not run
or even MasterCard, they all use TEEs for PCIP or PCI for like the actual payment rails of the world. So TEEs are how you
actually can attest to whether or not a MasterCard is actually an authentic MasterCard itself as well.
So this is how every debit card, you know, like actually attests to the authenticity of like a
Chase card versus a Bank of America card. And so that's how you can prove that your
infrastructure that could also be like off chain, like a debit card is provable by its use, which
can also prove the identity of the cardholder. But as far as trading strategies, of course,
you could do co-processing inside of the TEE. And that's actually what TEEs are meant for.
They're meant for co-processing of any large workload.
And oftentimes they actually can create sort of like a concurrent architecture.
Say if you have like a microservice that needs to, I don't know, like have an agent run, you know, micro strategies on a particular trading strategy that you want to basically list on the market at nine o'clock in the morning.
You wake up at dawn, you're trying to basically figure out whether or not you want to run this trade on NVIDIA versus Intel, right, on the stock market. So yes, you can basically
run the strategy using a zero-knowledge prover inside of the TEE that can run your strategy
using Lean or something. This is like an open source library for strategic trading,
basically testing strategies
ahead of time. Now you want to prove to the market or maybe your firm, your brokerage and say, hey,
I got this proof. Everyone else is basically wanting to verify whether or not they should
take the strategy, but I don't want to reveal necessarily how I made the strategy, but I want
all of my DeFi users to basically compound into this strategy, again, without revealing to
them exactly how I did it. Because I'm the firm, it's my proprietary knowledge, it's my intellectual
property. But I do want them to bet on it, right? Or I want them to basically funnel in their funds
onto like a compound or like, I think Ribbon Finance could also take advantage of this,
which is another DeFi protocol that effectively compounds on these trading strategies before they hit the market. And so people vote using their governance
tokens to say, hey, I want to bet on the futures of it going up or down on BTC in September 14th
so that we can basically funnel $30,000 into the strategy. But then I don't want to see what the
strategy is. Me as Ribbon Finance, I don't want to reveal what that strategy is.
But I do want to prove that the strategy has basically this potential yield of 3.4%, right?
So that could be really important because then Ribbon is basically competing with other brokerages
in a DeFi sort of way, but they don't want to reveal and basically give away their strategies
because that's how they actually remain competitive in the DeFi market of things as well.
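A minimal sketch of the commit-reveal skeleton behind this use case follows; in the full design the yield claim would be backed by a ZK proof generated inside the TEE rather than by opening the commitment, and all names here are illustrative.

```python
# Sketch of the commit-reveal skeleton behind the use case above: a desk keeps its
# strategy private, publishes only a commitment plus a claimed yield, and can later
# open the commitment (or, in the full design, prove the yield in ZK inside a TEE
# instead of revealing anything). All values are illustrative.
import hashlib
import secrets

def commit(strategy_blob: bytes) -> tuple[str, bytes]:
    salt = secrets.token_bytes(32)                        # blinding factor so the commitment leaks nothing
    digest = hashlib.sha256(salt + strategy_blob).hexdigest()
    return digest, salt                                   # publish digest; keep salt + strategy private

def verify_opening(commitment: str, salt: bytes, strategy_blob: bytes) -> bool:
    return hashlib.sha256(salt + strategy_blob).hexdigest() == commitment

if __name__ == "__main__":
    strategy = b"long NVDA / short INTC at 09:00, size 2%"  # stays off-chain and private
    c, salt = commit(strategy)
    print({"commitment": c, "claimed_apy": "3.4%"})          # the only thing the DeFi users see
    assert verify_opening(c, salt, strategy)
```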
Super, super cool. And love the discussion of like a real world sort of use case here.
So definitely appreciate your thoughts there. I'll go a little bit back on script now. I think,
you know, going back to one of our first questions
that we had jotted down here,
there is a question about developer experience.
And so I wanted to, you know, get your thoughts on what were the developer pain points in wrapping TEE attestations into zero-knowledge proofs, in particular. And how does skipping that step help? Which, I guess for everybody's understanding here, ZK Verify can actually verify STARK proofs directly. So you can verify the proofs that get spit out of these zkVMs that we've been talking about, like RISC Zero and SP1, directly. Whereas if you're using the typical sort of workflow that these zkVMs recommend, you have to convert the proof into a Groth16 proof so it can be submitted on an EVM, typically, where there are precompiles available for verifying these types of proofs. So I guess going back to the question, how did we alleviate some of the pain points with TEE attestations with this solution, from your perspective?
Well, you know, if I were to be running, you know, new model weights, right, and I want to dive into the AI part, because if I'm trying to basically protect the training, the fine-tuning of a model, that would be my first stab at this, where I would basically use, I think it's called EZKL, or basically PyTorch, right? Basically the formula of zero-knowledge machine learning. And I want to be able to, you know, basically fine-tune my foundational model, say, like I just uploaded Qwen, but then I want to fine-tune it to build a new workload. But I don't want to basically submit to the prover any data that's not end-to-end encrypted. So what's quite nice about the Phala to SP1 or RISC Zero sort of architecture here, and I've played around with SP1 a little bit, is that it's pretty easy. You basically just write a Rust script, you know, it compiles to a binary, then you can basically prove that the code ran as it says that it did, in addition to the fact that the code that was also running never exposed the data, because maybe you run a
supplementary script on top of that that actually maybe makes the data confidential and stores it on
IPFS. Or maybe, in terms of you sending and submitting the data directly to the SP1 TEE prover, you want to make sure that the actual inputs were also private, and therefore you get the ZT, or the zero trust HTTPS attestation, that proves that any data that you're submitting to a remote prover
that's not locally on-prem, you can actually fine-tune the model in the cloud without actually
exposing the data that you're submitting, like research data or biometric data. And this is
really important for research labs that want to do decentralized research. But this is also for, in general, fine-tuning a model where if you don't
have the GPU clusters, which most people don't, then they're going to need to do ZKML in the cloud.
But then how do you do this where it has end-to-end encryption? Yeah, just run the, I don't know,
SP1 or RISC Zero prover, where you have end-to-end encryption that runs your PyTorch model.
Then, therefore, you can actually create a container that basically creates the PVV file or, you know, whatever the end result, the binary of that fine-tuning ends up being. And then you can basically prove that the GPU was running on Phala. So all
the weights, basically, the fine tuning was trained in a private and also confidential way.
Private and confidentiality are two different things, though. You can prove that it's private
because the attestation proves that SP1 was running it. But then you can also prove that
it was confidential due to the fact that the attestations proved that the end-to-end security was also private.
Private, like ZKVMs can't make the input vector
private. Like, if you send data, then it's not private. It's over,
you know, you could do it over HTTPS,
but then, of course, like provers, they actually reveal the plain text, right?
And so not all zero knowledge is actually private.
And this is known as witness accumulation.
So in the witness accumulation statement, basically, the plain text gets revealed in order for you to create the proof of a zero knowledge proof.
Now, this is why you run the zero knowledge proof in the TEE, so that the encrypted plaintext is actually revealed in what's known as
a black box. Because the TEE is already itself, it's its actual own zero knowledge proof.
Basically, it's a part of the processing on the processor. It only takes up so many registries
or registers or buses on the actual processing unit itself. So you sort of do double ZK. Like, when you do ZK in a TEE, it's sort of saying ZK-ZK proof.
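Here is a minimal sketch of the end-to-end encryption half of that flow, using the `cryptography` package; in the real setup the key would be wrapped to the attested enclave via the KMS rather than shared directly, so this only illustrates the data path, not the key exchange.

```python
# Sketch of the end-to-end encryption half of the flow above, using the
# `cryptography` package. In the real setup you'd encrypt to a key that only the
# attested enclave can unwrap (via the KMS); here a shared symmetric key stands in
# for that, so this shows the data path only.
from cryptography.fernet import Fernet

def client_side(training_batch: bytes, key: bytes) -> bytes:
    # Data leaves the lab only as ciphertext; the transport and the prover
    # operator never see plaintext.
    return Fernet(key).encrypt(training_batch)

def inside_enclave(ciphertext: bytes, key: bytes) -> bytes:
    # Decryption happens only inside the CVM, right before the zkVM guest
    # (e.g., an SP1 program) consumes it as its private witness.
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()          # stand-in for an enclave-held key
    batch = b"biometric or research records"
    assert inside_enclave(client_side(batch, key), key) == batch
```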
So yeah, that's what I would be doing, right? I would be running my model weights, my ZKML,
right? Of fine tuning a training model that can prove that I had trained this model in a particular
way without revealing my intellectual property on exactly how I fine tune it.
And I don't need to do it on-prem, because right now everyone has to be doing on-prem local tuning if they're going to be building an AI. But I think in the future, for small language models, or maybe even newer, more data-efficient models, because there's going to be more, 100% there's going to be more.
Well then, I mean, most people don't have access to the cloud. And if they do have access to the
cloud, well, I mean, Google is going to basically train themselves on your data. I don't think
they're going to basically do a zero trust HTTPS, you know, version controller, where they're not
going to take your data. And I want to dispel this because I was literally at CCS, which is the confidential container summit or computer summit. It was funny too, because
Google goes up on stage and they're like, you know, talking about GCP and, uh, you know, their,
their, their, their ability to do confidential compute for AI. And, uh, you know, after 20
minutes of them talking, their CTO is just kind of standing there in silence for like almost 30 seconds.
One guy stands up and he goes, so if I build a private AI chat on GCP, how do I know that you're not going to like steal my algorithm?
And then the guy, you just hear this like big, you know, takes the mic.
And he's like, I cannot confirm or deny that we will not use your data.
And then I literally laughed out loud. And then like several other people around me goes like, that was hilarious, because of course, they're going to
do this. Like, of course, they're going to basically take, they're going to train themselves
on how to build proprietary services for their cloud. So that's the nice thing with Phala, is that we're giving a guarantee that we have a zero trust HTTPS endpoint, so we cannot necessarily know what you're fine-tuning on our machines.
There is an insurance policy that says
like you cannot train something
that is like crimes against humanity, for example.
And you can't do illegal things, obviously.
But I don't think most people are gonna be doing this.
I think when it comes to like biometrics,
when it comes to research,
when it comes to biology research,
all these sorts of things, ZKML with zero trust HTTPS for off-prem usage of ZK and TEE together is going to be vital for decentralized research.
Love that. And I love the ZK squared analogy, which is something
I'm going to probably take and run with a little bit. So I definitely like that a lot.
And then just to kind of tie it back, again, I want to underline, because I think it's
one of the sort of big selling points for ZK Verify is we enable you to save some time
and obviously cost, in being able to not have to convert your STARK proof to a SNARK proof so that it can be verified on an EVM. You can verify the STARK proof directly with ZK Verify. So hopefully saving people time, and definitely credits or tokens or resources, whatever they're spending to actually generate these proofs.
Yeah, just so everyone knows that verification doesn't actually reveal anything, right?
Like verification, all it does is just prove correctness of whether or not the attestation
and the zero-knowledge proof that had created an attestation did in fact come from the specific
prover. And so you're basically, there's nothing to reveal other than
the prover's attestation report. So the verifier learns nothing, which is actually very important.
All they learn is that the attestation is provably from a SP1 prover, plus the fact that the SP1
prover comes back with an attestation that the application was actually running this very specific code. Let's say the SP1 was running some Rust script.
You want to prove that that Rust script was also verifiable.
Then yes, you can do not only the HTTPS,
but you can also do the zero trust sort of proof
that the attestation of the program as well,
the operating system and the application's image that was running. And you can do this with ZK Verify to verify this pretty quickly, so that basically, you can tell the
whole world like, hey, this proof ran. And then you can tell 500 machines that, hey, like turn
the switch on or whatever the switch is, right? Yeah, and I think that's super important. The
last point that you make there, well, both points. First, that you're not leaking private information to these provers that are operating kind of out in the open,
and so you're not exposing anything.
But second, to continue your AI analogy, when you want to prove to a bunch of different computers, for example,
that something happened and therefore do something else, you can do that in a verifiable way
and also not leak any information,
which has kind of been the through line here
from all of our discussion so far.
And you spoke with Armand this morning, and again, I probably should take a second to thank Armand, our developer relations engineer, for all the work that he did on this project.
But I think we recently integrated a Verify with ZK Verify button on your Explorer.
And maybe you can talk a little bit if you know how that works.
Maybe you can talk a little bit about that.
You know, I don't actually. I think this is a new thing. I don't actually know what this is. I wish he was up here
to talk about it. But what I did gather from our conversation was that basically applications want
to be able, like ZKP2P, for example, they want to be able to verify the front end.
So like some users will want to be able to use ZKP2P, but they also want to make sure that there's no man in the middle attack.
And so they can go to the link and then actually know that the console that they're running, they can just click the ZK verify button,
which actually just is an API hook to the actual Explorer, which is running the actual
application code of say ZKP2P. And it solves issues like this, because then basically it would
prove that the application that is in fact running ZK verify would basically end up basically doing
an asynchronous call. And then it would fetch the attestation report, and then it would basically give you, in sort of layman's terms, what that attestation report means. And then it would give you a permalink to the Explorer on Phala in case you didn't believe it, right?
Yeah, and I think that's what Anthony just posted in our feed here today. One of the first few lines there
actually has a link to the Phala Explorer, and you can click on some of the recent attestations,
pretty much anyone that you want, and at the very top of the page, there's an on-chain
attestation button. You can click verify with ZK Verify, and it goes through actually generating the proof,
and then sending it to ZK Verify through an API product
that we're currently building out.
It takes a few seconds to generate that proof and actually submit it.
That's just because these ZK VMs actually take a few seconds
to go through all the computation required to generate the proof.
But you get a cool, nice little link to our Explorer.
And I encourage everybody to kind of go through that because it definitely
helps us on the ZK Verify side to get those transaction numbers up too.
So a little shameless plug there.
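For a rough sense of the submit-and-poll shape John describes, here is a sketch; the base URL, payload fields, and response keys are placeholders, not the actual zkVerify API, which was still being built out at the time of this conversation.

```python
# Sketch of the submit-and-poll shape described above. URLs, payload fields, and
# response keys are placeholders for illustration, not a real verification API.
import json
import time
from urllib.request import Request, urlopen

BASE = "https://api.example-verifier.io"   # placeholder base URL

def submit(proof_hex: str, vkey_hex: str) -> str:
    body = json.dumps({"proof": proof_hex, "vk": vkey_hex}).encode()
    req = Request(f"{BASE}/submit-proof", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["job_id"]

def wait_for_result(job_id: str, timeout_s: int = 60) -> dict:
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with urlopen(f"{BASE}/job-status/{job_id}") as resp:
            status = json.load(resp)
        if status.get("finalized"):
            return status                   # would carry the explorer link for the on-chain verification
        time.sleep(2)                       # proof generation and verification take a few seconds
    raise TimeoutError("verification did not finalize in time")
```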
That's totally cool because I think for the SP1,
I don't mean to categorize, but I would say what I would do personally, if I were to be just building applications like that needed to prove gaming, for example, you have ZK Mastermind, you have all these like Dark Forest-like apps. So I would be running SP1 for this because it's like easy to verify the code in the front end
because I think even SP1 offers one of their provers
on the TEE on Phala Cloud as well.
And what's nice about this is that they've also,
what they've seen is that there's an end-to-end secure gateway so that the proof can actually submit data directly to the prover, but it's completely private, which is nice.
There's not really, like, it's surprising, because when you guys see that there was a 20% performance boost, that's actually really cool. The same was true actually for even Nemify, which was also
using the Aztec browser prover using their TypeScript SDK. And they also ran it inside of a
TEE and they noticed the same exact performance bonus. So why is that happening? The performance
is because the GPU, there's an accelerator. But FPGA acceleration has been talked about for the last two years as
being sort of the go-to methodology for increasing the actual speed of a zero-knowledge prover.
Next is going to be coming ASIC, right? So ASIC proving is going to basically increase that by
another, I think, like 30 or 40%. So zero-knowledge proofs are just going to get faster in the next two years again. So there's research being done for both FPGA and ASIC. There's been a war on FPGA versus ASIC now for quite some time. But TEE came first, because the TEE could actually be decentralized. And that was due to the research that was done by Flashbots, which basically proved that for MEV, which is known as Maximal Extractable Value, they could use PROV, right? Which was formerly known as SUAVE. But the idea here is that basically you can eliminate MEV, or sandwich attacking, on rollups.
So you have SP1, and then you have ZK Verify, which also offers ZK verification. And then you could do the rollup-centric roadmap for L2s on Ethereum using RISC Zero, which I think is the preferred method.
I mean, personally, from what I see research wise,
this is the direction that people are taking. Polkadot is also taking the same direction with RISC Zero for rollup-centric roadmaps, because Substrate chains want to turn into sort of like their own parachains on top of their own Substrate parachains, just chains on chains on chains.
So, you know, zero knowledge proofs is just gonna get faster.
Now, with ZK Verify, it's like, hey, you wanna roll up your data and make a commitment and verify that the commitment happened on L1? Fantastic. Or you wanna do some ZK identity application through SP1? Fantastic, you can do this through a TEE, and it would be 20 to 30% faster. And the nice thing is that you have absolute privacy and confidentiality when hosting your Docker container through Dstack on Phala.
So this is like a new type of development experience that is easy to get into.
As long as you know how to Dockerize your apps like a rollup, it's the same thing.
Like you can roll up an app.
If say if there was an SDK for running like a rollup, which I don't know if there is.
Maybe you would know about this more than I do.
Is there an SDK for roll-ups right now?
I think there may be, but if there's not, it's going to be super easy and super quick to deploy a roll-up very, very soon.
I don't know if everyone knows this, but Horizen Labs also works on the Horizen ecosystem. Horizen is building an L3 rollup on top of Base, which is a rollup to Ethereum. But the rollup onto Base actually runs in a trusted execution environment as well. It's an alteration to the OP Stack, so it's exactly what you're talking about: rolling up into a TEE and then posting an attestation onto Base is kind of the way that they're approaching their problem.
Yeah, that actually makes a lot of sense. And then there's,
this is something that I've been thinking about recently, which is like, you know, why can't L2s be the new commitment layer? Well, I would say that's a little bit risky because L2s are notoriously centralized due to their sequencer.
you want to be able to optimistically verify, where all the provers don't necessarily trust each other, so they basically all just trust one person to commit to. And then there's zero-knowledge rollups, where you basically get a proof, and you aggregate the proof and you batch it, and then you commit it to L1. It's much slower. But if you did this with L3, I would say if you did an optimistic rollup on L3 and then you did an L2 ZK rollup, it would be much more secure, because then you would at least have a direct line to Ethereum.
And that would actually be quite, I would say that would be better, in my opinion.
So speed versus security: at least you can say with an optimistic rollup that the commitment actually was pre-confirmed at a very specific time. And that's really what you can do with a TEE: you can create these preconfs ahead of time and then commit them to a validator. And then the validator cannot, let's say, front-run these transactions before they actually make the L2-to-L1 commitment, which is the purpose of using a TEE.
Now, why would you do this?
It's so that you can basically get gasless transactions.
You could basically host an entire DEX
that spends one penny instead.
Now, I'm worried about the cost.
How are we going to make Ethereum valuable if
it's never earning money? But that's my own concern. What do you think about that? Like,
as far as like just making it cheaper and cheaper and cheaper? I mean, it is possible. It's very
cool that we could do this. I mean, it's almost that you could bridge, bridgeless bridging, right,
between L2s. And I think this is sort of the universal roll-up roadmap where it's like,
if you have an L3TE gateway between different L2s, between Optimism and Base, and then I think
Arbitrum and the rest of them, then effectively you can create a bridgeless gateway that is also
trustless by design. Therefore, you wouldn't need to rely on things like LayerZero and whatever.
I mean, it's a super interesting question.
I mean, if the thesis is that transaction volume, therefore transaction fees mean adoption
and revenue to a network and that revenue somehow translates to token value, there's
definitely a big concern there.
But it also explains why, in my opinion, why
maybe super simply Ethereum price, although it has run up recently, has been relatively
static. And that's because fees are continuing to go down and down and down as Ethereum continues
its roll-up centric roadmap and all that kind of thing.
But if you want to secure the payload, you're going to have to basically commit the 4,000 whatever stake now.
So the stake is only just going to increase.
That's what Ethereum will do.
I do want to go back and talk on a couple of points that you made there.
One thing that I think is interesting,
you were talking about Polkadot.
ZK Verify is actually built with the same technology that Polkadot is built with.
So I just wanted to draw that, connect those dots for folks.
And I also wanted to go back to the first point that I think you made
after I asked the last question, which is, you know,
we're talking about ZK and gaming, and we have a bunch of games on the platform. And if you look at the games, they're all single-player games, and they're all sort of based on random number generation for the most part. And so I wanted to maybe ask you one more question, because I think we're quickly running up on time. I wanted to ask your thoughts on trusted execution environment generated random numbers versus the other options.
And I ask this in particular, again,
because most of the games that we have
are based on random numbers.
So like a couple of the games that I've built personally
are there's a slot machine game,
which obviously the reels of the slot machine
are kind of just random numbers
that I'm generating on a server
proving we're generating fairly. But I also have a blackjack game recently, and I experimented
with different versions of this blackjack game, one of which I actually ran the entire shuffling
algorithm in a ZKVM and spit out a proof that the random number was fairly generated.
So a lot of the pushback that I got internally for these things was,
hey, how are you generating randomness?
Because in theory, you really should be proving it. And I said, I'm not, because I'm kind of just running the server in private.
Nobody can see what's happening.
Nobody can see the actual way that the random number is generated.
And for the purposes of testnet, that was fine.
But that sort of architecture, not production-ready, very much resembles, I think, this trusted execution environment kind of situation, where you'd be running this server in a black box effectively. And so I'm curious your thoughts, again, on TEE-generated RNG versus a VRF.
Yeah, I mean, it's actually almost the same thing. You do verifiable randomness functions in the TEE still. It's actually quite commonly used for this, because then, I mean, you can prove that the number was actually created and attested and ran in the program, in your shuffling algorithm.
And then you use the actual operating system key.
You can derive the hard root key in the program using the decentralized KMS.
And then this way you can actually generate a private number or a private key at the time of the generation. Okay, so say you run the program to shuffle the deck of cards. Then you can basically create an attestation that proves the program ran correctly; however, it ran inside of a TEE. So you want to verify, A, that the program did shuffle a deck of cards. Now, the second thing is, where did the key derive from? Now, I want to be able to derive
the key from the KMS, which is the key central vault or key management system, but then I don't
want it to be centralized. I do want it to be decentralized, but I also want to make sure that
it's not exposed. And so that's the nice thing about Dstack: it's a verified zero trust KMS, in a sense, where basically it uses the on-chain smart contract on Ethereum, which, you know, Phala basically currently uses, thanks to SP1 actually, to attest to new keys being generated in the TEE at the time of derivation. And therefore, no one can front-run
other than basically the pseudo administrator, which also does not have any peer view on the
CVM because it's confidential. And so that's actually like what you would do. And what's nice
is that you can basically say, hey, this key was basically generated, it was added and derived
basically from the KMS, it derived some sort of private key.
We do not know what the private key was,
but we can attest to the validity of this key
using the root derivation, which is the root key.
And that root key comes with a hash,
and then you can attest to that hash.
And this is just the KMS hash, basically.
And that actually comes in the attestation report on Phala.
And so now then you basically say, hey, this is the deck of cards.
You can prove from end to end, I would like to release the key.
And so the zero knowledge proof then can verify whether or not
if the key was actually the same key that was used to shuffle the deck of cards.
And so since it was a private key, the hash of the cards equals the hash of the output.
And so basically if you say A plus B equals C,
then you can say that whatever key was introduced
to basically shuffle the deck of cards
would equal the hash that is related
to the deck of cards as well.
So therefore, whatever the deck of cards is shuffled is now,
if I reveal all the cards,
and then you need to attest to the fact
that it was the same attestation key,
then all you need to do is expose the key,
and then you can expose it at the end of the program.
So once you verify, then the verifier can tell the TEE,
I would like you to now reveal the confidential private key,
and then if the private key's hash
attests to the same attestation quote,
then you can verify the deck of cards
was in fact shuffled in the same exact order. Therefore, you played a correct game of blackjack. So, yes, basically, I'm sure you could do that.
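A minimal sketch of the commit-reveal shuffle Dylan walks through follows; the seed here is a stand-in for a key derived inside the TEE from the KMS, and the rest is just the commit, reveal, and re-check logic a verifier would run.

```python
# Sketch of the commit-reveal shuffle described above. The seed is a stand-in for
# a key derived inside the TEE from the KMS; everything else is the commit,
# reveal, and re-check logic.
import hashlib
import random
import secrets

DECK = [f"{rank}{suit}" for suit in "SHDC" for rank in
        ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]]

def shuffle_with_seed(seed: bytes) -> list[str]:
    deck = DECK.copy()
    random.Random(seed).shuffle(deck)       # deterministic given the seed
    return deck

def commit_to_shuffle(seed: bytes) -> str:
    order = ",".join(shuffle_with_seed(seed)).encode()
    return hashlib.sha256(seed + order).hexdigest()

def verify_reveal(commitment: str, seed: bytes, claimed_order: list[str]) -> bool:
    return (shuffle_with_seed(seed) == claimed_order
            and commit_to_shuffle(seed) == commitment)

if __name__ == "__main__":
    seed = secrets.token_bytes(32)          # in the real flow: derived in the CVM, revealed after the hand
    commitment = commit_to_shuffle(seed)    # published before play starts
    dealt = shuffle_with_seed(seed)
    assert verify_reveal(commitment, seed, dealt)
```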
Very, very cool. Well, I do want to be mindful of time here. I know we're, again, quickly coming up on the top of the hour. I've been grilling you for this entire time, and you've been gracious with all your time here and responding. Curious if there's anything you feel would be interesting for your community to hear from the ZK Verify side. Any questions you have for me?
I guess my big question is, you know, when it comes to AI, what are your real plans for AI? As a team, how do you guys think about it? What does your community want, right? Because we host AI. You know, we actually provide the GPU support on Phala, you know, private inference, essentially, so you can host your LLM or SLM model. And basically, you can have a private end-to-end chat. So we have Red Pill, Priv AI, and a couple of other AI teams launching on Phala. But like, what's your ideas for this?
Yeah, so a bunch of different things.
Maybe I'll just start with, like, company-wide.
We do have a work stream that's kind of dedicated to building out agentic, you know, AI that can do kind of a bunch of different things for us internally, like generate reports on Slack, you know, activity or Slack messages that happen to keep kind of
everybody up to speed on what's going on because we are a very decentralized company. I think we
have people pretty much all across the world in all sorts of different time zones. And so it's often challenging to kind of keep up to speed. But we are very cognizant of the security and privacy ramifications, that it can't expose, you know, information that shouldn't be exposed.
And so I think like with respect to zero knowledge proofs, the interesting piece there is like
having two models potentially or two agents sort of interacting with one another.
How does one agent know that the other agent performed the job
that it was supposed to perform in the exact way
that it was supposed to perform it?
And so that's potentially the space where zero-knowledge proofs come into play,
not necessarily in this case for privacy,
but for one agent to prove to another that it did a job
without that second agent having to redo all the work
and check the work that was done. So I think there's some really interesting things there.
Yeah, I also think that too. I think, again, like when it comes to research, because like so many
teams, scientists need to be able to do their work faster, but they also do not have gigantic AI models sitting in their lab, right?
And like, that's the thing is, is that they want to be able to trust using their local LMs with
external LMs to do very specific acceleration faster. But then they also want to make sure
that it's doing the job correctly. So this is the key of ZKML, which is, I think Brian Armstrong
even mentioned that, like, I think the future of decentralized research would have to happen
using like blockchains, but also that relies on TEEs as well, because then if you are doing this,
then you would also need to prove that the weights had been trained in a very specific fashion.
So the ZKML basically proves the correctness of the model's training. So you know
exactly what model was ran without actually revealing what the data was to anybody else.
And you want to make sure that you have an end-to-end secure connection between your lab
and the off-prem device, basically, or the LLM. Also making sure that the LLM doesn't expose the
data to any other data provider, because you're only really looking for just like inference, really. And that inference is basically trained on some model
weights, but you need to verify that the model weights were actually trained correctly without
exposing the weights as well. So you're right. I don't know if there's any company, can you shout
out any companies that are like focused on this, or if there's a particular team that you're
thinking of, or is that private? I mean, we are, we do have a researcher internally that's doing some, like,
it's doing some research on, like, distributed sort of, like,
training of AI models across, you know,
potentially proprietary and private data sets.
So I'll shout out our own company there.
And I will also plug that we are working on an EZKL verifier that hopefully will be up and running shortly here.
We are coming up on our main net launch in about a month or so.
And so that's kind of fallen a little bit behind in priority,
but we are working on that as well.
Yeah, I think EZKL is going to be great for authentication as well. And I personally have focused on authentication methods using ZKML for like three years. I think ZKML is going to be very useful for passkeys, for 2FA methods. You could use EZKL for authentication of some sort
of identity, but you can train the identity on various challenges without revealing what the
challenge set was. And so then you can basically prove that you know basically how to prove as a
client something that no one actually has the ability to learn about,
basically secrets without having the secret stored. So you can prove that you know something
without ever having to reveal what the actual knowledge is. So proof of zero knowledge,
basically. So, well, actually it's reverse. It's zero knowledge, interactive proof of knowledge.
So that's the nice thing about authentication in ZKML is that
you could basically remove passwords effectively. Yeah, that's something I'm very
much looking forward to. I had forgotten my password the other day for an account and I
was locked out of my Gmail account because I exceeded storage limits. So we could, you know,
remove all that and remove the fact that I need to reset my password all the time.
I'm very, very into that idea.
And I think it definitely can be enabled by zero knowledge proofs.
And so excited for, you know,
all of what's to come with, you know, this whole entire space.
But with that, maybe we should wrap up here because,
we've talked a lot, we've covered a lot of ground. Hopefully this was super helpful for everybody here. I know it was helpful for me to kind of hear your perspective
on everything. For sure. If there's anybody, do you want to invite anybody up on stage? If there's
any question, if there's no questions, I mean, unless you have to break for time exactly at 12, I'm happy to answer any more questions.
If anybody in the audience does want to ask one or two questions, if they're confused about anything that we did talk about, or if they want to ask you something about ZK Verify.
There is a hackathon, by the way, for ZK Verify at the end of the month, too, that we're going to be sponsoring.
That's going to be happening.
Yeah, in India, actually, near Delhi.
Yeah, and I think that's in September, maybe midway through September.
So very excited for that as well.
We've gotten tons of very cool project ideas through all of our hackathons. And I will also
plug that we, you know, hackathons are great and that they generate a lot of cool ideas for people.
But we are starting to build out, you know, further sort of infrastructure beyond hackathons
to help people continue to build those applications that they hacked
together during these small hackathon events.
And so there will be some more information on a grants program from the ZK Verify side relatively soon.
So with that, yeah, let's see if there's anybody.
I can stay for maybe three or four extra minutes.
We can see if anybody wants to come up on stage and ask any questions.
Yeah, so if anyone in the audience does have any questions,
please don't feel afraid to raise your hand and request the mic.
In the meantime, we do have some in the replies.
Here's one that could be really for
anyone. Can this tech be leveraged in permissioned blockchain environments too?
I think definitely. I wouldn't see why not. I think it might actually bring some legitimacy to, like extra legitimacy to these
private networks where you could actually kind of mask all, like the whole idea, I guess, behind
a permissioned, or maybe I'm misunderstanding the question, but I think the whole idea here is to
kind of create an environment where, you know, it's cordoned off from the rest of the world.
And so if you're able to bring some attestations on chain and potentially prove them with ZK proofs, I think you're sort of bridging the
gap between this permissioned and non-permissioned world or permissionless world.
Yeah, I would agree. I mean, by permissioned, I mean, the first thing that comes to mind is
something like Stellar or, you know, like an authority chain like Binance or something where, you know, you want to have very specific validators provide commitments as their own L2 to their own L1.
And I think this is how a lot of the beacon chains, the L1 BNB Chain, sort of even works, too. So if opBNB wanted to basically make block slots, permissioned slots effectively with preconfs, then yes, TEEs would basically be leveraged by
validators. But then you would have to have every validator run its validator in a TEE,
which is possible. There are Bitcoin nodes that run fully in a TEE. There are obviously full ZKVMs that are running inside of a TEE as well.
And here is one for John or ZKVerify specifically,
and then we could end with you, Dylan.
For John: how does ZK Verify approach false positive or negative cases in the verification process?
I mean, I'm not really totally sure what that question is getting at, because, I mean, if proofs are invalid, they're rejected by the network. So they'll be rejected by, you know,
the node that first receives the verification requests.
And if that node for some reason identifies this proof
as being correct, it would get distributed
to the rest of the nodes in the network.
But if for whatever reason that initial node was corrupted,
the rest of the network would work.
The whole idea behind consensus is everybody's got to agree.
It would definitely be rejected by the rest of the nodes as well.
Not really sure if that answers that question, but I'm also not sure I fully understand the
idea behind that question.
Maybe it was AI generated and Brock was trying to outsmart me.
Got him. One last one for you, Dylan. What distinguishes Phala Network's attestation
mechanism from other ZK-based protocols in terms of security and scalability?
Well, TEEs are not zero-knowledge proofs, right? Like, there's a separation, so let me just distinguish that. As far as a proof record is concerned, when you generate a zero-knowledge proof, you get a hash, basically a proof package that comes with the various weights of what the potential configurations could have been, along with, you know, if you're running Pedersen or whatever kind of very specific algorithm, you're getting a proof package along with the public and private
and potentially view keys in order for you to basically set up a zero knowledge proof. But the
proof basically is a hash as well. But the TEE is basically just proving the actual measurements
of the operating system, the measurement of the application's
source code, the measurement of the end-to-end package proof commitment of the certificate
that's on HTTPS, the measurement, measurement, measurement, measurement, right? It's called RTMR.
So this is the difference: what Dstack is literally doing is measuring the actual commitment or app or operating system through these channels, which are known as buses. But these are registers, and these registers create signed
quotes and the signature of the quote happens within the confidential enclave, which is the
CVM. And then it gets its key from a very specific private key, and then it basically creates what's known as an attestation report. And that report is very, very easy to look at, view, describe, and understand on Phala.
So it's called the trust center.
And this trust center says,
hey, this is the git commit hash.
You can go to GitHub right now and literally see that the code has this commit hash. However, if you don't believe us, then here's the attestation that that exact code, which ran with this git commit hash as well, which is open source, then came back with this attestation proof. And if you don't believe us, just run it through the ZK Verify verifier, and that would verify the actual proof. And then ZK Verify would return the exact same hash, literally.
And it is because it's a direct commitment.
That's how cryptography works.
And so it's just a symmetric encryption.
So you wanna verify that something was true
without ever revealing what exactly was true about the statement. So you're proving correctness with zero knowledge proofs,
but you're never actually proving correctness of a TEE. You're proving the confidentiality that
something was processed in a way that could have been correct. But in terms of correctness,
you use zero knowledge proof to actually prove the correctness of a statement, of a vector,
of A plus B equals C, right?
And you're verifying that I never want to reveal A, but I want to prove that A existed, using the attestation.
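As a small illustration of that "don't trust us, check it" step, here is a sketch that compares a commit hash claimed in an attestation report against the local checkout; the report format (a JSON file with a git_commit field) is a placeholder, not Phala's exact trust center schema.

```python
# Sketch of the check described above: take the commit hash the attestation report
# claims is running, and compare it against the commit hash of the open-source
# repo you have locally. The report structure is a placeholder for illustration.
import json
import subprocess

def local_commit(repo_path: str) -> str:
    # Ask git for the HEAD commit of your local checkout of the open-source code.
    out = subprocess.run(["git", "-C", repo_path, "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def matches_attestation(report_path: str, repo_path: str) -> bool:
    with open(report_path) as f:
        report = json.load(f)
    # "git_commit" is a hypothetical field name for the commit hash carried in the report.
    return report["git_commit"] == local_commit(repo_path)
```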
Awesome, absolutely awesome. Well, I know we are at time. Guys, John, I know you have a busy schedule. Definitely appreciate your time and effort to carve out some opportunity here to talk with Dylan.
And Dylan, of course, thank you for sharing all the great stuff that Phala Network is working on. ZK Verify and Phala Network are working on some really interesting things within the space. Check them out, follow them on X, join their community. Same here with ZK Verify.
But guys, appreciate the great conversation.
And we'll have to do a follow-up next time.
Yes, thank you so much, everybody. Thank you.