Hello. Hey, Will. How are you?
Good. How are you? Can you hear me? Okay. Perfectly. Yeah. Cool. Thanks, Hector.
So I guess we're going to start in just a few minutes.
So the folks who are going to be talking shortly, do you want to just request to speak?
So I guess we can kick this off. I think we're on the hour.
Thank you, everyone, for joining us for the spaces.
We're going to be talking today about Mina.
I want to just quickly introduce us, the ZK Validator, ZKV.
So we are a mission-driven validator.
We're currently running on 11 mainnet networks and a bunch of other testnets.
We have been a validator on Mina since the launch of the network.
And we're very excited to continue to support the ecosystem.
I think we've hosted three of these sessions so far.
So it seems like it's becoming a bit of a quarterly ritual, which is cool.
Yeah. And this time around, we're going to be hearing from teams that are
working on making healthcare data more accessible with zero-knowledge proofs,
geolocation, and privacy-focused services.
And I know someone from O1 Labs, who still needs to join as a speaker,
will be sharing the latest and greatest.
So I know Will Cove is here with us from the Mina Foundation,
who's basically going to be moderating this Spaces.
Hey, Anna. Thanks so much for having us on again.
Great. And hey, everybody, thanks for joining.
Again, as Anna said, it's, I think, our third time doing one of these.
And it's, yeah, it's a pleasure to be here.
My name's Will. I'm part of the team at Mina Foundation.
I work in our community across our different grant programs.
And, yeah, I think the goal of today's chat will just be to give you an update
on everything that's going on in the Mina ecosystem and hear from some of the teams
who are building interesting things with zero-knowledge proofs on top of Mina itself.
So really quickly, maybe I'll just ask the other speakers to introduce themselves.
And then I have some questions lined up, and maybe I can start by also just providing
a quick update of where things are across the ecosystem.
But really quickly, we'll just go around the room.
First, I'll turn it over to ZK Locus, Ilya.
My name is Ilya, and I'm the founder of ZK Locus, which is an application,
a framework, and a protocol that enables private, authenticated,
and programmable geolocation both off-chain and on-chain.
It's natively integrated with the Mina blockchain and closely built with its technology.
And Mina was the blockchain that really made the implementation of this vision feasible.
Phil, I'll go to you next.
So, Phil Kelly, I lead business development at O1 Labs.
I think people probably know that O1 Labs is the team that built O1JS,
which is the way you can build off-chain verifiable computation and ZK applications
that then verify on and work with Mina.
So my main focus day-to-day is on use cases,
and it's a really cool job because there's a ton of innovation going on
and still needs to go on in the use case area.
And, yeah, happy to talk about use case ideas.
Really excited to hear more about the projects on the call.
And then I can give a bit of an update on what we're doing on the O1JS side.
So I'm Robin, I'm a co-founder of Take Care.
Basically, it's a company developing a healthcare data ecosystem.
And I've been a Mina builder since it's been possible to develop zkApps.
I'm also part of the Mina Navigator program, where I'm building the OSCAR framework,
which looks at enabling more healthcare use cases with Mina.
Great. Well, thank you all for joining.
And I see we also have a number of folks who are building stuff in the crowd.
So maybe we'll have a chance to get to some questions in a bit.
But really quickly, just to talk about what's going on across the Mina ecosystem.
So we are in the final stages of testing as we prepare for zkApps,
zero-knowledge applications, on Mina mainnet.
It's been a two-plus-year journey since the original Mina mainnet,
which only supported sending and receiving transactions, was stood up.
But taking the theory of a blockchain that recursively proves itself,
and now adding that same infrastructure on top to be able to build applications
that compute off-chain and prove on-chain, is a big step.
So we're really, really looking forward to this milestone that's coming.
On the developer ecosystem side, we've had a few different programs
that are running right now.
So we are into our third cohort of ZK Ignite,
which is a community-governed grants fund.
So we have a rolling cohort-based fund that is driven by our community members.
They actually decide how the funds are deployed.
And we just finished the second cohort, and I think 25 out of 26 projects,
almost every single one, delivered on the milestones that were proposed,
which is a big deal because we pay the grants kind of upfront
and we'll get into exactly how that works.
But from a community kind of health standpoint,
we're quite pleased with where we're at.
The Electric Capital developer report at the end of the year
counted, I think, 81 full-time developers
building in the Mina ecosystem by the end of the year.
And that was up from, I think, about 30-something the previous year,
most of whom were on the core teams across the Mina Foundation
and O1 Labs.
So we feel kind of a lot of this momentum going into these milestones
and there's a lot of interesting products that are coming out of it,
which we're here to talk about today.
So I think where I'd like to start,
one of the kind of narratives that we've been circling around
as an ecosystem is proof of everything,
which can mean a lot of different things,
but the idea centers around being able to attest
to any piece of signed data,
be able to wrap that into a zero-knowledge proof
and then use it extensively across both Web2 and Web3 applications.
And a lot of that we're starting to see being tested out
with the folks on this call.
We think the ability to attest to the authenticity
of where data comes from is really the next step
in where the internet needs to go.
HTTPS, you might see HTTPS in my Twitter profile,
but HTTPS has certificate authorities that verify
the connection between the sender and the receiver,
between the client and the server.
And we're thinking that now we need to take it
one step further and actually authenticate
where the data is coming from, the source of the data,
in the form of zero-knowledge proofs.
And so I think this is a good place to start,
with Ilya, who's building something along these lines
with geolocation, which can then integrate
into a range of different other applications.
So maybe Ilya, if you can give us a brief overview
of what proof of everything means to you
and how that is involved with your geolocation project.
Of course. I'll just quickly mention,
it's interesting that you mentioned HTTPS.
I've had the pleasure of contributing
to the underlying protocol, TLS version 1.3,
which is developed in the open
by the Internet Engineering Task Force, the IETF.
So for anyone who's listening in on the call,
anyone who is interested in security and privacy
and would like to work to make the world a better
place for everyone and our communications more secure,
I highly recommend you check out the IETF.
They're basically the ones defining the RFCs
and the proposals that run the internet.
Now, I will go into the proof of everything
and what exactly it means to me.
And I will explain what proof of everything means
by explaining the novel verifiable computational model
that is based on recursive ZK-SNARKs.
And it's that same model that is leveraged
by the Mina blockchain, and it is at its very core.
So the core concept is that recursive ZK-SNARKs
enable a new form of verifiable computational model
that is infinitely scalable.
There is another way to do verifiable computation,
and it's done by other solutions like Ethereum's EVM.
And I will talk later and compare them with the one
offered by Mina and recursive ZK-SNARKs.
So essentially, Mina pioneers a verifiable computational model
that is based on recursive ZK-SNARKs,
and these allow you to prove the execution of a block
of computer code alongside all of the public
and private inputs and outputs of that computation.
Essentially, it allows you to prove that you have executed
some specific code and that that code had some specific inputs
and outputs, some of which may be private.
You can do all of this non-interactively,
which means that the prover generates the proof of execution
independent from the verifier,
and the verifier verifies it independently from the prover.
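To make that concrete, here is a minimal sketch of such a non-interactive proof of computation written with O1JS; the program name and the toy square relation are purely illustrative, and exact method signatures vary slightly between O1JS versions.

```typescript
import { Field, ZkProgram, verify } from 'o1js';

// Toy verifiable computation: prove you know a private `secret` whose square
// equals the public input, without revealing `secret` itself.
const SquareProgram = ZkProgram({
  name: 'square-program',
  publicInput: Field,
  methods: {
    proveSquare: {
      privateInputs: [Field],
      method(publicSquare: Field, secret: Field) {
        // The circuit constrains the relation between the private and public inputs.
        secret.mul(secret).assertEquals(publicSquare);
      },
    },
  },
});

// The prover generates the proof on its own machine, independent of any verifier:
// const { verificationKey } = await SquareProgram.compile();
// const proof = await SquareProgram.proveSquare(Field(49), Field(7));
// Anyone can then verify it non-interactively:
// const ok = await verify(proof, verificationKey);
```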
So essentially, Mina uses these recursive ZK-SNARKs
to provide a novel verifiable computational model,
which I will call the constant verifiable computational model;
I'll describe what it means iteratively over this call.
So the constant verifiable computational model, or CVM,
contrasts with verifiable computation solutions
like the one offered by Ethereum's EVM model.
In the EVM model, the verifiable computation is achieved
by having each and every single node rerun the same computation
to ensure that it's executed verbatim,
including all of its inputs and outputs.
This creates a model where in order to prove the execution
of a block of code, you need to execute the code yourself
and verify everything manually.
I will call this model the linear verifiable computational model,
or LVM, and the intuition for this name
is that it implies a linear relationship between the number of instructions
in your program and the total number of computational steps
that will be performed in total for that computation
to be accepted by the blockchain.
This includes all of the nodes that will need to re-execute
the same code that the submitting node is claiming.
Essentially, if you imagine an XY graph
with X being the total number of instructions in your program
and Y the total computational steps performed in total during the consensus,
the graph will be basically a line with some positive slope.
The problem with this model is that it does not scale and it's very limiting.
You can see this confirmed
just by judging the direction in which Ethereum and other blockchains
that use the linear verifiable computational model are moving.
How many side chains and layer 2s are there in the Ethereum ecosystem,
and how many of them use ZK?
Pretty much all of them, to address the scalability issue.
On the other side, and I'm reaching towards the end,
the constant verifiable computational model,
which is pioneered by the Mina blockchain,
leverages these recursive ZK-SNARKs
to enable a paradigm where verification time remains constant,
irrespective of the size of the computation being verified.
This means that unlike in the linear verification model,
where the verification effort scales linearly with the program complexity,
the constant verification model maintains a constant verification load.
This is achieved through the unique property of recursive ZK-SNARKs,
which essentially can compress computational proofs to a manageable size
that doesn't grow significantly with the underlying computation.
As a result, it offers significant improvements in terms of scalability
and efficiency, making it ideal for blockchains
that aim to handle high volumes of transactions
or complex computations without a corresponding increase
in verification time and energy consumption.
If you look at the XY graph of the constant verifiable computational model,
you'd see something much closer to a flat line.
If X is, let's say, just the size of your program,
essentially the number of instructions or steps in your program,
and Y is the total energy expenditure that needs to be spent
in order to verify that computation,
it would be more or less a flat, horizontal line.
Technically speaking, it would depend on the number of public outputs,
but in practice, we can consider it succinct and essentially constant.
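As a rough back-of-the-envelope illustration of those two graphs, with made-up constants rather than real measurements:

```typescript
// Toy comparison of the two cost models described above (not a benchmark).
// Linear model: every node re-executes the program, so total work grows with
// program size times the number of nodes. Constant model: every node only
// checks a succinct proof whose verification cost is (roughly) fixed.
const NODES = 1_000;            // hypothetical number of validating nodes
const PROOF_CHECK_COST = 50;    // hypothetical fixed cost of verifying one proof

const linearModelWork = (instructions: number) => NODES * instructions;
const constantModelWork = (_instructions: number) => NODES * PROOF_CHECK_COST;

for (const n of [1_000, 1_000_000, 1_000_000_000]) {
  console.log(`${n} instructions -> linear: ${linearModelWork(n)}, constant: ${constantModelWork(n)}`);
}
```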
Thank you. I think that's a very helpful overview
for those who aren't aware of
the underlying architecture of Mina,
and also Kimchi, Mina's underlying proof system,
and how it interacts with zkApps.
I guess the next natural question:
for a geolocation service, or a service for geolocation privacy,
I'm assuming where you're going with this
is that it requires a much different scale of computation
than might be required for your normal DeFi application or NFTs.
Can you tell us a little bit about what ZK Locus is
and why it depends on a ZK-SNARK, or recursive ZK-SNARK, architecture
in order to make it possible?
What ZK Locus does is it leverages all of this technology
that I described prior to solve all of the problems
that are related to authenticated geolocation
and private geolocation, and also to establish the blueprint
for law-abiding technology systems:
an AI-governed, automated, and transparent legal system
that can govern both nationally and internationally.
Specifically, what is currently implemented
and ready is essentially this:
ZK Locus provides authenticated,
optionally private, and programmable geolocation sharing
both on-chain and off-chain.
Whenever I say recursive ZK-SNARKs from now on,
you can just think of this verifiable computational model
that is based on generating a proof of the observation of a computation:
a cryptographically assured proof of that computation,
including all of its inputs and outputs.
If you think of a blockchain,
you would think of these outputs as state updates
and essentially every transaction performs some computation
and then updates the state.
ZK Locus leverages these recursive ZK-SNARKs
to enable authenticated geolocation from arbitrary sources,
and then uses the authenticated geolocation
to enable private sharing of it.
Let me give you an example in practice,
something to present the bigger vision.
Imagine that the GPS chip on your mobile phone
is able to produce a zero-knowledge proof,
but it doesn't even need to be zero-knowledge,
it could be some signed attestation of geolocation coordinates,
and then you use that as an input into ZK Locus
as a way to authenticate these coordinates.
This would be one of the most interesting use cases.
There are ways to solve it with secure enclaves
on the operating systems,
but you still need to trust that this operating system
hasn't been tampered with.
What is practically available now,
and this is something that I focused on integrating from the start,
is a component that allows for very easy integration
with any legacy system or any web2 system.
It can be integrated with infrastructure
running in any language, like Java, Python, or Perl.
I call it the integration oracle,
and I've designed it as an HTTPS service.
It can be whatever you'd like,
but I've designed something that you can just install with Node,
execute, and then use directly.
What this oracle basically does is it acts as an oracle
for the geolocation source.
Essentially, with this geolocation source,
the ZK Locus proof will contain the cryptographic assertion
that whatever geolocation point,
whatever latitude and longitude, was used internally in the computation,
that latitude and longitude pair 100% came
from the oracle: it was signed by a specific key.
You can also add arbitrary authentication methods on top.
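A minimal sketch of that idea using O1JS native signatures is below; the program structure and the field encoding of the coordinates are hypothetical and simplified, not the actual ZK Locus code.

```typescript
import { Field, PublicKey, Signature, ZkProgram } from 'o1js';

// Hypothetical circuit: prove that a (latitude, longitude) pair was signed by
// the integration oracle's key, without necessarily revealing the coordinates.
const GeoAttestation = ZkProgram({
  name: 'geo-attestation',
  publicInput: PublicKey, // the oracle key the proof claims the data came from
  methods: {
    attest: {
      // latitude and longitude as scaled field elements, plus the oracle's signature
      privateInputs: [Field, Field, Signature],
      method(oracleKey: PublicKey, latitude: Field, longitude: Field, sig: Signature) {
        // Cryptographic assertion: this exact (lat, long) pair was signed by the oracle key.
        sig.verify(oracleKey, [latitude, longitude]).assertTrue();
        // Further constraints on the coordinates could be added here.
      },
    },
  },
});
```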
Now, a very cool thing that is happening here is
essentially it's a compression of knowledge.
In the previous example of the authenticated geolocation proof,
you can think of it in two ways.
There are two numbers, latitude and longitude,
and there is something essentially proving
that these two numbers came from a very specific and clear source.
For example, it was provided as part of a string signed by an oracle.
That's the simplest example.
What you're essentially doing is you are compressing,
you're building information on top of one another.
You can reuse this much further.
For example, currently,
you can attach any arbitrary data to any ZK Locus proof.
What it comes down to at the technical level
is actually the SHA3-512 hash function.
It was made possible by a very recent addition in O1JS 0.15.2,
and I've also seen announcements related to that in O1JS 0.16.
That is definitely something that I was very happy to see.
It was implemented relatively recently.
I already had code written that emulates it,
but I basically just used the implementation
that's already provided in O1JS.
What you can do is, let's say,
associate an image file with a geolocation.
Now you have a cryptographic proof
that can actually be used as a legal proof as well.
I'll give an intuition about that next.
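A small sketch of that attachment mechanism, assuming the Bytes and SHA-3 gadgets that O1JS added around 0.15.x; the fixed chunk size and hex input are illustrative only.

```typescript
import { Bytes, Hash } from 'o1js';

// Commit to an arbitrary attachment (e.g., a chunk of an image file) by hashing it,
// so the digest can be carried inside a ZK Locus-style proof.
class Attachment extends Bytes(64) {} // fixed-size chunk, for illustration

const fileChunk = Attachment.fromHex('ab'.repeat(64));
// SHA3-512 over the bytes; the resulting digest can be bound to the geolocation
// proof as public or private data.
const digest = Hash.SHA3_512.hash(fileChunk);
console.log(digest.toHex());
```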
Now, when you submit this,
you can pick up this recursive ZK-SNARK proof,
and note that so far I haven't mentioned the Mina blockchain once.
Well, I've mentioned it previously,
but not in this explanation of what you do with the proofs.
I've mentioned HTTPS services.
I've mentioned running code,
but I didn't mention any synchronization with the Mina blockchain.
What the model employed by the Mina blockchain allows,
and this is really the core,
is you can run computation of arbitrary complexity
and then submit a single proof
of the observation of that computation
alongside all of the state updates to the Mina blockchain
and have it verify in essentially constant time.
Have the network verify it in constant time.
You can basically have a machine learning algorithm
running there for a month
and then compress all of that
into a single proof alongside other data,
and it will verify the correctness of all of that execution.
This is also how ZK Locus provides private geolocation.
Once you have an authenticated geolocation point,
you can use an algorithm like point-in-polygon
to verify whether that point is located
within the polygon that you've defined.
Of course, you can combine multiple polygons together.
You can perform arbitrary computations.
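As a toy illustration of that kind of check inside a circuit, here is a simple axis-aligned bounding-box membership proof; it assumes coordinates pre-scaled to non-negative integers and is a simplified stand-in for ZK Locus's actual point-in-polygon logic.

```typescript
import { Struct, UInt64, ZkProgram } from 'o1js';

// Public statement: the rectangle (a stand-in for a polygon) the point must lie in.
class Rectangle extends Struct({
  minLat: UInt64,
  maxLat: UInt64,
  minLon: UInt64,
  maxLon: UInt64,
}) {}

const InsideRectangle = ZkProgram({
  name: 'inside-rectangle',
  publicInput: Rectangle,
  methods: {
    prove: {
      // The point itself stays private.
      privateInputs: [UInt64, UInt64],
      method(rect: Rectangle, lat: UInt64, lon: UInt64) {
        lat.assertGreaterThanOrEqual(rect.minLat);
        lat.assertLessThanOrEqual(rect.maxLat);
        lon.assertGreaterThanOrEqual(rect.minLon);
        lon.assertLessThanOrEqual(rect.maxLon);
      },
    },
  },
});
```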
What we've also achieved is
bringing geolocation on-chain.
Technically, it can be put on any blockchain,
but where I was going, and I've mentioned this several times,
is that, yes, you can do zero-knowledge computation
on Ethereum and these EVM-based solutions,
but it's not feasible.
You're going to run into gas limits.
The process is not straightforward.
You have to compile separate contracts.
You basically have to put your verifier on Ethereum,
and it's something that you can do,
but for any bigger application it isn't feasible.
Well, Mina is designed with that in mind,
and this is why it enables all of these solutions
that are being developed in the ecosystem.
I think there's a key point there,
which is the infinite compression of proofs;
because it is so efficient
to verify and reference this data on-chain,
you can think of Mina as proofs all the way down.
Since the very beginning, the Genesis block,
Mina has been a series of proofs in itself,
and like you said, you can prove, or observe,
any computation that happened off-chain, on-chain with Mina.
The idea that your geolocation proof
can be used across a whole range of other applications,
whether Web2 or Web3 applications,
as long as they are able to call the Mina blockchain
and verify that proof themselves,
I think opens up a completely new range of possibilities
for how we authenticate where data comes from.
In this case, you referenced a string from a specified oracle,
but we can take that same analogy
and apply it to any type of signed data,
as long as we recognize a standard around that signed data,
or it can vary from application to application.
In this case, it's geolocation,
and you're in the process of developing
what that standard could look like,
but for those who are involved in the Mina ecosystem,
you might also see we have an RFC right now
for a zkApp related to reading data
from the signed passport standard
that many governments around the world use.
In that example, and this is specified in the RFC,
there's an iOS app that reads the passport
and stores the data in a Mina wallet.
Say you want to sign up for a new social network,
like Farcaster, for example,
and they require a proof of personhood
in order to register for an account.
Soon, Ethereum will be connected to Mina
via the ZK state bridge that Lambdaclass is shipping,
and that's nearly at MVP.
That same zkApp, or sorry, the RFC is asking for this iOS app,
but it's also asking for an O1JS library
that would be able to read the data that gets stored in your wallet,
creating a proof that you meet certain conditions,
say that you are a person,
or that maybe you're located in a certain area
or were there at a certain time,
but none of this information would be stored
or viewed by the application server.
But then the same user, and this is where Mina comes in again,
can upload that proof to the Mina blockchain,
which then is associated with the user's public address.
Again, because Mina is comprised of infinite proofs,
or recursive proofs, that same proof that was used to log in,
to prove that you're a human on Farcaster,
can be used again and again and again
by other applications as standards develop,
simply by looking up that proof, into infinity and beyond.
I think that that's really, really interesting,
and what Ilya is building for geolocation
is one perfect example to wrap your head around
the type of authentication that needs to happen
on the internet going forward.
It's a kind of shared database, in a way,
but we've now seen across different generations
that databases can be costly,
so proofs that attest to information are much more efficient,
and that's exactly what Mina is built to do.
Ilya, thank you very much for that detailed explanation.
I'm going to now pass it over to Robin,
who's going to talk a little bit about
his understanding of proof of everything
and what they're building on the healthcare side,
and then after that, we'll wrap up with Phil
for updates related to all of the work,
like Ilya mentioned, on O1JS.
Robin, I'll pass it over to you.
My idea of proof of everything is related to
how you put proving everywhere
and how you actually integrate current ecosystems
with proof systems, with proofs of things.
So I've been asking myself these questions
about how you integrate the healthcare industry
and its standards with zero-knowledge proofs,
because on paper, it's obviously very useful
for privacy and for many aspects of healthcare,
but how do you actually integrate it and put it everywhere?
So I started from, like, three aspects.
The purpose of the OSCAR framework
is to enable more development, more experimentation,
and more use cases to be applied to healthcare.
The three aspects of how you build software
are the data, the data model, and the program.
So we need to address each of those
to enable development of proving systems with current software.
So for the data, you've been discussing it:
ZK Records will enable integrating more and more data
that is necessary to build useful cases
and useful usage of zero-knowledge proofs.
To take an example, if you want to build prescription software
that is somehow related to the blockchain,
you need to be able, for example, to prove that a molecule
belongs to a standardized list of molecules for a specific purpose.
For that, you will need ZK Records to link to actual real-world data
that is standardized, to link to standard data
that is recognized in the overall ecosystem.
And for me, that's the only way that we can create integration
between blockchain and healthcare.
Sorry, maybe also just for quick context,
because I know it, but the crowd might not.
Robin's company has been in existence in the Web2 space
for years now, providing kind of like...
Maybe, Robin, if you could give a quick intro of where you're coming from,
because I think it adds a lot of validity to this.
We've been developing a data platform based on open data
and publicly verified data from all over the world.
And we provide this data to many, many different stakeholders,
from big pharma to biotech startups, and also public organizations.
So we provide infrastructure and verify this public data.
And that's also how I got into Mina
and the technology of ZK Records,
because I thought we could provide this data
to blockchain developers, basically.
When you want to build software, there is the data, the data model, and the program.
For the program, you obviously have O1JS,
which helps you build the programs you want.
With OSCAR, we want to facilitate that
with a TypeScript library on top of it,
so it's pretty straightforward to integrate your programs.
And then there is the data model,
and that's the topic of the Navigator program:
how do we integrate with current data models?
So for the Navigator program,
I picked a standard that is called FHIR.
The FHIR standard is a well-known standard
that is used by many, many organizations.
To give you an example, it's used by Apple for their Health app,
and that's how they structure the data
for your health records, et cetera.
So it's used by major companies,
but also by major public organizations.
There is a program in the European Union
called the European Health Data Space,
and it's basically a standard
for exchanging information all across Europe,
and it's also a way to give citizens, European citizens,
back ownership of their health data.
And this European program
has picked the FHIR framework for its data standards.
So basically, if you take this data model, this data standard,
and you enable integration with a proving system,
you start to enable a lot of use cases.
There is already a vivid ecosystem around the FHIR framework.
Google has APIs to use FHIR.
You can find UI components and
UI libraries to build medical software based on this data model.
So the goal of the Navigator program
is to enable proving queries against this data model.
So you will be able to build any component
that is linked to a proof.
So basically, you can prove specific facts:
for example, you can query your data to show that you have blue eyes,
or you can prove that you don't have brown eyes.
That's a very easy example,
but you can also prove even more complex queries
against the FHIR standard.
And this could be easy to integrate with any ecosystem,
and everyone in the healthcare industry knows,
or is learning, this standard.
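A sketch of what such a provable query might look like in O1JS; the field layout of the "FHIR record" is invented for illustration and far simpler than real FHIR resources.

```typescript
import { Field, Poseidon, Struct, ZkProgram } from 'o1js';

// Hypothetical, highly simplified encoding of a FHIR Patient resource:
// each attribute mapped to a Field (real FHIR resources are far richer).
class PatientRecord extends Struct({
  eyeColorCode: Field, // e.g., hypothetical coding: 1 = blue, 2 = brown
  birthYear: Field,
  countryCode: Field,
}) {}

// Public statement: a commitment to the record plus the queried eye-color code.
class EyeColorQuery extends Struct({
  recordCommitment: Field,
  queriedCode: Field,
}) {}

// Prove "the committed record's eye color matches the queried code"
// without revealing any other attributes of the record.
const FhirQuery = ZkProgram({
  name: 'fhir-eye-color-query',
  publicInput: EyeColorQuery,
  methods: {
    proveEyeColor: {
      privateInputs: [PatientRecord],
      method(query: EyeColorQuery, record: PatientRecord) {
        // The private record really is the one behind the public commitment...
        Poseidon.hash(PatientRecord.toFields(record)).assertEquals(query.recordCommitment);
        // ...and its eye-color attribute satisfies the query.
        record.eyeColorCode.assertEquals(query.queriedCode);
      },
    },
  },
});
```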
Yeah, I just have a question, which is,
standardization across the board
we know to be really important,
and it has been since the beginning,
I mean even before the internet,
but especially in terms of transferring data
on the internet, standards are everything.
If these standards are already in place,
and, let's say, as you mentioned,
there are these kind of strong ecosystems
around them with APIs and interface components,
within healthcare, where do you see the advantages
of shifting towards a more ZK-based approach?
So yeah, it's very interesting.
One of the big bottlenecks in healthcare
and innovation in healthcare is compliance
and sharing information, sharing data.
And basically, with a solid proving ecosystem,
you could prove some aspects of the data,
which today is a very strong blocker for innovation.
If you take the example of clinical trials,
you need to do a lot of data verification.
But how do you do data scouting
when access to the data is almost impossible?
It's very, very complicated.
You need to work out a lot of agreements.
With a proving system, you could cut back
a lot of this cost and prove some aspects
to facilitate collaboration in a lot of areas.
Oh, okay. Yeah, that makes sense.
I think we're going to have to move to Phil in a bit
because I think we're coming up on the 45-minute mark.
So yeah, I'd be curious for kind of the punchline
on healthcare, where you think
the biggest opportunity is.
No, not so much a question,
but just, I had interrupted you, Robin.
So you were asking a question?
No, so I think in regards to querying the standard,
did you have anything else that you wanted to share for that?
No, I was finishing my explanation.
Okay, perfect. Thank you.
So I think just to tie it back,
and this is something that Robin has been leading
in regards to healthcare.
We also have a team called Biosnarks,
which Robin works with,
who is working on zero-knowledge proofs.
It's a different standard,
I don't think it's the FHIR standard,
which is spelled F-H-I-R,
but the standard that they're working on
is more related to proving,
I think, a stage of protein folding
within molecular docking for rare drug discovery.
So they want to, it's more on the scientific protocol side,
to prove specific aspects of experimentation
that you've done and that you cannot reproduce so easily.
So you need a proving system to solidify the protocols
and the proving of protocols.
And then you also have this use case of proof
of molecular docking for IP protection:
you could prove that you have specific properties
on your molecule without revealing sensitive data.
Yeah, so I think for anybody listening as well,
the point that we're really trying to drill home here,
if we oversimplify what we're doing,
is that through O1JS, which is a framework
written in TypeScript that allows you to easily program
zero-knowledge circuits, you can take standardized data,
signed standardized data, and decide what you would like to wrap
in a circuit that recognizes that data,
so that it is able to basically say,
yes, I am over a certain age; yes, as Robin was saying,
this data says this person has blue eyes or brown eyes;
or in Ilya's case, yes, this data meets a certain requirement
related to geolocation or coordinates.
But then through that, you can plug and play
into infinite possibilities in terms of where
that information can live, and it is infinitely verifiable
on the Mina blockchain, because again,
the Mina blockchain is infinitely recursive.
I keep saying infinite on purpose there.
So, because O1JS is still new
and being able to program your own circuits in this way
is also quite new, it's not that writing your own circuits
hasn't always been possible, but with the combination
of being able to prove on the Mina blockchain,
these types of applications and these types of verticals
are just starting to emerge.
So if you're listening and you think,
you know, from whatever industry you're working in,
that there's a gap when it comes to either
compliance or being able to verify specific data,
there's a reason we're kind of attacking it
from the HTTPS angle: it can be applied anywhere.
And so that, I think, is what we're really starting to see
and I think it's quite exciting.
So thank you, Robin, for coming on,
and I would definitely recommend giving Robin a follow
if you're interested in the ZK healthcare space.
Okay, and then finally, I want to turn it over to Phil.
So Phil is head of BD for O1 Labs,
and O1 Labs is the tech powerhouse behind all things Mina,
you know, writing and building the language O1JS,
but also the team that incubated the protocol itself.
And if you pay attention to the O1 Labs updates,
every month the team is shipping new features,
making these types of applications possible.
So I want to turn it over to Phil to kind of give us
the latest and greatest of what's going on.
Yeah, so, Phil, over to you.
And actually, you know, infinite is actually a great word
to be using at the moment because I think, you know,
the proof of everything is a good signifier of the fact
that, you know, we're opening up this massive new space
for innovation in building apps.
And, you know, ZK has been, I mean,
a lot of people even in the Web3 industry think of ZK
just for infrastructure-type use cases,
like, you know, new chains, new bridges, new light clients.
But what people don't appreciate enough is the fact
that you can use ZK off chain and then anchor it, you know,
verify and anchor it back on chain, which is, you know,
the amazing thing that MINA is about to be able to do.
But you can do things off chain in ZK and do them trustlessly.
And that, you know, it's probably a bigger space
for innovation than blockchain itself was in, you know,
seven or eight, nine years ago.
And so there's tons of new things you can do.
And we'll be able to do it.
We haven't even begun to scratch the surface on, you know,
what the right use cases are.
But they largely relate to things to do with identity
or things to do with computational routines
that are, you know, not necessarily identity-based,
but which you want to be able to prove to other people.
And so to do, you know, what you could call verifiable
computation off chain, whether it's with, you know,
personal data or just the fact that the computation ran well,
there's a couple of things you need.
One is you need to be able to source data that you can trust,
so data that is in some way signed,
because there's no point in having a ZK application
and allowing a user to input their own data and say,
you know, hey, I promise you this is true,
and I'm going to make a statement based on it.
You know, generally you need signed data to go in.
And so that's a big, that's the starting point, signed data.
And then you put the signed data into a ZK application.
And, you know, of course, we've had the ability to build
ZK applications for a while now with the earlier versions of O1JS.
that can handle signed data from, you know, hopefully
in the future, any different source, because you want to be
able to handle as much data as possible.
And then once you've run the ZK application,
you can verify the proof that comes out of that on Mina.
So the start of that, like, you know, getting the source
of the data, and the middle of it, being able to handle lots
of different kinds of data, is where we're making great progress.
The Mina Foundation, as you probably know, has got an RFP
out there currently to get some ZK oracles built to allow
more sources of signed data from TLS sessions.
And that will open up the ability for, you know, any of us
to have some kind of a session going with a bank
or with another provider and being able to attest to data
in that, so, you know, don't trust me, trust the original
source of the data, such as my bank.
So, you know, that's that.
I'm very excited to see that progressing.
And on the O1JS side, you know, we need to be able to handle
all of that data, and it comes with different forms of cryptography.
And the original launched version of O1JS, you know,
a year or so ago had limited ability to handle that wider
range of cryptography efficiently.
But that's what we're now, you know, making great strides
to roll out new capabilities on.
And so very specifically, we have recently gone live
in O1JS with the ability to handle three new kinds of cryptography,
signature and hashing functions: ECDSA, Keccak, and SHA-256.
So if you're not deep in the space already, I know that they're,
you know, potentially strange code words.
But SHA-256, I think is really interesting because that's the
cryptography that's used in Bitcoin for a start.
So you can now prove things that have happened on Bitcoin.
So, you know, I can send you some Bitcoin on the Bitcoin network,
and we can take the evidence of that and put it into O1JS
and issue a proof that something happened, or use it as an input
to a proof of a, you know, bigger statement about something.
And SHA-256 also is the basis for tons of, you know,
quote unquote, real world data.
So many passports have in them a near field communication chip,
and you can read that chip with a device.
And the data that's emitted from that, I mean, you know,
obviously varies by location in the world and kind of passport.
But in many cases, the data is SHA-256 signed.
So again, you can now get that data and put it into a ZK application
that's built with O1JS and handle it efficiently and then, you know,
prove things about it without necessarily revealing the underlying data.
And then there's ECDSA and Keccak.
So Keccak is the cryptographic hash scheme that's used in EVM chains,
and ECDSA is associated with that and also used for things like signatures.
So although, you know, this massive new field of innovation
in many cases relates to data from, quote unquote,
the real world and off-chain, the ECDSA and Keccak capabilities
allow us to handle Web3 data that comes from EVM chains.
And so, for example, you know, you might want to use that to prove
that you own an NFT in a particular category of NFTs like, you know,
a Pudgy Penguin without disclosing your wallet address
or even which Pudgy Penguin you own.
So all those use cases that we've been talking about for a while,
that, you know, based on you proving things or proving statements
about things that have happened on EVM chains,
the Keccak and ECDSA primitives that we've now got in O1JS
allow you to do that efficiently.
You could actually do some of it before.
It's just that, you know, the whole ZK world is working to get more efficient
to make the UI and UX better.
And those primitives are pretty important for that.
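For readers who want to see what those primitives look like in code, here is an indicative sketch built on the ECDSA, Keccak, and SHA-256 support Phil describes; exact method names differ slightly between O1JS versions, so treat it as a sketch rather than the definitive API.

```typescript
import {
  Bytes, Crypto, Hash, Keccak, ZkProgram, createEcdsa, createForeignCurve,
} from 'o1js';

// Foreign-curve and ECDSA types over secp256k1, the curve used by EVM chains.
class Secp256k1 extends createForeignCurve(Crypto.CurveParams.Secp256k1) {}
class Ecdsa extends createEcdsa(Secp256k1) {}
class Bytes32 extends Bytes(32) {}

// Toy program: prove that an Ethereum-style ECDSA signature over a message is valid,
// e.g., as one ingredient of an "I own an NFT in this collection" statement.
const EvmDataProof = ZkProgram({
  name: 'evm-data-proof',
  publicInput: Bytes32,
  methods: {
    verifySignedMessage: {
      privateInputs: [Ecdsa, Secp256k1],
      method(message: Bytes32, signature: Ecdsa, publicKey: Secp256k1) {
        signature.verify(message, publicKey).assertTrue();
      },
    },
  },
});

// The new hash gadgets can also be used directly, e.g., for Bitcoin- or passport-style data:
const payload = Bytes32.fromHex('00'.repeat(32));
const sha256Digest = Hash.SHA2_256.hash(payload); // SHA-256, as used by Bitcoin
const keccakDigest = Keccak.ethereum(payload);    // Keccak-256, as used on EVM chains
```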
And then lastly, in terms of what's new on O1JS,
we've had the, you know, earlier versions of O1JS out for, you know,
over a year now and we've had third parties using it.
So some of what we're doing, I mean, I've given you the big, headline-grabbing
new feature functionality with these cryptography schemes.
But a lot of what we're doing is just hardening what we've had out there
for over a year, getting feedback on it,
and just making it a really good, well-documented, you know,
hopefully mostly bug-free product, which is really important,
because there's lots of things we do in Web3 that are, you know,
experimental and, you know, maybe slightly held together.
But I'm confident that O1JS is becoming a really mature, excellent user experience.
And the other thing we're doing is making it faster.
So with each release, we're making the proving speed faster.
And so hopefully all those things will help to power this, you know,
as Will said, infinite set of things that we can do.
And I saw Ilya gave you the hundred emoji
when it came to improving the developer experience,
which is, I think, a common sentiment across the entire community.
And I see Trivo is on the call,
who's one of the leading core developers on the team.
And just shout out to Trivo for leading the charge there.
And Anna, before I turn it back over to you,
I think it's full circle, because Phil had mentioned the SHA-256 integration,
and I believe that work had started from an individual hacker.
So I think it's a nice full circle there.
That project actually was one of the Chewing Glass prize winners,
and I think it won your bounty as well.
And I like that it comes together that way.
Thanks to everyone for joining the Spaces,
all of you who were listening and also the speakers.
For me, it's really great to also just get to know what's being built.
I wasn't as familiar with ZK Locus and the other projects.
So, yeah, thanks so much for coming on here.
And I think, like I said, there's a very good chance
we're going to be doing another one of these in a couple of months.
We really like doing these check-in points with the Mina ecosystem,
learning about new projects, companies.
I guess I have somewhat officially announced this,
that there is a new hackathon coming up.
I don't know if you guys know this.
We did say it yesterday at the finale for ZKHack 4.
But there's going to be a hackathon, ZKHack Krakow, in Poland
the weekend before ETH Berlin.
So folks might want to do both of those.
And even though they're in different cities,
they're not super far from each other.
And the dates are May 17th to May 19th.
We have found a phenomenal location.
And yeah, I hope some of you will join us there.
I just know that at ZKHack Istanbul,
the O1JS showing and the Mina projects were so exceptional.
I think that was the spot where you could really see the ease of use
and the fact that you could onboard developers quickly;
that's where it really shines,
because within a weekend, people were already getting their hands dirty,
understanding the concepts,
and actually able to build useful, cool things.
Well, I'll definitely be there then.
Yeah, it's Krakow in May, signing up.