The Look Out w/ Logan Jastremski

Recorded: March 21, 2024 · Duration: 0:58:48

Hello, hello
Can you hear me?
Yo, I can hear you loud and clear, sir.
OK, that's brilliant.
So I can activate my microphone now and turn it on and off,
but I can't see who is in the space.
So do we have our guest, Logan, here?
Yeah, just now.
Brilliant.
So can we hear you, Logan?
Have you been given permission to speak?
Hello, how's everybody doing this beautiful day?
Good to have you.
OK, so that's off to a good start.
Welcome, everyone.
This is the Lookout Twitter space.
Usually, we give people a couple of minutes to kind of filter in.
And I'll start off by introducing my issues as well, Logan.
This happens every single week.
Something goes wrong.
Either I can't activate my microphone
or I get stuck in some screen somewhere.
So this week, I can turn my microphone on and off,
but I can't see the guest list.
I can't see who's in the space.
So if anyone wants to speak, you
have to request and someone else will give you the permission.
Great, so I'll just get started then.
Thing is, I can't see if anyone's here or not.
Welcome, everyone.
This is the Lookout Twitter space, which
we do on the Sei Network account.
I should really be calling it an X space these days, I suppose.
But this is a weekly space where we
focus on technical discussion with people
from the blockchain industry.
So the idea is the Lookout is a kind of giant geostationary
platform that orbits the planet that the action
in the Dragon Ball Z universe takes place in.
And sometimes the characters go up there
and kind of crouch in a squat.
And they look down on the planet
and kind of think about things.
So that's what we're doing.
We're taking a kind of higher level look, a zoomed out
look at the blockchain industry to talk about industry trends
and things that are going on all over the space,
not just to do with Sei.
And so that's what we do.
We get together every week.
On Thursdays, we get amazing guests
on to come and give their perspectives on things.
And so you may know me already.
My name is Angus, and I do developer relations for Sei.
So if you're interested in building anything,
we've got lots of resources for you
and places you can come to chat and hang out
and ask your questions.
So we've got a Telegram group chat.
We've got the Discord server.
We've got some good channels there you can get involved with.
We've got a weekly office hours
where you can come and meet the engineering team
and you can ask your questions.
And also we've got some exciting programmes
coming up for you in the future if you want to build stuff.
So if you're interested, you can DM me
or get involved by joining the community channels,
the Telegram and the Discord.
And also, obviously the documentation is always there.
So you can go there to learn more
about building stuff on Sei.
So without further delay,
I'd like to introduce my guest, Logan Jastremski.
Is it Yastremski with a Y or Jastremski with a J?
How do you pronounce it?
Yastremski is the correct Polish way to say it,
but our family uses the American version.
So we pronounce the J: Jastremski,
but you picked up on it.
So kudos to you.
Okay, so you can...
Can I say it either way?
What's your preference?
Jastremski, hard J.
Okay, hard J.
Logan, hard J, Jastremski.
Welcome to the Twitter space.
Thanks, guys.
It's great to have you on.
I'm looking forward to some really good
technical discussion.
But before we get into it,
can you give everyone an introduction to yourself,
what you've been doing,
and how you've come to the place you are today?
Yeah, of course.
So I started my career out in Silicon Valley,
ultimately ended up at Tesla,
where I was spearheading the supercharging network worldwide.
It was a very interesting opportunity.
You got to work with a lot of smart people
and really around the world,
figuring out how to really supercharge the charging network
and make sure that that can be expanded
as quickly as possible.
But fell in love with crypto
really during that 2017 bull market.
Was a little confused by it at the time,
but kept with it throughout that bear market of 2018 and 19
and just got more and more excited
as things were happening on Ethereum,
really along the lines of DeFi summer in 2020,
but grew increasingly frustrated with high gas fees.
So started just doing independent research
that led me down the rabbit hole of high-throughput blockchains.
Broadly, I would kind of put that
in the category of, like, Solana, Sui, Sei,
Aptos and Monad,
doing parallel processing and large blocks
and have become a fan of those architectures ever since
just because they allow unique applications to be built
and started Frictionless Capital
where we invest in those ecosystems
and projects that are being built on top of them.
Okay, awesome.
So that's really cool that you were working
with the supercharging network.
So a lot of the time in growth type roles
and in technology,
people talk about supercharging things
as a way to say you're gonna inject lots of growth
or you're gonna really amp something up.
Whereas you've done actual supercharging,
which is something that not a lot of people can say.
So I think that's really cool.
Yeah, interesting that you mentioned
you got into high performance blockchains.
So that's what we've kind of said
is gonna be the main topic of discussion today
is discussing how you can increase performance of blockchains.
And so I said it was the past, present and future.
So I think it would be good to kind of take a look
at where we've come from,
where we are now and where we're going in the future
to really give people a really good sense
of the kind of the meta game of high performance
when it comes to blockchain.
I think it's worth saying as well,
people like you in technology and crypto and social media,
people talk about KOLs and like these influencers.
And so people who have a large following,
you can do lots of things with them.
So if someone has a large Twitter audience
or Instagram audience or something,
you can contact them
and they might sell something for you, right?
So if you've got a lot of followers,
you can get them to do something like buy a pair of shoes
or I don't know what people buy on the internet.
They buy all sorts of stuff.
So you can get people to advertise products.
But I think when it comes to technology,
specifically in blockchain,
the technology is so complex
that if you're kind of building
or working on a particular platform or protocol,
you can spend a lot of time
and get really deep down the rabbit hole,
getting familiar with bits of technology.
So when it comes to people who are kind of
cross ecosystem, right?
People like yourself who are in VC,
but also I would say developers
or kind of general technologists
who have a perspective across lots of different technology,
that's so, so valuable to people who are building stuff
because I think if you're building something,
it can be very difficult to have any perspective,
to get any depth on any other tools or platforms, right?
Because you're spending so much time on your own
trying to understand it and trying to improve it.
So it's something that I think,
it's a rare thing for people
to be straddling across these different bits of technology
and ecosystems, so it's really, really valuable.
And I'm sure the audience would agree
and are looking forward to hearing from you.
So I was gonna mention as well,
the name of your company, Frictionless Capital,
sometimes friction is a good thing, right?
Because it lets you do things like turn corners
or start fires or something, right?
But I would imagine that Frictionless Capital here
refers to removing the friction
from using blockchain networks.
So in the past when people have been using
blockchain networks, you mentioned these market movements
as well, right, the influxes of lots of people and users
and activity of blockchains at these specific times.
I think, and this is the past of blockchains
that we're talking about now,
there have been these times when loads and loads of people,
all of a sudden there's a big influx in demand
for the blockchain network.
And the kind of theoretical method of dealing with that
is, oh, you increase the fees, right?
So that people have to pay more at times of high demand.
But the effects of that in the past
have been that people get priced out.
So it's only people who have large amounts of resources
who can put their transactions through at peak times.
And that can result in them getting increased access
to token sales or kind of NFT launches
and things like that.
And also it degrades the experience for normal people
who are trying to use the network
for everyday mundane things.
And also it takes longer, right?
So your transaction could take quite a bit longer
if there's network congestion.
So that's kind of what we were used to, right?
Up until about 2020.
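To make that congestion-pricing idea concrete, here is a minimal sketch in the spirit of what Ethereum later codified in EIP-1559, where the base fee moves up to 12.5% per block toward a demand target. The parameters are illustrative, not any network's real configuration.

```python
# EIP-1559-style base fee adjustment: fees ratchet up while blocks run
# over target and relax when demand falls. Illustrative numbers only.
def next_base_fee(base_fee, gas_used, gas_target=15_000_000, max_change=0.125):
    return base_fee * (1 + max_change * (gas_used - gas_target) / gas_target)

fee = 20.0  # gwei
for gas_used in (30_000_000, 30_000_000, 7_500_000):  # two full blocks, then a quiet one
    fee = next_base_fee(fee, gas_used)
    print(round(fee, 2))  # 22.5, then 25.31, then back down to 23.73
```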
And so what was your experience?
You said you were frustrated by gas fees,
but can you tell us a bit more about
what experience led you to that frustration?
And also, was it just gas fees
or was it this kind of idea of performance as well
in terms of, yeah, waiting longer?
Yeah, I appreciate all the kind words.
I think, yeah, frictionless, I think,
was really created because we were frustrated
with the limitations of blockchains
and how confusing it is.
I ultimately believe,
similar to the early days of the internet,
it was very cumbersome, at least initially,
to get onto the internet.
There was actually no UI.
You had to go on through a terminal.
It was extremely slow and painful.
And then slowly, over time,
engineers abstracted away those complexities
and now you have the internet as it stands today
where you can drag and drop things.
And you don't have to understand TCP/IP
or how any of it really works on the backend.
And I think a lot of that will ultimately be mirrored
in blockchain adoption more broadly.
And I think the earlier blockchain networks
are kind of a proof of concept.
Bitcoin really kicking off the industry,
Ethereum expanding that by building upon it
with Turing completeness and smart contracts.
And then you now have kind of
the high performance blockchains.
And I think those are really kind of more in line
from going from, say, a dial-up type internet
that was extremely slow, kind of cumbersome,
didn't allow you to really experiment
in terms of what applications you could build
because you're just so limited
by how slow the internet actually was.
The first real internet applications
were really just chat rooms.
But when you went from dial-up internet to broadband,
then you could build more interesting applications
like YouTube.
And then that only really continued
as you expanded the sandbox
which engineers could ultimately play in.
And so I really, I mean, kind of fell in love
with Ethereum personally in that 2018, 2019 timeframe
just because I thought the idea of composability
was really powerful, kind of these Lego blocks
where you can plug and play applications
from one to another and those build upon each other.
But at that point in time, it was really twofold.
And one, having high network fees
just to get included within the block
and then very low performance.
I mean, even today, if you submit an Ethereum transaction
on L1, generally it can take quite a bit of time
to have that transaction finalized.
I mean, just moving Ethereum from your wallet to Coinbase
and that having to be confirmed in 30 blocks,
it takes a while for that really to show up
from one wallet to another.
So I think more broadly,
what we're just very excited about is,
especially within Sei, taking a lot of what Ethereum
has built upon in the Ethereum community
with the Ethereum Virtual Machine
and really turbocharging that
with parallel processing, enabling high throughput.
And so Sei V2 coming to DevNet,
ultimately coming to production here sometime this year,
I'm very excited for because there has been
a lot of mindshare, a lot of attention,
a lot of work, blood, sweat, and tears
that have gone into that ecosystem.
And it really just needs some love.
And I think the Sei ecosystem is doing a lot
to actually push that forward.
So yes, you mentioned Sei V2 there.
We're all very excited about V2 as well.
People are already deploying things
and noodling around, as I say, on the DevNet.
So that's separate from the testnet
because the DevNet has this new kind of V2 version
of the Sei client.
And so that's got EVM, right?
It's got the parallelized EVM
and we're seeing great results already,
people building stuff.
It's super fast.
That's what developers say to me.
That's why they wanna build on Sei: it's super fast.
And so, yeah, I think going back
to the kind of your experience with Ethereum,
I think coming up to the kind of the present
of high performance blockchains,
you mentioned a fair few different blockchains there,
but I think it's worth maybe going back slightly
into the past as well,
because this has been something
that people have been identifying
as an issue for quite some time.
This idea of transactions take a long time
and also they're really expensive.
So how can we fix that?
And I think there's different schools of thought
about how to approach the issue.
And there was certainly in that kind of time period,
like the kind of 2015 to 17, 18,
and then again, in kind of that 2020 market peaking phase,
there's been a lot of different solutions
proposed to these issues that we're talking about.
And so there's kind of some people are saying,
well, we need to scale Ethereum,
so we'll build stuff that kind of increases
the scalability of Ethereum.
And then some people said, right,
well, I'm gonna try to do something different,
build my own network that fundamentally operates
in a different way.
So through that period,
we had a lot of kind of innovation on the consensus layer,
which I think is worth mentioning, right?
People were thinking, right,
well, if we wanna speed up a blockchain network,
one of the ways that we can do it
is by tackling this idea of consensus,
how can we change that so it's faster?
And I would say it's kind of accepted now
in the industry that proof of stake
is the way that you get high performance out of a network.
Certainly in the earlier days of proof of stake,
I think people, there were some teething troubles, right?
There was some growing pains.
I think we're at a point now
where there's kind of the meta game
has kind of stabilized and people are fairly okay
with the idea of a proof of stake network
with a bunch of validators that kind of run the network
rather than this decentralized idea
of loads of different validators,
people running it in their bedrooms
or at their office or wherever.
So I think that's worth saying
that there's been some,
or should I say,
there's been a lot of innovation towards this end,
but we've still not really quite seen it stick
versus Ethereum, right?
A lot of these higher performance blockchains
from years gone by have kind of faded off
from their initial excitement and significance.
And so when we're looking,
I would say this next generation of high performance blockchains,
we've got a lot of lessons to learn
from these past attempts, I would say.
And so I was wondering,
what lessons do you think we can learn
from these kind of Gen 1 almost,
like first generation high performance blockchains
for like where we're going now
with this newer generation that you've mentioned,
your Solana, Sui, Sei obviously, Aptos,
what lessons can we learn?
I think the first blockchains were
really just kind of experiments.
And with any technology,
you learn what works and what does not work
and you're able to iterate from that.
I think a lot of the ideologies
of the early blockchain networks
kind of artificially constrained
the engineers to a sandbox
that they were kind of forced to play in.
And at that point in time,
it was really along the lines of
you should have both low node requirements
in terms of hardware and very low-bandwidth
connections, essentially, in between those nodes.
And that's kind of like the opposite of today
and where we really think the industry is going,
which is really focused on
increasing hardware requirements
while still having high decentralization,
but really focused on adding additional cores
to that compute to really lean into parallelism.
And then making sure that blockchains
are going from low throughput,
which is those low internet connections to high throughput,
which is very fast internet connections.
You can almost, I mean, blocks
in their most simplistic form are a form of data.
And that data has to be propagated
to all other nodes within the network.
So you can think of it as literally
if you just have faster internet connections
in between those nodes,
the faster the blockchain will ultimately run
because it will come to consensus faster
and be able to digest more data simultaneously.
And so I think from these earlier blockchain networks,
what they started to do is be a little bit more lenient
with the hardware requirements and open that up,
but generally only experimented with block size.
I think Neo was kind of one of the first big-block
blockchains; they had some execution problems
where they raised a bunch of money
and ultimately didn't go out and kind of ship
to the vision that they obviously wanted to,
but I think they were open-minded enough
to experiment with, hey, some of these constraints
that the industry has kind of put on us
from a hardware standpoint are really artificial
and we can remove those.
I think Solana was the first blockchain
that was focused both on high throughput and parallelism,
which had bigger blocks
and then focusing on computation
that can do things in parallel,
which opened up, I would say, the design space
for many other blockchains to try similar things.
And I think where Sei has really experimented
was in what you mentioned with doing parallel processing,
having high throughput
and doing a very unique consensus algorithm.
And now with Sei V2,
ultimately taking those best practices within the industry
and applying them to Ethereum
and making Ethereum high throughput,
low latency, low gas fees,
I think it opens up a new sandbox,
similar to going back to dial-up from broadband,
which I think is going to be very important
for the industry historically,
because it will open up a new type of application
that really just wasn't possible in the past.
We've already started to see builders talking to us,
building on Sei, they've already started to kind of
be able to build things
that they couldn't build on other blockchain networks.
So I definitely agree with your point there.
When you give people high performance,
when you remove these limitations,
developers build stuff that you didn't expect before.
Just in the same way that developers built stuff
on blockchains that was really cool and innovative,
when you remove these restrictions,
you're going to see a lot of cool new stuff
that we didn't really imagine was possible before,
because we have this kind of prescriptive set definition
of what a blockchain is.
And at the moment, it has these limitations in it, right?
So I am excited about opening up that possibility,
that design space, as you said.
And so it's interesting that you mentioned
increasing the block size.
So I remember as well, yeah,
Neo, this kind of blockchain that was around,
generated a lot of interest and hype back in 2017, 2018.
And so I remember back then,
there was quite a lot of contention on Bitcoin,
in the Bitcoin community about increasing the block size,
because block space is kind of a bit like bandwidth
or having land to farm or something.
The more of it there is, the cheaper it is.
So the less of it there is, the more you can charge for it.
So I remember that being the case.
There was an economic argument to it,
but there was also a performance argument,
whereby you would say, if you increase the block size,
it can have an effect on performance in two ways.
One, the more transactions that you have in a block,
the longer it's gonna take to compute the changes
to your blockchain state
that all these transactions would result in.
So however many transactions you wanna cram into a block,
you've got to account for the time required to process them.
And then also the bigger the block is,
the longer it'll take to propagate
amongst the peer-to-peer network,
which at the time for Bitcoin was a lot of nodes
that were distributed geographically among a lot of places.
So it takes quite a while with the peer-to-peer networking
to kind of distribute those blocks.
So in order to have blockchain stability
and all the nodes agreeing on what the state is,
it takes a bit longer if you increase the block size.
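As a back-of-envelope illustration of that propagation cost, here is a toy calculation; the link bandwidth and hop count are invented for illustration, not measurements of Bitcoin or any real network.

```python
# Rough gossip arithmetic: bigger blocks take longer to relay hop by hop
# across the peer-to-peer network. Toy numbers only.
def propagation_seconds(block_mb, link_mbps, gossip_hops):
    per_hop = (block_mb * 8) / link_mbps  # transfer time per hop, in seconds
    return per_hop * gossip_hops

for size_mb in (0.1, 1, 8):
    t = propagation_seconds(size_mb, link_mbps=50, gossip_hops=6)
    print(f"{size_mb} MB block -> ~{t:.2f} s to reach the network edge")
```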
So then you've got the other kind of dimension of block time.
So you can increase the block time
to deal with more transactions,
but then that increases the amount of time
that your users have to wait to confirm their transactions
and possibly to have their transactions included
if it's one of those times of high traffic on the network.
So then increasing the block size wasn't this kind of,
and by the way, this is a common thing
in designing blockchains and kind of mechanisms
for how they work.
There's no silver bullet, right?
There's always a series of trade-offs
with these kinds of solutions that people are proposing.
But increasing the block size
is one way to fit more transactions in,
but it had these kind of trade-offs.
And I think it's, yeah, again, right?
As you're saying, the hardware,
the ideology for design,
so that it should be accessible to low performance hardware,
does really constrain how many transactions you can process
and how many you can fit in these blocks.
And I think we're seeing now, yeah,
as you said, the metagame for high performance blockchains
in the present is that you have to have
high performance networking essentially
and high performance hardware running the blockchain nodes.
But I think we're seeing an interesting evolution
in the kind of topology of blockchain networks.
And I think we're seeing that users are pretty happy
to have a kind of backbone almost of the network
of high performance nodes that do things like
consensus, block building, or block production,
and then have kind of an edge network
around these nodes of RPC providers
or kind of other validators that validate the state
but don't necessarily produce blocks.
And I think there's a lot of flexibility there
to produce a network that participants are happy to use
but that still kind of doesn't constrain itself
to having to basically include every single node
in all parts of the process of running the network.
Now, something that I wanted to ask you,
because I've been reading your tweets
and I could see obviously it's been a big trend recently
or something that everyone's talking about
is EIP-4844, which is a kind of addition
of this additional blob space, a kind of shorter-term storage
that L2s can use to lower their transaction fees
on the Ethereum network.
And I was wondering, you've drawn a parallel
between increasing the block size
and basically this Ethereum improvement proposal.
And so are blobs just increasing the block size?
Is that what it is?
Is that how it's giving more scalability and cheaper fees?
Yeah, it's 4844.
It's a relatively minimal amount of data
that's being added in the grand scheme of things.
It's 0.375 megabytes, which is not a lot.
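For context, that 0.375 figure falls straight out of the EIP-4844 parameters: a blob is 4096 field elements of 32 bytes each, and the target is three blobs per block.

```python
# EIP-4844 blob arithmetic.
blob_kb = 4096 * 32 / 1024      # one blob = 128 KB
target_mb = 3 * blob_kb / 1024  # target of three blobs per block
print(target_mb)                # 0.375 MB of target blob data per block
```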
I mean, what you saw originally happen with Base
is that their fees were extremely cheap,
but anytime you increase the block size,
you will have essentially more data
to post within the blocks.
But as more and more demand is really trying
to get into that block space,
it'll be competitive again and fees will rise over time.
It's not like, it is getting slightly more expensive.
The blob data is filling up,
but with base particularly,
they're actually having execution problems
because they're doing a single threaded
virtual machine instead of parallel.
But I think more broadly, if we were to take a step back,
what you really need to do to scale these blockchain systems
and I mean, while I was at Tesla,
one of their main things was you can't fight physics.
And so the physics of blockchains is,
if you truly want to scale,
you're going to need more compute, either in the form of an L2,
which is another form of parallel processing,
or you can integrate that
and do parallel processing within the base chain.
I prefer the integrated approach that Sei
and others are taking,
I think it's just much cleaner
in terms of both an engineering point of view
and a user point of view,
but you also have to scale the bandwidth or the block size.
And to me, both of those will continue
to have to be increased over time.
Again, which I think Sei
is very directionally correct in their approach
in how they're scaling blockchains long-term.
So yes, I agree with that analysis.
So when you say the base chain,
you mean the base-layer chain, not the Base L2 chain.
Base is an L2.
Yeah, but then I think you said there,
you've got to parallelize the base chain
as in the base-layer chain, not Base.
Correct, so the L1.
I mean, really you need to parallelize the L1 and L2
because if you don't parallelize both of them,
you can potentially run into issues.
This will get super dirty.
So I don't know if we want to go down this rabbit hole,
but yeah, in my opinion, you need to,
if you're doing any type of execution
that needs to be parallelized,
both on the layer one, which generally has execution
and the layer two, which is doing execution as well.
I think it's a sensible motto or however you described it,
that kind of phrase, you can't fight physics.
And I think it's important to revisit.
I think when we talk about parallelism,
it's people know that it increases performance
in terms of throughput,
but I think it's a good opportunity here
to talk about why that is.
And it's basically down to the design of CPUs,
the computer chips that do the processing
for software.
There was a time when you could
fit more of these transistors onto a chip,
and you could increase the, I think it's the voltage, right?
You can increase the voltage
and basically run things through it faster.
And that would give you more compute capacity per second.
So you could do more operations per second.
And that translates to higher performance
for all your programs.
So you can run programs faster.
But then there was a time in history,
was it 2005 or 2009 or something,
where that stopped being a viable way
to increase performance of these chips
because they were getting to kind of,
it was getting to be very difficult to fit these transistors,
these tiny little switches onto these chips
in a way that would still work physically.
And also, you know,
you can't just increase the voltage indefinitely
because it starts to melt the silicon
that the chip is made of.
So you need to deal with the heat dissipation as well.
So you can't just increase voltage permanently, infinitely.
And you can't just keep making the transistors smaller
because at some point you get to the kind of
molecular level and there's only so small you can go.
So then the chip manufacturing industry started to do
these things, they started to add more cores.
So they would take CPUs and kind of stack them together
and they say, now you have eight,
instead of having one CPU that runs a certain speed
of operations per second,
now you have eight of these different CPU cores basically
and you can, it's up to you to figure out
how to use them in a way that gives you more performance.
So then, you know, software developers
and kind of hardware engineers and stuff
would put together these methods.
And so consumers would get higher performance
with multiple cores,
they didn't need to know how that works necessarily,
but then it was a challenge for engineers, right?
And so that's kind of why you can't just run
a sequential blockchain on a, you know,
CPUs only go so fast, right?
And so now modern hardware has lots of cores,
so you need to figure out how to write your software
to use multiple cores.
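A quick way to see why how you split the work up matters so much is Amdahl's law: whatever fraction of the work stays serial caps the speedup that extra cores can give. A small illustrative calculation:

```python
# Amdahl's law: speedup from n cores when a fraction p of the work can
# actually run in parallel. Illustrative numbers only.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for cores in (2, 4, 8, 16):
    print(cores, "cores ->", round(amdahl_speedup(0.9, cores), 2), "x speedup")
# Even with 90% of the work parallelizable, 16 cores give ~6.4x, not 16x:
# the serial remainder dominates.
```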
And so that's what the team at Sei have done
by implementing this kind of optimistic parallel execution
of transactions in the EVM.
That basically means, right,
so instead of executing all the transactions that you have,
you know, so you have a hundred transactions,
you just do one after the other,
you can now say, right,
if we can split these up into blocks of transactions
that we can process independently of each other
or at the same time,
you can start to split them up
amongst multiple cores on the CPU.
So in that way, we're doing a new design basically
for transaction execution in blockchains
to, you know, take advantage of the modern aspects
of CPU design, which is this multi-core aspect.
So actually, parallelism comes down
to the actual physical design of this computer hardware, right?
And so taking advantage of these multiple cores,
you can then, you know, increase your throughput a lot,
but it depends on how you split these transactions up, right?
And so parallelism in software engineering
is one of the hardest things to do
because there's lots of different things which could go wrong
if you start taking bits of programs
that were traditionally designed to run in order
and kind of doing them simultaneously.
But I think, and we have a kind of an academic summary
of the EVM design that's available on our developer forums,
the Sei design for parallelism means
that developers don't need to do anything
to take advantage of the parallelism,
they don't need to learn extra, you know,
design principles or, you know,
refactor their code or anything,
they just get it out of the box, right?
They just get the performance benefits out of the box.
And I think that's, we've seen that that kind of
is the best way to provide parallelism to developers
by making it as easy as possible
to get the performance benefits.
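To give a feel for the general technique, here is a toy sketch of optimistic parallel execution with read/write-set conflict detection. It is a simplification for illustration only, not Sei's actual engine; the transfer-only transaction format and all names are invented.

```python
# Toy optimistic parallel execution: run every transaction in parallel
# against a snapshot, then commit in block order, re-executing any
# transaction whose read/write set conflicts with an earlier commit.
from concurrent.futures import ThreadPoolExecutor

def execute(tx, state):
    """Run one toy transfer tx against `state`, recording touched keys."""
    src, dst, amount = tx["src"], tx["dst"], tx["amount"]
    reads = {src, dst}
    writes = {src: state.get(src, 0) - amount,
              dst: state.get(dst, 0) + amount}
    return reads, writes

def run_block(txs, state):
    # Phase 1: optimistic parallel execution against the same snapshot.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda tx: execute(tx, state), txs))

    # Phase 2: commit in block order; conflicting txs fall back to
    # sequential re-execution against the up-to-date state.
    dirty = set()
    for tx, (reads, writes) in zip(txs, results):
        if (reads | set(writes)) & dirty:
            _, writes = execute(tx, state)  # sequential fallback
        state.update(writes)
        dirty.update(writes)
    return state

print(run_block(
    [{"src": "a", "dst": "b", "amount": 5},
     {"src": "b", "dst": "c", "amount": 3}],  # touches "b", conflicts
    {"a": 10, "b": 0, "c": 0}))
# {'a': 5, 'b': 2, 'c': 3} -- same result as sequential execution
```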
So yeah, I think, I agree as well
with what you were saying about,
if you make either the L1 or the L2 parallel,
the other one kind of has to follow.
And in a way kind of branching out your network into L2s
to scale the compute and execution capacity of the network
is kind of parallelizing it in a sense as well.
But I wanted to ask you about your opinion
on another way of trying to combat this idea
of these periods of high activity,
which is to do something
with what Solana has called local fee markets, right?
So all participants on the network have to pay gas fees,
right, or kind of network fees to use the network.
But then there's this idea that if everyone on the network
or like 99% of people who are using the network
at one particular time are all trying to do the same thing,
maybe you charge them a bit more
and if people aren't using that thing,
you can charge them a bit less.
So I wanted to ask about your opinions on that idea
of kind of localizing, charging people different amounts
based on what they wanna do
to combat this kind of idea of like,
there's hotspots of activity
that kind of can degrade the network conditions
for everyone else.
Sure, I think, I mean, hotspots arise
whether in traditional databases
or blockchain architectures.
I think one thing that people generally misunderstood
or misunderstand today is that these hotspots,
it doesn't really matter where they arise,
whether that's the layer one, the layer two,
the layer three, layer four.
If a bunch of people want to do an NFT mint,
but only one person can get access to that NFT mint,
then if you want to bid $5,
but I want to bid $50,000,
then I most likely would get access to that mint
over other people.
You have to base it off some priority access
and generally today,
you can do that as first come, first served,
you can do it through other ways.
Generally, I think probably the best solution
that will ultimately emerge is,
how much are you willing to pay
to get access to that state before everybody else?
Historically, how the early blockchains have done this
is with global fee markets,
where, for example, again, going back to this NFT scenario
where everybody wants to get access to a certain mint,
those fees that people are bidding higher
to get access to that mint
affect not only that specific NFT contract,
but the entire blockchain.
So going back to 2021,
Ethereum was practically unusable the entire time
because during these periods of high congestion,
people were bidding up fees
and because the blockchain cannot do isolation
or these local fee markets,
fees got prohibitively expensive for everybody,
even if a given application did not have
a contentious piece of state
or people bidding to get into that application.
So as like a more concrete example,
when this NFT mint was happening,
there is no reason why Uniswap fees should be high.
But in these earlier blockchain networks,
the fees were high for all applications
regardless of whether they had high traffic or not.
And the newer blockchains,
what they're able to do is localize those fees
to specific applications.
So again, if an NFT mint is getting popular
and people want to pay more money
to access that NFT mint first,
you'll just slightly increase your fee on that mint.
All other applications in the network
are not affected by this traffic
and would generally have lower fees.
And so I think to me,
this is really going to be a requirement
going forward for all blockchain architectures
because if they don't have these granular fee markets,
then you're going to have to do application specific chains
or application specific layer twos.
But once you enable this parallelism and isolation,
multiple applications can live within one ecosystem,
which is to me, the magic of blockchain architectures,
which is shared applications
and a kind of large global state machine.
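Here is a toy contrast between the two designs, applying an EIP-1559-style adjustment globally versus scoped per contract. Purely illustrative; real local fee market designs, Solana's included, differ in the details.

```python
# Global vs local fee markets under a hot NFT mint. Toy numbers only.
def adjust(fee, used, target=100, max_change=0.125):
    return fee * (1 + max_change * (used - target) / target)

# Global market: the mint saturates the block, so everyone's fee rises.
print("global fee for ALL apps:", adjust(10.0, used=200))  # 11.25

# Local markets: each contract's fee tracks only its own demand.
usage = {"nft_mint": 200, "uniswap": 40}
local = {app: adjust(10.0, used) for app, used in usage.items()}
print(local)  # {'nft_mint': 11.25, 'uniswap': 9.25} -- only the mint gets pricier
```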
It's interesting to think about.
I suppose I've tried to think through it before
and thought, well, if I wanted to pay for something
in the physical world,
and then I had to pay extra because it was in high demand,
maybe I'd be a bit put off by that.
But then also if you're in a shop
and a lot of people want to go there,
you have to wait in a queue.
So, or well, as Americans say,
you have to wait in a line, right?
So maybe it's similar.
I think it's something that it's going to be a crucial thing
for blockchain networks to figure out in future.
And it's been interesting to get your perspective on it.
So thank you for that.
I think, yeah, it's going to be crucial, right?
This is the kind of thing as well.
Once someone nails it and figures out the best formula,
then I think that will catch on quickly.
And otherwise, yeah, you can't just have these kind of
network slowdowns when something's happening.
Some big, yeah, as you said, NFT launch or project
doing some sort of campaign
and then it slows the network down for everyone else.
So I think fee markets are going to be
a really important area of research and development
and also innovation going forward.
And so another area that we're hearing a lot about
at the moment, but we're still kind of figuring out
the best way to use and implement it, is ZK, right?
And so we've had quite a few people in this space
to talk about ZK from a kind of deeper
cryptography and security perspective,
ZK standing for zero knowledge cryptography.
And so at Sei, we've identified several places
where ZK could be really useful running the network,
as well as on the kind of more application level,
developers can use ZK to give users
all sorts of kind of new products.
But I wanted to ask you what your,
if you had anything that you were keeping an eye on
in the ZK world, where do you think it's going to be
most useful when it comes to running blockchains,
but then also giving users new experiences and applications?
I may have like a different opinion than most,
kind of going back to what we were mentioning with hotspots.
It doesn't really matter if that hotspot exists
on the layer two or the layer one.
If someone wants priority access to that state,
then the economic value to get access to that
is going to be the same,
regardless of where or which blockchain,
so to speak, it lives on.
And so I think zero knowledge technology
is very intellectually interesting.
I kind of view layer twos more broadly
as compute compression.
And what I think layer twos ultimately,
or zero knowledge layer twos allow you to do,
is run very large amounts of computation,
compress that data and post it back to the layer one.
I think it's very intellectually interesting,
but I'm not sure where exactly, at least for today,
the product use case is going to be.
And I don't really view it as a scaling solution
from a technology standpoint because fees,
like I mentioned, are going to be just as high
when you get applications that are deployed
in that ecosystem that require
or that people will want access to.
Those fees in the zero knowledge roll up
are going to be just as high as they would on the layer one.
And so I'm more cautious around zero knowledge technology
than, I would say, probably my peers,
predominantly because I think most people view them
strictly as a scaling solution,
where I view them as a new execution environment
that is interesting for engineers to play in,
but the product use cases for that are still TBD in my mind.
The majority of the value, I really think, is going to come
from these high performance layer ones
that are not only focused on compute,
but also ultimately data synchronization,
making sure every node in the network
has equal and fair access to that data.
I think in the TradFi markets,
high frequency traders, hedge funds,
even day-to-day traders that just want information
at the same time as everybody else,
pay lots and lots of money to make sure
that they have the same information as everybody else.
And if you're getting that information delayed
or slightly late, then someone has the potential
to buy these assets prior to you
because they have better information than you do.
And so ensuring that equal and fair access
to everybody on the network is really important.
And that's not something that zero knowledge technology
actually helps with at all.
It's only focused on compute, which is important again,
but it's only one part of the problem in my mind
for where a lot of this value is going to accrue.
It's both scaling compute
and also scaling the bandwidth or throughput of the network.
Zero knowledge technology is only addressing one of those.
That is an interesting take
because the canonical kind of common knowledge
is that yeah, ZK is really good for scaling.
And so it's interesting to hear your perspective on it.
I think what I would say from what we've heard
from the zero knowledge experts that we've had on this space
is that everyone's kind of in agreement
that we're still very early stage technology.
And so there's so much potential there,
but we're yet to see in what directions
the innovation and the progress will continue
for the underlying technology,
which in this case is actually pure mathematics,
which I think is really cool.
But it's strange to think of the progress being made
by people kind of actually figuring out formulas
on whiteboards, right?
But once you have the right collection of math
and you implement it in tools that developers can use,
you do see crazy things happen.
And I think, as you said, right,
ZK can certainly be applied to compute
and kind of verifying that the compute has taken place
with certain variables going in.
So it's something that I'll be keeping an eye on.
And I think when it comes to network level,
like blockchain network level operation,
one of the things that we're really interested in ZK for,
at Sei, is using it to validate the state.
So as we were talking about earlier,
changing this kind of the network topology
where you have kind of the backbone
of high performance hardware producing the blocks,
and then you can have many different lighter weight nodes
or light clients who can verify or validate
the state of the blockchain without having to
run the high performance hardware to do everything like
receive the transactions, build the blocks
and execute all the transactions as well.
You can just basically validate the state
using some zero knowledge proof that's been provided, right?
So you can just verify that proof.
So that's another way.
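The shape of that pattern, as a toy sketch: the heavy node executes and proves, and the light client only verifies. The "proof" below is a hash-based stand-in offering none of ZK's actual guarantees, and every name is hypothetical rather than any real library's API.

```python
import hashlib

def _h(*parts):
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

# Heavy node: executes the whole block (expensive), emits a succinct proof.
def produce_block_proof(old_root, txs):
    new_root = _h(old_root, *txs)            # stand-in for real execution
    proof = _h("proof", old_root, new_root)  # stand-in for a ZK prover
    return new_root, proof

# Light client: never re-executes; one cheap check against the two roots.
def verify_transition(old_root, new_root, proof):
    return proof == _h("proof", old_root, new_root)

new_root, proof = produce_block_proof("root0", ["tx1", "tx2"])
assert verify_transition("root0", new_root, proof)
```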
Data availability sampling is cool.
I mean, in all these blockchains, as I mentioned,
because you're going to increase
the hardware requirements,
because you're going to have to have
higher throughput and higher computation,
it's going to be more and more expensive
for the average person to run those.
So data availability sampling allows computers
with lighter hardware requirements to digest some of that information,
compare notes with other nodes
or other light clients that have also sampled
a different portion of that data,
and essentially keep the full nodes in check.
They are helpful for just verifiability of the full nodes
when you don't have the resources to run a full node.
They don't necessarily help scale the blockchain
beyond what the full nodes can essentially do.
And I think there's a lot of technical nuance
within these conversations that is misunderstood.
And parsing apart those is super important,
but it is cool that you can apply zero-knowledge technology
to some form of data availability sampling.
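The arithmetic behind why a few samples suffice, assuming the data is erasure-coded so that a producer must withhold a large fraction, say half, to block reconstruction:

```python
# If a fraction f of the chunks is withheld, a client sampling k random
# chunks fails to notice only when every sample lands on an available
# chunk: probability (1 - f)^k. Illustrative math only.
def miss_probability(f_withheld, k_samples):
    return (1 - f_withheld) ** k_samples

for k in (10, 20, 30):
    print(f"{k} samples -> miss probability {miss_probability(0.5, k):.1e}")
# 30 cheap random queries push the miss probability below one in a billion.
```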
I think the technology more broadly is very interesting.
I think just the use cases that it's going to be applied for
and why people care about it are ultimately much different
than people originally thought.
I would agree with you there.
And I think one of the trends I saw was
so I worked in the ZK space for a while
in the past few years and one of the trends that I saw
was that there was these kind of people
who understood ZK technology from a mathematical perspective
who were basically either applied cryptographers
or kind of theoretical mathematicians.
And they would understand the technology
in terms of the math.
And I actually, I tweeted one of them,
he's a cryptography professor,
and I said, what are the best use cases for this technology?
And he actually said, we don't know yet.
And so I think we're kind of in this process now of,
right, the mathematicians have found some really cool math
that you can do cool stuff with,
but that's not really intuitively understandable
by most people, right?
Most lay people and even people who work in blockchain,
it can be pretty difficult to get your head round.
So then there's a bunch of kind of whizzy software engineers
who build stuff, you know,
so implement this kind of math using cutting edge
software tooling.
And then they kind of make that available
to kind of blockchain developers.
And so there's kind of an early wave of blockchain
developers who are noodling around with ZK stuff.
And there's gonna be an iterative feedback loop
of the developers will figure out the limitations
or kind of identify avenues of innovation.
And I think that will inspire or motivate people
who are working in the theoretical mathematical arena
and also the high performance kind of software engineers
who are building the libraries and tooling
to build more stuff that they want.
But that kind of takes a while.
And I think, yeah, I agree with you
that we've not really solidified
around concrete use cases yet.
And I think it will take a bit of experimentation.
And that's kind of the whole stack,
the whole chain of like, you know,
improving the math and the kind of configurations
of cryptography to do different things.
Then also improving the software tooling
and the kind of platforms and libraries
before the, you know, the developers who build stuff
with it will have to also do a lot of experimentation
and iteration to figure out good use cases for it.
So I agree with you, right,
that that process is gonna take time
and we may well end up somewhere
that we didn't think we were going to in the first place.
Yeah, I think it also boils down
to, like, early on in blockchains,
everybody was very excited about, like,
a Stanford professor or Cornell professor
or pick your top Ivy League university.
And they would launch these tokens
and the first iterations did fairly badly.
And that's because they were very theoretical
in terms of their design and not really focused
on applicable engineering to the day-to-day lives.
And I think more broadly,
the blockchain industry as a whole likes to pontificate
about these different unique technology solutions
that will scale to infinity,
but ultimately fail to deliver on their promises
because they're not focused on the day-to-day
actual engineering and the product work.
And so what I'm very much excited about for 2024
is the blockchains and the application engineers
that are focused on actually delivering value
to the end users because at the end of the day,
that's really what the industry has to focus on
to move forward.
I mean, the meme coins are fun,
the NFTs are fun,
but to actually get the industry
to mainstream adoption, we need to
build applications that people want to use
and not just want to use,
but is actually a better product
than what can be offered in the Web2 world.
And I think we're getting to that point now,
but I think there is still this very clear dichotomy
in my head from the people
that are primarily focused on research
to the people that are focusing on building products
and Frictionless ultimately,
where we have really focused,
is on people that deliver products that actually help people
because I think it's very much needed,
especially in 2024, to push the industry further.
I agree with you.
I think that's what I'm ultimately excited for as well.
Through 2024 and also into the future,
I think being in this industry for a few years,
we've seen things, as you said,
basically kind of being a kind of a proof of concept
or an experiment.
Now that we've seen the potential,
I think we're entering an exciting phase
where, as you said, right,
we can start to build stuff
that will really challenge the kind of monopoly,
the kind of hold, that the ideas society has
about what technology is have over us.
And I think we can start to challenge those ideas.
I agree with Chris Dixon.
I've read his book, Read Write Own,
and blockchains have a kind of opportunity at the moment
to bring about a kind of new phase
of the internet and technology
and how people interact with it.
And I'm very excited about that.
I'm really excited to see people build products
and to help them build products, indeed,
that will do things in a different way
and ultimately serve users better.
So I think we can agree on that as well.
We're getting towards the kind of end of the hour.
And so I wanted to say thank you so much
for hopping on, Logan.
It's been really interesting talking to you.
I will say this to everyone as well, so you don't have to.
Logan has a great podcast series
with loads and loads of episodes
talking to a bunch of really, really exciting
and interesting people,
a lot of them from the kind of
the high-performance blockchain world,
but these are in-depth,
like long format discussions with people.
It's got audio, it's got video,
so go and check out Logan's podcast.
Follow Logan on Twitter.
You can apply to Frictionless Capital as well.
I suppose they've got a website
and they do blog posts and things as well
that you can go and read
if you're interested in hearing more.
So thank you so much, Logan.
Thank you everyone for tuning in.
Also, yes, follow me on Twitter.
And if you're interested in building stuff,
come and join the Telegram channels
and the Discord community channels
to meet other developers and discuss your ideas.
Do you have anything that you want to say
as a parting thought, Logan?
No, just thank you for hosting me.
Really excited for Sei and the Sei community,
not only with V2 but more broadly,
how the community is really shaping up.
I think Sei, to me, in my mind,
is really one of those ecosystems
that is focused on products
and that's why I'm really excited
about what they're doing in 2024.
If anybody is building, any application engineers
doing anything interesting,
please reach out to me, either through DMs or our website.
You can reach us through a couple of channels
but we'd love to chat with you
and thank you very much for hosting me.
It was a fun conversation.
Okay, awesome.
That is a great point to finish on.
Thank you again, Logan.
Have a great rest of the day
and see you next week, everyone.
Goodbye. Thanks, everyone.