Thank you. All right, guys, can you guys hear me?
If you can hear me, just give me a thumbs up.
I think this is the first time.
First time I'm doing this spaces thing.
Let's give it a few more minutes.
I think we're just waiting for Juan to join us as well. All right, I think we have everybody.
So why don't we get started here?
So, hey everyone. For the newcomers, my name is Jason. I'm the co-founder of Tangent, where we do principal investing. We've been big supporters of crypto and DePIN, and over the past few months we've also been doing a very deep dive into robotics. And the latest investment we've made that I'm particularly excited about is one called BitRobot, which some have referred to as the BitTensor of humanoid robotics. So today I'm really blessed to be joined by a few special individuals to talk about BitRobot. Before we get started, the usual disclaimers: this Spaces is purely for educational purposes only. We may have exposure to projects we mention during the Spaces, but none of this is an inducement to buy any tokens. As of this recording, BitRobot does not have a liquid token, but it does have a side project called SAM where there is a liquid token. Tangent, which is my private investing vehicle, has invested in BitRobot, and we have some exposure to SAM as well. All the views reflected in this Spaces are our personal views and not representative of the respective companies we work for. So let's
get started here, guys. Michael, Jonathan, Juan, excited to have you guys join us. Why don't we go around and just give a one-liner intro about who we are and how we're involved with BitRobot. Michael, you want to kick us off?

Yeah, sure, Jason. Thank you so much for having us. So my name is Michael Cho. I'm the co-founder of FrodoBots. We've been working at the intersection of robotics and crypto
for, I think, more than three years now.
And sometime late last year,
I started chatting with Jonathan and Juan
about this bigger vision that we have
to use crypto incentives to do something
for embodied AI research.
And yeah, so I think I'm super blessed and excited to be working with these two to build BitRobot together. Yeah, happy to chat more about BitRobot.

And Jonathan, you want to go next?

Sure, yeah, great to meet everyone. My name is Jonathan Victor. I'm the co-founder of Ansa Research, which is a group that specializes in working with DePIN teams, helping them with go-to-market, and I'm actually leading the BitRobot Foundation
as we get everything booted up.
So yeah, I was blessed to meet Michael a couple of years ago,
and then in Singapore last year,
really got in the weeds on sort of brainstorming how we could expand what FrodoBots was doing.
And Juan, I think a lot of people know you already,
but you want to give a one-liner
about how you're involved with BitRobot and FrodoBots as well?
Sure, excited to be here. Thanks for having us. My name is Juan. I started Protocol Labs,
Filecoin, IPFS, a bunch of other projects. I've been looking at robotics very closely for
the last few years. A lot of what I do in PL is look ahead to the next 5, 10, 15 years and think about what are the major trends in computing that are going to drive the biggest breakthroughs.
And we're finally at a great inflection point with robotics, where we're going to start seeing lots of the sci-fi-oriented thinking from the 20th century start happening.
So pretty excited for it.
Yeah, awesome. And with this Spaces, I think we can probably assume that most of the people dialing in are crypto natives, but maybe not as familiar with robotics. So why don't we start from the very top, right? It seems like there is some sort of an arms race for robotics, or what some people refer to as embodied AI. In the past two years, we saw that big moment with text and images with ChatGPT and so many other AI companies. So how close do you guys think we are to seeing a similar leap in humanoid robotics? Are we talking five, ten years away? And if not, what's driving the current momentum?
Yeah, maybe I can take that one first and kind of set up the opportunity, but also in
a way, some of the bottlenecks and why we think that crypto has an important role in all this.
So I would say, probably in everyone's timelines on Twitter, you'll be seeing a lot of pretty impressive demos these days of humanoids doing pretty crazy stunts, parkour and whatnot. A lot has changed. And I'll say all the recent excitement in the last maybe two years especially is in some ways really justified, because clearly the transformer architecture has proven to scale very well for multiple modalities, right? So like you mentioned just now, Jason, you have text, images, videos. It's just an amazing architecture that has proven to scale with whatever you throw at it, in a way. And so there's definitely an argument for the fact that if you look at robotics data, it's just another modality: in theory, if you have enough robotics data, then you put a transformer to it and things will work magically.
And I think there's strong evidence for why this may be the case.
But again, there are multiple bottlenecks.
So maybe the immediate bottleneck that most people are probably aware of is that we don't really have much robotics data. Definitely not near the scale that we have for other modalities like text and videos and whatnot.
And really when we talk about robotics data,
I want to be very precise about what I mean.
So broadly speaking, in my mind, there are three types of robotics data.
So you have what they call teleoperation data. This is the case where you have humans controlling, essentially, some robots, and you record what the human is observing, so usually videos, as well as what the human is doing while controlling the robot; they call these, basically, action labels. So in a way, if you look at the data flywheel at Tesla, they have that exact setup, right? They collect the video, but they also collect what the human driver is doing. But clearly, collecting data like this is extremely costly. Robots do fail in the real world, and clearly you need humans in the loop.
The second type of data is synthetic and simulation data. I would say it has had a big resurgence in the last two years, especially with all the generative AI stuff, where there's now perhaps a much higher hope that simulation can move the needle for robotics. But later, if we get more time, I'll talk about why there are also some natural limits there.
And then lastly, you have video data. Obviously, we have a lot of video data, especially on YouTube. But again, there's also a certain limit there, what we call the embodiment gap, that puts a natural upper bound on the performance of the kind of model you can build if you only have video data. So in some sense, we have various kinds of bottlenecks with these three types of robotics data.
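To make those three categories concrete, here is a minimal sketch of how they differ structurally. This is a hypothetical schema for illustration, not any format FrodoBots or BitRobot actually uses; the key point is that teleoperation and simulation data carry action labels, while raw video does not:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class DataSource(Enum):
    TELEOPERATION = "teleop"   # a human drives a real robot
    SIMULATION = "simulation"  # synthetic rollouts from a simulator
    VIDEO_ONLY = "video"       # e.g. YouTube: observations with no actions

@dataclass
class Timestep:
    observation: bytes              # an encoded camera frame
    action: Optional[list[float]]   # action label; None for video-only data

@dataclass
class Episode:
    source: DataSource
    embodiment: str                 # "sidewalk_robot", "arm", "drone", ...
    steps: list[Timestep] = field(default_factory=list)

    def has_action_labels(self) -> bool:
        # Teleop/sim data carries the "what was done" signal; raw video
        # lacks it, which is one face of the embodiment gap.
        return all(s.action is not None for s in self.steps)
```

The `embodiment` field matters later in the conversation: cross-embodiment training mixes episodes from very different robot bodies.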
And so data is already a bottleneck, but moving beyond data, of course, compute is something, right? And you always need really good research talent to know what to do with that data, even if you have the data and the compute.
But I would say one more thing that is maybe not immediately obvious to someone who is not doing robotics: let's say you, as a researcher, build what you think is a very performant, let's say, foundational robotics model. How do you test that? How do you evaluate that? Unlike, let's say, digital AI in the cognitive, digital realm, you can't just run a bunch of benchmarks. There, typically within an hour, you more or less have a pretty good feel for how good or bad a particular model is, because everything is basically in simulation. Whereas for embodied AI, or robotics, what you really need to do is test things in the real world. And the fact that you can't speed up time in the real world is just a natural, big bottleneck to how you go about evaluating that model. So that final step of evaluation is actually, in my view, maybe the biggest bottleneck, maybe even bigger than data and compute and talent.
But anyway, I think the opportunity is obviously huge if someone can figure out how to basically replicate human-level intelligence when it comes to physical movement. I keep reading somewhere that something like 50% of global GDP is related to physical labor. So, huge opportunity, obviously, but there are also these daunting bottlenecks. Anyway, that's the setup, I would say: a huge opportunity, but it will need a lot of improvement and innovation along the way
still.

Yeah, so it sounds like there's a big data collection problem, because we don't have robots just lying around in all the households and all the warehouses, so we can't collect that kind of real-life data. And it seems like you are approaching this from the perspective of DePIN slash crypto; you think crypto can be a solution to this. I know Juan has a comment on that. You want to jump in here?
Yeah, I wanted to just underscore some of what Michael was talking about and kind of help motivate it, right? So when you think about robotics over the last few decades, and even in the late 20th century, an enormous amount of progress was made on the hardware, in settings like factories, some kinds of transportation, and so on. But the biggest bottleneck that has been holding everything back is the control and navigation problem.
So, you know, it turned out that for whatever reason, in terms of the complexity of task,
having a robot maybe navigate a house and not break things as it's moving around, carefully path plan, open a fridge,
take out an egg, and cook breakfast, is a dramatically harder task than having models
write a symphony or do a bunch of legal work and whatnot. So the control problem of how to
navigate a physical space is quite hard. However, the models and neural nets that have been built over the last 10 years are now at a scale where they can solve this problem. So we're at an amazing inflection point now in robotics.
So all of the work that has been done in hardware
and improving systems and so on,
and in manufacturing can now be put to use
if we solve this hard control problem. The models are now at the scale to be able to do it, but the missing
ingredient is exactly what Michael is talking about, is this data piece. The training that we
do today for most of these models requires a massive amount of training data to get these models to learn how to handle a particular problem.
You could try to generate it in simulation; this is kind of how many of the DeepMind models, for AlphaFold and AlphaZero and so on, worked out. However, for robotics, trying to do this in simulation is extremely, extremely difficult, because the real world is extremely messy, and you're way better off just collecting actual data with an actual physical robot. And so the approach that BitRobot is taking is: hey, let's generate an enormous amount of this data by building the world's largest robotics network, gather all of this data, train the models, and then be able to have models pilot these robots.
And I remember I was watching this documentary about Disney recently, and they were training
these like duck-shaped robots for the theme parks.
And I realized a lot of the training was done in simulated environments.
And the Imagineers, as they call them in Disney, were talking about how when they deploy these robots in the theme park, sometimes they trip over things
and it just doesn't compare to having real on the ground data from the real robots if
you're just using simulated environments to train them.
So when I saw Michael's vision for BitRobot, kind of creating all these different subnets for different types of robots to contribute data to this massive network, it made a lot of sense to me.
And honestly, it reminded me a lot of BitTensor as well,
because that's something that I think
they're trying to do for AI in general.
So kind of taking a step back here,
could you guys walk us through,
what exactly is this BitRobot network?
How does it tie to FrodoBots?
Michael, maybe best for you to start
there and then we can have others chime in too. Yeah, of course. Maybe I can say a little bit
about FrodoBots, and then I think John can talk more about BitRobot. So FrodoBots, like I mentioned just now, we started about three years ago. Honestly, when we started, we started with, I would say, a much more modest goal. We just wanted to create a very cheap sidewalk robot and make it into a fun game. It's kind of like Pokemon Go slash Mario Kart, except you do it in real life. But even at the very beginning, our hunch was that the data collected in this game would be somewhat useful for research, especially for urban navigation tasks. And so, yeah, we just started doing that.
And then, you know, lo and behold, there were actually some gamers who turned up and started paying us to play the game. But like I said, we started collecting the data along the way. And we also started, at the same time, reaching out to some researchers doing research in this space, and we were honestly pleasantly surprised that some of these really world-class researchers were really receptive to the kind of data that we collected. Fast forward to today, we have multiple research collabs with a bunch of labs. For example, last year we did a pretty big robotics competition with folks at DeepMind; actually, in two months, we're doing that again. So I think that's a fair bit of validation on that front already.
But we started, again, with a sidewalk robot. And we started to think that, okay, the same thing that we're doing with our robot, there's no reason why we can't do it for other robotic embodiments: it could be humanoids, could be drones, could be surgical robots, and whatnot. And so we realized there's actually an opportunity to do a really big thing, and to kind of go for it. And maybe, John, you can talk a little bit about how we set this whole thing up.
I guess, yeah, at a high level, the way to think about what BitRobot is being built as
is as a network of networks.
So it's funny, when you work on the edge of a technology space, there's a bunch of spiraling ways that the world may pan out, and it's not necessarily obvious what the research pathways are or what challenges you should be solving for first. The subnet structure gives you the ability to sort of use the DePIN incentives to target specific tasks and specific goals, create mini competitions around those things, and then be able to have that all aligned to the same shared economic structure.
So Michael, as you were mentioning, with FrodoBots, you could think of the sidewalk robot data collection as one form of a subnet: one challenge that you could create a competition around. You could get the DePIN flywheel going to get lots of people contributing.
But you may want another version of that same type of competition for a different domain.
So let's say like a mechanical arm that is grabbing things or folding laundry.
And so, in that way, it gives you the ability to use the same shared economic structure to build up a corpus of data, or a bunch of different models, all building on the same shared economy, without locking yourself in to saying, we have one specific thing that we're betting on as an ecosystem, or a full robotic economy, and then not having the flexibility to pivot as the research pivots or as the needs of the broader space evolve.
So at a high level, the way to think about BitRobot as a total network is that there are different subnets that are formed.
Each subnet is trying to produce some useful robotic result.
That robotic result could be data.
So we sort of mentioned collecting this human tele-op data.
It could be creating models, so specific models that are making use of different data sets inside of the network. It could be actually acquiring physical resources. So you can imagine
a subnet that's just purely trying to acquire humanoid robots that could be rented by other
subnets, but really trying to build out all of these different pockets across the innovation
pipeline that you need in order to connect them inside of this economy. Yeah, and I think it's
really interesting, because bringing it back to the BitTensor analogy, and I don't mean to talk about BitTensor so much, but it's just that a lot of people know BitTensor as a project quite well, whereas BitRobot is a bit newer. So for BitTensor, they have a similar idea, where they have a network of subnets in which people are basically refining or training different neural networks. So is this something that is doable in BitTensor's context?
Or what's the difference between having different subnets to train embodied AI versus just neural networks?
Yeah, I have a perspective on that, Michael. I know you do as well.
I think for me, the way I think about it is if you just think of the logical organization around this,
we're trying to map the structure of our network to the innovation pipeline for robotics.
And so especially as we think about how do we balance incentives, how do we flow resources to the most important contributions, you want that to be along like the full innovation pipeline.
So that especially at the end state, so like creating an embodied AI for robotics, you have an ability to sort of flow that value backwards to everything else in the chain. So at least for us, I think
it makes a lot of sense for why you would want these things to be logically sort of connected
inside of the same unit, but then also not trying to necessarily have debates about like whether we
should be training a chatbot or something like that and using ecosystem resources towards that.
I think it's more about like, how do you focus sort of the incentives of the network against
sort of aligned goals versus having it potentially be in competition with other useful things
that may just be sort of like orthogonal to the stuff that we're trying to build.
I think Juan has a comment as well.
Yeah, so like JV is saying, the goal here is not to figure out one single metric and
then just kind of optimize the entire network around it, but instead to say, hey, there's
a lot of different things that contribute to a robotics network.
There are multiple different ways that parties can go and create significant value for a network like this. And you want to account for what all those ways are, and then measure the sets of groups that are working on them. The subnet model that BitTensor pioneered is a really awesome construction for this. It gives you
a way of creating these clusters of activity that are related in some particular way. Think
of them as, and this is why the word subnet makes a ton of sense, a little network within a larger network. And then you can measure different things that that subnet is doing as contributions into the broader network, and then reward those contributions with tokens. And so this is the kind of fundamental primitive, and we're just taking it a step further. We're inspired by the BitTensor model and extending it into all of the different things that contribute to robotics, the way that JV is describing. So many other things that go into the R&D pipeline for robotics can factor in: certainly things like training data, but also amassing robotics hardware, or coming up with new breakthrough results,
or many different kinds of ways of improving the overall performance of the whole network, can factor in and be weighed by the participants in the network.
Yeah, so I have a very specific thing I want to say about why having different subnets doing seemingly different things in robotics may actually turn out to have a lot of synergy. So one of the biggest robotics conferences every year is this conference called ICRA. Last year it was in Japan, and the best paper was this paper called Open X-Embodiment. If you look at that paper, which was led by the folks at DeepMind, with a huge, huge collaboration from a bunch of university labs, it has nothing to do with a new algorithm or anything like that.
It's fundamentally just a crowdsourcing of data sets from different labs, gathered together. And because the robotics data all comes from different labs, they all have different types of robots, robotic arms or sidewalk robots and whatnot. And it turns out what they found, and subsequently there are a number of papers doing a similar line of research, is that somehow, if you build a model using a very diverse set of data collected from very different bodies, let's say one is from a sidewalk robot and the other is from a robotic arm, you can then train a model to do something else, let's say fly a drone. Literally, there's a paper, I think from Berkeley, that's kind of like that: somehow they're able to train a model to fly a drone even though, obviously, it's never seen data from a drone before. And the intuition, at least the way I think about it, is that ultimately all these robots operate on Earth, under the same gravity; the physics is the same everywhere on Earth. And so there's this concept of positive transfer learning, I guess,
between the different robotic embodiments. So in effect, what we're saying is that it's not just about the quantity of data that you want to gather; the diversity of the data actually matters a lot as well. And in fact, you could argue, and nowadays I would say you have scientific evidence of this as well, that if you have a pretty diverse data set, whether it's from a humanoid frying an egg versus a surgical robot doing surgery on a frog or something, with all this data in aggregate, it's quite possible that the whole is way bigger than the sum of the parts. And so you can imagine the synergy among the subnets: I think over time, as we build up a big network, the whole can be way bigger than just the sum of the parts. So I just wanted to bring that up.

Yeah. And I think one of the big questions that
I always had about BitTensor is: how are the subnets synergistic, right? Because for anyone who's dug around the different BitTensor subnets, they do a whole bunch of very, very different shit. There's one that's doing protein folding; there are some that are training foundational large language models. And I always wondered, okay, how do those two things actually relate? And when you pointed out that paper to me a few weeks ago, it made sense, because it seems like the different forms of data you collect from completely different robots, like drone-flying robots, sidewalk robots, you can actually compose that data into models for completely new form factors, which I don't even know if people are aware of yet. So that's
very cool to see. And I want to talk about kind of the common pitfalls that you might have with
a subnet model, because with BitTensor, I think some of the early learnings are starting to
emerge. So for instance, with BitTensor, for those of you who are not familiar, the subnets
are basically rewarded in TAO token emissions, and the emissions are determined by the price of each subnet's token, right? So the better a subnet token does, the more TAO emissions it gets. So this has led to some unintended
issues with, I think, one of the subnets early on where they basically just created a subnet that
was designed to, I think, pump the subnet token as much as possible without creating kind of
quote unquote real value. So that required the intervention of Bitensor. So for BitRobot,
how do you guys ensure that the contributions of subnets are measured correctly?
And it's not just going to become kind of like a pump.fun of like many, many different subnets
that are just not doing anything except pumping the subnet token prices. Like, how are you guys
thinking about the token design and the incentive alignment? John, you want to talk about this?
Sure, so the way that we've been thinking about this is with what we're calling the BitRobot Senate and Gandalf AI.
If you think of this network structure that we described,
there's a high level network and then there's subnets that are formed.
I think there are the intra-subnet rewards and then the inter-subnet rewards. So intra: inside of each subnet, basically, each subnet owner is defining what we call verifiable robotic work. When they set up the subnet, they're defining when rewards flow in, whether through payment, through stablecoins, or through network emissions; how value is going to flow throughout that sort of mini economy; what the different types of contribution are; and how those contributions are being scored. It also defines who is doing the scoring, so what set of validators and so on. And so that gives you sort of the rules for how each subnet is going to operate. And then
as work is being performed, that work is being pushed into public spaces, and you build up a provenance chain: what are the input data sets being submitted, what are the transformations being run, and what are the output data sets being generated. So you can see who did what work, and how they did the work, and then anyone can verify that. So inside of each subnet, hopefully that part is a little bit easier to grok.
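As an illustration of that provenance idea, here's a small sketch. All of the field names and the hashing scheme are assumptions made for the example, not BitRobot's actual data model; it just shows how output hashes can chain work records together so anyone can check who did what:

```python
import hashlib
import json
from dataclasses import dataclass, asdict, field

@dataclass
class WorkRecord:
    """One unit of 'verifiable robotic work' inside a subnet (hypothetical schema)."""
    contributor: str               # who performed the work
    input_hashes: list[str]        # content hashes of the input data sets
    transformation: str            # ID of the processing that was run
    output_hash: str               # content hash of the produced data set or model

    def record_hash(self) -> str:
        # A deterministic hash of the record lets anyone verify it later.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# A provenance chain is just records whose inputs reference earlier outputs.
raw = WorkRecord("operator_42", [], "teleop_collection", "hash_episode_001")
model = WorkRecord("lab_7", ["hash_episode_001"], "train_nav_policy", "hash_model_v1")
chain = [raw, model]
assert model.input_hashes[0] == raw.output_hash  # the link is checkable by anyone
```

The point of the chain is exactly what's described above: given public records like these, a third party can replay the lineage from raw data to final model.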
External to each subnet: how do subnets get rewarded, if not through direct payments, then through network emissions? This is where we have the BitRobot Senate and Gandalf AI. So different contributors
in the ecosystem, so folks that are participating in different ways, they can basically allocate
extra weight to the votes of individual senators or to Gandalf AI. Senators are drawn from the robotics community and from the crypto community. These are basically folks that are
domain experts on the network itself, who are helping define what weights should be assigned to the different
metrics that we're scoring the whole network on. So as an example, if we think it's particularly
important as an ecosystem to be focusing on the collection of specific types of data sets,
the BitRobot Senate and Gandalf AI are effectively defining what those
metrics should be so that when each subnet is submitting their output data sets, that is what
they're being scored against. So you sort of get this weighted effect where you have subnets creating
some form of an output. There's public network-wide metrics that every subnet is being
scored against. Those metrics are being weighted in their importance by the Senate and Gandalf AI,
and the votes of the Senate and Gandalf AI are being individually weighted by the different
participants in the ecosystem. One important note here is obviously because the Senate itself
is comprised of different experts, there's the risk of corruption and things like that. This is why we've been thinking about this idea of creating an AI that acts as a check on the entire system. So if participants are worried about the influence of any individual senator, or the Senate at large, they can default to an arbitrary non-human entity as a different place to allocate power, to act as sort of a veto. So hopefully that's clear. I've sort of worked from
the bottom up, but I think the idea is that we want a network that's stewarded by experts, but bounded by the delegations of the community itself, and then has a non-human check, so that if there really is a full veto necessary on the entire Senate, there is an option that allows the ecosystem to
sort of steer itself back. Yeah, I just want to say I love the fact that the governance AI is
called Gandalf AI and the robots are called FrodoBots. I'm a big Lord of the Rings fan myself.
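Staying on the mechanism for a second: the two-level weighting described above (participants weight the votes of senators and Gandalf AI, those votes set the metric weights, and subnets are scored against the weighted metrics) can be sketched roughly like this. All the names, numbers, and the linear scoring rule are assumptions for illustration, not BitRobot's actual mechanism:

```python
def metric_weights(senator_votes, delegations):
    """Combine each senator's metric votes, weighted by community delegations."""
    total = sum(delegations.values())
    combined = {}
    for senator, votes in senator_votes.items():
        stake = delegations.get(senator, 0.0) / total
        for metric, w in votes.items():
            combined[metric] = combined.get(metric, 0.0) + stake * w
    return combined

def subnet_rewards(subnet_scores, weights, emission_budget):
    """Score each subnet against the weighted metrics, then split emissions pro rata."""
    raw = {
        name: sum(weights.get(m, 0.0) * s for m, s in scores.items())
        for name, scores in subnet_scores.items()
    }
    total = sum(raw.values()) or 1.0
    return {name: emission_budget * r / total for name, r in raw.items()}

# Hypothetical example: one human senator plus Gandalf AI, two network metrics.
votes = {
    "senator_a":  {"teleop_hours": 0.7, "model_quality": 0.3},
    "gandalf_ai": {"teleop_hours": 0.4, "model_quality": 0.6},
}
delegations = {"senator_a": 100.0, "gandalf_ai": 300.0}  # community-allocated weight
weights = metric_weights(votes, delegations)

scores = {
    "sidewalk_subnet": {"teleop_hours": 0.9, "model_quality": 0.2},
    "arm_subnet":      {"teleop_hours": 0.1, "model_quality": 0.8},
}
rewards = subnet_rewards(scores, weights, emission_budget=1000.0)
# The subnet strong on the more heavily weighted metric earns the larger share.
```

Note how delegating more to Gandalf AI pulls the effective metric weights toward its votes, which is exactly the "power shifting into the AI over time" dynamic discussed next.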
But so is this Gandalf AI a real AI or is this a committee of people that are voting?
No, the actual intent is to build a real AI. It's one of the things that the BitRobot Foundation will be working on.
Juan, I see a hand if you want to hop in.
No, finish your thought. I'll go on.
Oh, yeah, yeah. I mean, I think this is actually a testament to how rapidly AI is improving. Especially while we're early on in the network, I wouldn't expect the AI to be... well, it depends, honestly. I can't really predict where the community is going to go on this. My thought is that, over time, people will actually believe more and more in the AI's opinion, and I think that's going to be a good barometer for how people's views on delegating decisions to AI evolve overall. And there's a pretty reasonable basis for it: even from my own observation of how I use things like ChatGPT, it's increased dramatically, and we have more and more open-weight models that are actually useful. So I think it's a pretty reasonable bet, if we're asking how this network is going to progress over the next however many years, that more and more power is going to be moving into the AI rather than into individual human senators. But in that gap, while we're in the booting phases, I think it's useful to have robotics experts, people who are close to the field, to help steward how we funnel the growth of the ecosystem.

I just had a couple of thoughts here, interspersed with what JV was saying. So maybe
at the beginning on the metrics that the subnets are optimizing and your
question, Jason, about how this fits the kind of subnet model from BitTensor and so on.
A big goal here is to create high synergy between the subnets, right? The blockchains that succeed massively create this collaborative competition environment, where different participants are competing with each other for rewards, but overall their individual contributions end up creating value for everybody. So you align the incentives such that all the parties create an enormous amount of value together that benefits everybody. It's this highly synergistic, collaborative competition structure.
And so what the subnets let us do here is, because we can gauge different kinds of contributions from them, we can tune the metric structure and the rewards to benefit the various different kinds of things that the network needs to do as a whole. A robotics network like what we're describing will require all kinds of things: lots of different types of hardware, lots of teleoperation happening on that hardware to generate a lot of the data, the training of a lot of models, the actual deployment of the hardware in specific locations in the world, and maintenance, and so on. So all of that work requires lots of different
participants and people around the world. And we want to be able to identify and measure and reward
all of those different types of contributions in a highly synergistic way. One idea I've been exploring, and we're not sure we're going to do this yet, is making the rewards superlinear when there's more collaboration. This is a cryptocurrency-incentive type of mechanism where you make it even more likely that parties coordinate and collaborate, rather than the kind of extreme zero-sum competition that a standard Bitcoin model gives you. And then some other thoughts on the AI piece. Look,
we're in a very interesting moment in time in human history where AI models are breaking through
many different intelligence barriers. I think a lot of the world is currently a bit asleep at the wheel about the degree of intelligence these models have.
Already the current models outperform lots of humans
at lots of different cognitive tasks.
And governance of systems is just one of those cognitive tasks. Already today, lots of models are being used in all kinds of decision-making processes in businesses and organizations: as personal advice, as legal work; there are all kinds of LLMs being used to do a lot of paralegal-style work. So plugging an actual AI model into the governance framework of a blockchain is a major innovation here, and a leap whose time we think has come. You couldn't quite do this two years ago, but this year you can actually do it, and it will continue to get better year over year as the models get more intelligent.
One of the key things we have to do here is make sure it's very difficult to game or manipulate these models, but as the models' intelligence improves, that's just going to get easier and easier. So it's a long-term bet that we think is going to produce an enormous return, especially when you think about the problems with large-scale decentralized governance: there's an enormous amount of information that has to be processed and communication that needs to happen, and you end up with massive governance fatigue. All kinds of democratic, decentralized systems incur an enormous human cost in the governance process. But LLMs can cut that down to close to zero: you can have LLM participants take in the goals, values, and preferences of various different people and figure out pathways to optimize between them. So this is something that we're pretty excited to incorporate, and we think it's going to be very, very successful.
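[Editor's note: Juan's superlinear-rewards idea above could look something like this minimal sketch. It is purely illustrative: the `collaboration_reward` function, the `alpha` exponent, and the scoring model are assumptions for exposition, not BitRobot's actual mechanism.]

```python
# Illustrative sketch of superlinear collaboration rewards (hypothetical,
# not BitRobot's mechanism). Each joint contribution is credited to a set
# of collaborators; the group's scores are scaled by group_size**(alpha-1)
# with alpha > 1, so collaborating pays each member more than working solo.

def collaboration_reward(base_scores, alpha=1.5):
    """base_scores: per-member scores for one joint contribution."""
    group_bonus = len(base_scores) ** (alpha - 1)  # 1.0 for a solo contributor
    return [score * group_bonus for score in base_scores]

solo = collaboration_reward([10.0])        # no bonus: [10.0]
team = collaboration_reward([10.0, 10.0])  # each member earns 10 * 2**0.5
```

With `alpha > 1` the total payout grows faster than linearly in group size, which is the property that tilts rational participants toward coordinating rather than competing zero-sum.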
Yeah, and I think a lot of people have governance fatigue just from the past few years of being in crypto, seeing people duke it out in governance forums and things getting nowhere. So maybe introducing AI could be a breath of fresh air there.

And I'm really curious about the DePIN aspect of this, especially from you, Juan, as the creator of Filecoin, one of the largest DePIN networks out there today.
I'm sure there are a lot of lessons from seeing Filecoin, but also other DePIN networks, in terms of bootstrapping a network of stakeholders through token emissions. We've seen this model tried so many times, sometimes to great success, sometimes not so much. So what were some of the key learnings when you guys were designing the tokenomics? Because I know all of you were very heavily involved in writing the white paper. What are the lessons from observing DePIN, and what are some of the things you're going to try to implement?

I'll maybe give you a few
thoughts and then turn it over to Michael and JV. So, look, I've been working on different crypto networks of many kinds for many years. Filecoin is the main one I've been working on and involved in. We were building DePIN before the category was invented, and we created this massive-scale result where we amassed exabytes of storage around the planet and got hundreds of thousands of people involved across hundreds of countries. The kind of crypto incentives you can deploy onto the internet just create this incredibly powerful force to gather resources
and work, and to align it. To your point about lessons: one of the hardest challenges is how you orient these communities to create value together. When you have a very direct financial incentive coupled to some activity, participants will try to game it in various ways. Whether networks succeed or fail comes down to how well they can reward the contributions that create value for everybody, as opposed to the rewards being leached away by overfitting, or by contributions that
are not actually creating a lot of value. Putting it in concrete terms: with something like Bitcoin or Filecoin, where you can have a very direct, cryptographic measurement of a particular activity and reward it, you're in a great spot, because the playing field is very even for everybody. Participants can't really cheat; they just have to do the work.
When you start getting into harder-to-judge, more subjective, more qualitative kinds of measures, and here think of retrospective funding structures in the PGF (public goods funding) landscape, like Optimism's RetroPGF or many other structures similar in spirit to block rewards, you end up with a much harder measurement problem: being able to weigh the contribution value between all these participants.
Now, the benefit, though, is that if you solve it well, you can incentivize a wide variety of activities, well beyond a single type of contribution. Bitcoin is only able to incentivize block-reward mining, adding hashes into the network; Optimism was able to incentivize a wide variety of activity. Again, the challenge is entirely in the measurement: how do you correctly assess what the value is, and how do you rank the contributions
to earn a certain amount of rewards. And this is where I'm pretty excited about the subnet structure, and especially the Senate and Gandalf AI, as a way to really get better and better measurements here. So those are some of the lessons and some of the things I'm pretty excited about. But yeah, maybe turn it over to Michael and then JV.
Yeah, so I just want to say that when we first started this project three years ago, it wasn't a crypto project. Maybe one or two months in, I discovered Helium, and I was just mind-blown. I went from a complete crypto skeptic to, okay, this is something I need to learn from. I basically didn't know anything; I had just discovered Helium. So on the white paper, I only had a theory about how tokenomics should work in these things, but Juan and JV have actual battle scars, so I think the white paper is just so much better because of them. But I do want to point out my observation:
you have different robotic embodiments, right?
For example, we started with a sidewalk robot. One of the key reasons we started with that, because in my view back then, and it turns out to be quite true, is that a tiny sidewalk robot is probably the cheapest robotic embodiment you can build while still having the data it collects be just as useful as data from something, let's say, 100x more expensive. Right now our cheapest model is starting to sell at 150 bucks, so it's literally like a toy, and at that price point it obviously makes itself very, so to speak, DePIN-able. You can imagine people just buying this and deploying it, hopefully. But then, you know, these robots do get pretty expensive.
If you look at a humanoid, a decent one today is at least 50K. Maybe a not-so-performant one is 30K, but once you add the hands, you typically get back to 50K. Obviously the cost is going to drop, but it's still going to be pretty costly. So there's a question about how much DePIN can do when the hardware
is at that cost. But I also want to point out that embodied AI really needs the data collection to be pretty distributed. In theory, some of the bigger labs can build a big warehouse in a safe area, put, let's say, 100 humanoids in it, which is maybe what Tesla is already doing, and collect data that way. But at some point you cannot just do that in a warehouse, because that data is still from a controlled environment; it's not where the robot really needs to be. In a way, that data is a very biased distribution of what we truly need in the real world. You do need to distribute it and have it all over the world. And so I think that's why, eventually, DePIN still makes sense despite the robots being expensive.
I suspect the value of the dataset is going to be so enormous that, even with robots that cost tens of thousands of dollars, you can still make the tokenomics work such that DePIN can do it. Because these things need to be out there in various environments, different homes around the world. Just like sidewalk robots: you can't just drive them around one small square in Manhattan; they need to be all over, in different cities.
Yeah. So maybe JV, if you have anything to add to that.

Yeah, just one little point, and this is maybe a broader DePIN point. If you take a look at something like a Tesla or a Figure, the capital outlay they have to make is dramatically larger. Obviously the cost of manufacturing is less than what they're retailing these things for, but let's call it a couple of tens of thousands of dollars times the number of robots. For centralized companies, the denominator of who bears that capital expense is a single company. And I actually think this is where DePIN has a unique opportunity for capital formation: if you can change that denominator to hundreds of thousands of people, it becomes much more achievable. There's a real scale question there. Either you have a couple of centralized companies with massive balance sheets that are able to build up large amounts of inventory, acquire large amounts of data, and evaluate at reasonable paces, or you figure out how to distribute that capital base. And I think DePIN is actually the best answer for that.
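[Editor's note: as a back-of-the-envelope illustration of that denominator point. All numbers here are hypothetical, chosen only for exposition.]

```python
# Hypothetical numbers: the same fleet capex borne by one balance sheet
# versus spread across a distributed DePIN contributor base.
unit_cost = 20_000            # assumed dollars per robot
fleet_size = 10_000           # assumed robots in the fleet
total_capex = unit_cost * fleet_size          # $200M for a single company

contributors = 100_000        # assumed DePIN participant count
per_contributor = total_capex / contributors  # $2,000 each

print(total_capex, per_contributor)  # 200000000 2000.0
```

The capex doesn't shrink; it's the per-party burden that drops by the size of the contributor base, which is the capital-formation argument JV is making.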
And related to that is this utility question; I don't know if we've talked about it today. If you take a look at Tesla and the self-driving story, one of the interesting parts of the Tesla flywheel is that they've basically distributed their capital base across everyone who's bought a Tesla, because a Tesla that can't drive itself is still useful if you can drive it. That isn't true for robotics: a humanoid that can't really do anything has very limited use cases, so it's hard to imagine how you distribute that capital base. That's also why you see, I think, a lot of focus from Figure on putting theirs inside package-sorting warehouses, places where you can stick a robot that's about as functional as an old man. But for DePIN, especially since you can tap a subset of the population that's more risk-seeking and interested in helping build toward a longer-term outcome, I think there's an opportunity to build something with a distributed capital base that supports the acquisition of all of this hardware and then puts it to work in useful ways. So that, at least for me, is why I think DePIN is uniquely structured to meet the moment for what robotics needs.
Yeah. And a lot of the folks on here who aren't based in the US may not be so familiar with Tesla's Full Self-Driving. I actually just got to experience it at the end of last year, when one of our portfolio company founders was driving me around. He basically told me that when you're enabling Full Self-Driving, obviously for Teslas it's not fully ready yet.
So whenever there's a mistake,
maybe you are almost running into like a traffic cone,
you're supposed to correct it.
Then they turn on the mic in the car and record you verbally explaining what just went wrong: you know, the car should have turned left instead of going straight.
And they take that data to refine the model.
So effectively, you pay them to buy this car, but you're also giving them free data and helping them train the model, which is basically DePIN without the reward element. I thought that was really cool, and it seems to have inspired a lot of DePIN founders as well.

But on the flip side, one of the common critiques, especially from the more speculator, degen crowd, and I know we have a lot of degens in the audience as well, one of the common skepticisms around DePIN is that all the contributors to the network are getting paid in the network token while their costs are denominated in dollars, so they're basically just farming and mining this token and selling it off. Now, that's not too different from Bitcoin, where you have a bunch of miners farming the token, but for Bitcoin there's now some sort of social consensus that it should be worth something, that it's maybe digital gold, a storehold of wealth for some. For DePIN tokens, where does that value come from? How do you prevent the curse of a network expanding and expanding, with a lot more farmers coming on, but all of them basically just farming the token and selling it to zero?
I have some thoughts, and I'm sure Michael and Juan do as well. I think there are a couple of things. Number one, you want a line of sight to what actually funds your physical expenses over time. As Michael mentioned, this is expensive physical infrastructure to maintain; as we try to acquire and fix all of the robots in the fleet, you need to know who the paying customer would be. Part of our core thesis is that as more and more labs go after robotics, paying to rent massive evaluation parallelism is part of that core story. Also, on the data collection side, if you're a DeepMind of the world and don't have access to, I don't know, 10,000 robots that can go pick up cups for you, this might be the fastest way to acquire that dataset, which, again, doesn't exist in the world. You can't just scrape the internet and find a giant corpus of hand-manipulation data. So trying to have a line of sight to who your paying customers are, and where that can come from over the long term, is an important
facet. The other thing for DePIN, obviously, is that inside crypto networks things are very frenetic, with a lot of volatility. That means you can't just wait for an outcome that may only come years from now, so you need an answer for the shorter-term things you can use to bootstrap paying demand early on. Maybe Michael, since that was sort of your core insight with Photobots, and maybe if you even want to talk about UFB, you can talk about how we're thinking about some of that now.
Yeah, so my view is that, in terms of timeline, while we're all very excited about embodied AI and what it could bring, the fact is that today most of this is at least a couple of years away, if not more, depending on the setting you want to put it in. And I just want to add, it's not necessarily bottlenecked only by the embodied AI; hardware in some ways is very much a bottleneck as well. For example, we don't have a touch sensor that's anywhere as good as what humans have. We have touch all over our body, and the best touch sensor, I believe it's from Meta, is really not that great: it's only like a fingertip, and it's nowhere near the dexterity and granularity that a human finger has. So I think we will also be bottlenecked by hardware. So given all of that, right?
So I think in the near term, like Jonathan was saying, there will be some monetization opportunity around the data, because the big research labs will probably want to get hold of some of it; it's just nowhere to be found, it doesn't exist naturally.
But at the same time, and this is kind of what we stumbled onto with the sidewalk robot. In fact, when we first started the project, we wanted to build a sidewalk robot that delivers food. I've driven our sidewalk robot into Starbucks at least 100 times, and I'd say it's successful roughly 90 out of 100 times. But I reckon you probably need to be close to 100 out of 100 for it to actually replace, let's say, the DoorDash guy, and that last 10 could still be some years away. In the meantime, what me and the team realized is that we were fighting each other for the chance to drive the robot, because it's kind of fun to drive the robot. And that's how we stumbled into this gaming use case.
So my theory right now is that while the robots are maybe not yet good enough for real utility, where you replace someone, say a caretaker at home looking after your elderly parents, these robots are all pretty entertaining today. Jonathan mentioned UFB: it's a new robotics game we basically just launched. I don't know whether you guys know the movie Real Steel; just think of a Fight Club for humanoids, except in this case you can remote-control a humanoid somewhere around the world, anywhere you are. You can sit at home in, say, Singapore and control one robot, your buddy in, say, London controls another, and you guys just fight each other, for real, with the humanoids in the real world. We've been testing this with some YouTubers these past couple of days, and all of them are basically just loving it. So I think gaming and entertainment could actually be that so-called, quote-unquote, utility in the meantime.
And in a way, that kind of thing maps quite well to the degen side of crypto. So on one hand, BitRobot is definitely the North Star for us, but it's going to take years to build up, and the outcome, if it's successful, is really going to change the world. In the meantime, I think we can just have fun and do all the meme-coin entertainment stuff with real robots today, so we're also doing that in parallel. My theory is that those efforts with the gaming, with the meme coin, on top of, say, embodied AI agents, if all of that goes well, it will drive attention and eventually still be very beneficial, maybe not directly, but still very beneficial to the ultimate goal of what we're trying to do with BitRobot.
Yeah, and to close this out here, Michael, I know you guys have been working on a lot of different sub-initiatives under BitRobot as well. So BitRobot is this overarching Bittensor-style network of subnets for robotics data, but to create that data and kick things off, you guys have obviously launched your own robot projects as well. One of them is Photobots, which you mentioned, the sidewalk robot you can drive around to deliver coffee to people around the world. The other one you just launched is UFB, the Fight Club for robots you just mentioned. I think it's funny, because Elon Musk, just a few hours ago, suggested on X that they could use Optimus to do Rock 'Em Sock 'Em, which is basically the Fight Club idea. So what else are you guys working on in terms of these sub-projects, and how do you envision them fitting into the BitRobot vision?
Yeah, so I guess we don't have anything concrete to announce today, but we are definitely working in a few categories. There are partnerships with other DePIN projects that have adjacent data or resources that could be really beneficial for embodied AI; you can imagine compute networks, or data that could be quite useful for training robotics. We've got a bunch of partnerships that hopefully we'll be able to announce in the near term. We also have partnerships with straight-up robotics hardware companies. These are Web2, right? They don't care about the crypto angle, but they very much care about whether there are datasets generated using their hardware, and we have a couple of those that hopefully we'll be announcing soon as well. And of course, like I mentioned earlier, we have, I think, more than a thousand research collaborations now. Some of them have actually been going for two to three quarters, and I think some of the papers are going to be
finally published, along with new datasets being released. And I want to say, if we were to shut down as a project tomorrow, I think it would set a bunch of PhDs back by at least half a year to a year. Hopefully in the coming months we'll start to announce all of this.
And the way we think about this is, these are all examples of what subnets could eventually be. You can imagine these as small, pretty well-defined projects that could be done in a couple of months. Once those conclude, the question is: okay, maybe a certain academic paper got published because of that partnership; is this thing worth 1,000x-ing? If we do want to 1,000x it, then obviously we need an incentive to do it; that's the only way. And so naturally, we want those to become subnets on BitRobot. So yeah, I think there will be a bunch of announcements about concrete ideas coming.
Yeah, and I think that's a great note to end this on. I know it's getting late on your side as well, Michael, so thank you for hopping on. Thanks, Juan, thanks, Jonathan, for jumping on, and thanks, everybody, for dialing in as well. If you guys want to check out BitRobot, Photobots, or any of the things they're working on, feel free to follow Michael on Twitter. I'll link it below in the comments.