Thank you. GM, GM, GM, everybody.
While we wait for people to roll in, I just wanted to remind everybody to take this time to retweet this space, share it in your communities, in your Discord, in your Telegrams.
I know there was a bit of a change with the time, so hopefully we can get everybody in here
that wants to listen. And that's what I'm going to do right now. Thank you. Okay.
I got that shared in our community.
I hope everybody took a moment to share this space in your community as well.
We'll still give people a few minutes to roll in.
I see Martin's here, so I'm going to invite him up onto the stage.
Always good to have Martin.
It's been a long day for him already, I bet.
He'll figure out his mic situation here soon enough.
Good to see and hear you all.
Absolutely, good to see and hear you all too. So we can do a soft entry here. Welcome, everybody, to another Keeping Up With the Ecosystem AMA. My name is James, I'm the community manager and host at peaq network,
and today we're joined by Diego from OVER and Martin, of course, our Director of Ecosystem. I can't keep up with your many titles, Martin, so I'll let you introduce yourself at some point. And today I think we have some exciting things to talk about. I saw a new pair on our Machine X DEX, the machine DEX, so I think we're going to be diving into that,
talking a little bit about what that means.
But first, I wanted to give you guys an opportunity
to introduce yourselves in case we have people
in the audience who are new.
So, again, thanks to the peaq ecosystem for having me here, and to Martin specifically; we've been talking a lot during this time.
I am CEO and co-founder of Over the Reality.
And, yeah, we're just joining the peaq ecosystem today. I would say this is the official joining, since we got onto the Machine X DEX, so we're super excited for that. But maybe I can give a brief introduction on what we do, for all the people that don't know us yet. Yes, do it.
So, first of all, OVR is not a new project.
We launched back in December 2020.
And I mean, it's been a very long run up to here.
And it also brings a lot of evolution in what we do.
So, speaking about what we're doing now and what our core proposition is: our vision is to create the largest 3D map of the world.
And so far, depending on how we look,
it's already the largest.
We have 148,000 3D maps of locations.
The other company in the Web2 space that is doing this right now is Niantic.
They claim they have a million.
Actually, on the website, they have around 200,000.
It's not clear, but anyway, we've reached quite an important number there.
And so what are these 3D maps for,
and why do we collect these maps?
So first of all, the way the mapping works, basically,
we have a crowd of community members who, with a simple smartphone, go around the world and take pictures of locations that we turn into 3D maps.
And what we do with these 3D maps maybe is the most important thing.
What we do is three main downstream applications.
The first one, also historically the reason why we started collecting these maps, is basically AR.
These 3D maps allow for the publishers
to have a 3D representation of the location
where they're publishing, but not only that,
allow us to create what is called
a visual positioning system.
So we can anchor content very precisely to space.
The second proposition is the VPS itself.
So we started this visual positioning system
to allow people to publish AR content. Well,
we quickly realized that this capability of understanding where our cameras are in space
at the end of the day can be much more general. So it can not only be used in AR, but for example,
it's important for any device and even robots to actually understand where they are in space.
Because if you think about that, if you want to know where you are in space, you have two options.
You can use GPS, but of course GPS is not so accurate.
And more importantly, it doesn't work indoors and doesn't tell you the bearing, so where you're looking.
This visual positioning system, on the other hand, is a computer vision system that, from a single image, understands where it is in 3D space. So it helps you to orient yourself and understand where you are.
So this is the second proposition.
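To make the idea concrete: in a classic feature-matching formulation (a sketch of the general technique, not OVER's actual implementation), 2D keypoints in a single query image are matched against 3D points of a previously built map, and a perspective-n-point (PnP) solver then recovers the camera's position and bearing. All data below are synthetic placeholders.

```python
import numpy as np
import cv2  # pip install opencv-python

# Illustrative sketch only: estimate where a camera is and where it is looking
# from a single image, given 2D keypoints matched against 3D points of a prior map.

# Hypothetical 3D map points (metres, world frame).
map_points_3d = np.array([
    [0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 6.0],
    [1.0, 1.0, 6.0], [0.5, 0.5, 4.0], [1.5, 0.2, 5.5],
], dtype=np.float64)

# Pinhole intrinsics of the phone camera (assumed known from calibration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# For this demo, synthesise the 2D detections by projecting the map points with
# a "true" pose; in a real pipeline they would come from feature matching.
rvec_true = np.array([[0.05], [-0.02], [0.01]])
tvec_true = np.array([[0.2], [-0.1], [0.5]])
image_points_2d, _ = cv2.projectPoints(map_points_3d, rvec_true, tvec_true, K, None)
image_points_2d = image_points_2d.reshape(-1, 2)

# Perspective-n-Point: recovers rotation (bearing) and translation of the camera
# relative to the map from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(map_points_3d, image_points_2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix = where the camera looks
    camera_position = -R.T @ tvec     # camera centre in world coordinates
    print("estimated camera position (m):", camera_position.ravel())
```

The same geometry explains why GPS alone is not enough: GPS gives a rough position but no bearing, while a single matched image constrains both.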
The third one, that actually is a kind of new one,
it was born kind of six months ago,
is the fact that this dataset is not only used by us to deliver these propositions, but also by many companies, AI companies, that use this data to train foundation models, the models used in robotics to allow these machines to understand and to interact with the space around them. And that's it in a very
short nutshell, I would say.
Yeah, there's a lot and I think a lot that we could talk about.
But that's, yeah, it's really neat.
I think one of the things we've talked about previously is how important this is going to be for the machine economy moving forward.
As we, as a society, start to adopt a more robotic mindset, we're going to need this data.
It's as simple as that. I know personally, I often think of geospatial data in a 2D aspect, essentially from a driving perspective. But when you get into robotics, when you get into these things, 3D is so important.
Yeah, I mean, maybe I can go a little bit further on that. Of course, in robotics there is a lot of hype; everybody talks about robotics and AI applied to robotics. But let's make a distinction here on the kinds of models that power these machines.
So we can have three main categories.
So the first one is machine perception.
So the ability of these agents to actually understand the 3D space.
Then there is simulation.
Very often, these agents, these robots, are actually trained in virtual environments before going into the real world.
This introduces a problem that is called the sim-to-real gap.
Because if you train a robotic system in a fully,
let's say, simulated environment,
maybe you achieve very good results in the simulation,
but then when you go to the real world,
with all of these complexities,
the system basically breaks down.
And so right now there is a new cohort of models
called world models, like Genie 2,
that has been released lately by DeepMind,
that basically generate not only game-like worlds, but realistic worlds.
And then finally, there is a third family of, let's say, models called VLAs, or vision-language-action models,
that actually put it all together in some way.
So they enable these agents to plan through an LLM-like structure and actually interact with the world by planning actions and so on. So our datasets can be used to
train all three of these model families, but specifically right now what we're working on is the first two. So the companies that we're working with are working on these two.
So the machine perception, so the ability of these machines to actually turn 2D images,
so what they get from their sensor, to a 3D understanding of the space that is around them.
And this includes reconstruction, like what we do. If you think about our own sensory system: when we look around, we just glance at the room around us, and we immediately turn what is, at the end of the day, a binocular view into a rich 3D representation and understanding of what is around us.
And so because of that, we can actually move and interact with the space.
Not only that, we are also able, and machines need to be able, to estimate also distances
and how big things are from this limited data that they have, and also to understand what kinds of objects are in the space, which is called segmentation.
So it turns out that to train these kinds of models, the data used is exactly the data that we are collecting for the visual positioning system.
So multi-view images collected from multiple points of view
of locations and objects.
So this is the first, let's say, machine perception stack.
The second area where this data is used is world models.
I was citing Genie 2 from DeepMind before, but there are other important companies, like, for example, World Labs, founded by Fei-Fei Li, the godmother of visual AI, that are working exactly on this stack: creating models that not only can generate videos,
but can generate coherent 3D worlds.
And these coherent 3D worlds can be the playground
for machines to train before they go in the actual real world.
But yeah, I mean, I will stop here.
Otherwise, I'm going to annoy everybody.
But if you have another question, I'm happy to, yeah.
No, no, it's not annoying at all.
I think when we met first, it was actually a long, long time ago. That was probably before you guys were categorizing yourselves this way. And this is something, one sentence, that really stuck with me, I don't even know who said it. But when we were speaking back then, three or four years ago, about metaverses and augmented realities and the real world moving into digital environments, the baseline assumption actually was that the metaverse is something that people move into, right? And so, correct me if I'm wrong, but the way I always looked at OVR back then was: you are basically providing the real-world imagery for digital environments, to be able to build these digital environments as realistically as possible.
And now what we're seeing, and I think we saw this coming a long time ago at peaq, because we've been building towards this holistic vision of the machine economy.
Now we're seeing that this metaverse
is actually becoming a reality,
but not the way most people expect it.
So instead of the physical world
moving into digital spaces,
what we're seeing now is that
digital models are moving into the physical world.
AI moves into the physical world.
And now, the way I try to look at this terminology of the metaverse is that the metaverse is not actually the digital environment itself; it's providing a copy of the physical world in a digital environment so that physical agents can actually move through it.
And this is very interesting
because it is kind of counteracting
the initial assumption of how the metaverse would evolve.
And I mean, of course, there will be digital twins
and all of these things that are happening
in the non-physical realm, if you want to call it this way.
But now you see that what OVER has been building from day one has actually been a DePIN: collecting the necessary information, the necessary data, at scale, incentivizing people to provide this value, to train models not for people to navigate the digital world but the actual physical world.
We've been in touch for a while now, I think probably around a year or so, speaking and exchanging about this vision. When I looked into you guys, I understood: okay, it's not just a DePIN that is collecting data, you have a deep understanding of what to do with that data, and this is one thing that many data-related projects underestimate, right?
Like, we know from Web2 that data business models are very challenging.
And you need to actually have a very decent understanding of how you prepare the data, how you prepare the models, so that they actually become consumable and interesting.
Maybe you can say a bit about that as well.
And thanks for bringing that up. Maybe I will also lean a little bit on the history and how we got here. Because you made it an amazing story of how this happened, but I will go a little bit chronologically through how it really happened. And I totally agree with your framing.
But yeah, at the end of the day, going back to 2021 when we launched, we were categorized as a metaverse, even if the metaverse is actually more of a VR world. But anyway, we were doing AR and selling digital lands, which we still sell, of course, so that people can have a spatial domain.
So the way the system works in the AR part is that basically we sell these 300-square-meter parcels. And if you own one of these parcels, you can publish in that location,
because these parcels are basically connected to the physical world.
But actually, this initial feature posed us a problem. So we said: okay, now we are
selling people parcels to publish AR content in that location. How are we going to connect 3D assets
and AR experiences to that location? And so the answer was: okay, at the beginning we will use GPS, which is good enough to start with. But then we quickly realized that it was not enough to have a rich experience and AR content that is really connected to the space.
And especially, it doesn't work indoors.
That, of course, is a big problem.
So because of that, basically, we were forced, let's say, to build another system, this visual positioning system, because this was part of the vision.
But now looking at this in hindsight,
of course, when you want to solve the problem of AR,
of enabling a smartphone or a smart glass
to understand where it is in space,
you're actually also solving a broader problem, because robots and machines need to understand where they are in space too.
So, fast forward, we built the pipeline to deliver the VPS and create the 3D maps, because, again, this visual positioning system doesn't work without you first mapping the location, at least for the moment.
Maybe I will talk about that later.
But since basically we had to generate these maps,
these maps basically became like a pivot
in our vision of what we're building.
So at the beginning, these 3D maps were just a tool
to actually enable AR content.
But then, I mean, the program of mapping the world
became so successful, and the size of this data set became so large,
that basically we understood we could take this value proposition of mapping locations and broaden what we are doing.
So because of that, basically, the 3D maps became the central part of our value proposition, so generating these 3D maps.
And then under this we have the three value propositions that I was mentioning. So AR is still something that we are actively building; robotics, so basically providing datasets to train the models; and also the VPS API itself, so we
provide a visual positioning system for companies that want to use that.
Coming back to the other question, and so just to close this, of course, this is under a broader
hat, I would say. So it's not just AR or the metaverse or VPS anymore. I would say that this goes
under the category of physical AI. So enabling the machines to understand the space,
and machines can be robots, can be smartphones, can be smart glasses,
but at the end of the day, this is the concept.
Enabling a computer to understand the 3D space around them to do different tasks.
Maybe you're a robot, you need to interact with it.
Maybe you are a smartphone, you need to project stuff.
But at the end of the day, the core idea there is that you need to understand the 3D space. And coming back to your second question, or maybe the first, regarding these datasets and how you prepare them and so on: yes,
this is a very challenging task, especially because we're talking about a very, very large amount of data. In our case, it's 80 terabytes of data, 74 million images at the moment.
And of course, it's not that you just go out and say, okay, guys, please buy my data.
No, you need really to prepare this data, to segment this data, and to host it also.
For example, of course, it really depends on the acquirer of this data.
But if you sell the data to somebody that is training 3D geometric models, so models that try to understand the 3D space,
maybe you can just say, okay, this is the number of datasets that we will give you, and that's it.
But if you have some more specific requests, for example, I want to train a model that can navigate inside of buildings,
or especially out in nature, then I will want specifically the datasets that were taken in those locations.
Or you have other people that train simpler models, but very efficient.
They want to recognize some specific objects.
So you need to be able to extract from all your data set exactly what they want.
So this, I mean, theoretically, it seems easy.
OK, you have classification algorithms and so on.
But the problem is that at very large scale, actually running these algorithms becomes very challenging.
To give you an idea: since we have around 72 million images, if you run a YOLO classifier to classify the objects inside the pictures, and it takes 0.1 seconds per image to do this classification, you clearly understand that it will take months of compute on a single machine to actually do this job.
So this is a challenge, but still, it's also
the added value that can be given over there.
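For what it's worth, the arithmetic behind that estimate can be checked quickly; the dataset size and per-image latency below are just the approximate figures quoted above.

```python
# Back-of-the-envelope: how long a single machine would need to run a
# YOLO-style classifier over the whole image dataset.
images = 72_000_000        # approximate number of images in the dataset
seconds_per_image = 0.1    # assumed single-image inference time

total_seconds = images * seconds_per_image     # 7,200,000 seconds
total_days = total_seconds / (60 * 60 * 24)    # roughly 83 days
print(f"~{total_days:.0f} days of compute on one machine")
# Parallelising across N workers divides this roughly by N, which is why
# curating a dataset at this scale is a real engineering task in itself.
```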
And then beyond that, right? This is where, when we also started speaking, it was one of those experiences. I mean, obviously, I'm speaking with many, many projects, on an annual basis it's probably close to a thousand projects, but this was one of those scenarios where it immediately clicked and immediately made sense on a personal level, which is of course always important, but most importantly also in terms of the long-term fit for bringing this machine economy to life. And this is where maybe it makes sense to speak a bit about the DePIN component of that, right? For the last two decades, I think you've been hearing this sentence everywhere in the world: data is the new gold. At some point, you can't even repeat it anymore because it becomes annoying. But there is a lot of truth to it. And I think, if we look at it historically, in Web2 specifically, in relation to any digital business model, right?
That has always been the case.
Like data, personal data specifically,
is what the big social media platforms are all running on.
And now, with this transformation towards physical AI, towards real machines navigating the real world, the value of data has been significantly amplified. Because suddenly it is not only a tool to understand which consumer groups are buying which product, or which fashion label is attractive in which geography, or which algorithm helps my social network grow the quickest. Now suddenly it has actual, real utility that goes beyond, let's say, consumer and financial market psychology. Now it is becoming a crucial underlying asset for any sort of machine. And when we think about the DePIN component in that, it's a crucial element to be able to outsource this, let's say, visual capacity and incentivize people to benefit from and monetize this massive asset that they are all sitting on. I mean, everybody is with their phone all the time.
And it literally costs you nothing to just take a couple of photos, upload them,
and then get the respective rewards for contributing not only to this network,
but also to this tremendous economic opportunity.
Maybe, Diego, could you just briefly walk us through,
like, how does an everyday user use your application?
Just give us, like, a very quick, simple walkthrough.
So, by the way, we published just a few weeks ago
the new application that is kind of very simplified.
There are two main actions that you can do with our app: the first one is mapping, and the second one is publishing or browsing AR content.
But focusing on the mapping side, so on the DePIN side,
basically the way it works,
so the mapping activity is a permissionless activity,
so everybody can map anything in the world they like.
But if users map the locations that are incentivized, so the locations whose data we want to acquire, we will give them a reward in the OVR token
in exchange for their time and effort.
And the way it works: you just open the app, and in front of you you have the map of the world, divided into hexagons, the same hexagons that are basically used as a reference for publishing in our system.
And basically you can see the prizes, or the rewards, that we distribute for mapping a specific location.
The reward ranges between $0.90 and $3, depending on how hot the location is.
The way we define how hot a location is, is a little bit complex, but basically it's based on two main signals.
One is the wisdom of the crowds, the people that bought lands for publishing on our platform: 30,000 people that bought more than 870,000 locations. The collective behavior of these people basically gives a very strong signal as to what the important locations in the world are. Plus, we added some specific categories of locations extracted from OpenStreetMap, and so we have around 11 million locations in the world that are incentivized. And so it's very,
very likely that if you just open the phone, you will have some around you. The mapping process itself is like taking a video. You have a guide that basically helps you to do the collection with an AR user interface. And basically, you take a video for around three to five minutes, it really depends.
And on the back end, basically, we
collect between 400 and 1,000 pictures.
Not only that, we also collect all of the data that the sensors of the phone generate while you're mapping, because this helps us to enrich the datasets. Basically, we can do metric scaling and many other things to give even more value and usefulness to this dataset. That's it for the way it works. I just want to add a few things regarding the remuneration, and actually the difference between what we do, thanks to this DePIN framework, and what Web2 companies do. So at the beginning I was talking about Niantic and how they
also have this very large dataset, but there is quite an important difference between our dataset and theirs. The way Niantic, and usually Web2 companies, operate is that they just extract data from the user and don't give anything back for it. Or, let's say, they give them the privilege of using a product, but basically without making clear that in exchange they will take data away from the user. So that's the way it works for Niantic. They have been collecting data for years, basically from Pokemon Go players: people that were doing another activity, who didn't have the intent to map locations, but they still took all the data. But this approach, which is probably very good economically,
also has a weakness, I would say, because people that play Pokemon Go don't have the intent to map. And so the quality of the data that they collect suffers, because creating a correct map requires following a specific path in the location. And since they're not giving any incentive and are just collecting this data without the user knowing, the quality of the dataset that they generate is usually a bit lower, or even much lower in many, many cases.
So the DePIN framework is not only important because it's a fairer and clearer system that gives rewards directly to the people who contribute to an ecosystem, but it's also very useful because it enables you to enforce a minimum quality standard.
Because again, also in our case, if the map
doesn't have the minimal quality requirements,
so if somebody tries to just cheat the system, or just runs around taking a lot of unfocused pictures and so on, we are not going to reward them for that. Nobody is going to buy a dataset full of corrupted data.
So this might seem just a detail,
but actually, when you use this data to train models
or to perform VPS, this becomes very, very important.
It's useless to have data that is not high quality, because you're not going to be able to extract anything from it.
I mean, it's like garbage in, garbage out
that everybody knows in AI.
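Purely as an illustration of what such a quality gate could look like (an assumption, not OVER's actual validation logic), a simple check might reject obviously unfocused frames before a submission is considered for rewards; the threshold and function names below are made up for the sketch.

```python
import cv2  # pip install opencv-python
import numpy as np

# Hypothetical quality gate: reject blurred/unfocused frames before a mapping
# submission is considered for rewards. Threshold value is illustrative only.
BLUR_THRESHOLD = 100.0  # variance of the Laplacian; lower means blurrier

def frame_is_sharp(frame_bgr: np.ndarray, threshold: float = BLUR_THRESHOLD) -> bool:
    """Return True if the frame's focus measure exceeds the threshold."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold

def acceptable_submission(frames: list[np.ndarray], min_sharp_ratio: float = 0.8) -> bool:
    """Accept a capture only if most of its frames pass the sharpness check."""
    if not frames:
        return False
    sharp = sum(frame_is_sharp(f) for f in frames)
    return sharp / len(frames) >= min_sharp_ratio
```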
And this is the beauty, right?
Because you can incentivize people to do so,
you basically get a much higher, let's say,
data quality input from the very beginning.
James, do you have any specific questions?
Sorry, I captured it a bit.
It's loud and clear. Yeah, maybe just one small thing
that I could add also on this data capturing process.
So I was mentioning the numbers before, which are quite significant: we're very close to 150,000 maps. Right now, we're generating between 8,000 and 10,000 per month. We've been growing around 30% month over month over the last year in terms of the number of maps that we generate.
But at the same time, we think that with this kind of technique
of capturing data, we are going to reach a plateau very soon
in terms of the amount that we can generate monthly.
And so actually this month, probably next week,
we are going to have a new feature
that will be an additional step function in the amount of data that we can generate. Basically, we will allow users to connect 360 cameras to our app.
So at the beginning, we support the Insta360 X4 and X5.
We will have the GoPro quite soon.
But basically, from the tests that we have been running over the last month, with the same amount of effort in terms of the time the user spends using the app, we can generate 20 times the amount of data that we generate right now.
So this will be an additional very, very important catalyst to the services that we can offer thanks to this data. Of course, this will mean a wider
coverage for the visual positioning system, but also it will mean much more data to train models.
And specifically, these 360 captures are exactly the data that are used, for example, for training
world models, so these generative models that can generate realistic 3D worlds that machines can be trained in.
So we are very excited about that and also
to bring your community on board with this new mapping technique.
I still have dozens of topics I'd like to speak about, but I'm sure James has captured some ideas that he wants to get out.
Oh, I'm just enjoying this conversation.
I think it's really important to dive deep into these topics and really understand the importance of these DePINs.
One of the topics that we did want to cover is the launch of the pair; I'm looking at it right now on Machine X. So all I would say is continue, because I'm just fascinated sitting here listening and I don't want to interrupt too much. Work your way into it somehow over the next half hour. All right.
Yeah, before we go there: obviously that's a big step. I mean, Machine X is the machine economy DEX; it's still in its early stages, but it's a crucial component of the peaq ecosystem, and it's going to be different from any other DEX in the future. And maybe, obviously I'm biased about how great the ecosystem is, given my role, but tell me a bit about why you wanted to, why you were so eager to collaborate with peaq. I think that is always very interesting to hear, for the outside world, for your community, but maybe also for our community: what was it that excited you about the prospect of basically going multi-chain and really building this very, very close relationship with us?
Absolutely, absolutely, Martin. The main reason to actually take this step is that we really see an alignment in vision between what we are doing and what you are building as an infrastructure.
So to us, this DePIN framework, I don't know whether to call it a framework, a narrative, or anyway, an infrastructure, is really something that probably is not yet totally understood by the market.
narrative or anyway, infrastructure is really something that probably is not yet totally understood by the market.
But we really believe that this is going to be a very, very important narrative
and also piece of infrastructure in the future. With Deepin, we can create services and collect data that would not be possible to be collected by single companies, for example. the deep in, I mean, deep in chain and bringing together all of these projects with ShareVision
really resonated loudly with us.
We said we need to be there.
This is where things are happening and where many of the projects that we believe in are building. And so, for us, it was kind of a no-brainer to go there.
But I would like to add not only that, I mean, this is a little bit more personal, but
I think, you know, I've been in this space for quite a long time and I met many people from
this space. But I think that the support we received from the people at peaq is unmatched by our experience with any other chain.
So I guess these two signals together really convinced us to move as fast as we can
to actually be an integral part of this ecosystem.
And to anyone listening, that was not staged.
Thank you very much for the flowers. No, no, but that's the truth.
that's the truth no i can only also give it back right like i think the the this the the challenging
part of what we all do is that when in a in a space that is that is so so incredibly loud and so incredibly noisy um building meaningful stuff you can't achieve it
alone right and um whenever you find a project and it's so many of them and this is i think what is
so so special about this about this ecosystem is um when whenever you you meet these people that
dedicate their entire lives to building something of value, right?
And that focus on doing the right things and doing them right.
And that have the resilience also to go through cycles
and that have the resilience to go through hype cycles,
even when your special or like your individual narrative or your individual positioning
affected by by it because just other topics are trending um it immediately clicks and it
immediately matches and the thing is that this ecosystem is is incredibly synergetic and when
we speak about deepen and about the machine economy the way we envision it right like i mean
you probably also heard this and we discussed this as well, Diego, right? Like we're talking about democratizing the abundance of the machine
economy. This is not just like a catchy term, right? What it means is that we need to understand
that we need to rebuild our existing systems. There's multiple reasons for this. And this is
probably not the essence of this conversation today, but the machine economy cannot run on Web2 Rails.
It's technically impossible, right?
You need a dedicated stack in order to make it work.
You need a dedicated stack and set of standards
in order to make it work.
And it needs to be mutually beneficial and inclusive.
And when we speak about DePIN and when we speak also about OVER,
I mean, I just have the map here on my phone, right?
Like I'm sitting here in a small town in Germany, right?
And then I'm seeing these different sites
that I can map, which is, by the way,
I mean, if anyone ever wanted to see
some touristic sites in their own city,
you might also use the OVER app for that,
because I had no idea that stuff existed.
But think about emerging markets, which are often cut off from critical infrastructure
like mobility, like connectivity,
where the economic incentive is not there
for enterprises to go there
and actually establish the infrastructure,
let alone source the data
from these local environments, right?
And this is exactly where DePIN comes in. DePIN, to me, is almost like the first time that we as a civilization have found an economic mechanism that allows sustainable business models to be more cost-efficient than the existing infrastructure.
And when we've been speaking, I've never been a fan, and that's maybe me giving more away than I should, but I've never been a fan of big subsidies when it comes to supporting certain elements. I'm a free-markets guy. I think the market always balances things out. But at the same time, there is a necessity for sustainable business models, not only when it comes to green energy and green innovation, but also when it comes to reaching certain geographies, people that are marginalized simply because of the fact that they live in a region that is not economically attractive. And this is a fundamental problem that DePIN solves.
By basically making the people part of the infrastructure, you're reducing the capital expenses
because you as a centralized business, you know, don't need to go somewhere and provide hardware.
Take the OVER example, right? If you want to have certain imagery data, you do not have to go there yourself, hire five to ten people, equip them with cameras so they map the stuff out, and then pay them on a monthly basis. And if the data doesn't immediately bring in the monetary value, you have to let them all go again and you basically wreck your balance sheet.
Just the capital expenses and the operational cost for these types of services in certain parts of the world don't match the economic value. But DePIN is the first time where, in many of these cases, you actually bring the operational cost for, let's say, an enterprise to zero, because you're incentivizing people to do the work for you. And the good thing is they do not work for you as a company; they basically work for themselves, but they give you the exact type of asset, be it the data or the service, that you are actually seeking. And me going through this map here right now is a perfect example of that. Like I just said, I can map things here in my region; nobody would ever tell me to take a photo of this, but here I can actually create value by just walking around. And then, thinking of other markets, not Germany but other places in the world, it becomes even more relevant.
I know, absolutely, Martin, I cannot put it better than you did.
But what I could add is that actually this is a very interesting moment in time. And I think this is the perfect time for DePIN business models, especially for DePINs connected to data and AI.
And just to give a real example of this.
So actually, I have to say that sometimes we change our minds.
And I have to say that I changed my mind quite strongly six months ago
because actually our thesis at the beginning was that we would never be able to actually sell this data.
But this data would just be for us to build high-value services on top of. And that's because, of course, it all started with AR, but we also built the VPS and so on. So the real thesis that we had was: yes, of course, everybody says that data is cool, data is valuable, but then how do you monetize it?
But then something happened.
I mean, everybody knows what happened with LLMs basically two and a half years ago, which brought AI to the main stage.
But actually in the last year, this craze for large language models actually expanded
to this new cohort of models, to vision models.
I was doing some research lately, and the number of papers being published on vision models has been growing around 40% year over year for the last three years. And this last year has been exploding.
So really, we are at the moment in time where this data has become a valuable asset in itself, because there is this intersection of AI models and robotics. And basically, AI models are probably the enablers of robotics, because again, the
robotics revolution is not about building better motors to move these robots.
It's about giving them the intelligence to do that.
And so really, we are at a moment in time where this, let's call it, business model of data has become possible.
And this is only going to accelerate in the next year.
So I think that really we are at the beginning of something very big on this aspect.
And this is exactly what I was alluding to, right?
Only with DePIN can you actually get the scale of data needed in order to fuel the intelligence these physical bodies need. There's no other way to capture data at scale than DePIN. Plus, you can get this relevant data anywhere in the world. And so having autonomous cars at some point that drive around places that we call developing markets, it would not happen on Web2 rails.
Nobody would be incentivized to actually capture this.
But now there's also a bit of this concept that we've discussed in the context of the machine economy, in the context of what we see as smart cities: this concept of taking consumers of city infrastructure and turning them into prosumers by providing the necessary input that their authorities actually need in order to improve their infrastructure and their surroundings. In the era of robotics, this is magnified probably by 1,000x.
And this is where this moment is so exciting, right?
And again, just to stress this once more: we are in this AI era because of LLMs, that is of course clear, and that then pushed all of the research in AI. But LLMs exist only because the internet exists, because collectively, as a civilization, we built this tremendous amount of text that is out there, available, and can be taken by anybody to train these LLMs.
This is not the case for machines. There is no publicly
available, large enough dataset to train the future of robotics. And that's why DePIN is at the perfect moment in time and space to actually catch this opportunity. So really, I cannot be more excited about this space.
And now you're on Machine X.
I mean, we've been talking about that a few times.
And I think that, I mean, this is, again,
very important for our integration.
And also, I guess that this would be also a very important part of our collaboration with the Get Real campaign.
So, again, the way it works in our platform, we distribute rewards with OVR.
Now, basically, this OVR you can also withdraw on peaq. But not only that, this is just the beginning of a wider integration that we will have with the peaq ecosystem. So a first step, but a very important step, I would say.
Oh, absolutely,
and there are so many streams. Obviously we cannot speak about everything, because otherwise we would give away stuff that is premature, but there are a lot of things that we're working on together. You already hinted at it: the Get Real campaign is coming soon, and we're super happy to have you guys in the campaign as well. I think it's one of these fantastic cases that will make this campaign what it's supposed to be, something special in the history of the crypto industry. I'm excited to see this happening very soon, but I will keep it at that.
Welcome, I think now officially, right?
Basically, a couple of weeks ago you announced that you're moving, or building a second house, right? Now you've moved into the second house officially. So very nice to be working together, really enjoying it. And with that, maybe, James, I give it to you; maybe you have one or two things to add before I conclude with my monologue here.
Yeah, no, I think it's really fascinating. I can only repeat what I said a few minutes ago, which is, you know, understanding the depth and the importance of the data that's being collected, and how this really feels so interconnected with the machine
economy, with what we are doing, with what we are hoping to see.
I think that it's really important that everybody does take the time to understand what OVR is doing.
Download the app, participate.
I've been using the app for some time now, collecting my data and submitting that as well.
So I think it's something that we should all take advantage of, especially given, you know, I think what you just mentioned with the upcoming campaigns, it's a good opportunity to get to understand the system, contribute that data and get ready. Warm up.
People ask me all the time, you know: when Get Real, when season two, what do I need to do? What you need to do is become familiar with these applications; you need to become part of the machine economy yourself, so that when the campaign starts there's no change to your daily routine, because you've already gotten into that routine.
Exactly, stretch. We all need to stretch to be ready. It's going to be very exciting. There is a lot of stuff coming. I mean, maybe some people, and rightfully so, will say: hey, it's been a bit silent recently, it's been a bit less active around peaq, what is going on? Rest assured, as we always say, we are cooking.
And the best dishes often take some time, right?
So the next months are going to be very, very exciting.
And yeah, at the end of the day, I always like to say it: the soul of peaq is the ecosystem. And the more great, fantastic projects like OVER we have, the more we also make this vision tangible and clear. And obviously it's not only us that realize this. Many of you guys have heard of the Machine Economy Free Zone; we are working very, very closely with some of the most important representatives in the UAE to bring this vision to life.
There is a lot happening in that context that will be, of course, of benefit to you as the community members, to peaq as a project, but ultimately also to its ecosystem. So a lot is happening on that side as well. And we are speaking to so many people, robotics companies, some of the biggest in the world, and it becomes very clear that what projects like OVER are doing is not just a good idea that still needs to be verified. There is a lot of meat on the bone and there's a big, big meaning. And with that, I can only say congratulations for
what you guys are building
for the resilience, for being around so long, and yeah, super excited to do this journey together.
No, thanks, thanks a lot again for the opportunity, and yeah, let's do it again whenever you want. It's super exciting, and I'm really looking forward to starting to work with the peaq community. I guess that now, finally, we're able to give the message that we are here, and basically we can, I guess, onboard even more people into the ecosystem. I'm really looking forward to seeing all this progress.
But I'm really, really excited.
So thanks again for all the support
and the amazing network that you're building.
James, do you want to close it?
I will, because it's been a great conversation.
thank you to both of you for taking the time.
I know how busy you guys are,
but it's always a pleasure
to listen to these conversations,
to be a part of these conversations.
It's such a privilege to understand how, you know, it's cliché to say, but even with OVER,
You guys have been around for a long time, but we're still so early in this.
And we're going to look back at these times in years to come and understand what the
impact really was that we were making.
So I feel very privileged to have the opportunity to listen to you guys talk.
And I think everybody in the audience, I'm privileged that you take the time out of your
day to join us and to listen to us, sometimes just wax philosophical about these amazing
ideas. But I could listen to Martin's monologues all day.
I don't know about you guys.
Just to echo this also from my side, right?
Like you guys, everyone listening,
this is all not possible without you, right?
Like what we're doing is we're building a movement.
We're building something that is so, so, so, so important.
I said it in a radio show recently, where I was asked a question that I've never been asked before, which is: if you think about the future of the machine economy, and if you think about your kids' role in this new world, what is it that you have top of mind? And I don't know how I came up with this, but this is what I felt in that moment, and I said: I want to live in a future where our children, and we as humans, are not redundant as a species. And this is exactly what we are doing. This is what we are doing with peaq.
This is what Over is doing.
And this is what the Peak ecosystem is doing at the end of the day.
It's kind of claiming back ownership and creating value for all stakeholders involved. And this is a big, big honor for each and every one of us, and specifically for you guys using the DePINs, being part of the ecosystem, understanding the impact that this will eventually have economically and, of course, on our lives. So kudos to you all and a big, big thank you.
Okay, so we're going to have to wrap it up because we are out of time,
but all the thanks have been said.
For anybody who wants to learn more about Over, please visit their website at overthereality.ai.
You can also follow Diego and Martin on Twitter to get lots of insight and information there.
And with that, I'm sure we're going to be doing this again with you, Diego, because these are always amazing conversations,
and I'm really looking forward to seeing how the project folds into the ecosystem
and how the ecosystem folds over and surrounds you guys.
So I'm really looking forward to the future.
Thanks to everybody. It was a pleasure.
Yeah, and if anybody's free in a couple of hours,
there's another space upcoming I wanted to promote.
Yam is having a space talking about how they're going to be changing
the future of gaming via the peaq network and the peaq ecosystem.
So one to jump into in a couple of hours if you're interested with that.
But with that said, thank you very much for everybody.
Once again, I know we say that a lot, but we really do mean it,
for taking the time to join us today.
We will talk to you again very soon. Wherever you are in the world, I hope you have an amazing morning, afternoon, or evening.