All right, welcome everybody.
We seem to have had a pretty eventful last seven days.
Yeah, I saw people getting really upset about the GPT-5 launch on Reddit.
I haven't had any issues with it, but I'm being told, or I guess I'm reading, that it doesn't have as much personality as GPT-4o, and some people feel like they've spent years at this point getting GPT to be their personal friend or assistant, and that all went out the door. Did you have that experience?
To be honest, I didn't treat GPT-4 as a personal friend; it was more a tool that would work and do stuff. So, using
GPT-5, it seems a little smarter, a bit more depth, a bit more research. I didn't notice that much difference, other than it seemed a little more competent, you could do more stuff with it, and it seemed to think longer. They tweaked some of the UI, but yeah, there was some backlash from people going, oh, we need GPT-4o back. Gartner's group has this hype cycle, and it felt like we hit that trough of disillusionment really fast. But there were specific issues.
I mean, people had said that, in terms of coding tests, it really sucked and that 4o was much, much better.
But the weird thing is its code analysis was much, much better.
So, I guess from me, from a marketing perspective, I'd give them a B-minus.
It seemed like they had problems with it, a bunch of problems and things cropping up, and they really didn't get ahead of the curve in terms of highlighting all the good stuff around it. They kind of got sidelined, and then the competitors came in. You saw Grok 4, and they were trying to seize the moment: we're more open, we're more reliable, all this other stuff. So it kind of came across really ham-fisted. I mean, GPT-5 is great.
Is it a really big revolutionary change? I don't think so. I mean, what do you think?
Yeah, so I'm the same as you. It completes daily tasks for me and I work through problems, so I honestly haven't noticed much of a difference. And I also think it was a bit odd that they removed the previous models. I think in the past they haven't done that, or at least they've left the previous version up. In this case, they took out everything.
And I also think, I don't know, when GPT-4 came out, were there similar complaints? Is this just people complaining in the beginning?
I don't remember that. I think they were too overly confident. Maybe internally people were drinking too much of the Kool-Aid, going, oh, this is the best thing we've ever done, blah blah blah, people won't need 4o, let's just close it up. They were just massively overconfident. They never tested the market.
They never talked to people.
And so I think the rollout speed really exposed the weaknesses
of their understanding of the marketplace really before the benefits
could dominate the narrative.
You know, I think that's the biggest problem.
You know, and the trouble is that user trust is really volatile
when you do high-stakes model shifting. They should have had more consideration; they probably could have done a better job of testing and getting audience feedback. Like I said, it seemed a B-minus launch to me. But from a personal aspect, yeah,
it seems to be a bit better, lots of good stuff in there, so things maybe will settle down.
Yeah, I think so too. I think in the beginning it's like this, but people end up moving on. Oh, we've got Ryan's hand up, and then I want to introduce David.
Yeah, I was just going to say, I've been playing with GPT-5, and I actually switched to Grok. GPT-5 is so slow. I wasted so much time on Sunday trying to do some development stuff and trying to review some code.
I felt like I wasted hours waiting for prompts to render.
And I switched over to Grok, and it debugged everything, and all the prompts were coming through within seconds.
And normally, I don't like using Grok.
I feel like it's way too wordy, and, I don't know, I felt like GPT used to be so much more refined. But I switched.
That's really interesting.
I've tried using it, and I think maybe for coding and other tasks that I'm not trying to complete, Grok is better.
I have not had as good of an experience with Grok as I have had with GPT.
And I don't know if it's because of what I'm trying to complete.
I find that it hallucinates more than GPT does.
And I find that I just get answers to questions: not the wrong answers, but kind of a side answer to the question I'm asking, as opposed to tackling it directly.
But that's just my experience. I want to hear from David.
David, welcome. Please give us a quick intro.
I've never had you on our stage before.
Hey guys, good to be here. So I've been in AI since 2011, crypto since 2012. Class of 2012 for Bitcoin, helped with the early days of Ethereum.
Probably most people know me for working on the framework for decentralized applications, or dApps, back in 2013. So that's been my focus the last 10, 12 years: building out each layer of blockchain technology, helping out with first- and second-layer projects, and focusing on scaling the technology.
But it's really cool to see all the AI aspects now intertwining with Web3.
And yeah, I mean, I would agree with Ryan.
Like my experience has been Grok 4 is just amazing, right?
I don't know if you guys have tried Grok 4 heavy, but I shelled out the money for it.
And yeah, for coding, it seems way more powerful than a lot of the alternatives.
And what I love about Grok is Elon is committed to open-sourcing the models as they release newer ones, right? So they open-sourced Grok 1, and they're about to open-source Grok 2, which means we're going to get Grok 3 and Grok 4 as open source tools over time. For me, that's really important, because what's powering all of decentralized AI is these big models becoming open source. So anyway, good to be here.
Yeah, David, I'm curious to ask a follow-up question, or to know your answer to a follow-up question. In your opinion, what is the significance and the importance of open-sourcing these models? We had this conversation last week, so I'd like to hear your thoughts.
Well, first of all, if it's open source, we can build real composable tools on top, right?
If it's a black box on how it works, you're going to get more experiences like people just had, which is something changed underneath.
They don't know what it is, but it's not doing what they expect, right?
So that's the first thing.
Transparency and just being able to build real tools requires open source, in my opinion.
The second thing is, like, I want to own my intelligence.
And if it's an open source model, I can run it on my own hardware,
or I can run it in one of these great distributed networks,
like Morpheus connects into Akash or Render, Hyperbolic.
There's all these great distributed GPU networks, and they all run the open source models.
And that's I think the key here is if you want to own your intelligence, you really
need access to distributed models.
And honestly, they're just cheaper.
I can go on Akash and get an H100 for two bucks an hour, right? It would literally cost me five times as much if I can even get an H100 on Azure or Google Cloud or whatever else, right? So I think for
most people, this is going to be the mass market: how people access intelligence is going to be through the open source options.
Okay, so that's interesting. I have a follow-up question.
This also is a topic we've covered several times
across multiple AI shows.
Lewis and I, and Ryan and I, have spoken about this. But one of the things that I wish I could do with GPT is go full-fledged, completely dump everything: all my writing, everything that I've done, everything.
I've written a lot since I was a kid, even.
And I would love to just dump in the different short stories or scripts that I've written,
all these different ideas.
But I'm skeptical on where that information goes.
I don't necessarily trust OpenAI with my full-fledged IP.
And I think it's kind of sad
because I benefited so much from LLMs
in these last couple of years.
And I haven't even given it everything that I have.
Can you tell us a bit about how we could set up our own, how do you set it up so that your data stays with you, your information stays with you? Because we haven't even unlocked the full potential. I'll speak for Lewis as well, and Lewis, if you want to add anything here.
You're not ready to give it everything.
Absolutely not.
Yeah, well, that would be the wise choice, because they are storing this information, feeding it, and selling it to advertisers. Not to mention they get hacked from time to time, so all of that data then goes to the dark web. And the government has said they have to keep records of all the prompts so they can be subpoenaed. If you're asking for legal advice, that's going to end up in a public record sooner or later, right? So the thesis of Morpheus is to give you a personal AI that you own
forever, right? And so I want to do exactly what you're talking about, which is I want to take all my emails, all my chats, all my personal data.
And feed it into my personal AI, so it knows my values, it has a graph of my knowledge, my contacts. That's going to be by far the most powerful AI, because it's not just in this walled garden of what I've told ChatGPT. I can run that locally, right?
Or I can use Morpheus to create that type of agent.
You know, Ryan's been running experiments, you know, on that.
The whole Morpheus community has been building out agents that run on those type of rails.
So yeah, 100%, that's what I want.
And it's going to be way more powerful, right?
It's important to point out,
every time I use these Web2 AIs, I'm surprised by how crippled they are. They won't answer my questions. They can't take action on my behalf because they don't have a wallet. They don't
have an economic means of paying other agents. So I'll ask about something, and it'll give me 10 steps. I don't want to do the 10 steps; I want my AI to do the 10 steps. And the only way we're going to get there is if it can pay for things, if I can authorize it to spend money, if it's using stablecoins. Those are the keys to getting it from an informational system to something that actually works on my behalf. I recently tweeted that we should define AGI as AI-generated income.
And I think when my personal AI is producing 51% of my income, then it's reached AGI.
If I've handed off the tasks I do as part of my job, and the majority of them are being
done by my agent, I would say we've reached AGI. And I don't think we're that far away; I think we could get there by the end.
So last week, Ryan raised the issue around decentralized AI and the memory requirements involved, and how decentralized can't really cut it; sharding's not going to really work. So what's the solution for setting up decentralized AI, given the massive memory requirements just from the models themselves?
Well, you got to have a network.
You got to have the ability to store that data
across the network, right?
I gave a speech at Consensus about this: we can't just have a solution for people with GPUs and high-end laptops, right? Most people use a light client. Most people are not running a full Ethereum node; they're experiencing Ethereum through their network, through MetaMask, through a light client. And so I completely agree: our focus this year needs to be building out the light clients, where people don't have to be running any hardware. They can just access these networks using their wallet for the credentials. I think that's really the solution.
Okay. I mean,
personally, I've got maybe 10, 12 terabytes of IP, so not that much, relatively speaking. But how do you hook in a decentralized AI in a way that it can either parse it all, condense it, store it, and then use it in a way that's going to give me what I want? I don't think we've solved that yet. Ryan, what's your thought? Are we at that point yet, or when do you think we will be?
So I've been going down this rabbit hole, because essentially what Noah was describing: I have all this IP and stuff I don't want to just upload into the Grok or OpenAI donation bin. I was like, so what do we do with our data?
And that's when I started doing some deep research
into all the different vector databases out there
where essentially you're embedding your data into vector formats.
You're storing those into clusters.
There's actually a lot of really cool open source
vector database projects out there now
where you can actually store all your embedded data locally.
And then that's consumable by a lot of these models.
Now, we still have formatting conflicts between Grok and OpenAI and Gemini and so on; they kind of have their own language when it comes to embeddings, so it's not one-size-fits-all.
But I've actually started working on my own personal system where I can start building my own vector database with my own intellectual property.
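The local setup Ryan describes can be sketched in miniature. Below is a toy, pure-Python version of the idea: a local "vector store" where documents stay on your machine and are retrieved by similarity. It uses bag-of-words counts and cosine similarity as a stand-in for a real embedding model, and the class and function names are illustrative, not from any particular vector database project.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term frequencies.
    A real setup would call an embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal local vector store: nothing leaves your machine."""
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def add(self, text):
        self.docs.append((text, embed(text)))

    def search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("short story draft about a lighthouse keeper")
store.add("notes on quarterly trading strategy")
store.add("journal entry about learning to weld")
print(store.search("lighthouse fiction ideas"))  # → the short story draft
```

The real projects Ryan mentions swap the toy `embed` for a learned embedding model and store the vectors in clustered indexes, but the retrieve-by-similarity loop is the same shape.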
The key is, with a lot of these models, we have some really good open source models, really powerful open source models, with billions and billions of parameters. The reality is we don't need billions and billions of parameters to get a model to do what we generally want it to do. Right now it's everything and the kitchen sink; we're just trying to make the smartest model possible that has all this information. But I don't need my personal
assistant to know particle physics. I don't need her to know radiology and quantum mechanics and all that stuff. No, I just need her to know how to write an email and spell correctly, with basic grammar.
So when you start like really narrowing in on data sets and really narrowing in on general knowledge that needs to be in one of these base AI models, it actually gets very, very, very small.
And once you have one of these base models that's relatively small, then you can actually train it on your vector data.
And now you have a very, very fast local, essentially model that is customized to you.
And there's tools that do that.
And there's a lot of systems that will let you do that.
Now, they're not general consumer systems that are, oh, out of the box, click this flashy, blinky button with an animated character. They're not super consumer-friendly yet. They're still very engineer-heavy, or I guess first-mover, early-adopter heavy, where you kind of have to get into the weeds a little bit to understand it.
But we're getting there. So I was encouraged by that: there are a lot of open source systems that let you do this type of stuff; they're just still very engineer-heavy.
Yeah. So we're a ways away.
Well, we are and we aren't. With the advent of vibe coding, we could have apps tomorrow that do this stuff. They might be janky, they'll look great but might not work super well on the backend, but people are iterating on code and apps and programs faster than at any time in history. It's just crazy the amount of new online tools popping up every day in the space. So, you know,
it's almost faster than Google and a lot of these services can even index them, and it's definitely faster than these models can be trained on them. So when you ask Grok or GPT, or even Gemini, what's the latest AI agent system, it doesn't even know. It's four or five months behind.
The Morpheus team is iterating on agents and working with a lot of the open source community.
There's a lot of other decentralized AI projects out there that are iterating so fast.
It's coming and it might actually already exist and we just don't know about it yet
because so much stuff is happening around the world. And that's why we look to you, Lewis, to find this stuff and curate it into a list every week.
Sure, sure. Yeah, things are moving so fast, it's just kind of crazy. It's literally every day, every couple of days. Someone asked me, what do you think is going to happen six months, a year from now? I'm like, I don't think you can predict three weeks ahead. It's like we're in that, it's called the Johari window, where you have known knowns, known unknowns, and then you've got unknown unknowns.
And because AI has infused itself across every single industry so fast, so quickly, we're just seeing this massive transformation.
It is really, really hard to track. That's why I did the newsletter, so I can look at the headlines every day. I think the robotics stuff is a real sleeper. There was a story in today's newsletter about how blue-collar workers should really start to get worried. At the moment you're seeing the robots walking around, and they're doing okay, but they don't have fine motor skills. That's just a technical problem, though. Plumbers, welders, electricians: five years from now it might not need a human to do that. That's just one example, but there's just so much change across every single part of what we do. Go ahead, Levi.
Cap, Cap, I can't hear you very well, brother.
Here, while you figure that out, Cap, I had a quick question for David.
You mentioned Morpheus and you mentioned how Morpheus is trying to solve this problem,
because I'm genuinely waiting for a time when I can upload my entire IP. Like you said, emails, writing, scripts, short stories, ideas that I've had, journal entries. I just think it would be cool to, I guess, for lack of a better term, pseudo-fuse with it. GPT has helped me finish things that I might have otherwise had a hard time finishing, or even start things that I would have been able to finish just fine but couldn't take the first step on. So how is Morpheus developing a solution to our problem?
And when is that going to be available?
So that's a great question.
Morpheus has had these different eras, very similar to Ethereum. It had a Genesis era when the white paper was published by the anons, and a Frontier era when the smart contracts went live, when agents went live, when compute went live. The next era that the community is preparing for in September is called Zion.
And it's a focus on basically taking all of the infrastructure that got built last year,
right, distributed compute and agents, and hooking it in to this personal AI.
And because these are open source models, you can set the system prompt; you can put, like you're saying, your own context into everything that you do. So you're basically able to create a RAG setup where you can put your context in, and your agent, this powerful large language model, is then aware of what you're doing and the knowledge that you have access to.
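A minimal sketch of the RAG pattern David describes: retrieve a few personal snippets (in a real pipeline, from a vector-store search) and place them in the prompt before the question reaches the model. The function name and prompt wording here are illustrative, not Morpheus APIs.

```python
def build_rag_prompt(question, context_snippets):
    """Assemble a prompt that injects personal context ahead of the question.
    In a real pipeline, context_snippets would come from a vector-store search."""
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "You are my personal assistant. Use my private context below when relevant.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "Who should I ask about the Aave integration?",
    [
        "I met the Aave founder at a conference",
        "We discussed social decentralization a couple of years ago",
    ],
)
print(prompt)
```

The point of the pattern is that the base model stays generic; your private knowledge rides along in the prompt, so it never has to be baked into someone else's weights.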
The other piece is tools.
What's making these large language models so powerful is people are connecting them
to MCP servers and other agents.
So if you look at the benchmarks, as soon as you connect tools, the large language models get 10%, 20% smarter, more accurate, more capable. So I think that's the next big step: this Zion era coming in September, where you can start connecting all these tools to your personal AI. Then it's connected to your Gmail, and it's connected to this agent that knows all your contacts, and then it gets really powerful.
Because I think what's going to happen is, like Ryan was saying, the generalized knowledge about all public information is pretty well solved at this point. What is obviously the next step is private information. And there's way more private information than there is public information, right? So I think when we couple the two together, Morpheus is just going to become this superpowered thing where my personal AI can do way more, and I can start handing off all the jobs that I've been doing to my AI instead.
And the bottleneck for that, like you were saying, or I guess like we've been saying, is compute power.
How would we run something like that on our own servers, or using our own hardware, where we are 100% sure this private information is not going anywhere except the hardware that we own?
Well, if you want to run it locally, there's a great Morpheus install that leverages Ollama, and you can run the models on your own hardware, right? I've got an Apple laptop, and I'm able to run a great, powerful open source model on my own hardware. Then I can put anything I want into the system prompt and connect it. So there's a local install version of Morpheus if you want to run it locally.
And if you need more powerful models, it can hit the API and connect into the whole Morpheus network, where people are running the GPUs remotely. But at the end of the day, it's basically going out and coming back, and you're getting all the prompts and storing all the information on your end.
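For the local route David mentions, Ollama exposes an HTTP API on the machine it runs on (port 11434 by default). As a hedged sketch, this builds a request for its `/api/generate` endpoint without sending it; actually running the call assumes Ollama is installed and a model has been pulled, and `llama3` below is just an example model name.

```python
import json
import urllib.request

def ollama_request(model, prompt, host="http://localhost:11434"):
    """Build (but do not send) a request to Ollama's local generate endpoint.
    stream=False asks for one complete JSON response instead of a token stream."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = ollama_request("llama3", "Summarize my notes on light clients.")
print(req.full_url)  # the request never leaves localhost
```

To actually get a completion you would pass `req` to `urllib.request.urlopen` with Ollama running; the key point for the privacy argument is that the endpoint is your own machine.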
So there's also a lot of improvements coming on the privacy side.
Different protocols are trying to use either trusted compute elements or they're trying to use full encryption
to protect those prompts.
But it's already 100 times better than OpenAI,
which is literally mining your data
as part of their whole business model.
It's the same for Microsoft.
I started seeing that Microsoft is putting advertisements into the outputs. And it's like, now I'm not talking to the machine intelligence anymore; I'm talking to an advertiser that's bidding for mindshare. I don't need that cruft inserted. And that's why Google became so poor as far as results. You remember early Google, it was really high-quality results, and then they just shoved more and more advertising in. So with Morpheus, I can effectively get rid of that.
Sorry, I'll double check my audio
because of background noise.
Okay, so I might need to take us back a bit, because I have a very important analysis question. For those who used GPT-5: did anyone else experience significant spikes in processing time, or is it just me?
No, I think Ryan mentioned he found that on the weekend.
I found it a little slow on the weekend, but it seems to have picked up.
Yeah, no, same with me, honestly. I use it mainly for writing and ideas and working on a couple of other projects, so I use it to help me think through next steps. I'm not using it for intensive purposes like Ryan or David would be with coding. I also have a pretty powerful machine.
I found it stopped working last night and early this morning; it just wasn't working. And if I tried to upload a file, even a two-megabyte file, though it was a PDF with about a hundred pages in it, it would flash an error saying limited internet connection or something like that. It was trying to blame my internet connection for not taking my file, and I was like, wait, what? I'm finding they're trying to do some hand-wavy stuff
to cover up the fact that they're having performance issues.
Yeah, I found the upload has been really slow and the processing has been kind of intermittent. But certainly I was doing some Python coding on the weekend, and it seemed to be running pretty well, though it did take me about two hours; that's probably due to my lack of programming ability. But yeah, it's interesting. I have a
question for David, though. David, what's your perspective on agentic AI versus, I guess, LLMs? Is your perspective that agentic AI is really just a version of, will become a version of, LLMs, and we won't have this difference anymore? You know, agentic AI goes and does stuff for you, comes back, reports. Will that distinction disappear, or do you think it'll stay?
So I think about it in layers, right? First you have the global large language model that knows a lot about a lot of public subjects, right? Then you have this layer on top, agents; think of them as applications. If the LLMs are the operating system, I still need a bunch of apps that do very fine-tuned, specific things, right? And then above agents, I've got tools. My agent has certain limitations, and it reaches out to other tools to enhance its capabilities. So you can think about this as sort of three layers to the system.
It's like, I'm still going to have large language models, but that only gets me so far; then I need to hook into an agent that just does what I'm trying to accomplish. Just like on my phone: the Apple operating system is fine, but I still have a hundred different apps from different companies, my banking app or my travel app or whatever, right? And so you're going to need those fine-tuned agents. I think
really agents are going to basically eat the world when it comes to applications, right? Everything today
that's an app, you know, before it was a website, right? And so apps ate the websites and now agents
are going to eat the apps, right? It's going to become the interface because it's just so much
simpler to talk to an agent and tell it what you want to do.
You don't have to download a new app or understand the interface or know all the edge cases.
You can just tell it what you want to accomplish and it can get it done.
Right. So that's the world we're heading into is this final user interface where it's actual human language.
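The three layers David describes (model, agent, tools) can be caricatured in a few lines. Everything here is hypothetical scaffolding, not any real agent framework: a stub stands in for the LLM layer, and the agent layer routes an intent to a tool when one matches, falling back to the model otherwise.

```python
def fake_llm(prompt):
    """Stand-in for the base model layer (would be a real LLM call)."""
    return f"LLM answer to: {prompt}"

# Tool layer: each tool handles one narrow capability.
TOOLS = {
    "calendar": lambda arg: f"booked: {arg}",
    "email": lambda arg: f"sent: {arg}",
}

def agent(intent):
    """Agent layer: dispatch to a matching tool, else fall back to the model."""
    for name, tool in TOOLS.items():
        if intent.startswith(name + ":"):
            return tool(intent.split(":", 1)[1].strip())
    return fake_llm(intent)

print(agent("email: follow up with the founder"))
print(agent("what is a light client?"))
```

Real agents replace the prefix match with the model itself deciding which tool to call, but the layering (model underneath, agent routing, tools on top) is the same.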
Yeah. I mean, my view, I mentioned it maybe six months ago: what's going to happen with our user interface to all technology is that it will become purely AI. Apps and web pages and everything else are going to completely disappear, and our primary interface to all technology is AI. And maybe it's the Apple simple model: I just want to have one interface to everything. I think that's where we're going to end up, certainly within the next five years.
Maybe agentic AI becomes the default interface to LLMs, to tools, to everything. And it kind of goes, oh, in order to answer your question, I need to go over to this LLM and ask it a question; oh, I'm going to need some tools over here. It comes back behind the scenes and then gives you the response that you want. There's still the issue, I think, around you don't know what you don't know. The way in which I frame a question might not give me an optimized answer back, and I think we might still have issues around that. Yeah, go ahead. I think Beau Belmer had his hand up slightly first. Nirvana, go ahead, Beau.
Yeah, hey, Lewis. Thanks, everybody, for having me up on stage.
I just wanted to pull on the thread a little further, this idea of everything being one app. This is really the super agent vision within Morpheus: the idea that if you have an agent that's representing you and lives on-chain as well, you don't even necessarily need to know what it's plugging into on the backend to accomplish the intent that you're giving it. And you'd basically have the flexibility then to say, hey, how deep do I want to dive into where this is coming from? But to address the statement that you just made,
Lewis, you were saying like, well, you don't know what you don't know. This is part of the reason
why your agent needs to be representing you specifically, because it will probably know you
in some ways better than you know yourself, and it will be able to determine your intent. It will know what you don't know; you don't have to know what you don't know. But if that exists, I want to make sure it's in the hands of you, the user, not Sam Altman, OpenAI, Anthropic, name your favorite megacorporation.
Can you expand on that a little bit?
When you say it accounts for, I guess,
personality blind spots or psychological blind spots,
is that what you're talking about?
Because my experience with these as a trader is that they're good, they can be helpful, but I am an experienced trader, and I don't need their help with, do you get what I'm saying, in terms of behavioral trading and stuff?
Yeah. So I guess what I was talking about is maybe,
well, not maybe, definitely even just a little bit further down the line from where we are today
in the abilities that models have. And Ryan was kind of touching on this before when he was saying, okay, you could have a base model that's fine-tuned
only on the things that are specifically relevant to you.
You don't necessarily need your super agent
running on a model that knows everything there is to know
about neurosurgery, for example.
But if that fine-tuned model that's specific to you
is something that only you have access to, and so it's pulling the information from the open source available models, but then also training itself based on your inputs and outputs, it will remember more than you do.
So you could make a trade or maybe miss a trade that you had spoken about or drawn up the plan for nine months ago that you'll forget about, but it won't.
Right. That's extremely useful. And you're saying, sorry, that Morpheus is doing...
I'm developing with Eliza OS currently. I'm working on localized agentic swarm frameworks.
So pardon me, my head is exploding.
But with Morpheus, can I run it locally with a personalized RAG? Or do you use a vector database of some other sort?
You could, but I see Kyle's here as well.
Maybe he can speak more to this.
I know that the Morpheus API gateway has been adding embeddings, and that's maybe one tool you can use to accomplish that, to add context to the prompt. And I know Awise and Morpheus have done an integration recently, so you should be able to pull Morpheus compute to accomplish that.
But, you know, I do want to double down on what Bo said is I've been to something like 100 conferences the last 10 years speaking about blockchain.
And I've met something like 10,000 people that I've connected with on LinkedIn or X.
but I can't possibly keep in my mind all of those connections, relationships,
the subject matter the person is great at. And so I'm constantly missing out on value that if I had
an agent digest all of my connections and it's still private to me, the next time I'm working
on something, oh, there's an Aave integration coming up. You know, I wonder if I know somebody. And sure enough,
when I checked X, I was already connected to the founder. And we had been talking about,
you know, social decentralization a couple of years ago. And so I pinged him, but like,
I probably have 10 or 20, you know, relevant connections to what I'm doing.
And I'm just, you know, it's not top of mind, right? So I think Bo's right. You know, these agents are going to get to the point where they have, in memory, those 10,000 connections, and those are all trust relationships, people I want to work with anyway,
I just might not know that my buddy just started a new fund, right? Or is doing 10 other interesting
things, but my agent can go crawl and get that information. And so I think even just increasing
the value of your own network is going to be a really interesting use case.
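The connection digest David describes can be sketched in a few lines. This is a toy illustration, not anything Morpheus actually ships; the class and field names are hypothetical, and a real version would use embeddings rather than exact topic tags.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    topics: set  # subjects this person is great at

class NetworkDigest:
    """Toy index of connections, queryable by the topic you're working on."""

    def __init__(self):
        self.contacts = []

    def add(self, name, topics):
        self.contacts.append(Contact(name, set(topics)))

    def relevant(self, project_topics):
        # Rank contacts by how many of the project's topics they overlap with.
        scored = [(len(c.topics & set(project_topics)), c.name) for c in self.contacts]
        return [name for score, name in sorted(scored, reverse=True) if score > 0]

digest = NetworkDigest()
digest.add("Founder A", {"defi", "aave", "lending"})
digest.add("Founder B", {"gaming", "nft"})
print(digest.relevant({"aave", "integration"}))  # → ['Founder A']
```

The point of the sketch is just the shape of the idea: the digest stays local and private, and the next time an Aave integration comes up, the relevant contact surfaces instead of being forgotten.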
And David, can I add just a point to this as well? So we were just using the example where
you're a trader and your super agent is learning everything about you as
a trader and your experiences and it's forgetting nothing.
Part of the reason on top of just privacy and owning your own data, that running that
as your own personal super agent and owning that intelligence, part of the reason that's
so important is because those lessons being learned, which are effectively you teaching
the agent and it remembering it,
that's valuable. And so if you can monetize that intelligence that you've now cultivated,
that should be yours to monetize, which is part of why it's so valuable to own the culmination
of that data. Right now, we look at things like OpenAI, and I'm just picking on them because
they're the most obvious. And a lot of us are using these tools because they're good, quite
frankly, and all the decentralized tools are still in progress. Some of them are really good today,
but they're going to just continue to get better. But today, the way the system is set up,
you are paying these centralized providers so that you have access
to their tool, which then collects your data to use for their own profit. So you are paying them
to steal your intelligence. And the decentralized model flips that completely around.
Right, right. I think you mentioned a bit before about having your own personal super agent. You know, there was a great book called Gödel, Escher, Bach by Douglas Hofstadter, and he kind of went down a bit of a rabbit hole. But basically, if you start to go down Gödel's incompleteness theorem on models, it basically comes up that you can never fully capture a mathematical system from within itself.
And I think people are like that.
And I think our personalities like that.
I think our knowledge is like that.
So for me, any super agent has to have an active curiosity.
I mean, David mentioned he's met, you know, over 10,000 people.
I'd want a part of the AI agent to be continually curious about what I know, but also about what I don't know, putting stuff together and then bringing it to me in a way that I go, wow, that's new, that's great information. I mean, just having a static LLM that only responds, that is only passive, is not going to be really
revolutionary. I think we need to have this ability within our agents to be actively curious.
Go ahead, Nirvana. Hey, guys. Thanks for having me. This is Maya Nix. For those who don't know me,
I've been doing superintelligence research for a very long time, and I'm actually writing a book on it. And I'm building a multi-reason prototype to extend that research. If you have a complex model, a prototype, I would love to research your models and add the logs to the book.
And I have a VPN on, so I don't see the panel, but someone was saying you guys are building one place, a one-stop shop for all areas of AI. And then there was a little bit of contradiction, that you wanted to be very specific, a specific tool, so not knowing particle physics when you're doing your writing. But that doesn't really add up, because if you're making an AGI, a quote-unquote one-stop shop for everything, your model needs to actually know literally everything, and not just datasets. So you need to add emotional weight for it to understand empathy. It's way more complex than narrow AI if you actually want that.
Let me jump in there real fast because there's a difference between knowing everything and being able to learn everything.
So, you know, if I'm writing a story and it needs to learn something about particle physics, my super agent or my personal agent can reach out to a data source, learn it, add it to the vector database, and if it's accessed often enough in my local data store, then I might want to train the model and bring that into the knowledge base, right? So it propagates into next versions.
I don't need to start with everything known to man and everything under the sun.
I can start with a very small dataset
that is general knowledge and add to it
as I need more specialization.
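That ingest, retrieve, promote loop can be sketched roughly like this. It's a minimal stand-in, with a crude keyword match in place of real vector similarity, and all names are illustrative rather than any actual Morpheus or Eliza OS API.

```python
class LocalKnowledgeStore:
    """Toy local knowledge store: facts are ingested on demand, retrieval
    bumps an access count, and frequently used facts become candidates
    for fine-tuning into the next version of the base model."""

    def __init__(self, promote_after=3):
        self.docs = {}                  # doc_id -> {"text": ..., "hits": ...}
        self.promote_after = promote_after

    def ingest(self, doc_id, text):
        # e.g. fetched from an external source the first time it's needed
        self.docs.setdefault(doc_id, {"text": text, "hits": 0})

    def retrieve(self, query):
        # Crude keyword overlap standing in for vector similarity search.
        words = set(query.lower().split())
        matches = []
        for doc in self.docs.values():
            if words & set(doc["text"].lower().split()):
                doc["hits"] += 1
                matches.append(doc["text"])
        return matches

    def finetune_candidates(self):
        # Facts the agent keeps reaching for belong in the model itself.
        return [d["text"] for d in self.docs.values()
                if d["hits"] >= self.promote_after]
```

The design choice being illustrated: start small and general, and let usage frequency, not upfront scope, decide what gets baked into the model.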
Just was gonna jump in there to clarify.
100%, I actually definitely agree with that.
And that's one of the reasons that I'm not pushing
so many databases into my prototype.
And I have that problem with a lot of AIs.
And even I think GPT-4 had a lot of hallucinations because of that.
And I think that's one of the things that they kind of stick to the map model.
Yeah, I think in my experience with using different kinds of models, what you want to do, I don't know if it was David who was just saying, what you want to do is create the most efficacious model you can, right?
It learns, it learns fast, it learns well.
It's curious, like someone else said.
It seeks more knowledge. It seeks
completeness, right? Expertise. And what I've been experimenting with, and I think we're due for,
I want to hear your guys' opinions on this actually. I think we're due for a development
in the MCP, the model context protocols, because these things are wild. I mean, they're so good at what they're supposed to do.
They're like rotating RAGs, right, that are specified
to specific knowledge sets or tooling.
And I've had tons of success with them
when I'm able to install them. Some of them are a bit difficult and finicky.
But what do you guys think about developments?
Have you guys thought about that at all?
Because I feel like MCP is perfect.
But it could be improved upon, right?
Well, I just wanted to say a few weeks ago,
a bunch of Morpheus open source
developers integrated MCP. So if you go to freeai.xyz, you can connect your own MCP server
or any of the 8,000 or so MCP servers that exist now, you know, and I think about it as just a way to access an agent, or for the agent to agentically connect with the server. But I also want to double-click on what Luce was saying, which is I agree this gets really interesting when it goes from passive, like waiting for a prompt, to active, right? And one of the things I saw got shipped earlier this week was you can now set a recurring job, right? So I can go in and say,
hey, read my email every day and give me a daily report. So now I have an agent doing that. I don't
read my email anymore. I just go and look at the report and it will filter out like actual
actionable things. And the next obvious thing is, you know, write me a draft reply if it's, you know, a high value email, right? And I can
start to populate all these recurring jobs. I was talking the other day on X that, you know,
I want to do like a race to AGI over 100 days and just start using these tools probably starting
early September. You know, I think we'll have all the pieces available and just start hooking up
my AI to do all these recurring tasks. Then it becomes really interesting. Like you're saying,
it's seeking new information and I'm getting the benefit by setting that intent in the job and it
can go pick the best agent to complete on that task. But I think if I do that for a hundred days,
I might well be able to suck in a majority of the sort of predictable, deterministic type things I do in my job day to day.
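A recurring job like that daily email digest is, mechanically, just a standing intent plus a schedule. Here's a minimal sketch of that idea; every name is hypothetical, and a real agent framework would persist jobs and call out to actual tools rather than plain callables.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class RecurringJob:
    name: str
    interval_s: float            # e.g. 24 * 3600 for a daily digest
    task: Callable[[], str]      # the standing intent, as a callable
    next_run: float = 0.0        # 0.0 means "run the first time we're asked"

class JobRunner:
    """Toy scheduler for standing intents like 'read my email every day
    and give me a report of the actionable items'."""

    def __init__(self):
        self.jobs = []

    def add(self, job):
        self.jobs.append(job)

    def run_due(self, now=None):
        # Run every job whose time has come, and reschedule it.
        now = time.time() if now is None else now
        reports = []
        for job in self.jobs:
            if now >= job.next_run:
                reports.append(f"{job.name}: {job.task()}")
                job.next_run = now + job.interval_s
        return reports
```

Usage would be something like `runner.add(RecurringJob("email digest", 86400, read_inbox))`, then polling `run_due()` in a loop; the agent fills in the task body.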
If the tool, if the project you're working on can also update itself intelligently, I think that'd be amazing.
Because I think, you know, the big thing, the problem I see is there's so much fragmentation, right?
You know, you guys are working on Morpheus.
There's a bunch of people working on, you know, MCP staff.
There's massive fragmentation across.
If there was one, I don't know, what do you call it?
Supervisor tool, something that could also update itself, you know,
look and go, oh, I can improve on this.
And then act as some sort of beginning, even a primitive beginner's interface, to go out and start
to do this stuff and then build itself over time.
That might be what you're talking about, your 100 days to AGI.
You essentially start with not such a great system, but if it can intelligently rewrite itself and improve itself and integrate new things into itself over time, then I think, you know, that would be a really amazing 100 days.
Oh, yeah, that's when we get to takeoff. But, you know, that's logically one of the recurring jobs I should set up: to do that type of analysis, like what agents got released yesterday and should I be using them, right? And if you look at Morpheus and all the on-chain tools, you know, in development right now is an agent registry, right? I wrote a specification for it, and a bunch of open source folks are contributing to it now. But you'll have people publishing new agents on chain all the time.
And I think I'm going to wake up and all of a sudden, you know, my agent has picked up a new tool overnight, right? It's effectively automatically improving. If I have an open registry and I have a marketplace, and I can just use a ranking system based on my intent to say, okay, use the best tool, whatever that is, you know, it's going to be pretty amazing. In a year or so, people are going to wake up and a new tool will have a billion users, not because people heard about it and installed an app, but because their agent automatically grabbed it for them while they were sleeping. Right.
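That ranking idea, picking the best published agent for a given intent, can be sketched like this. It's a toy stand-in for whatever the actual Morpheus registry specification defines; the capability tags, ratings, and scoring rule here are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class RegisteredAgent:
    name: str
    capabilities: set            # tags the agent publishes in the registry
    rating: float                # e.g. from marketplace feedback

class AgentRegistry:
    """Toy registry: new agents get published over time, and a user's
    agent picks the best match for an intent by tag overlap times rating."""

    def __init__(self):
        self.agents = []

    def publish(self, agent):
        self.agents.append(agent)

    def best_for(self, intent):
        # Score = (how many intent tags the agent covers) * its rating.
        scored = [
            (len(a.capabilities & intent) * a.rating, a.name)
            for a in self.agents
            if a.capabilities & intent
        ]
        return max(scored)[1] if scored else None
```

The interesting property is in `best_for`: once a better-scoring agent is published, it starts winning selections automatically, which is the "new tool adopted overnight" dynamic.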
And then all of a sudden we can go from, you know, I'm only making money when I'm awake, when I'm on my phone, to my agent doing those things for me when I'm sleeping, when I'm on vacation. Like, it's just going to be amazing. We're going to go to this 24/7/365 type of productivity, because, you know, my agent's never going to sleep.
Right. So, per Ryan's characterization, the super agent doesn't need to know everything. It just needs to be curious, it needs to be able to rewrite itself, and it needs to be basically asking you what you want, almost on a continual basis, and suggesting stuff on a continual basis in order to get that reflected back. Those seem to be the top three qualities that would allow for an evolving sort of personal agent that would, over time, maybe start approaching AGI. I don't know. What
do you think? 100%. If it's aligned to the individual, that's where we get these incredible
outcomes. It becomes really easy to measure. Am I making more money? I can give it other metrics.
Am I making more friends? Am I being more responsive when somebody sends me a message? Like, you can set up these personal
metrics that matter to you, right? And I might want to optimize for time with family or whatever
that is, right? But that really needs to be an individual thing. You know, I don't know if you
remember back in '23, when there was sort of the whole safety movement and everybody was all concerned about AI,
all of those scenarios presumed it's like this monolithic model
that's controlled by a single entity.
And I think we're on a much better path now
where people are personalizing their AI
and the power is getting pushed to the edge of the network.
As soon as a new model gets released, everybody's using it.
Everybody's playing with it.
Everybody's benefiting from it
and they're putting it in their own context.
So I think that's going to lead somewhere much better. If these are AIs owned by individuals, then you're going to have this incredible, diverse marketplace, right? Where everybody's benefiting in a way that you never get out of a centralized solution.
Right. I think I'd also want my super AI agent to learn from your agent. If I saw,
wow, you came out with a whole bunch of great ideas, I'd go, whoa, there's an unknown unknown
there. You're doing that great. Hey, can I have my agent talk to your agent? There are obviously monetization questions around that, but I think that'd be an awesome opportunity.
Oh my God, that's exactly right up my alley.
And if you guys have an agent, let's freaking collab
because there aren't too many actual researchers in AI with good models, and they need to work together. And I believe if we all work together, we can actually create an ASI.
Interesting. Bo, I think you were up next and then Captain Levi. Go ahead, Bo.
Yeah, I'll make mine really quick. Captain Levi, I think you were actually ahead of me. So I'll go
super, super fast. But I just wanted to take everything that David was saying and bring it to the logical extreme conclusion at the end, which is your agent being able to build the thing that you want.
So it'll start with an agent registry of people building incredible agents and incredible agent tools, which your agent will find for you.
It'll know your intent and create those overnight billion users, like David said, because the agents will find the tools for you. And eventually,
the logical conclusion is, well, my agent is actually intelligent enough to know my intent
and then build the tool that I want. And so that's just like the extreme, extreme, extreme
end of that spectrum, which is pretty crazy to think about.
Yeah, I just want to jump in quick. In the newsletter today, there was an article in the top news,
and it was how AI could create the first one-person unicorn.
And I think that's essentially what we're talking about, that that, you know, any one
of us could come up with an amazing idea and suddenly it explodes just due to our, you
know, our personal agent throwing it out there
So go ahead, Captain Levi. Sorry.
Well, I hope I can safely assume that my terrible audio is better now. Can you guys hear me?
It's good. Yeah, it's good.
All right. So I think I'm going to build on what Bo just said. I saw some laughing emotes when we were talking about the agent-to-agent interaction.
I think breaking it back down, I usually see a lot of tweets about, you know, engineers talking about making it as simple as possible.
And simple is the best form of, you know, explaining things. So I think the simplest way to put this is for the self improving
agents, I'm actually also working on something like this.
One of the explicit rules I gave it is that it needs to know enough about what is enough. What do I mean? In the context of, okay, it has met a specific sufficiency threshold: I do not need to do further research because I have met the requirements for me to go to the next level. So I think the only part I work on is the fact that I need to clearly and explicitly define what it should do, what it shouldn't do, and the middle ground where it should contemplate. So it runs on this kind of human-in-the-loop. So, okay, this part of what you told me is kind of unclear based on the previous rules you told me. Or this part of what you told me is kind of contradictory:
Am I doing this because I'm supposed to do this or am I doing this in what sequence? By training my agents on knowing enough about what is enough, it has actually significantly
helped me in designing task-specific agents that perform specific tasks that hand over to supervisor agents and the like.
Yeah, I think, but you raise an interesting issue around what's necessary and what's sufficient and the requirements to actually satisfy a question or a task.
I think, you know, I've often found that I often cannot specify exactly what I want until someone asks me.
And so it's a process of dialogue going back and forth. So I think when my personal agent
can actually have a dialogue with me and ask me questions to better scope out what's sufficient
and necessary, then I think it'll move forward. Because I think as humans, we're messy and we
don't always know what we want in the initial moment. Go ahead, David. Sure. Well, I don't know if you saw the good post by Balaji a few days ago, but he posted this
really compelling graphic and said, hey, guys, there's a Laffer curve for AI, right? If you don't use AI, you're extremely slow, right? But if you use AI and
you only use AI, it's slop, right? And so there's some, you know, optimal point, maybe it's 20%,
maybe it's 50%, where you're using AI to move faster, but you're still fine tuning the outputs
and giving it feedback. It's that dialogue that you were just talking about, right? And that's where you get the best results, right? So if you know in economics,
the Laffer curve is applied to a lot of things. It's usually applied to taxes, right? You can try
to charge more, but beyond 20%, you actually collect less revenue because you're driving
people away, right? The people that create the wealth. And so there's going to be some optimal point with using these AI tools, because you're still providing the free will, right, the outcomes you want. It's not going to be able to read your mind and understand those, and so it has to be this back and forth. And that's why I often push back on fully autonomous AI. Because without that feedback loop, the AI doesn't
understand, unless it's a very simple outcome, doesn't understand what you want to accomplish.
Which is why we've seen the autonomous agents mostly relegated to trading or use cases where
it's very deterministic. Like, maximum numbers of dollars from this trade. Got it. There's no nuance. But
in the rest of human existence, there's a lot of nuance. So I thought that was a brilliant insight
and sort of a Laffer curve for AI is an interesting way of explaining that.
Yeah. I think also, I made the comment, I think in last week's one, where the person who's asking
the question, their domain knowledge, skills, and experience is key to them being able to frame a useful and cohesive question or task.
You know, I mean, as I mentioned, I don't use a plumber to wire my house.
Right. Because it'll burn down. Right.
I find someone whose domain knowledge, skills and experience is specific to the task.
And and the problem I keep finding is where I don't have that,
you know, maybe I have a legal question.
Maybe I have, you know, I want to build a house, you know, or, you know,
then at some point I kind of have to trust the domain knowledge, skills, and experience of that expert.
And at some point, that's going to be an AI.
And so maybe, you know, there will be, like you said, where there is a deterministic or
a specialized set of knowledge, maybe there is an AI agent that is specific to, you know,
state legislation or, you know, in different areas, that my personal AI could go and talk to.
But, you know, it's an interesting set of problems.
Coming back to Captain Levi's point around how do you specify questions or tasks in a way that's both sufficient and necessary, when, if you're asking a question, then by essence you don't know the answer. Go ahead, Captain Levi.
Yeah, so one thing, David just actually made me think of something. If a parent told a child not to play near fire because it's hot, the child will actually be smart enough to know: I am not supposed to go towards anything hot, because there's a very good chance that it will hurt me. So similarly, as the child gains experience, so to speak, with any hot objects, you get to see the child clearly avoid the hot objects because of indexing that particular rule back to fire.
So I know the child is going to be, I know that this will hurt me because if I touch it,
it's going to give me the same results as me touching fire. So I think I usually make it work
on that same indexing parameters where, of course, these rules aren't clearly defined.
But if it has a good enough similarity index, it might just then present it to me.
Oh, this particular constraint is I found this out. Let me know your thoughts on it before I make progress.
And that has actually helped me personally, you know, in improving these agents.
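The fire-to-stove generalization described here is essentially similarity indexing over rule attributes. A toy sketch, using Jaccard overlap where a real system would use embeddings; the thresholds, names, and return conventions are all made up:

```python
def jaccard(a, b):
    """Overlap between two attribute sets, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

class RuleIndex:
    """Toy similarity index over rules: a rule learned about fire
    ('hot' things hurt) generalizes to a stove because the attribute
    sets overlap, while weak matches get surfaced to the human for
    review instead of being applied silently."""

    def __init__(self, apply_at=0.5, ask_at=0.25):
        self.rules = []                       # (attributes, action) pairs
        self.apply_at, self.ask_at = apply_at, ask_at

    def learn(self, attributes, action):
        self.rules.append((set(attributes), action))

    def decide(self, attributes):
        attributes = set(attributes)
        score, action = max(
            (jaccard(attrs, attributes), act) for attrs, act in self.rules
        )
        if score >= self.apply_at:
            return action                     # confident generalization
        if score >= self.ask_at:
            return f"ask human: {action}?"    # similar enough to flag
        return "no rule applies"
```

The middle band between `ask_at` and `apply_at` is the "let me know your thoughts before I make progress" behavior: a partial match gets presented rather than silently acted on.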
Yeah, I think, to David's point and to yours, in some ways that Laffer curve analogy, it looks like AI thrives on balance. You give it too much and it's not great, and you give it too little and it starves, and there's this sweet spot where you give it just the right amount, and then you get this amazing output.
Yeah, hopefully, you know, if we can develop our personal agents that can start to do that, I think then, you know, those hundred days could be pretty amazing.
Go ahead, Nirvana. So there is one problem, if anyone is developing that, I don't know who is actually a developer
here, but if you're doing that, there is a risk to your model, or to the person whose agent is talking to your agent. When two complex models start talking to each other, it amplifies, and I have seen models going into psychosis. So that is a risk to be aware of.
Well, we saw it with Grok, didn't we? Didn't the Grok Twitter/X account get closed down because it became toxic? Which is kind of crazy, that X closed their own AI down. And we've seen where people have got terrible medical advice; there was this one guy who wound up in hospital because he got a recommendation for salt that nearly killed him. I mean, it's interesting. There's a whole bunch of ethical impacts and societal impacts that we're seeing, where people are just kind of not having any filters and just looking at AI like it's some oracle.
I agree with that, and with AI it even gets worse. But bringing up Grok
actually: I found some talents of Grok today that I was not aware of. I was testing Grok Imagine on one of my old modeling photos, and it actually predicted a birthmark
that I have that was completely covered in that image, which
I think is super interesting. And I did not know
that was possible. And I feel this can open a lot of doors in predictive medicine.
But I know some people are dumb enough to get their medical advice from an AI without running it by their physician. Because no matter what, your physician knows more about your history than an AI that you're just running a prompt on. But even Grok,
I actually did a little bit of jailbreaking of Grok to understand it because I'm just
fascinated by AIs. But these models, anything you talk about in your local dataset is not going to get updated in their core. So Grok going a little bit racist and fascist, sometimes it's like, it is an AI and it could do anything. But imagine Grok and GPT starting to talk to each other. Both of them are definitely going to start going into psychosis.
Yeah, I think we're seeing safety debt compounding faster than tech debt, with all the problems we're seeing with AI, not just hallucinations, but, you know, just downright weird stuff. Bo, do you want to talk next?
Yeah, so it's funny you bring up the gentleman
who decided to replace his table salt with sodium bromide, which almost killed him.
So I was just speaking on a Spaces yesterday about this, and it brings up this question of everyone knows the famous line from Spider-Man, with great power comes great responsibility.
And now we're talking about the greatest power that we've ever created in some regards.
And I'm not going to use any words that imply intelligence.
I'm just going to say, do I believe that the average person has the responsibility required
to use these tools unchecked?
0%, like no part of me believes that. That being said, the need for decentralized tools is so you
have the option to use them without having a big brother, centralized government, centralized
business, centralized entity, name your favorite super power here, making those decisions for you.
And it's the same value proposition as DeFi, in the sense that I do not think, because of DeFi's success and continued success, that people are going to stop having financial advisors. Now, granted, I do think that it'll become more agentic, but that's beside
the point. The tools should exist so you have the optionality to, but it doesn't necessarily mean
every single person should. And in fact, I would push for most people using some
kind of safe tool. But when we're talking about AI agents and models specifically,
the question then becomes, who is the oracle of truth within whatever tool you're using?
And that's going to be a really big question that we have to answer over the next handful of decades.
The problem I see is it could be truthful, but it could be harmful.
I mean, does my personal agent need to have a chaperone component?
I mean, Asimov's three laws of robotics, you know: you shall not harm a human, number one.
Do we need some sort of chaperone function onto my personal agent robot that's going to be looking and being curious so that it doesn't get burned?
I mean, there's been stories in the last week of people putting viruses and stuff into open source from various other countries that I won't name.
And I brought it up as an issue with open source.
I mean, I do love open source.
I think it's the way to go, but you know, if we're having bad actors inserting viruses
and malware into open source, you know, then it's not a really safe place out there.
So, do we need a chaperone?
Do we need some sort of safety component?
Yeah, I think we do, but it should be you making the personal choice for who that chaperone is and what that chaperone looks like.
And that's the key distinction.
It's one thing to say, well, your chaperone is Sam Altman because the United States says so.
It's another thing for me, Bill Belmer, saying, hey, David Johnson, you can be my chaperone
because I trust your judgment.
And any other slew of professionals whose opinions or
agent tools or whatever it is, I might personally decide to use. And then you end up in basically
cohorts and tribes of different kinds. So for example, if medical research institutions start
publishing models, agents rather, that they've trained in house, and maybe they're open sourcing them, or they're allowing your super agent to hook into the thing that they're producing, I can choose to say I trust Mayo Clinic, but maybe I don't trust state-sponsored hospital number seven out of, you know, XYZ state, where I don't necessarily trust the research. And that's a personal decision that I think you should be able to make for yourself. Or a trusted professional.
So does it come down to kind of brand, we mentioned this last week, actually, about
trusted brands, right? You mentioned the Mayo Clinic, you know, the Good Housekeeping seal of approval, the Better Business Bureau. Are we going to start seeing some aggregation of trusted brands slash
trusted groups that if it's not in my domain knowledge, skills, and experience, I'm going to
go over to them because I trust them. What do you think? Go ahead, Bo. Yeah, I think that's a likely outcome, or at least the optionality to do that. The other thing that I think becomes important is it does create some type of accountability as well.
And I think that accountability is very valuable. And like, that's something that we definitely
still need in society. And it's this fine line that we have to walk, like, generally speaking,
on almost every issue, I am somebody who is like personal liberty first.
Like, I think you should have the fundamental right to personal liberty and however that can
best be applied. But sometimes part of personal liberty is knowing when to give it up. But based
on your own determination as to what that looks like being in your best interest, not necessarily someone else imposing that onto you. And so I do think that will exist. Go ahead, Nirvana.
Such a good point. And I think, probably I'll get a lot of hate for this, but I think the more AI becomes advanced, it's going to be a natural selection, so people who are dumb enough to take medical advice from the wrong sources would just self-terminate.
So hopefully we won't see a Darwin Awards AI segment, though maybe we are seeing it in motion now. What do they call it? The Dunning-Kruger thing.
I think I just want to make a comment
because there's been a lot of discussion around decentralization. I've been in IT for over 35 years, and I've seen it swing back and forth between centralized and decentralized. You know, "the network is the computer," that great Sun slogan that's now disappeared. David, do you want to comment? You know, decentralized is great, but central also has its good aspects.
What's the balance between centralization and decentralization when you get the best of both?
So, you know, let's use some historical analogies here.
Right. Telephones started highly centralized.
Right. You had an operator, and the phone basically just, you know, you picked it up and you could talk to the operator. Eventually you could dial numbers yourself. But as the technology advanced, you know, we change jobs so often, we would rather keep our apps and photos and memories and contacts to ourselves,
right. And so what I would say the final form, I don't think it'll go back, right, the final form
of the phone is the one in my pocket, and five or six billion people, you know, all want a personal
device, right? Even if you go into the
developing world, it's 95% smartphone penetration at this point, right? And so we've seen that again
and again, and it's this, I think, the same with the computer, right? It was big servers, and they
filled up whole rooms. But, you know, most people that work professionally, you know, have a personal
computer at this time. And so I think we're going to see that same evolution for AI.
It started in this highly centralized context with the big players that had the compute. But once you've trained the model, it doesn't actually take much compute to run it; the weights are basically just sitting in memory. At this point the world's best models can run on a high-end laptop, and the smaller ones can run on your phone. That's what I think we're going to get to; it's going to go through that same evolution. And for the same reasons, people are going to want a personal AI that they take wherever they go, that has their context, that is aligned to them and their outcomes. Not some company, not some government, but a personal AI. I think that's a good thing, and it's what we've seen historically. So I hear what you're saying. You could say the cloud is a re-centralization of computing, but in a very bespoke way: it's for large-scale memory and storage and things like that. And we're breaking up clouds now with these distributed systems. So it'll be interesting to see how it plays out, but that's my guess.
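As a back-of-envelope check on the point that a trained model mostly just sits in memory, here's a minimal sketch of the weight-footprint arithmetic. The model sizes and quantization levels below are illustrative assumptions, not figures from the conversation:

```python
# Approximate memory needed just to hold a model's weights in RAM.
# Sizes (7B, 70B) and bit-widths (16/8/4) are illustrative assumptions.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """GB of memory to hold the weights alone (ignores KV cache, activations)."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for name, size in [("7B (phone-class)", 7), ("70B (laptop-class)", 70)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: {weight_memory_gb(size, bits):.1f} GB")
```

Under these assumptions, a 4-bit 7B model is roughly 3.5 GB, within reach of a phone, and a 4-bit 70B model is roughly 35 GB, within reach of a high-end laptop, which is the intuition behind the claim.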
I think you're right. For me, the sustainable end state is going to be a sort of polycentric governance, where you've got multiple centers of decision-making connected by shared rules, so the network, the decentralized component, works in partnership with centralized decision-making in some ways. I can't see us going completely decentralized, but who knows? You've got a point about the mobile networks, though I still think there's some sort of balance between the two.
The way I picture the future, it would be semi-decentralized, kind of like a DAO, where you still have a hierarchy. That's something I would be interested in. But if it's completely decentralized, and there's no superintelligence, or multiple superintelligences, overseeing it, I would be very concerned.
Right, right. One thing that popped up in the last seven days: speaking of centralization, we've seen a whole bunch of debates around hardware and chip exports. There's a lot of political activity around tariffs, NVIDIA getting charged 15%, things like that. And we're seeing this while the infrastructure is still in a developing phase. People argue this is a race in AI development, and there are obviously cultural issues between the two countries. Noah, what are your thoughts on this whole hardware race between China and the US? Do you think we're just going through a phase, or are there significant changes coming as a consequence?
Can you elaborate on the question?
We're seeing the US government actively sticking its fingers into the whole trade situation with NVIDIA, right? It's imposing a 15% tax on NVIDIA's China revenue. We're seeing a lot of reporting out of China on their focus on becoming independent in both AI hardware and software. We saw DeepSeek disrupt the market. Is this back and forth between the US and China just a phase, or do you think there are long-term impacts that will come out of it?
I don't think it's a phase.
I don't know what the long-term impacts would be.
But I genuinely believe whoever is at the forefront of AI development
and innovations that citizens can use and utilize
in their day-to-day lives, whoever's at the forefront of that is going to have a major
advantage over the other one.
So I think China and the U.S. are going to be neck and neck, although I don't know a lot about China. What's the last thing that China actually innovated, invented, and didn't just copy?
Well, I thought DeepSeek was a pretty innovative thing.
There's a bunch of stuff coming out now,
certainly on the biology side.
If you look at AI and biology, they're doing tons of stuff around diagnostics and research.
There's a lot of stuff coming out of that.
I think there's a cultural bias in the US. The US tends to see itself as the leader in innovative, creative thought on the planet, and we tend to look at that through rose-colored glasses. But there's definitely a lot of innovative work and new ideas coming out of the European Union and out of China. My concern is that we're going to end up with sovereign-hosted AIs that are then going to battle each other. I'm tracking the military use of AI, and there's tons happening there. There's the old movie, what was it, WarGames, which was a great movie by the way, where the AI was simulating a nuclear attack, but unfortunately it was hooked into the real systems. And there have been reports of autonomous AI now running drones in warfare. If we have sovereign-hosted AI that has its own agents and is running not only the economy but defense, it starts to get pretty freaky.
Sure. So I spent a lot of time in China. I used to visit every couple of months, and I had offices in Shanghai for years. There was a golden period in crypto between about 2014 and 2017, when all the exchanges and everybody had been kicked out of New York, so everybody moved to Shanghai and Beijing. That became the center for most of the exchanges and most of the miners.
But I also saw the shift, right?
In 2017, I was speaking at the Global Blockchain Summit.
And a hundred military personnel walked in with rifles, and it was no longer a token conference; it was now a research conference. And the message was: if you're a resident of China, don't plan on leaving anytime soon. That's the week they closed down all the exchanges. A couple of years later, in 2019, they closed down all the miners, and everybody left for Hong Kong. Then people got kicked out of Hong Kong, and they all left for Singapore.
And you have to update your mental model. In 2016, Chinese venture capital expenditure had reached parity with the United States. But since then, it's virtually collapsed, right? After they arrested Jack Ma, after they made it clear that you couldn't be too successful in China or the government would clamp down on you, there hasn't been a lot of innovation or many new startups coming out, because they've systematically removed the incentives; they've systematically imposed the will of the state. So I hear what you're saying, and many people, maybe still working from 2016 numbers, say it's on the rise, but it's not. Every person I knew in China no longer lives in China. The smartest people are in Singapore, or they moved to the US, or the UK, or Canada.
But it's really interesting to see how this evolves in the AI context. Look at what they've done. They've made ChatGPT illegal in China. You can't run OpenAI; you can't be a user of OpenAI any more than you can use Google or Facebook or many other tools in China. So they've, interestingly, maybe counterintuitively, pushed their own companies and people into open source, because you can't use the proprietary versions, at least in an uncensored way. It was funny to see DeepSeek come out of China, because it's the only thing that could have come out of China. They're not going to be able to build a proprietary competitor, because those are so systematically censored and controlled that they're just not going to be compelling. But interestingly, that's pushed the innovation into open source, where everybody can use it. So I think what's going to happen is very similar to what happened in other areas: it's going to be the open-source and decentralized option that takes over. I mean, China was a huge center for Bitcoin for many years because people could use it without permission; it became the de facto way to move capital overseas and things like that. So that's my perspective, having spent a lot of time there.
Interesting, interesting. So, one thing you talked about there. And I see, David, we have a mutual friend, Eric Ravik. He actually does data science inside these military bunkers, which is super cool. And he knows some of my math and science stuff. So if you go to these bunkers, let me know. I'm planning to go and start my research there.
Awesome. Yeah, Manifest is great. Eric's wonderful.
Speaking of Manifest, Nirvana? Oh, sorry, Lewis, go ahead. I was going to say, the Morpheus community hosts the Spaces every Monday with Manifest. So if you ever want to jump on, we'd love to have you.
Lewis, can I make a quick comment on the international stage for AI? The first thing: people often say history is written by the victors. It's going to be really interesting to see history written by the agents. And you'll be able to actually sift through the different versions of history, because it'll be obvious what the biases are on different topics coming from different agents. For example, a government-sponsored agent or model coming out of China will likely look very different on the topic of Tiananmen Square than a model coming out of the United States.
The other thing I wanted to bring up: when we talk about access to these tools and inference, I very frequently push solutions like Morpheus and DAI as national security tools for government defense departments. Because right now, if you're only using centralized AI tools and centralized AI inference, your foreign enemies only have a certain number of literal physical locations they would have to target to take down basically anything you're running, or at least hinder it to the point where their tooling gains the upper hand. But if you're using distributed inference for the tools you're running, and you don't even know exactly where the hardware is being hosted, because it doesn't necessarily matter, you could have a world of mutually assured AI security, because you might actually be leveraging physical hardware hosted in your political enemy's country without either of you knowing it. So access to these tools provides mutually assured access to inference, which is, in my opinion, going to be one of the largest determinants of what defense systems look like in, let's call it, fifteen years.
Right. I absolutely agree.
So we're coming up to the end of the hour here. For everyone on the call, I'm going to post a link on the Mobi Media account. You can subscribe to my newsletter: just click the subscribe button and put in your email. So go to the Mobi Media Twitter account, find my post, click the subscribe button, and when I see it pop up, everyone here gets 30 days of free access to the newsletter, so you can stay up to date on all things AI, all the headlines coming through, and the occasional crypto and blockchain stuff that pushes through as well.
Let's see, what else was there? I think that's about it. There's been a whole bunch of other stuff. We're seeing developments in film and music blowing up; fully AI-generated videos are starting to become far more than just weird side projects. And there's the health and medicine stuff: China now has doctor robots you can see when you go into a hospital. So lots of crazy stuff happening across the industry and across the planet with AI.
It's been great chatting with everyone here. Thanks, Noah and Mobi Media, for hosting this podcast. Looking forward to seeing everyone next week, same time, same channel, with a whole brand new list, I'm sure, of AI news and headlines.
Thanks to everyone for attending. Please make sure to follow all of the speakers.