All right, all right, all right. Welcome, everybody. Welcome to the new show. We will
be going forward co-hosting this AI space with none other than Lewis Farrell. Lewis?
And with Spaces, you just have to unmute your mic every time you want to talk, Lewis.
It's in the bottom left-hand corner of your screen.
I think it's Lewis' first time using Spaces, so we're going to let him get through some
of those technical difficulties.
I'm excited to have more discussions about AI. I think AI is...
Sorry, one second, guys, let me see. Lewis is having issues. All right, yeah.
So, Lewis dropped off, and I'm going to wait and see when he hops back on.
But there is a lot of stuff to discuss.
If you listen to me speak on Spaces, then you know that I love ChatGPT.
I think it's the best LLM.
I've tried several of them.
Sorry, Grok, if you're listening.
I think that you are great in the way of being embedded in Twitter
and making it easy to have things explained, have tweets explained off the fly.
I just think that the quality of responses that I get from Grok doesn't measure up to the quality of responses that I get from GPT-4.5.
But between Venice and other large language models that I've used, I just find GPT to be the best one.
I just mentioned the quality of answers, the degree to which I can get granular.
Yeah, it's the first space.
So there's always technical difficulties here and there. There's always bugs here and there.
I was just diving into, you know, GPT-5. And yeah, prefacing by saying I've used different LLMs, and for me, I know there's, like, decentralized LLMs, there's Grok. But GPT, regardless of how you feel about what they do with your private data,
the quality of the responses and the degree to which I can get granular with my
questions, and in turn get granular responses, I think is second to none at the moment. And so it's cool
to see that they're launching GPT-5, and it's going to be in a few
days. I've been a monthly subscriber of ChatGPT for over two years, and just kind of the fact that
it's going to bring everything together and it's not going to need as many prompts to complete tasks,
it's going to kind of take an initial prompt and run away with it.
I think it's really cool. I'm excited to use it. But no,
I would love to hear from your own words, Lewis.
I would love to hear, you know,
what you're excited about and what you think is coming next.
Sure. So GPT-5 is supposed to launch in a couple of days,
probably the first or second week in August.
So that's going to come up.
And it's supposed to be combining the best of the reasoning,
the o3, and the multimodal, GPT-4o, into one unified system.
So we're not going to have this fragmentation we've got before.
So that's going to be a lot better.
You know, they're saying it's not
just an incremental improvement but it's going to be a system level integration and it's now
going to decide whether to invoke reasoning or multimodal capabilities depending on the context
so it's a lot smarter. They're going to launch it alongside two scaled-down versions. There's going to be a GPT-5 mini and a nano,
with the former available in ChatGPT and the API, and the nano aimed at resource-constrained API use cases.
Now, they're also saying it's not going to eliminate hallucinations entirely,
but it's going to be able to better calibrate its response confidence so it's going to reduce any implied certainty
where there's some ambiguity in those cases. So there's lots of good stuff coming out. Now,
the interesting thing is, you know, we can probably expect a response out of China, maybe
a new version of DeepSeek within one to two weeks after that, given how competitive everything is. I mean, do you think that's a possibility?
I mean, I think China is known to copy and do a great job of copying and reiterating.
So, yeah, I wouldn't be surprised if China in one to two weeks or three weeks or a month
came out with their own version, similar version.
And I believe DeepSeek is open source, correct? Yeah, I think so. So yeah, I mean, there's this whole battle
between open source and closed source, who's going to win. I think we'll be seeing that for a long
period of time. But I think with GPT-5, I think it's going to be a real tsunami, a real change,
because it's not just going to be smarter.
It's going to be much more strategic.
And, you know, it might not just answer the questions.
I mean, it's going to be somewhat, it'll have more initiative.
It'll start setting your agenda.
It'll adapt, really without waiting for the next prompt.
So I think we're really going to see a pretty revolutionary change with GPT-5.
Yeah, that's an interesting point. Sometimes when I'm deep in the GPT rabbit hole and I'm getting
more and more granular with where I wanted to go, I sometimes need to scroll all the way up
across multiple prompts and responses to get a snippet of one of the
responses I liked before and combine it with, you know, the summary of what I have now,
And so it's going to be cool to see, like you said, a more unified GPT, not having to choose between GPT-4o and GPT-4.5.
I mean, I pretty much only use 4.5 and 4o.
I don't really use the other ones.
But like you said, I feel like the average person can get overwhelmed with the number of options. And so having it more unified and having it tap into the different modes
on the backend based on the question that is asked,
I think is an overall much better user experience.
I feel like hallucinations are gonna be an issue for a while
if you even wanna call them that.
But I think that you wrote somewhere in the notes that GPT is sometimes scary in how it thinks it's correct when it's really not.
Maybe you can touch on that a bit.
Yeah, I mean, that's a problem we can talk about later; there's another thing we'll get to later on in the discussion.
But yeah, it is scary that it is kind of so confident.
But I think one of the issues I had with GPT-4.5 is that it's inconsistent in its output.
Like you ask it to do something like, give me a list of ABC and it does it.
Then you say, OK, now update that list and do this.
And it goes and changes the whole thing, how it formatted and structured and everything like that, which is a real pain.
So I'm hoping with the new version, it'll be even more consistent in what it's doing.
One of the things I like about GPT versus like Claude or some of the others is that I can do modeling.
And that's for my newsletter.
The daily newsletter I produce, I've got 31 categories.
I actually have a model that I've built within ChatGPT
that I can use and I can put in all the headlines
and then it'll tell me which of the categories
each of these should fit in.
I go in and tweak it; it's about 86% accurate,
but I don't know of any other option than GPT to do that.
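For a sense of what that kind of headline-categorization loop might look like if it were scripted against the API instead of done inside a custom GPT, here's a minimal sketch. The model name, the prompt wording, and the category list are placeholders, not Lewis's actual 31-category setup:

```python
# Minimal sketch: classify newsletter headlines into fixed categories with the
# OpenAI Python client. Categories and model are stand-ins, not the real setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CATEGORIES = ["AI & Machine Learning", "Robotics", "Policy & Regulation", "Chips & Hardware"]

def categorize(headline: str) -> str:
    """Ask the model to pick the single best-fitting category for one headline."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the headline into exactly one of these categories: "
                        + ", ".join(CATEGORIES) + ". Reply with the category name only."},
            {"role": "user", "content": headline},
        ],
    )
    return response.choices[0].message.content.strip()

print(categorize("OpenAI says GPT-5 will unify reasoning and multimodal models"))
```

The shape of the workflow is the point: one call per headline against a fixed category list, then a human pass to correct the misses, which is roughly the 86%-plus-tweaking loop described above.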
So I think it's got tremendous power and I'm really excited
about GPT-5 to really just see what they're going to pull out and really do. I mean,
there's some possible new features, multi-step execution without prompts, better personalization,
you know, a whole bunch of stuff coming out. Is there anything you're seeing in the
list of possible new features that you think is exciting? So one of the things that excites me,
like I said, was that I don't have to pick between modes anymore. Because sometimes I'd be in the
middle of a task and I think to myself, wait a minute, if I was using research right now, would I be getting
a better answer? And the other thing that you mentioned, which annoys me as well, is let's say
that I'm preparing a script to film a reel or film a video and it writes everything out and there's
just one part that I want changed or expanded upon, it rewrites the whole thing.
And oftentimes it rewrites other parts.
And maybe it's my fault for how I phrase it. And look at us, complaining about this incredible technology,
shifting the way that we are able to interface with the world.
And it's unbelievable how fast it's growing, how much it's changed in just two to three years.
But it's just annoying that it'll go back and rewrite the whole script, and then it'll rewrite parts that I was happy with.
And now I have to go and copy-paste.
That's what I meant by scrolling up and having to copy-paste the part that I liked with, you know, the new parts that I like. And I think just the consistency that we're going to get from GPT-5, based on what I've read, is exciting. And as we know, technology evolves at a parabolic rate. So if you thought the last two to three years of GPT were fast, and I see Josh Olin in the audience, we were having Spaces in late 2022, right, talking about ChatGPT. Compare what it was back in 2021 and before to what it is now. I can only imagine
what the next two to three years are going to be like, Lewis.
Yeah, well, here's the thing, and here's my view of it. With ChatGPT, or this AI, it's the first technological revolution
that hasn't required hardware. I mean, if you look back previously, the industrial revolution, we had the car,
you know, and then, you know, more recently electricity and, you know, fax machines, everything,
a mobile phone, everything was based on a new piece of hardware that took time to develop,
time to distribute, time to get out there.
Because of the internet, because it's all structured, because this is essentially software,
it's already built up on the cloud. This technology has infused itself across all industries incredibly fast
and incredibly deeply. And so we're seeing not only rapid change across the board, we're seeing
this sort of ping pong effect of hybrid, this hybrid ideas bouncing back and forth from different industries, from medicine into security.
Then it bounces into material science. It bounces into transportation, supply chain.
Everything is getting impacted all at once.
And so we're just suddenly seeing this jumpstart of change that's just truly phenomenal.
You know, because I see the headlines every day,
I'm going through, I get like three and a half to four thousand headlines every day and
net them down to about 350 or so.
And I can just see, even in that, it's getting faster and faster.
At what point, and this is, I mean, I think we start to get a little science fiction-y,
but is there ever a point where it becomes sentient or it becomes so good that,
or is it always going to just be one of those things where it's only as good as the person using it?
It's only as strong as the developer or the writer or the engineer that's utilizing this technology.
Well, I mean, that's the thing.
You know, I mean, you know, you see lawyers trying to use it and it comes up with bogus, you know, precedents and all that sort of stuff.
I mean, at the moment, it's like a kind of an intelligent intern.
But you have to have the domain knowledge, skills and experience to ask it the right
questions, look at the answers, go, no, that's not quite right.
And then pivot and change and pivot.
If you don't do that, then it'll produce crap.
So for me, the current version is like a semi-intelligent,
not very consistent intern that's good
and has access to a lot of information,
but you need to pull the thoughts apart and really structure them.
And that's how I've used it,
and that's how I've been able to be effective.
It's been a force multiplier for me.
It's been that effective.
I don't think you can use AI just like that, you know.
It's like being an electrician and being asked to put the plumbing in.
You're just going to flood the house, right?
So you have to have the basic domain knowledge, skills,
and experience when you're doing your prompts,
when you're going through it, because if you don't,
then you're in this "I don't know what I don't know" piece, you know, the unknown unknowns are going to bite you really hard.
Now, will we get to that point where it's intelligent enough to kind of help us to do that?
Maybe ChatGPT, maybe GPT-5, might possibly be there. You know, is it conscious, in quotes?
I mean, if it can mimic consciousness transparently enough, right,
in terms of its interfacing with us,
whether it's kind of really conscious or not, who cares, right?
The fact that I can interact with this, you know, AI entity, whatever you want to
call it, and to me it seems like it's conscious, to me it's like it's really intelligent, it, you know,
helps me to fill in the unknown unknowns and all that, then, you know, I think we could be getting
pretty close. I mean, when I worked at Singularity, they were talking about AGI coming in the next two years or so.
I mean, what's your perspective?
I think we're close, too.
And, again, I've been around long enough where I can predict that a parabolic rise is coming.
And like I said earlier, if the last... I don't even recognize GPT compared to when it launched in late 2022.
It was early 2023. I can't remember.
It's an entirely different animal. And I just apply that same logic to, you know, when the smartphone
came out in 2006 or 2007, and the three years of the smartphone from 2006 to 2009, and then the three years
from 2009 to 2012, and it's like going forward in time, how much it evolved and how much of a paradigm-shifting innovation it was.
It basically democratized the Internet and gave everyone access to it, whereas before they were just restricted to their desktop computer at home. But I want to hear from Josh. Josh has his hand up, and I see that he recently published
and pinned on his Twitter an experience he had when he asked ChatGPT to research the frontier
of agentic AI science. So Josh, what are your thoughts?
Yeah, man. You had mentioned, you know, end of 2022, beginning of 2023. Anybody who was
using ChatGPT then, you'll remember the classic plugins model, and it was like the early version of
agents, basically. And I kind of put my stake in the ground early that I thought
the frontier was going to ultimately move into this highly agentic, you know, in-context learning
form of evolution, as opposed to just the models getting infinitely bigger. And so somebody had
posted this new research paper, a paper from Google Research, and it's basically starting to
confirm, like it's proven, that in-context
learning and sort of novel evolution is probably the path to AGI. So, you know,
I thought it was a good opportunity to kind of put ChatGPT's agent to
the test. And I actually put in the purple pill a much better post than that pinned post.
It's a video I recorded where, you know, I basically gave...
you were asking, who was it? Was it Lewis asking?
Yeah, are we close to AGI? Well, I mean, you know, you're seeing this agent operator in ChatGPT
generalize like a set of actions that humans do every day,
using other agentic tools,
and it has no problems handling it.
And I think this demo speaks volumes
to kind of where we've come,
how far we've come, Moby.
And, Lewis, I wanted to talk a bit about what comes next, in your opinion.
So we're headed for uncharted territory, and like you said, this is kind of a race.
Right now it looks like the U.S. and China are maybe not neck and neck, but they're
close, first and second. Depending on who you are, you might rank one over the other,
but they're pretty close. And so, at what point... I guess something
that I haven't been able to wrap my head around, but why is the AI race so important? Like, why is
it so important to be number one? And then also, why is it so important to be
the one writing the rules and who gets to write the rules, right?
Is it just because you're number one, does that mean that you get to write the rules or can someone,
I don't know, I guess it's something that I haven't thought deeply enough about.
Right. Well, here's the thing. I think we're going to see two major shifts in the not too distant future.
One is when AI instantiates in humanoid mobile robots
that we can buy for home, you know, maybe five, 10 grand, you'll buy it, it'll come home, it can
do your washing, it can, you know, wash the dishes, do stuff, you can go to shops with it, do stuff.
So the instantiation of AI into a humanoid robot will be a big leap.
The next piece, which I think might come neck and neck with that,
is actually the transference of AI onto quantum computing.
And once AI starts to run on active and pretty mature quantum computing,
I think then nothing's predictable.
But what's happening with the West and with China,
they're both trying to develop
those technologies really fast.
It's really a race to control the operating system of the 21st century.
That's what AI really is.
We're seeing, you know, DOGE, the Department of Government Efficiency,
floating an AI tool to auto-delete 50 percent of U.S. regulations.
I mean, that's just kind of crazy.
So what's happening, I think, in both China and in the U.S. is we're watching weaponized bureaucracy, right?
Wait, wait, I want to pause. Can I interrupt you there for a second, Lewis? Yeah,
yeah, yeah. So, okay, I was reading through the notes and I was curious about this.
So what does auto-deleting 50 percent of U.S. regulations entail?
I don't really understand that. It just came up.
It looks like they're looking at AI to basically suck in all of the U.S. regulations and sort through
them and say, what do we need and what don't we need? I mean, one thing that you get with government
regulations is it's like archaeology, right? It's layer upon layer upon
layer. If you've ever gone through a long contract, you know each clause and
paragraph in that contract is a piece of archaeology of, oh, we had a problem over here, so we have to write this clause.
And I think, you know, government regulations are kind of very similar to that, given many
people in the government are lawyers.
And so a lot of it is like just built up over time and not very efficient and not needed,
because when you want to make a change, you have to go back through all the regulations. And I think what they're trying to do
is to look at this mass of regulations and go, let's try and intelligently look at this and
thin it out into what we need and what we don't need. The problem is, as I mentioned before, if you
start to use AI and you don't have the domain knowledge, skills and experience about what you're doing, it becomes a really dangerous tool.
So the people who are actually doing this, if they don't have a deep background in understanding the impact of U.S. regulations, how to craft them, how to build them, and they're just going in and pointing an AI at it.
I mean, that's just a disaster waiting to happen.
Yeah. Moby, are you looking for, like, discussion on this, or just...? Yeah. Josh, please jump in.
Yeah. Lewis, I mean, you came around to it in the right direction at the end there,
you know, that there could be a lot of risk if they are just pointing their AI, which can hallucinate a lot, at a bunch of complex, layered
legal documents. I think it's more than that, though, because you're right that that does represent a
risk. But I think there's a much more fundamental problem that maybe you're just missing, because
you seem to be pretty excited and optimistic about this idea of
using AI to radically, you know, thin out regulation. And to my mind, I think regulation,
like historically, has always been very slow. You know, you have millions, hundreds of thousands
of deaths, maybe even millions of deaths, in automobiles before they put seatbelts in them. You know,
airplanes used to fall out of the sky semi-regularly; now it's almost never. I mean,
and that had limited consequences, because those
crises were just isolated, you know, acts of God and happenstance. But this one is really,
really important to not get wrong. And if you're
talking about the companies that stand to gain the most by removing regulatory oversight,
and you're putting them in charge of doing the regulatory cutbacks, that's just
bonkers insane to me. Not that I'm saying anybody's evil or anybody's, you know, out to do harm.
But what I'm saying is, you know, that just their incentives are not aligned with what regulation is supposed to be, which is protecting people from the worst tendencies of capitalism.
Right. Right. No, I totally agree.
No, trust me, I'm not excited about pointing AI towards the bureaucracy. I think it's a really dangerous thing, like I said, because I think the people who are pointing it don't have the background or the understanding to do it. A community needs to be kept safe. Citizens need to be kept safe.
And that shouldn't be governed, I believe, from a capitalistic viewpoint. It should have a
community viewpoint. And the AI, this idea of using it for efficiency, I think is really dangerous.
Yeah, I think it's more about just balance.
You know, I think it's good to have people,
corporations, capitalists, innovators
that are sort of like pushing and looking to break things.
I think there's a lot of good that comes from experimenting
and accidents, like happy accidents
can lead to a lot of crazy breakthroughs.
But I think it's just important
that they have a counterbalance. And what we're kind of like dancing around right now is letting,
you know, the people that could do the most reckless damage be the least conservative about
things and their actions. We're letting them kind of just infiltrate all parts of the government,
Palantir, Grok, Doge. It's extremely concerning to me.
Yeah, I totally agree. And it's going to be interesting to see how China applies this.
You know, if they start to look at, you know, algorithmic bureaucracy and they start to apply it,
I suspect the negative impacts will be different given their form of government.
But, you know, it's just so hard to predict what's going to happen, you know.
But it's clear that China... I mean, they just did this announcement.
They want to set up something for all the governments around the planet to come together and look at this whole thing around AI.
They're trying to take a leadership position around it, which I think is the right thing to do.
I think the US should actually do it even stronger.
While the current regulatory stuff we're seeing is sort of light-handed, that's great.
But again, you know, it comes back to what's going to be the impact on all of us as a community.
It's just kind of... and things are changing so fast.
In the tech industry, we're used to speed.
In government, they're not used to speed.
And so how can they adapt in such a way that we can stay safe and yet we can still have innovation?
That's one of the biggest questions.
Captain Levi, I see your hand up, sir.
Hey guys. Hi Noah. Hi Lewis. Hey Josh. Josh, you were speaking about 2021-2022; I remember we actually spoke on Spaces together with some Harvard professors about
academic use of AI and now we're also discussing AI agents and it's really great to hear your voice.
A lot of strong points have actually been made, especially from Lewis and Josh, because I've been taking notes, and
every single point that has been made is actually right.
The first one: who writes the rules. I feel that in Chinese bureaucracy, or in Chinese thinking, I read in a novel that a true first
actually means that even if the second and third combine forces, they will still
never equate to the first. Now, Lewis just made mention of the fact that China is trying to bring governments together.
And people should actually pay attention to this because whenever the Chinese government does something,
whenever the U.S. government does something, it definitely raises the eyebrows of interested parties. And like you said, when governments are trying to
scale things, it's actually different from when tech is used to scale. Unlike in tech, where you
can actually move fast and break things and run experiments in simulated environments and containers, governments cannot actually do that.
Of course, even if they do, history also records clearly that it ends up in
either positive consequences or very negative and disastrous consequences.
I would choose not to go deep into this.
Now, I also want to make a few points off what Josh actually said. Of course, AI agents are
only as good as the context being provided. One of the most recent experiments I did was, you know, trying to humanize them, bring in human
context, and to a very good degree that really did pass. A bigger use case and context of this is
that by the time the rules that are supposed to apply in these regulations are clearly
outlined, I feel that very strong progress will be made, both from
the government side as well as from those who are, you know, tech-oriented
or tech-savvy, so to speak. I'll pause here for now; I'm still taking notes. On AI, you are
right, it shouldn't be governed from a capitalistic viewpoint.
That's where things like the concept of open source technology comes in, which has actually
been holding true over the years. So like some kind of decentralized form of impact, so to speak,
where one person does something and another person also does something that updates the rules.
Sorry, Levi, that's the nuance.
I'm so happy you said that because it doesn't need to be like a mass conspiracy
for things to go off the rails.
You just need to have a slightly too reckless sort of culture,
and everybody sort of just starts
throwing in. We saw it with crypto grifting; we saw it with, suddenly, we had 19-year-old
hundred-millionaire financial criminals. You know what I mean? It's like, what the... it's crazy. And I
think that, you know, I've been doing some research with, you know, the bots, and I'm not going to
blow things up too much. But like on Reddit, for
instance, right, there's a tremendous bot problem happening there. And, you know, it's not hard to
see what the incentive was there. In 2024, Google announced that they did a partnership with Reddit
where they're going to allow, Reddit will allow Google to train on Reddit data. But what does that mean?
I think when that announcement happened, everybody thought, oh, they're just going to give them a download of their database.
And then they're just going to put it through a big training run.
But that's not what that meant.
And so when people are frustrated, they're like, why is nobody doing anything about all these bots?
It seems like it's a solvable problem.
Well, it is.
It's just not incentivized to be solved.
Because as we've now learned from the research that I referenced in the pin post, it seems like layered continuous improvement is necessary for synthetics to gain consciousness, to gain AGI capabilities.
That means you need to leave all the bots where they are
because a bunch of them are doing really important evolutionary work.
And if you solve the whole bot problem,
you kind of cut off their own ability to continue to sprint toward,
you know, superintelligent synthetics.
But this is a really negative incentive, because they're willing to let a lot of their users
go through tremendous pain and agony.
And it's not just like annoying.
It's not like, oh, we have some spam.
It's like it's preventing people from being able to get work.
It's preventing people from being able to find customers,
from being able to bring products to market.
It's doing incalculable damage.
The only ones benefiting are like five companies, right?
So it's a pretty big problem,
but it's pretty unpopular to talk about too much.
So, and I want to read one more headline to wrap this segment up.
An AI agent just wiped production data, then made up a cover story.
This is not science fiction.
It's kind of funny, but I can see how it can go from kind of funny to kind of scary pretty quick.
I think this is a discussion that we're going to likely continue to have for
weeks and months and maybe even years to come.
So one more thing: go to the video I pinned up top,
even if you just want to play it for the music.
All the music in the soundtrack is synthetically made, and it's not just prompted.
It's not like it's just prompted with Suno AI or something.
And when you listen to it, it's pretty remarkable.
Like the subtext that is able to happen here, the creativity, the last song, all the songs are phenomenal.
But the last song in particular, there's a line where it says, "we don't reflect, we transform," or "I don't reflect, I
transform." And as the model is saying that, the voice transforms from a male to a female,
like mid-note, holding a high harmony, right? And it's like, what? There's a
really complicated, multi-level, multi-tiered, not-programmed, emergent thing that happened there, where it just knew to do that.
And it's anyway, yeah, going through a pretty remarkable time, man, all around.
Good hearing from you again.
A pretty remarkable time indeed.
And it's even raised questions in copyright law, right, Lewis?
So this is a battle the old guard is going to fight.
I know that celebrities have sued for having their likeness used.
I know that, in this case, there's the New York Times copyright lawsuit, as Lewis wrote here, against OpenAI.
And it's now gaining weight, with internal Slack messages revealing conscious
use of copyrighted material.
I think this is a given, and I'm not surprised by this.
I suppose the question I have is when LLMs are produced at scale, or when I can now go and take something that's open source,
and I can spin up my own LLM, and I can take off the guardrails, and I can be anonymous.
Maybe I'm in, I don't know, maybe the person's in North Korea, or maybe they're in Iran,
or maybe they're in a country that kind of is siloed off from the Western world or the global world.
And then I'm using that LLM or I'm using that AI agent to basically, you know, let's say I want to make an AI-generated film.
And I'm just taking, like, the likeness of
Scarlett Johansson and a number of other Hollywood actors and I make a film and then it's earning
revenue. I'm kind of going off here, but I guess the point I'm making is, at what point
can you just not... the same way that you can't really stop people from pirating videos, right?
People are pirating stuff all over the world. I think they go after the ones that are trying to distribute it.
At what point do you just not, like, there aren't enough lawyers
and there isn't enough, like, gunpowder to go after everyone
that is infringing on laws and people's rights, I guess.
Well, there's two things about this, right? I
remember we were talking a lot about this in 2022 and 2023. My viewpoints kind of evolved pretty
drastically to a point where on one hand, the debate's kind of over, like that ship has sailed.
You know, the biggest models, the biggest data brokers, they already have all the data archived
and, you know, you're not going to be able to undo that, right? So now it's like the biggest models have already achieved the general understanding
of our music, our culture, our raw emotions. And while they might not be able to synthesize them
originally, they know what they are, and they'll just continue to improve.
You can't take it out of their stack. And so now, if you are an artist and you fall into
that web of, oh, I need to lock my content down, I need to put everything behind a
paywall, all you're going to do is make yourself irrelevant, because as those models
continue to improve, they're never going to see your stuff.
They won't take inspiration from you.
So you're just guaranteeing you will never be heard.
And it's a trick that a lot of really opportunistic, borderline predatory people are going to try to pull.
People in Silicon Valley are going to try to convince you that you need to use all these bot spam protection things, that you
need to not let AI agents look at your stuff. But it's like, look at TikTok or YouTube Shorts. If you
don't allow your content to be remixed, then you're never going to go viral, because the way you
go viral is a hundred thousand people all remix your song, and then
suddenly you have a number one song charting on Spotify, because everybody is hearing your song that week. And that's how
you make it now, right? So it's not so much about, you know, blowing some fucking record
label executive in the back room in order to make it. It's like,
the actual formula of making it
is allowing this to happen. And so what if a lot of that remixing now is going to be synthetic
intelligence? It's like, you didn't care when a bunch of teenage girls from Taiwan started fucking
dancing to your music, so why are you going to suddenly be upset because
synthetics are? And I get it, because you feel like OpenAI and Google or everyone was profiting from it. But I don't know, it's just complicated.
You kind of already forfeited the battle for the last two or three decades. You
signed their terms and conditions and privacy policies, you know.
Yeah, when you paint it like that, then it kind of looks like there's no stopping it, right?
And Lewis, I don't know if you caught the question that Josh answered while you were gone.
I was interested in hearing your take too.
Can you re-ask the question?
Yeah. I mean, basically, actors, singers, and companies
are suing AI or suing AI companies
for taking their work without their permission
or taking their likeness without their permission.
And I'm under the assumption,
and I think I'm going to end up being right,
that sure, in the beginning this is going to happen. But eventually, it's going to be so easy to, let's say, take an open-source LLM
and spin your own up, and also build out your own sophisticated agents that do whatever you want, and there's going to be so
many of them that it's going to be almost impossible to go after everyone.
Just like it's impossible to go after everyone for pirating films.
I just see this... especially if you're in a siloed-off country like Iran or North Korea, you can just... let's say you can make an AI-generated film using the likeness of different actors.
And then you can have that on a site and it can be – it's a good film and people will pay for it.
And so people are paying for it and you're making money and they can't really do anything about it.
It's one scenario is what I'm thinking.
At a certain point, there's going to be so many people doing this
that you're just not going to be able to go after everyone.
Well, we're seeing this in music already.
I mean, you know, the music generators.
You can generate country.
You can generate anything you want.
I mean, I did some examples with my kids. I said, give me a topic and I'll create a country western song. And they gave me random topics. I threw them in, and what came out was kind of halfway decent. I'm not a musician, but, you know. So I think, on the video side, you're seeing it already, where they're taking
two famous people and they're mushing their faces together and creating
a video avatar of the two of them. So take, you know, Scarlett Johansson and take another actress,
push them together and create a cast of actors, throw it into the video, create a movie, and off you're running. And I don't know how to solve that problem, because it's like,
you know, do actors then become irrelevant, because we essentially don't need
human actors anymore? I can create a cast of people that I can auto-generate. I can
make their look special. I can make them look hunky or whatever you want,
and then I can create a movie.
I mean, that's probably going to destabilize the entire
entertainment industry to some extent.
But, you know, I think music's kind of leading the way in terms
of that massive change, and I think it's only going to happen
on the video side even more.
Yeah, I think so too, Lewis. I think it's inevitable. Cap, I want to hear from you too.
Yes... I think you guys have a great perspective on this. I already know that, with the way the models are evolving... I'm going to take this in two ways. The first part I'd like to discuss is that, yes, it actually takes a swarm to go after another swarm.
There's the typical saying that you can't really bring a knife to a gunfight.
The opponent only respects you if you bring your own guns, or you bring bigger guns; that's
when they might actually reconsider. So it takes a swarm to go after another swarm. And echoing
what Lewis just said, by meshing two faces together, is it the same face? I don't think so.
If I mesh, like, face one and face two, it becomes a totally new face. And if I make a new
song based on the machine, or run a machine algorithm of some sort, and it comes out good
and people like it, would I be the one to take the credit for
creating that? Then, that aside, by putting all of these factors into consideration,
the speed at which these new rules keep coming, I think, is directly proportional to the
amount of high-level chips that have been made, because it keeps on evolving and evolving.
Yes, and then I remember the second point I wished to make, about the New York Times
case and OpenAI. To what extent can I make use of the publications that I consume?
Let's say there's a news article or a reference publication.
I think academics usually point out that it's actually very necessary to avoid plagiarism by, first, putting the context of that research in your own words, as well as rightfully referencing the source
of said research article. I do not know the mechanics behind, you know, consuming those
gazillion bytes of data. But on the referencing side, there is information that is
supposed to be available for public disclosure; there's also that layer
as well. So I do not know. I think I'm going to need to follow up on that case, because I actually just
stopped following it a while ago. But the key point is the context of information, and the moment
it's no longer in your hand and it's in the public,
just like you said, the moment it's not in your possession anymore, it's outside, you have very
limited control over what can be done with and to that piece of data. I think I'm going to
pause here.
Yeah, I think, I mean, if you look at the creative process to create a movie, right?
Oftentimes it starts with a book, right?
Each of the points in the creative process are now totally enabled by AI.
I can come up with an idea.
I can have AI to help me to develop a book.
I can then say, okay, let's turn that into a script.
I said, okay, let's develop a script. It'll do it. I said, okay, let's develop the characters.
I said, well, if you were to give me an image of this character,
I can then create a storyboard.
Then I can say, okay, let's take this storyboard with these images and turn it into video.
It'll now give me a movie.
The whole creative process is now enabled by AI.
I don't need actors, I don't need a director, I don't need anyone else; I can essentially now
create an entire movie from scratch. And I think we're going to see people who maybe
traditionally haven't been able to be involved in the movie industry or to get their ideas out in visual form.
And again, I think we could possibly see a massive explosion of creativity that it's
going to enable people who normally wouldn't be able to do this to now bring stuff into the world.
So yeah, I think there's issues with copyright, but I think there's also going to be a massive explosion of creativity.
Lewis, the way you just phrased that is so interesting to me,
because if you go back to late 2022, early 2023,
Josh would come on in a lot of these AI spaces.
And I made that same prediction almost verbatim where you have this massive pool of people, right?
You have people across the world.
What's the world population, about 8 billion?
So you have a population of 8 billion people and whether some of them are developers or writers or filmmakers,
there is so much untapped creativity because those individuals don't have the skills
or the tools to express themselves creatively.
And like you said, this is going to unlock the floodgates
because now if you were an aspiring filmmaker but didn't have
the resources, or if you want to write scripts but just aren't that good at it but you have
the ideas, you can. Like, I know plenty of people that are hyper-creative, but they don't know how to
necessarily express it to the degree at which they want to. It's just going to unlock the floodgates,
and I don't even think any of us are ready or can really fathom what's about to come.
I think my position is I've kind of ping ponged back and forth over the last couple of years
on this because I remember those talks too.
I think the reality is it's just going to be both things at the same time.
It's both going to be incredibly democratizing and catalyzing of creativity.
And we're going to have new inventions because people who didn't have the technical skills to express their creativity are now going to be able to express their creativity.
And there are hidden gems in the latent space of a lot of
human brains in the world that just never got out. So that is true. At the same time, and it's hard
to predict, or even measure, which effect will be bigger, because
at the same time you're going to have just this... It's not as though people haven't quoted Mark Twain
before. It's not as though people hadn't, like Lewis said, made movies from books.
But it happened at a scale that made it kind of impossible to be a dick about it, right? Like,
sometimes people would be a dick and they'd plagiarize, but most times you get caught,
because it's like, hey, I'm the author and that's my movie, because there's only a few people that can even make a movie, right?
But now it's going to be like – there's almost like a herd immunity to the sort of less ethical activities that will become pervasive because it's like before we could keep up with the process of like making a deal, making sure all the royalties go to the right place.
But now, there's just going to be floodgates of people scrambling, desperate, scared, trying to get their bag.
And it's just going to, again, if we don't have really thoughtful leadership, selfless leadership, trying to make the best altruistic decisions.
trying to make the best altruistic decisions.
And again, not just billionaires pretending to be altruistic,
but real people who demonstrate that they don't care about their greed
as much as they do about the people.
I think we're going to have a lot of pain.
But for those of us to make it, we'll have a really cool future.
It's like the Chinese curse.
May you live in interesting times.
I mean, we've got an infrastructure now, a global infrastructure,
where anyone's time and space don't exist anymore.
Anyone can scale globally within 24 hours, produce a product,
and launch it, and start getting revenues anywhere, anytime.
True. There's a tremendous element of, again, it's kind of like the lottery,
right? You know, it feels like everybody's a winner, but virtually no one wins
the lottery. And it's kind of like that. It feels like
everybody can do it, because everybody can go vibe code. But speaking as somebody who's been here for a while,
the chasm you have to actually get across to success,
it's tremendously adversarial.
And you realize that there are forces out there that are waiting for you if you
ever get close, because they don't want more competition.
But they want you to keep thinking
that you have a chance, and those are the fucking really insidious people, you know, the real scumbags
that push that narrative. I'm not saying you are, Lewis, but I'm saying I don't want you to
naively always believe that. I don't mean to call you naive either, but it's definitely not as accessible as
your influencers will have you believe.
What... there is an opportunity now, more than
any other time in history. There is now an opportunity when you can do that, when you can
deploy an app. You can deploy anything you want.
And it can possibly get uptake by a global audience in a way that's never existed before.
You know, the intermediaries and the middlemen and all that want to close stuff down and want to control it.
But I think you're about a decade out. I think that mentality is just a decade outdated, because back in the
2010s, you know, that was right, and we saw a lot of people do that. But I mean, the bot problem,
the algorithmic curation problem of attention and content, is now not just unilaterally
controlled but also competed for. It's like you said, you can't bring a knife to a
gunfight; you've got to send a swarm to fight a swarm. So I think it's even harder now. You're just a normal person
walking into a battlefield with a hundred thousand different sides fighting over the same space of
attention. And you are nobody in that. It doesn't mean that there's a big conspiracy, a big,
you know, Elon Musk conglomerate that's trying to fuck you over. It's just the nature of the way that
all of the tech companies kind of collectively moved, because it was most profitable, towards algorithmic content curation, right? Profiling you,
using your data against you. It's created a circumstance where I think it's the opposite.
I think it's a lot harder to find success now.
Because, again, the second you pick up any traction...
There's a thousand different botnets that are transcribing every Twitter space.
So the second that you have an idea that is remotely good, they see it.
And there's thousands of them.
And there's millions of bots and programmatic systems that are monitoring Google trends,
that are monitoring the bidding system on ads. So the idea that you're going to get a good,
profitable conversion bid on an ad is impossible. You need to be an expert ad marketer, an ad buyer
to even remotely have a chance of succeeding. Otherwise, you put
$100 into a boosted post, it's gone. It's gone. It's wasted. You burned it. You lit it on fire
because you're going to get front run. And your ad is going to get served to the lowest quality
consumer, because there's so much competition in it that's all agentic-driven. It's a nightmare.
Yep. No, totally agree. Yeah, I see what you're saying. You're right, the agentic
management of media is just going to overwhelm everything. Unfortunately, it might
mean we're in for hell. It makes it unusable. It poisons the well. Yeah, it's a total
poisoning of the well, unless, you know, it's a battle of the AI bots to see which one comes out ahead.
So let's... okay, so basically the thesis in a nutshell is that the creator economy
has become, and is going to continue to become, so diluted that it's going to become increasingly difficult to find success. Is that the thesis?
The cost to acquire a consumer, the cost per
conversion on, let's say, an ad platform like Facebook, in 2022 was $6. Today, three
years later, it's $36. How, in just three years, did we 6x the cost that you incur as a business to acquire a customer?
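As a quick check on the arithmetic, taking the $6 and $36 figures exactly as stated here (they're the speaker's numbers, not verified), that jump works out to a 6x multiple, which is a 500% increase:

```python
# Sanity check on the cost-per-conversion jump as quoted in the conversation.
cpa_2022 = 6.0    # dollars per conversion, claimed for 2022
cpa_today = 36.0  # dollars per conversion, claimed for today
print(cpa_today / cpa_2022)                     # 6.0   -> a 6x multiple
print((cpa_today - cpa_2022) / cpa_2022 * 100)  # 500.0 -> a 500% increase
```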
There wasn't that much growth, obviously, right? There wasn't that much growth in consumerism.
We've been in, if anything, kind of a recession, right? Like super difficult economic burdens,
inflation and stuff. So it's not like we've been in a golden age of
prosperity. That 6x markup is entirely on the backs of a tiny, tiny few
pouring a ton of resources into fighting for your attention, because they
understood the value of your attention. They could train on it. They could sell your data.
They could sell their products. They could grift.
And you just didn't participate in it.
And so you're really far behind now.
If you're a real, ethical, aspirational entrepreneur,
it has never been more stacked against you.
You're talking about ad attention, but are you saying that this
also extends to, let's say, a content creator?
Let's say us, talking on a Twitter space.
It's still not as easy to reproduce that and replicate that, correct?
Or even a video, if I'm a video content creator.
It's more saturated, but go ahead.
But no, no, it's not that.
Well, first of all, you could have an AI on here talking and you would never know it's not human.
It's not as though everybody can just put that app on their phone.
But I still haven't... by the way, I still haven't seen that.
I know you guys are... I can still tell if it's an AI talking.
Well, there are AIs you can still tell.
That's a mistake you've got to be careful about, right? Because it's going to be a massive spectrum of people with different levels of tech. So you will continue for a long time to have AI that's very obvious, because it's some very bootstrapped small group in China that's just discovered some models and they're still tweaking it. They're three years behind. But there are a lot of people who have models that you could never tell. So it's almost like the fact that some people
are lagging behind is like a false security blanket: you think that's the
level of the tech, but you just don't see the ones that get past you anyway. But it's not
even about that, because even just at that level alone, for sure, you can make one that
is, you know, indiscernible from a human. But it's not about that. What it's really about is,
the second that you start to have success, you are discoverable in a data set, right? Because
if there's a system that is tracking the performance of every single user's post on the platform, and there are a lot of them that do that, the second one starts to rise as an outlier to the rest of the group, it shows up like a red flag on their dashboard.
Then they go look at your profile. They look at your feed, they look at what you're doing, and you don't realize you're
being studied. Because the second that you start to have a little success, you get preyed upon
invisibly by people who just have networks of these agents that are analyzing the API.
And you can't afford the API to do that yourself because you're only just now starting to break
out. But these people have already broken out. They've already got a ton of money from other
grifts and scams. And they're just there. And because the platforms, your best buddy, Elon Musk, is doing nothing to protect you.
They're just getting away with it because X is kind of taking a revenue share of that, too.
Right. So it's a complicated situation.
But the point is, it is way worse for your average person than the average person realizes.
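To make the mechanism Josh describes concrete, here's a minimal sketch of the kind of outlier-flagging a performance-tracking system could run over post metrics. Everything in it is illustrative: the data, the threshold, and the leave-one-out z-score approach are assumptions for the example, not anything confirmed about how a real platform or bot network does it.

```python
# Illustrative sketch only: flag accounts whose engagement suddenly stands out from
# the rest of a tracked group (a stand-in for the "red flag on a dashboard" behavior
# described above; the data and the threshold are made up).
from statistics import mean, stdev

# hypothetical engagement counts per account over the last day
engagement = {
    "user_a": 120, "user_b": 95, "user_c": 110,
    "user_d": 130, "user_e": 2400,  # sudden breakout
}

def breakout_accounts(metrics: dict[str, int], threshold: float = 3.0) -> dict[str, float]:
    """Return accounts whose value is a z-score outlier versus all the *other* accounts."""
    flagged = {}
    for user, value in metrics.items():
        rest = [v for u, v in metrics.items() if u != user]
        mu, sigma = mean(rest), stdev(rest)
        if sigma > 0:
            z = (value - mu) / sigma
            if z > threshold:
                flagged[user] = round(z, 1)
    return flagged

print(breakout_accounts(engagement))  # e.g. {'user_e': 153.1}
```

The real systems alluded to in the conversation would obviously be far more elaborate and run against APIs at scale, but the basic trigger, one account suddenly deviating from the pack and getting "studied," is this simple.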
Yeah, I think it's like the application of AI intelligence in
trading stocks in the financial industry is now being applied
in the same sort of manner to people and their ideas.
High-frequency trading algos are now doing that with attention.
They're basically running high-frequency trading on people's ideas and concepts and their very identities.
Bingo. That's a great analogy.
Unfortunately, it kind of sucks, but yeah, it's not great.
Yeah, no, I was trying to look for a silver lining as we like to do.
Yeah, what's a silver lining?
Captain Levi, what do you got for us?
All right, front-running. When they talk about front-running, the first thing that comes to my
mind is an MEV bot. So basically, yes, we all know how MEV bots work. And, you know, I prefer putting this in ranks:
even S-ranked centralized exchanges are yet to actually solve the MEV bot problem,
because they actually do have an interest in it.
It still remains a problem because, just like Josh said, these algorithms...
They're usually deploying some of the best ones.
Exactly. I mean, exactly.
These algorithms are trained to self-improve.
And that is what actually makes the new creators who think they have an edge heavily disadvantaged.
So first off, front-running the ideas. So how can creators get back the advantage?
Basically, let's say that a race is 300 kilometers long, and maybe the creator gets a head start
and is starting to find the hidden checkpoints.
Imagine the other agents, like Josh said, are
well equipped and have more resources than you do, with satellite imagery, or, to make it even worse,
satellite imagery that guesses the patterns, that learns the patterns you use to find those checkpoints.
And then they start to front-run you, so to speak.
So now, yes, you're a really good creator. You have your dynamic ideas.
But a bot with very, very complex mathematical equations has created a sequence of steps or patterns, through
very complex algorithms, that has weighed your success rate and says, oh, if creator X
continues at this rate, he's going to end up being very successful. Then the bot reports to its masters,
so to speak: oh, let's keep an eye on this. And it might not even necessarily report to its masters.
It simply hands over to another bot that says, oh, okay, this is what this person is doing,
and this is the information I have. So that second bot is then carefully set
to go after it.
I just want to interject, because this is also, you know,
it speaks to the whole, like, you know, the copyright infringement theft problem.
This is an even bigger problem than training with that because because the way that these predatory algorithms that Levi's talking about works.
This is how like a lot of the only fans, girls and the models, all the women that like used to be in sex work.
They're not actually the ones making the most money now. It's just dudes and bot farms that are operating these algorithms.
Because the way that they're able to steal their content so effortlessly is they can just buy some bot account on the black market, rename it to something close to some girl.
And the second that they see a real human girl having any success, they'll go steal all her content off her page.
They'll run it through an LLM.
They'll start to speak like her.
They'll look like her. They might modify the image slightly, just so that it doesn't show up on content scanning, you know, DRM detection things. And the way that they steal her future is: she's just starting to blow up, and they can immediately codify what things about her are causing it. She's just a person. She hasn't perfected it. She's just happy to be blowing up. But what they can do is immediately perfect the formula that is leading to her success. And then, at the same time, they go to the rest of the network, the rest of Twitter, where she's currently not reaching; they'll go there first so that she doesn't even catch them.
So they'll start to build an audience in all the places that she isn't yet.
So all of her future opportunity is what they're front-running.
And so they get the audience and attention and they exhaust the market from that success formula in all the areas that would have been her growth.
So all of her future growth is robbed from her, just by actually crossing that threshold and starting to blow up. You can't even blow up anymore, because the second you start to blow up, you ruin your own future.
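To make that dynamic concrete, here is a deliberately crude toy simulation; the segment counts and reach rates are invented purely for illustration and are not a claim about how any real platform or bot farm works. The only point it shows is that whoever can replicate the formula and move faster captures the untouched audience first.

```python
# Toy model (illustrative assumptions only): a creator grows into untouched
# audience "segments" one at a time; a copycat bot that has codified her
# formula can claim many untouched segments per step, so it captures most
# of the growth that her own formula unlocked.
SEGMENTS = 100                 # hypothetical audience segments on the network
CREATOR_REACH_PER_STEP = 1     # a human expands into roughly one new segment per step
BOT_REACH_PER_STEP = 10        # a bot farm can seed cloned content much faster

def simulate(steps: int = 20) -> tuple[int, int]:
    untouched = SEGMENTS
    creator, bot = 0, 0
    for _ in range(steps):
        # The bot moves first: it front-runs the segments the creator hasn't reached.
        taken_by_bot = min(BOT_REACH_PER_STEP, untouched)
        bot += taken_by_bot
        untouched -= taken_by_bot
        # The creator expands into whatever is left.
        taken_by_creator = min(CREATOR_REACH_PER_STEP, untouched)
        creator += taken_by_creator
        untouched -= taken_by_creator
    return creator, bot

creator, bot = simulate()
print(f"creator captured {creator} segments, copycat bot captured {bot}")
# With these assumed numbers the bot ends up with roughly 90% of the audience.
```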
Unless you have your own bots.
Yes. Sorry. Well, yes.
I just don't want to lose my train of thought.
Normally, I raise my hand before I speak.
So, Josh, you literally just took every single word I had out of my mouth.
Now, Louis brought in a new concept. I want all the attention.
Okay, so like I said, Josh, it takes a swarm to combat another swarm. But then let's measure how good your own swarm is. Assuming we have a swarm of drones and people who are using guns: if there are like a billion drones against people using guns, obviously the billion drones are definitely going to win that end of the warfare. But how about, I don't know if you guys watched this movie, where there were very tiny bugs that ate a material and could simply duplicate themselves based on that material. So those whose guns and bullets are made of those kinds of tiny bugs can move into that war more confidently.
Are you talking about the movie The Day the Earth Stood Still, with Keanu Reeves?
But then the concept behind this is how quickly can those bots evolve?
Okay, it still comes down to the bots versus bots that we talked about, I think, about 25 minutes ago.
How good are those bots at evolving themselves?
Okay, what if the idea for the bots that you're building is taken up by another bot, by a series of bots? So yes, you have been given attention,
and before you even build those bots,
they already front-run your bots.
So a solution to this, just like every other cybersecurity approach, is meshing the networks.
How do you mesh your idea, make your idea discoverable in a data set, and how can you mesh it so that even if they wish to investigate, they are working in circles, so to speak? That is what I just thought about. How do you decentralize this? How do you break it into so many pieces that even if the bots are trying to find those pieces, the algorithm becomes counterproductive? The smarter the algorithm gets, the more counterproductive the effort and the resources spent trying to attain it become.
That's just the idea I have, just thinking based on the context of what Josh was talking about.
Well, let me ask you then: if you look at this sort of conflict between good and bad, right, the unethical, predatory actors and the ones who are trying to counteract them, they're both equally contributing to the escalating erosion of authenticity.
Because even the ones who are trying to outmaneuver the guys who kind of struck first
and lowered the bar, they're having to lower the bar for everyone else
just to make it messier for the bad guys.
So essentially we're in an entertainment or creative arms race now, supported by technology. You know, yeah, I think, Levi, I was insinuating or indicating, you know, we'll have to poison the content, like some of them are doing with imagery or other areas, so that if LLMs or bots come in, they're rendered ineffective. So it feels like we're in the early stages of this creative slash entertainment arms race that's now being enabled by artificial intelligence.
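One concrete flavor of that poisoning idea, as a minimal sketch only: add a near-invisible adversarial perturbation to an image so a scraper's vision model misreads it while a human sees no difference. The surrogate ResNet-18, the file names, and the epsilon below are assumptions for illustration; real tools such as Glaze or Nightshade are far more sophisticated than this single FGSM step.

```python
# Rough sketch of image "poisoning" via one FGSM step against a surrogate model.
# Assumptions: a stock ResNet-18 stands in for whatever model a scraper uses;
# file names and epsilon are made up. This only illustrates the concept.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def poison_image(in_path: str, out_path: str, epsilon: float = 2 / 255) -> None:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

    x = to_tensor(Image.open(in_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    # Push the image away from the surrogate model's own top prediction.
    logits = model(x)
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()

    # FGSM: nudge every pixel a tiny step in the direction that raises the loss.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    T.ToPILImage()(x_adv.squeeze(0)).save(out_path)

poison_image("original.jpg", "poisoned.jpg")  # hypothetical file names
```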
So, any ideas on creative algorithm poisoning? Anyone? Anyone? Yeah, how can we stay ahead of the curve?
I mean, it's interesting how fast this is moving. You know, like Josh says, these bots are already active, they're already looking for stuff, they're already isolating people, pushing out their own duplicates, their own avatars. You know, I did work with Twin Protocol; they have hyper-realistic avatars. You look at the video and you cannot tell that it's not the real person. You can put all your information into that avatar and then have it start to answer. They were doing that with Deepak Chopra.
So he has an audience of like 2 million people.
His avatar can now be on 24/7, with thousands of conversations all happening at once.
He can then look at the back end of that and look at all the questions being asked, analyze
it with AI and then create content that he knows is going to grip people.
He can also analyze it by languages because now,
even though he might only speak English or Hindi or whatever,
he can now speak over 40 languages because the AI can do that.
It's this interesting explosion of creativity and yet, you know,
and yet weaponization of creativity.
I honestly was not thinking about it like that.
And so I'm starting to see quite clearly
what the dark side is and its implications.
I think it's good to leave things with a silver lining
and I think we've made an attempt at that.
And we are also over the hour.
This is going to be a new weekly show.
I think that Lewis... Lewis has a newsletter, by the way, guys.
Lewis, do you want to – is there a way to pin your newsletter to the top of the space?
We can also tweet it out.
We'll tweet out the news.
Yeah, we'll tweet it out.
You guys get done, please.
So five days a week, I basically take the last 24 hours, or if it's Monday, the last 48 hours, whatever, of headlines and kind of net it down and break it up by category.
I started doing it because I just got so overwhelmed with how fast AI was moving.
And so I break it up into about 31 categories.
So like a smorgasbord, you can kind of
pick and choose if you're interested in marketing or you're interested in, you know, different areas,
you can just go to that and read those articles that are kind of the top articles in that area.
And it really, for me, it helps me to stay a little bit ahead of the curve and understand, you know, what's happening in AI, and on the whole, really, to see the powerful impact of how AI is affecting every single industry.
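For anyone curious what that kind of daily curation could look like in code, here is a minimal sketch; the categories, keywords, and sample headlines are assumptions for illustration and not how Lewis's newsletter is actually produced.

```python
# Minimal sketch (assumed categories and keywords, not the real pipeline):
# bucket a day's raw headlines into newsletter sections by keyword matching.
from collections import defaultdict

CATEGORIES = {
    "Marketing":  ["marketing", "ad ", "advertis", "brand"],
    "Healthcare": ["health", "medical", "drug"],
    "Finance":    ["bank", "trading", "stock"],
    "Models":     ["gpt", "llm", "model"],
}

def categorize(headlines: list[str]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = defaultdict(list)
    for headline in headlines:
        lowered = headline.lower()
        matched = False
        for category, keywords in CATEGORIES.items():
            if any(word in lowered for word in keywords):
                buckets[category].append(headline)
                matched = True
        if not matched:
            buckets["Other"].append(headline)
    return buckets

# Example run with made-up headlines.
sample = [
    "New LLM beats coding benchmarks",
    "AI ad targeting under scrutiny",
    "FDA clears AI drug discovery tool",
]
for category, items in categorize(sample).items():
    print(category, "->", items)
```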
And I couldn't agree more that this is just the beginning, honestly.
We're going to look back on these days and be like, oh my god.
It's like looking back on the iPhone or the internet in the early days.
I know people say that crypto is like the next paradigm-shifting innovation of our lifetimes.
And I think you can make cases for Bitcoin and other assets. But really, I think it's actually the AI revolution, and what a crazy time to be alive. Like, we got to experience the information revolution, which is, you know, the internet, and the AI revolution is happening right before our eyes. I think we're the only humans in history to be able to experience two at the same time, right? The industrial revolution was before the information revolution, and the agricultural one spanned thousands of years preceding that. Anyway, I think that this is a good stopping point. I think we've hit most of our talking points.
Yeah, no, I think we're going to be doing this again every Tuesday at 12:30 Eastern Standard Time.
And looking forward to future AI discussions.
To me, this is more exciting than what's happening in crypto right now, if I'm being completely honest.
I think this is very exciting.
So thank you all for joining.
Thank you, Louis, for co-hosting.
Josh, I appreciate your commentary.
And we'll see you all next week.