Thank you. Thank you. All right, sorry for opening late, everyone. Lewis, how are you doing, sir?
Doing great. How are you today?
Not too bad. Not too bad.
I know there's a lot to talk about in the AI world.
I honestly find the AI world to be so much more exciting these days, at least over the last two years, relative to the crypto world.
So I'm excited to be diving into these topics.
And at this point, I feel like everyone is using AI to one degree or another, whether they realize it or not.
I mean, some of us are actively using LLMs like GPT.
Others might be engaging in customer service on their Amex app and have their
problem solved by a mini agent. So I think it's important to dive into these topics and
what's coming up, and keep the audience updated on the latest trends. So I'll pass it over to you,
Lewis, while I just do a little bit more promotion on my end for the space and just
hear your thoughts on what's going on this week and
what you're excited about.
Sure. I really think we've kind of gone over a hump. You know, a lot of the past, it's been sandbox stuff and testing and playing around, not being really serious. I really think in the last three or four weeks we've kind of transitioned and things are getting really serious in the AI space. People are doing practical work, practical use cases.
They are generating revenue.
They are seeing a return on their investment
and it's really moving along pretty fast.
And we're starting to see this AI arms race.
There was recent news of a $250 million talent grab:
Meta was offering a single 24-year-old $250 million.
That'd be a nice position to be in.
And I think it's not so much filling a job
as buying the future for them.
And AI isn't just reshaping technology,
it's reshaping power, status, structure,
and what ambition looks like in the 21st century.
And the battlefield really isn't about code.
It's about minds and capturing the best minds.
And it's crazy how Meta's just handing out these golden crowns and really trying to capture talent, the talent base across the entire globe.
It's like a nuclear arsenal.
And Zuckerberg just kind of launched his first strike on that.
We're also seeing self-improving AI at Meta and OpenAI.
It's no longer training wheels, as I mentioned.
They're now writing their own playbook.
Meta and OpenAI are watching their creations actually evolve in real time.
And I think the implications of that are staggering.
You know, when AIs start to rewrite their own code, rewrite themselves, and update themselves,
then we start to get into a level of AI that's just really unpredictable.
It's no longer automation. We're now seeing
a generative increase in creativity. And it's really accelerating, I think, beyond human control.
And I think that's one of the concerns of many people today: you know, what's the regulatory
control? Are these things going to go off and do crazy stuff? There's a story I'll mention later where the military,
not only the US military, but other militaries around the world,
are actively integrating AI.
And, you know, what could possibly go wrong with that?
You know, but self-improvement means the feedback loop
really has kind of snapped in half.
We're no longer teaching the machine.
It's really teaching itself.
And I think we've gone from, you know, the baby-crawling stage.
Now we're in adolescence.
And in many ways, it's probably learning faster than we are.
And we may not even realize it.
And if people didn't think the genie was out of the bottle,
it's certainly out of the bottle now.
And in fact, it looks like it's building a better bottle while it's at it.
So, I mean, this is the progress.
Do you think we've reached a point, Lewis, where AI is actually teaching itself, or are we not quite there yet?
I think we've just started to see that. We've seen hints coming out of Sam Altman and also out of
Zuckerberg about the AI doing amazing stuff. I was actually having an interesting conversation
about this the other day. And it seems like we're getting close to the point where human beings
are going to be the bottleneck and the limiting factor going forward, which is going to be really interesting.
So I don't think we're fully there yet, but I certainly think we're seeing strong indications.
What do you mean by human beings being the bottleneck?
There being a lack of creativity and development on our end when it
comes to AI?
No, not a lack of creativity. But in comparison, we could be in a situation
where the AI is more creative than us and wants to go in a different direction, but because of, you know,
human belief systems, human cultural limitations and blind spots, our own personal blind spots, our own limiting beliefs...
You know, the AI, or even a swarm of AIs, may not have any or some of those,
and therefore could suggest creative directions where we go,
wait a minute, wait a minute, this doesn't make sense.
So, you know, it's going to be interesting.
If we insist on understanding, you know, on having the transparency
so it's not a black box, will we accept solutions from AI
that we don't fully understand?
I think we've started to do that already.
You know, people throw something into ChatGPT
and they blindly just copy and paste the output and do it. We saw that in the legal world. You know, it's
going to be interesting how much people actively regulate the AI in terms of implementing solutions,
or, you know, whether we just become a pass-through method. You know, it's kind of going crazy.
And you mentioned these massive deals that
Zuckerberg is offering talent. What kind of talent are we talking about here? And has anyone
actually taken these deals? I saw a headline basically saying that no one had grabbed any of these $250 million deals
that were offered and they wanted to stay where they were.
But I'm not entirely sure if that's true.
Yeah, I think this particular deal got rejected.
There seems to be some ambiguity in the news around it.
But, you know, I mean, there was a report in the last two weeks
with a list of the new people at Meta who are now the
advanced superintelligence research team.
You know, they're the ones researching it.
They're the ones looking on how to implement it.
And I'm sure they're all extremely well paid.
And we've also seen headlines over the last three weeks of, you know, stealing people
from Meta, from OpenAI, back and forth.
You know, people are desperately trying to get the talent on board because they know that in order to develop the next generation of AI, you need the best of the best of those AI researchers.
And these are mostly developers, I'm assuming, right?
You know, I worked at SingularityNET.
They have a whole crew of AI scientists that have been studying this for 30 years.
So there is some dev talent, but a lot of it, I think, is more on the absolute deep research side of how to create these LLMs and AGI and how to, you know, maybe in some ways, hopefully regulate them as well.
What does being an AI scientist entail?
Like, what kind of training does designing an LLM require?
I've actually never thought that deeply about it, to be honest.
Yeah, to be honest, that's totally out of my wheelhouse. I've talked to many AI scientists, and they're
kind of totally off the planet; they live in another world. You know, the mathematics
and the science around it, I think, is some of the deepest and most complex.
But there's certainly an art form to it. Oh, Ryan just popped up his hand.
As a scientist himself, he would probably know.
Sorry, I'm joining late here.
Yeah, so when it comes to training an LLM,
you're adjusting the calculations.
The best way I can describe it is like a solid-state
circuit, or like an ASIC. When you design an ASIC system, you're
sending electrons through a labyrinth of circuitry that will give you, you know,
one in, one out. The black box always goes through the same transfer function.
An LLM is very, very similar, where it's a vector field.
So as you parse up the tokens of the text input,
it's going through the vector field and giving you a very similar output each time.
I believe that's called the transformer model.
It was all laid out in a Google paper;
I believe it was titled "Attention Is All You Need," something to that extent.
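As a rough illustration of that "fixed circuit" picture, here's a minimal sketch in Python (NumPy, with toy sizes; the weights and dimensions are made up for illustration) of the attention step at the heart of a transformer. The point is that with frozen weights, the same token vectors always map to the same output, just like the transfer function of an ASIC.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # toy embedding size
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))  # frozen weights

def attention(x: np.ndarray) -> np.ndarray:
    """One self-attention pass with fixed weights: a deterministic mapping."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)        # how strongly each token attends to the others
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # softmax over tokens
    return w @ v

tokens = rng.standard_normal((4, d))     # stand-in embeddings for 4 input tokens
print(np.allclose(attention(tokens), attention(tokens)))  # True: same input, same output
```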
But it's literally just a vector field that you have to retrain every single time.
That's why it's so compute intensive.
When you add additional information to a model, you're not just retraining a piece of the model.
You're actually recalculating
the entire vector field. That's why decentralized training is very, very expensive, because you
actually have to pass the entire model back and forth. Or just training in general is
very expensive, because you're actually having to recalculate each time you add more data.
And that's why memory, you know, having it all in memory, is probably really important
and probably the only way to do it.
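To make that concrete, here is a minimal sketch, assuming PyTorch, of why adding data to a dense model is expensive: a standard gradient step computes gradients for, and updates, every parameter in the network. The toy model, batch shapes, and placeholder loss below are illustrative, not any lab's actual training setup.

```python
import torch
import torch.nn as nn

# A tiny transformer standing in for a full LLM.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

new_batch = torch.randn(8, 16, 64)     # stand-in embeddings for the "new data"
loss = model(new_batch).pow(2).mean()  # placeholder loss, just for illustration
loss.backward()                        # gradients flow to *all* parameters
opt.step()                             # every weight in the model gets nudged

# Every parameter received a gradient from this single batch:
touched = sum(p.numel() for p in model.parameters() if p.grad is not None)
total = sum(p.numel() for p in model.parameters())
print(f"{touched}/{total} parameters updated by one batch")
```

That all-parameters-every-step property is also why distributed or decentralized training keeps shipping full model state or gradients between machines, as Ryan notes.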
And so there are projects that have worked on sharding models in order
to train different parts of the model. There's been work on
information-specific or subject-specific models,
so kind of sharding the knowledge base.
There's a lot of different research.
I mean, this is all like cutting edge, right?
We're only a year or two into this level of research. But the common theme
that we've seen is, yeah, it does take an immense amount of memory, because you're having to rewrite
the entire vector field each time you want to add data to it.
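One way to picture the "sharding the knowledge base" idea Ryan describes is routing each query to a smaller subject-specific model instead of one monolith. This is only a hedged sketch: the router, the keyword list, and the expert functions are hypothetical stand-ins, and real mixture-of-experts systems use a learned gating network rather than keyword matching.

```python
from typing import Callable, Dict

# Hypothetical subject-specific "expert" models (stand-ins for real shards).
def legal_model(q: str) -> str:   return f"[legal expert] answering: {q}"
def math_model(q: str) -> str:    return f"[math expert] answering: {q}"
def general_model(q: str) -> str: return f"[generalist] answering: {q}"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "contract": legal_model,
    "equation": math_model,
}

def route(query: str) -> str:
    """Trivial keyword router: send the query to the matching expert shard."""
    for keyword, expert in EXPERTS.items():
        if keyword in query.lower():
            return expert(query)
    return general_model(query)

print(route("Is this contract enforceable?"))   # hits the legal shard
print(route("What's the weather like today?"))  # falls back to the generalist
```

The appeal is that updating one shard doesn't force a recalculation of every other shard's weights, which is exactly the cost Ryan describes for dense models.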
Wow. So there you go, Noah. An answer from the expert.
So, you know, we're seeing a lot of this self-improving AI starting to come out,
so I think that's the most interesting thing, where, you know, it's like