Thank you. Thank you. There were some connection issues, so we're getting it set up again here.
Yep, yep. I've been trying to promote the space as well on my end.
I wonder when they're going to make this a smoother user experience, but I guess slow and steady wins the race, although it seems like not much has changed over the last three years, or I guess two years. There was a predecessor to this called Clubhouse that seemed to work, but it was invite-only. Yeah, anyway, let's continue and I'll get people back in. Okay, so thanks, Ryan, for your breakdown on the whole LLM stuff and the fact that it needs to be totally in memory.
That's interesting, because you see GPUs with a certain amount of memory, and it always seems like it's never enough. So hopefully, as we go through the hardware evolutions, it'll start to increase even more.
That brings up the massive investments that are really fueling the AI infrastructure race. We're hearing from Elon Musk about having 100 million GPUs running, and the same with OpenAI. It's really just a hardware race at the moment, and a power race as well. It makes it terribly capital-intensive, and in many ways compute is the new oil. It's becoming so intensive that you're hearing reports that electricity bills are going up here in the US. So it's just becoming even wilder as we see this push. It's not just about big data anymore; it's really about bigger hardware, and people's bigger ambitions to get to this superintelligence and AGI. That kind of brings up the next piece, which is agentic AI models really becoming mainstream.
It was kind of theoretical, but now, if you look at the newsletter I publish five days a week, there are many, many practical use cases with agentic AI becoming autonomous co-workers. They're capable of planning, deciding, and adapting. And I think it's happening faster than anyone really realizes, certainly from a regulatory and governance standpoint.
And they're now also acting on our behalf in a very big way, moving from chatbots to commanders, certainly in the financial area. It's turning software into strategy. It's autonomous, it's adaptive, and it's always on, which I think is great in some ways, but I think there could be issues in others.
Noah, what are your thoughts on agentic AI, and how do you see it moving forward?
So my understanding is that over time, these agents are going to basically automate daily tasks that you and I might find tedious or mundane. And I'm interested in having an agent. Well, I don't know, maybe I should speak with care, but I'm interested in that. I would love the idea of a little robot that just goes around and does stuff in my house, like cooking and cleaning, just taking care of the daily tasks that I do myself or would hire someone to do. I think that's cool. But then what's also cool is an agent that's able to complete daily work tasks, or maybe, if you're a trader, trading agents. I know those already exist, but I don't know to what extent, and I don't really know how powerful or efficient they are. But that's my grasp on agents: they're automating tasks that humans would find boring, redundant, or mundane, and they're doing it with far more efficiency.
Yeah, you know, it might sound crazy, but I think we're only 12 months away from household humanoid robots. It's really interesting how fast the Chinese are developing robotics and trying to bring them into the consumer space. We've seen robots in the commercial enterprise space for years, making cars and stuff like that, but this is the first example I've ever seen of something you could pay for as a consumer. I mean, $5,000 really isn't much; if you're going to have a personal butler or a housekeeper, five grand is kind of cheap. And it plugs itself in.
There are slightly more advanced versions of the robot, apparently, that can take out their battery and install a new one themselves, which is kind of wild. But they'll be able to go around the house. They could do your washing, wash your dishes, vacuum, and do all the things you'd want a housekeeper to do. And I honestly think we're probably 12 months away from that.
In terms of doing busy work, the stuff we have to do at work, we've seen the release of agentic AI in ChatGPT; you go into ChatGPT and you can use the agentic features in there. I haven't had the chance to use it yet. But certainly, you can create agents to do the busy-work tasks you really don't want to do.
I mean, I've been doing sort of that in the creation of the newsletter. I have about three and a half to four thousand headlines, and I get ChatGPT to do an initial sort into categories. That saves me a ton of time. So there are obviously many opportunities to do that.
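That kind of headline triage can be sketched as a small batching script. Everything here is a hypothetical stand-in: the category list and keyword rules are invented, and in the real workflow the `classify` helper would wrap a ChatGPT call rather than keyword matching.

```python
# Sketch of an LLM-assisted headline sort. classify() is a stub standing
# in for a model call, so the structure is runnable without an API key.

CATEGORIES = ["hardware", "agents", "open source", "policy", "other"]

def classify(headline: str) -> str:
    """Stand-in for an LLM call: return one category per headline."""
    keywords = {
        "hardware": ["gpu", "chip", "memory"],
        "agents": ["agent", "autonomous"],
        "open source": ["open source", "open-source"],
        "policy": ["regulation", "lawsuit"],
    }
    text = headline.lower()
    for category, words in keywords.items():
        if any(w in text for w in words):
            return category
    return "other"

def sort_headlines(headlines):
    """Group a big batch of headlines into category buckets."""
    buckets = {c: [] for c in CATEGORIES}
    for h in headlines:
        buckets[classify(h)].append(h)
    return buckets

buckets = sort_headlines([
    "New GPU doubles memory bandwidth",
    "Agentic AI plans your week",
    "Artists file lawsuit over training data",
])
print({k: len(v) for k, v in buckets.items() if v})
```

With a real model behind `classify`, the same loop handles thousands of headlines; the initial sort is the tedious part worth automating, with a human pass afterward.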
But I think one of the new things that's really popped up recently is open-source AI. We're seeing people pushing billion-dollar war chests to really go toe to toe on the open-source side. Reflection AI is doing that, and a few others. I mean, Noah, do you have a view on open source versus proprietary? What's your take?
I think Ryan and I have had discussions about this, but I generally think that open source is better, and that's because the other side has such a low degree of transparency.
And at the end of the day, I think we talked about this last week, Lewis: I have a lot of stuff that I've written in my life. I've written a script, I've written short stories; I have a lot of writing and material that I would love to input into an LLM like ChatGPT and have it help me complete it, refine it, or come up with ideas. There's so much I want to do with an LLM. But because I'm cognizant of the fact that it's not open source, and because I know that whatever data I put in there is likely not going to be private anymore, not going to be mine anymore, or not completely mine, I'm hesitant. And because I'm hesitant, I'm not using these LLMs to their full capacity, or to my full capacity, if you will. So I think that's my issue with closed source.
I see Captain Levi's got his hand up.
I just want to drop a quick line of thought before I forget it.
Lewis, about the newsletters: one thing that has actually helped me is clearly defining SOPs, standard operating procedures, for these AIs, for tasks and subtasks. With these clearly defined parameters for each task, when doing research, specific AI agents will usually know what to look for. And when the research results are brought in, I usually use the one-agent, one-task principle: each agent hands its output over to another agent with separate SOPs that fine-tunes and does some cleaning. So step after step after step, until the last layer. And it has actually significantly improved my own results.
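The hand-off Captain Levi describes, one agent per task with its own SOP, might be wired up something like this. The agent names and SOP strings are invented for illustration, and `run_llm` is a stub where a real model call would go.

```python
# Sketch of a "one agent, one task" pipeline: each agent carries its own
# SOP (standard operating procedure) and hands its output to the next.

from dataclasses import dataclass

def run_llm(sop: str, payload: str) -> str:
    """Stand-in for a real model call that applies an SOP to the payload."""
    return f"[{sop}] {payload}"

@dataclass
class Agent:
    name: str
    sop: str  # clearly defined parameters for this one task

    def run(self, payload: str) -> str:
        return run_llm(self.sop, payload)

def pipeline(agents, payload: str) -> str:
    """Hand the result of each agent to the next, step after step."""
    for agent in agents:
        payload = agent.run(payload)
    return payload

result = pipeline(
    [
        Agent("researcher", "gather sources"),
        Agent("cleaner", "deduplicate and fine-tune"),
        Agent("writer", "draft summary"),
    ],
    "topic: open-source LLMs",
)
print(result)
```

The design point is the same as the spoken one: because each agent sees only its own SOP, it stays narrowly scoped, and cleaning happens at a separate layer from research.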
So I don't know if it will on your end as well, but I strongly feel your concern is valid. I'm actually also limiting my use, unfortunately, because I don't really have the compute resources to run the kinds of models I would really love to run, especially with my private documents. And I think I read somewhere, while it's still subject to speculation, that whatever data you're giving to OpenAI is actually being used to train those models, because they do have that compute power. And not OpenAI precisely, but these centralized AI organizations in general, because, one, they have enough compute power to actually perform this task.
And aside from that, what are your chances of winning a lawsuit against them? I mean, I also heard that Reddit filed lawsuits because one of the Codex models was literally taking scripts off their site, and I don't even know if they lost that case or if it's still ongoing. Let's not even talk about the artists who also filed lawsuits; I think most of them lost, and some are still appealing.
So I strongly feel the only way you can ever be sure that your personal write-ups and documents are definitely yours is to have very significant compute power and very precisely trained models for that specific task. And last time I checked, running that kind of training goes upwards of half a million dollars or more just for the base models. So while we're still a long way off, I hope open source actually does something in this regard, in terms of increasing privacy for these LLM models. So I said, let me drop that nugget before I forget.
I think a lot of people are really afraid of training ChatGPT
with their own personal documents.
And the problem, though, is that even with open source, as Ryan pointed out, open source or proprietary, they all need huge amounts of memory. So we're really in a hardware gap at the moment, where there is no consumer-level hardware we can buy to run decent AIs. And if you look at the hardware trends, I think we're at least 18 months away from it. Maybe by the end of 2026, but certainly by the first or second quarter of 2027, I think we will finally have consumer AI hardware that's affordable, meaning under $3,000 to $5,000, that we can purchase and run decent LLMs on, or whatever the version of AI is at that time. And unfortunately, until then, we're kind of screwed in terms of not wanting ChatGPT or any of the others to get into our documents.
I've done some research on this, and that timeline may get shortened. If we're lucky, maybe third quarter 2026 at the earliest, depending on how the GPUs and all the other pieces of hardware come together, how the motherboards can support it, and how cheap memory is, of course. But certainly at the moment it's not possible, which is a real pain.
While I do agree that open-source AI is in many ways a much, much better alternative, I did see some reports recently that North Korea and Russia are actively seeding malware into some of the open-source code. And I think that's a concern. I think we need to be really, really careful. I hadn't heard that before, but if we have bad actors with access to open source, pretending to do good updates and patches when in fact they're infecting it, that gives me a strong concern about using open-source AI.
I mean, I always thought the value of open-source AI was that, because it's open source, everyone can bang away at it, try to break it, and check it out security-wise, so it tends to be far more robust. But with these news reports, it starts to get a little bit worrying. Noah, have you heard of that, where they're actually seeding malware into open source?
And it sounds like there's almost no perfect solution, right? There's downsides to open source, and there's downsides to the... yeah, I don't know.
I mean, Ryan, do you envision a day where we have enough processing power as individuals to just have an LLM spin up another LLM that is completely private, and we're able to verify that?
Yeah, I mean, there are a lot of projects working on this stuff right now in the open-source community. I've been personally involved with a project called Morpheus; that's mor.org. The idea is decentralized access to models, and it doesn't matter what type of LLM: you run inference against it, you hold all your own context locally, and you can jump from model to model as you see fit. And then all of your data is encrypted. The hard part is that last bit of encryption with a provider; there's still not a solution there. There are trusted execution environments in the NVIDIA chips that will let you encrypt all the way down into the model, but that's a proprietary key that NVIDIA holds onto, not an open key that you can use to encrypt on your own. So there's still a lot of work to be done in this area as far as privacy all the way through.
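The pattern Ryan describes, holding the full chat history on your own machine and sending it per request to whichever model you pick, can be sketched roughly like this. The class and provider names are made up for illustration, the stub functions stand in for real inference endpoints, and a real Morpheus-style router would add payment and encryption layers on top.

```python
# Sketch of a client that keeps all chat context on the local machine
# and submits it per-request to whichever model endpoint is selected.

class LocalContextClient:
    def __init__(self, providers):
        self.providers = providers  # name -> callable(messages) -> reply
        self.context = []           # full history stays on this machine

    def ask(self, provider_name: str, prompt: str) -> str:
        self.context.append({"role": "user", "content": prompt})
        reply = self.providers[provider_name](self.context)
        self.context.append({"role": "assistant", "content": reply})
        return reply

# Two stub "models" the client can jump between as it sees fit.
def model_a(messages):
    return f"model-a saw {len(messages)} messages"

def model_b(messages):
    return f"model-b saw {len(messages)} messages"

client = LocalContextClient({"a": model_a, "b": model_b})
client.ask("a", "hello")
print(client.ask("b", "continue"))  # model B gets the same local history
```

The point of the design is that no single provider ever owns the conversation: switching models mid-session costs nothing, because the context travels with the client, not the server.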
OpenAI and Google and all these very centralized, closed-source models, they're known to train on your data. And that's why they keep getting smarter and smarter. But then you have the occasional lapse. I was talking to ChatGPT about something, and it mentioned something about my medical history, saying, oh yeah, because you're on this medication. And I was like, why do you think I'm on that medication? And it said, oh, because of our chat from six months ago, you mentioned that you're on this medication. And I was like, ah, that was not me, you know? So there is context bleed, even in their model training, that leaks through every once in a while, and that's concerning. It didn't tell me exactly whose information it was that was linked to this medication, but it was concerning. So that stuff still happens.
But I know NVIDIA has been working on their home solution, where you have this home supercomputer that can run the latest and greatest model on your own private server. So there are solutions that different companies are coming up with, and I think NVIDIA obviously is at the forefront of this. I think eventually these home servers are going to get merged into a humanoid robot, if you will. So you're going to have something like the Tesla Optimus merged with the NVIDIA home-server brain, and it's going to act as the C-3PO of your family. It's going to have generational knowledge, financial knowledge, crypto keys; it's going to be the legacy system. Now, will it be open source? Most definitely not. Will it raise a lot of red flags around governmental control and intrusion into the privacy of your home? Like, a thousand percent. Do I trust China to manufacture any of these and put them into my house? Absolutely not.
So, you know, there's a reason why they can produce a five-thousand-dollar robot and try to put it in your house.
Yeah, so who do you trust? I mean, that sounds amazing, that sounds incredible, but who do you trust? I don't trust China; I don't know if I even trust a U.S. manufacturer to do that. I don't know. It seems like there's this amazing new paradigm-shifting technology, and we're not using it to its limits because we're worried about where all that information is going to go. Even down the road, if you have a robot, an LLM, whatever, some super robot that is built to gather and store information, private information, very private; I mean, you said private keys, and it doesn't get any more private than that. How do you trust that thing through and through? Because that's what you need in order to hand over something like that.
So, over time, right... I saw a funny meme several years ago where, in the 1980s, a caricature of someone on their phone was going, "We're being wiretapped!" And then the next panel was 2020: "Hey, wiretap, can you give me a recipe for blueberry muffins?" It's this idea that we've already wiretapped ourselves. Every single home has Alexa and Google devices that are constantly listening, constantly gathering data passively and analyzing your audio, right? And they could be analyzing the video too; they're all equipped with cameras now. So it's like boiling a frog: you don't do it all at once, you turn up the heat over time, and eventually they're cooked. And it's not like we trust these devices currently, but they're going to boil us over time. A decade from now, we're going to have these robots that are incredibly intrusive, and we're not going to think twice about it, just because it happens so gradually.
I don't know, man. Whenever I go over to my girlfriend's house, I unplug her Alexa. Sometimes it just starts talking and no one's addressed it. It's so odd to me.
But you don't drop your cell phone in a Faraday cage.
Right, you literally took the words out of my mouth. I have this cell phone with me all day, every day, and for whatever reason I trust it more than I trust the Amazon thing, the Alexa. I was always under the impression that Apple was more respectful of privacy, but, I mean, I don't know shit, you know?
So if Apple came out with a robot for your home, you might end up trusting it with the cryptographic keys to do stuff, because they're a trusted brand; they have a reputation for trust. Or maybe Volvo comes out with one, because it's safe.
But I've been watching the Foundation TV series, and there's this robot that lives for 10,000 years, cloning these rulers time after time, so it becomes institutional knowledge. So Ryan's point about these humanoid robots becoming family institutional knowledge over many generations, I think that's a pretty wild concept.
Yeah, on the aspect of trust: everyone is very skeptical of technology when it first comes out, right? Whether it's the Model T versus the horse and buggy, or airplanes, or radio and television, every single step in technology has people who are highly skeptical of it. And that's just the process of adoption. Bitcoin, 10 or 12 years ago, was for drug dealers. It took a decade, and now you have the leading wealth management firms in the world saying it's the de facto asset, and governments around the world accepting it as currency. It just takes time for adoption, and for skepticism to get set aside.
And I think you're exactly right as far as trust. It comes down to branding and trust, where we might trust Apple but not Google; we might trust SpaceX or Tesla, but we're not going to trust whatever other company, because we view them as nefarious. Any new technology is going to break; that's also part of the problem. Instead of treating those breaks as learning lessons, people point to them and say, oh, this is a complete failure. And it's not; it's just iteration. It's brand-new technology. So in some ways there's a reason not to trust it, but you have to see it in context: this is an immature technology, we're developing it, it has amazing potential. Imagine where it'll be six or twelve months from now.
I think maybe the trust will come from the trusted brands we're familiar with, like Samsung. And hopefully they are trustworthy.
I heard a fun story, and I've never actually looked into whether it's correct, and it could have gone by the wayside years ago. But I heard that when the original engineers working on the Amazon Alexa devices were designing the mute button, which basically cuts off the microphone and stops the listening, the original spec called for a software mute button: you press the button, and it mutes the device in software. And the story I heard was that the engineers absolutely refused to build a software-based mute button, because they said it could be remotely overridden and become a security concern. So they insisted, when they designed the Amazon Alexa devices, that it be a physical mute button that actually detached the mic from the circuit and could not be overridden remotely. That's what I heard. Now, since then, Amazon could have fired those engineers and rebuilt the devices, right? Maybe only version one had the physical disconnect; I'm not saying the current Amazon devices have a physical disconnect on the mute button. But that was something I heard early on, and I thought, you know, that's right: it takes people standing up and making good design decisions with security protocols in mind. It doesn't mean they won't be overrun by corporate interests eventually, but there is an aspect of personal responsibility in that.
You're not going to need a Faraday box; you're going to need a Faraday room. You ever see Enemy of the State, with Will Smith and Gene Hackman? That's exactly what they had. Will Smith was on the run because he had some incriminating secrets from the CIA, and he goes and meets with this old CIA agent, and the first thing they do is go into this giant Faraday cage and leave all their devices outside. Anyway, it's becoming prophetic now.
Oh, dude, I have a friend whose dad used to work for the CIA, and he had a Faraday box. If we were going to have a sensitive conversation about something, he would make everyone put their phones in the Faraday box.
Well, imagine when you get Neuralinks. You know, like the Schwarzenegger movie where he was on Mars and had something in his head; he had to wrap a thing around his head so they couldn't track the implant in his brain. Are we going to get to that? I mean, Elon Musk is really going hard on these Neuralinks; we've seen cases where people can think and it writes or speaks or does whatever. Is that going to be the ultimate interface? I know Google and Apple are chasing the glasses really hard. We've heard rumors and everything like that, but is this cell phone we're carrying around going to disappear, with everyone wearing glasses that, hopefully, will be fashionable?
So, 15 years ago, when they introduced Google Glass in San Francisco, wearers were referred to as "glassholes," and there were so many bars and restaurants that would ban anyone wearing Google Glass, because it was recording and they didn't want any of that in their establishment. So yeah, with the original form of Google Glass, they were all called glassholes. And I'm wondering if that's re-emerging and now trendy.
With the Meta glasses, right? I mean, there was a new one announced just in the last two days where it's really difficult to tell it's got any technology in it at all. Soon, in sci-fi films, maybe even a James Bond film, you'll have a contact lens that goes into your eye and records everything.
People have been working on that for years. Captain Levi has had his hand up for a while; my bad, man.
I've actually lost context, but I'm going to start from the creepy part about the devices listening. The first instance: I and a couple of guys were just talking randomly about currency conversion rates, and an ad on one of my friends' iPhones, emphasis on iPhone, just happened to show a random ad about euro-to-dollar price conversions. My other friend and I both saw it and glanced at each other, and I tried explaining to the others that these devices are actually listening, and they were like, even if it is, it's not much of a concern. So it also boils down to the parties that are actively making use of these devices. For instance, you're fully aware that you're not comfortable with Alexa or Alexa-enabled devices being around, and that's why you prefer to turn them off. And of course, we have issues like gag orders: if all of these things are moved completely to software, governments could simply wake up one morning and say, please enable stealth monitoring mode on all of the Alexa devices. That in itself is already a privacy breach. So there's this, and many more. And like you said, we cannot carry our phones in a Faraday cage, because we need that cellular connection to interact with one another, unfortunately. We've been wired into the ecosystem, so that listening aspect has been looped in too. For now, I can't even think of a way around it yet.
Yeah. I think if we do get to the stage where everyone's wearing glasses, then AI is obviously going to be the interface, connected to all of that, seeing and hearing everything that three or four billion people, or however many end up wearing them, see and hear, taking all that information and not only storing it but trying to understand it. At that point, I think it's going to get really wild: AI becomes essentially the universal user interface for all technology, not only things like the glasses but all the IoT, all the machine sensors. I mean, machine data is already three times larger than human-created data at this point, and it's only going to increase. And if all of that is going to be stored and viewed by AIs, they're going to be advancing solutions that I don't think we'll be able to understand.
That takes me into another topic I wanted to chat about. We've seen stories of IT layoffs and a whole bunch of cuts in traditional roles, but LinkedIn is reporting a thousand-percent spike in AI-related job postings, which I think is really interesting. And Salesforce is talking about expanding its AI integration across its CRM stack, basically to turn every worker into a prompt-powered operator. So we're starting to see a lot of new job verticals being created, certainly faster than universities can adapt. And while traditional roles are becoming obsolete, which we've seen with every technology revolution, mass reskilling is really becoming a competitive advantage. Noah, have you seen anything on that, in terms of this shift starting with these AI-related job postings, and the opportunities there?
Sorry, I missed the end of that question, Lewis. Can you ask me just the last part again, please?
Sure, sure. So LinkedIn is reporting a thousand-percent spike in AI-related job postings. The last couple of weeks we've seen tons of layoffs, you know, IBM, 6,000, 8,000; it's been bloodletting across the industry. But we're now seeing a whole bunch of new job verticals getting created. Do you think this is the start of the shift in this space? Like when the car came out, I'm sure a lot of blacksmiths went out of business, and people making saddles went out of business. Do you think this is the shift we're starting to see, finally a little bit of light at the end of the tunnel?
Yeah, so I think we're seeing the same thing with AI that we saw with the dot-com bubble. I don't even know what half of these AI companies are doing. I feel like there's going to be a pop, and a lot of these companies are just going to die off, and we'll have the winners emerge. I mean, how many tech companies really became the next Google or Apple or Amazon? Well, Amazon wasn't even a tech company in the beginning. So I think we're in that part of the cycle, if you ask me. A lot of these companies are going to hire, people are going to have jobs in quote-unquote AI, and a lot of people are going to be laid off in a few years because the companies are going to go under. But what do you think, Lewis?
Yeah, I think we are starting to see these new jobs and new careers and new things being created.
I think people are actively trying to create these positions. They're talking about how, for coding, you're no longer just writing code; you're orchestrating AI agents. So I think we're starting to see new roles as AI takes over a whole bunch of stuff. There are still opportunities for humans to be involved; you still want humans in the loop.
On the education side, I think education is struggling to figure out how to train their students and graduates to meet this new challenge, because it's moving so fast. Harvard is trialing AI tutors in their humanities courses, which raises some concerns over the algorithmic bias we're seeing. There was a bunch of research done on healthcare AIs, and all the results that came out were focused on men, nothing on women, which is like, seriously, come on.
I've got a comment there. It's funny, you know, Harvard rolling out AI tutors, and all of a sudden we're going to see the bias lean more towards the center for Harvard. That's how we're going to know that it's biased.
I mean, Khan Academy is also doing AI mentors, and by their metrics it's outperforming by 75% on standardized tests, which is kind of crazy and interesting.
The good news is it means that education is finally scalable.
It means that you can find a really good teacher.
You can model them, create the AI tutor, and then scale them, you know,
across the planet 24-7. So instead of one person being, you know, the gate, you know,
you're stuck with trying to get it from one person. Now you can map that across the entire
globe. So I think from an education point of view and also, um, you know, from special needs,
I think it's just an amazing time in education. I want, I'm curious, uh, sorry, Louis, did I,
did I cut you off there? No, no, go ahead. Oh, yeah. So I'm curious to hear from Brian Lewis and obviously Cap as well. What what percentage of university? Let's just take American universities. What percentage of American universities do you guys see being dead in the next two decades, forget the next two decades, the next decade. I
mean, the next one to two decades, because like, like you just said, Lewis, AI is scaling education.
And I already had an easier time, for example, learning calculus, calc one and two from YouTube
videos than I did from listening to this TA try to
explain it and fail miserably at it.
And so now that you have these LLMs, I'm able to learn things so much easier than I was
before just by using GPT.
And so now that you have these LLMs teaching kids at scale, why would someone go and dump, I don't know, 50 to 100 grand into a four
year, quote unquote, education, unless they're pursuing a very niche field, like let's say
medicine, or, I don't know, chemical engineering. I just don't see the university model as it is surviving in the next two years.
I'm sorry, in the next two decades, because of how much AI is going to displace so many different teachers, educators, professors.
Yeah, I'll make a quick comment, then we'll pass it to Captain Levi. I mean, let's go with this thought: you've got a home humanoid robot that's kind of the butler or the housekeeper, but it also holds the institutional family knowledge. It could also certify you, and act as a guarantor of your skills and knowledge to the outside, to say, yes, you know, Noah's learned quantum, he's learned calculus, he's learned this, I've taught him, I fully certify it. So maybe, you know, instead of us having to go to Harvard, we'll have a home humanoid that can certify we've learned to a certain level of knowledge, and therefore we can then enter certain roles and positions. But that's just my kind of crazy thought. Go ahead, Levi.
Well, on this subject matter, I'm going to run us through a couple of scenarios, picking up from your humanoid robots, which have some specific categories and subcategories. I'll take myself, for instance. Just like Noah said, he finds it a lot easier, significantly easier, to learn from YouTube videos than from a TA.
Now, let's take it one step further.
A personalized version of you that teaches you stuff you wish to learn. Yeah, I'll say that again: a personalized version of you that teaches you stuff you wish to learn. Why do I say so? You start by clearly defining what you wish to learn, and then, with time, some questions, and a specific interactive phase, it adapts to your learning style. Then there's a specific feedback mechanism, maybe an instructor somewhere, or a back-end instructor, that's also, you know, watching or tracking your progress. And then the exams come, that phase where it shows you, okay, this is your approach and this is how you approach things.
Now, hopefully these universities, or these educational institutions, meet this with some kind of equally adaptive AI learning style. Because I'll find it somewhat easier if something adapts patiently to my learning style, and once I get it, I get it, because I never need to go back. And it connects the new concepts to old concepts, bringing about even newer concepts, so to speak.
So yes, I think AI is actually going to play an important role in the teaching and learning sector, the education sector, because of this adaptive learning style. Of course, while human therapists can easily adapt to a patient's style in close to an instant, because they have that specific training, not everyone can be a therapist. But these unique AI models are already pre-trained, so to speak, to be able to quietly evolve just as you would, and to train you as well on whatever course or field of knowledge you wish to acquire,
and with specific parameters and sub-parameters,
that's what I always like calling it.
So yes, it's actually made things easier for me, speaking from personal experience. And I equally feel it's going to make things a lot easier for everyone else who actually invests their time and resources into this.
Yeah, I think you're absolutely right. You know, we're seeing student engagement is up 30% with AI-augmented learning, which is really incredible. And also, if you think about it, we no longer have to be in class, right? You know, if we get our Apple glasses, or Google whatever-it-is, we can now walk around and learn in context, you know, and be taught in context, in the environment, live as it happens, which I think is a far more powerful way to learn. Think about it with language. We can now have immersion language learning
where the AI can start to speak to you in your native tongue,
but then over time gradually start to transform that
into the target language you want.
As you walk around your house and you're looking at things. I think our ability to learn with AI in the future is just going to be so much faster and better. You know, I know parents with children who have special needs, either they're autistic, or they have Asperger's, or there are some learning difficulties. AI can completely adapt to those and help them understand and learn in such a highly personalized way. I think it's just an amazing advance.
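As an aside, the gradual native-to-target shift described here for immersion language learning can be sketched as a simple mixing schedule. This is purely illustrative: the function names and the linear ramp are assumptions, not how any actual product works.

```python
def immersion_ratio(session: int, total_sessions: int) -> float:
    """Fraction of target-language words to use in a given session.

    Ramps linearly from 0 (all native tongue) to 1.0 (full immersion)
    by the final session. A real tutor would adapt this to the learner.
    """
    return min(1.0, session / total_sessions)


def mix_sentence(native_words, target_words, ratio):
    """Swap the first `ratio` share of word slots into the target language."""
    cutoff = round(len(native_words) * ratio)
    return target_words[:cutoff] + native_words[cutoff:]


# Example: an English sentence drifting toward Spanish over four sessions.
native = ["the", "cat", "sleeps"]
target = ["el", "gato", "duerme"]
for s in range(5):
    print(s, mix_sentence(native, target, immersion_ratio(s, 4)))
```

The point of the sketch is only that the blend is a single tunable ratio; the AI can move it as slowly or as quickly as the learner's comprehension allows.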
Go ahead, Noah. Oh, no, I was going to say, Lewis, I think you absolutely, and Cap, you guys
absolutely nailed it on the head. So in high school, for example, I didn't have access to any tutors.
And for the longest time, I thought that I just couldn't grasp certain concepts or I thought that I wasn't that smart or I just wasn't able to get things as quickly as other kids.
And in college, I finally had access to tutors. And so after class, I would go to the library a lot and sit with the tutors and have them explain things to me at my pace and in the way that I understand things.
And I realized that I'm actually quite strong in a lot of subjects that I once thought that I
wasn't strong in. And that realization came once
I had people that were able to explain things to me in a more personalized setting versus a group
setting. And it kind of made me look back and realize that I was actually, you know,
at times even more capable than the smartest person in my class in high school. And I think that's what AI does when you're trying to learn a subject.
It helps break things down so granularly that it shifts the way that people, especially people that don't learn in conventional ways, are able to absorb
information. And I think that's going to have a complete shift in the way that people in more
rural parts of the world or less developed parts of the world are able to catch up to people in
the developed world. I don't know. That's just my thesis. I totally agree. You're absolutely right. You know, we've all seen, you know, heard of the
cookie cutter mentality around schools. I love teachers. Some of my best teachers have been
awesome. But I think that's because they have this gift of being able to kind of clue in and
understand, oh, you're not understanding this. Oh, let me unpack that for you, right?
But, you know, when you've got 30 kids in a class, or 45 kids in a class, you can't do that for every child. But with AI, it's one-on-one.
With AI, it's been watching you and studying you.
It's been reading your documents.
It's been listening to you.
It's been doing everything.
And so when you sit there and you go, wow, I just don't understand that.
It has greater context to then answer that question and to give you an understanding
and education and to expand.
So I think it gives us the ability to be so much more resourceful, which I just think
from an education point of view is amazing.
But I think you're right.
You know, in the next 10 years, what's going to happen to colleges and schools?
You know, will they still exist in some way?
Are our children all going to, you know, grow up with a pet robot, and then there's the humanoid robot that's going to be teaching them?
You know, we could be in for some radical changes in terms of how we live our daily lives.
Yeah, I'm already seeing those changes, even three, four years ago, when it came to day-to-day tasks for Mobi, WhaleCoinTalk, for example. We used to host like four or five AMAs a day when we were in a period of hyper-growth. And a lot of times the preparation for those AMAs took a while, because you had to go through the whitepaper. I mean, I still do that. But I've always had a harder time starting than finishing. Like, I always have writer's block in the beginning. I don't know exactly what questions
to start with. And it's so cool to have this thing write up a bunch of questions for me. And then I changed about 60% of what it wrote for me, but it gave me that structure, that point of departure. And so I've seen a shift in the way that I'm interfacing with my work every day. And for me, to see how much things have evolved in just three years, because GPT made its debut almost three years ago, I'm excited for the next three years.
Because we've talked about this before, Lewis, and we've talked about this on this channel.
Technological growth is parabolic. It's not linear. Excuse me, it's exponential.
So, yeah, I think we're in for a ride.
And the next step, Lewis, is what we've talked about, the day-to-day tasks, right? Having a little personalized robot just going around and
cleaning your spot, organizing your clothes, washing dishes. I just think that this is going
to free up, there's obviously pros and cons, but this is going to free humans up. It's going to
free up time for humans to do more human things, if that makes sense. Yeah. And also telling us stories.
You know, one of the best ways to learn is through narrative and stories. You know, having the robot
explain it through a story. I think, you know, a lot of people complain kids have too much screen time. They're stuck in front of an iPad, they're stuck in front of their phones. Even I am. You know, maybe with the introduction of humanoid robots, that might go down, because we're actually talking with another thing, you know, a personality that we can interact with. Maybe at times that might be, hopefully, more interesting than keeping our eyes stuck on our screens. I don't know. Go ahead, Levi.
Oh, yeah. So, some inputs regarding the education system. I want to talk about the 30 kids. We already understand there's a clear difference between learning as a group and learning individually. So I just thought of the context of these 30 kids learning concurrently, at their own paces, of course, with AI-assisted learning metrics.
So basically, take someone like me, who is more often than not slower to get concepts, but gets them faster once I understand the granular things, the foundation. If it's teaching me and there are specific aspects I do not understand, I usually keep going steps back until I can capture the parts I understand, and then I move forward. I basically reverse-engineer my own learning process, and that is what has actually helped me.
So imagine there's an AI that already knows this is the best way I learn, and it keeps asking me questions and knows, okay, I get to start from here, and it teaches from those particular points.
And then there are 30 other people with different learning parameters,
some who learn faster than others.
There's a good chance that all of us are actually going to finish in a good amount of time, if and only if some other requirements are met. We're talking about the attention span. We're talking about the time that said pupil or student is willing to invest to acquire said knowledge.
We're also talking about the amount of knowledge that we are acquiring here.
Of course, it's going to be saved.
Now, that's the part where the kids have to make the sacrifice. So if they're not meeting this, say we're supposed to spend at least 30 hours a week trying to acquire said knowledge, and that kid is not spending those hours, the AI can actually give feedback, you know, back to the teachers or the staff: oh, please, let's pay direct attention to this kid. We need to help him make these kinds of improvements, so to speak.
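The backtracking Levi describes, stepping back through foundations until a concept clicks, can be modeled as a walk over a prerequisite graph. A minimal sketch, assuming a hand-made concept map; the concepts and the `PREREQS` table are invented for illustration:

```python
# Each concept lists its prerequisites. When a learner is stuck on a
# concept, the tutor teaches any unmastered prerequisites first, i.e.
# it "reverse engineers" the path back to solid ground.
PREREQS = {
    "limits": [],
    "derivatives": ["limits"],
    "integrals": ["derivatives"],
}


def study_order(goal, mastered):
    """Depth-first walk: unmastered prerequisites come before the goal."""
    order = []

    def visit(concept):
        if concept in mastered or concept in order:
            return
        for pre in PREREQS[concept]:
            visit(pre)
        order.append(concept)

    visit(goal)
    return order


# A learner who already has limits is routed straight through derivatives:
print(study_order("integrals", mastered={"limits"}))
# → ['derivatives', 'integrals']
```

The `mastered` set is exactly the per-learner state Levi is pointing at: two students chasing the same goal get different paths, and the tutor only spends time where the foundation is actually missing.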
Then, regarding the kids themselves growing up, I feel learning is a skill. Of course, while some, based on genetics, actually find it easier to learn stuff, others happen to do it based on environment, because if they see that others are learning and making progress, they will also want to make progress themselves. But what if, by some default in the near future, and that's what I actually do see, people are born with their own personal assistants? Of course, this changes a lot of things on a larger scale, because I do not know for sure if it makes things better.
The context I'm talking about is this: as soon as the kid learns to talk, or can understand, he or she is actually given his or her own agent, so to speak, depending on what the parents wish for him or her, as well as what he wishes for himself. So he grows up with that assistant, basically like his own tutor. So with time, he starts learning his own concepts, understanding what life is, and learning on the fly.
And because they're still young and their brains are still kind of grabbing things, they may, you know, grab things a lot faster when they get into that severely active learning phase, where they need to take in a lot of things at the same time. So I actually feel it's going to make these improvements as time goes by, considering the fact that personalized AI is actually outperforming group settings, so to speak.
So I think I'm going to pause my monologue here.
I think you're absolutely right.
And for me, it brings up, people talk about lifelong learning, right? And I'd like to use this to segue to the final topic, if we can chat about it, though we only have a little bit of time: if we have this ability now to learn continuously in an environment that's supporting us, then it brings up the question of how long we get to do it. And I just want to point out that an OpenAI-backed startup is revealing AI-augmented biomarker analysis that's predicting age-related disease onsets 30% earlier than anything current.
And we're seeing a whole bunch of stuff happening in longevity and health.
So if you want to learn for a
long period of time, I think we're in the best place now. We're seeing early detection
breakthroughs in age-related illness prediction. We're seeing enhanced genomics outperform
baseline models in the longitudinal studies. And we're seeing pharma trials incorporating
AI-predicted cohort segmentation, and a whole bunch of stuff happening on genetics and AI. I think we're getting to the point where AI is going to become central to personalized health and longevity, and it could end up adding, you know, 30 to 50 years for all of us. You know, Noah, what do you think the
potential impact is with AI and longevity and health? Do you think we're going to end up living
much, much longer? Well, I interviewed a gentleman, Aubrey de Grey.
Yeah, I know, Aubrey, great guy. Oh, do you really? Yeah. Yeah, I interviewed him about two years ago.
And I think he believes that within our lifetimes, we will achieve longevity escape velocity, where we can basically have new innovations coming out in medicine that will prolong our lives. Not make us immortal, but prolong our lives. So I wouldn't be surprised if we, at the very least, start to live much longer, right? So up to like 150, as opposed to, I think, what is the average age? I'm not sure, I'd have to look it up. If you're in Europe, it's 82. If you're in the US, I think it's 78.
But yeah, it's interesting. If you look at actuarial tables, right, you think about, oh, could I be immortal? Well, the thing is, no. I saw an interesting study on actuarial tables and life insurance. And essentially, if you took away disease, if you took away age-related illness, and you said, okay, the person can basically live on and on and on, what would happen? From an insurance actuarial table, apparently it's around 1,100 years. Right? So at 1,100 years, the universe is going to whack you in some way.
You're going to have a car accident.
A meteor is going to hit you.
You know, something bad's going to happen.
So while it's nice that we're, you know, we can live for a very,
very long time, I think at the end of the day, maybe not forever.
But, you know. Yeah, that's interesting, I've never heard that before. So wait, is that on average? Well, it's in terms of insurance tables. If you look at your potential to have an accident, you know, where a car hits you, a truck hits you, the plane drops out of the sky, a meteor hits you, those events occur. I mean, the probability is non-zero. So based on external death-impact events, apparently it's about 1,100 years, right?
And there's been interesting science fiction stories where that's actually been taken into account and people become hermits, right?
Because they don't want the universe to knock them off.
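The 1,100-year figure has a simple back-of-the-envelope reading: if the only remaining risks are external accidents with a roughly constant annual probability, survival is geometric and expected lifespan is one over that probability. The numbers below are assumptions reverse-engineered from the figure quoted in the conversation, not real actuarial data:

```python
# Assume a constant annual probability p of dying from external causes
# (car accidents, meteors, etc.). Expected lifespan under a geometric
# survival model is 1/p, so ~1,100 expected years implies p ≈ 0.0009.
p_accident = 1 / 1100  # assumed hazard, chosen to match the quoted figure

expected_years = 1 / p_accident
print(round(expected_years))  # 1100

# Even under that tiny hazard, surviving a full millennium is far from
# guaranteed, which is the "universe whacks you eventually" intuition:
survive_1000 = (1 - p_accident) ** 1000
print(round(survive_1000, 2))  # roughly 0.4
```

So "1,100 years" isn't a hard ceiling; it's the average of a long tail where accumulated accident risk eventually catches up with everyone.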
Well, back to the original topic, which is longevity and genetics and AI.
I think, you know, personalized medicine
with AI on the back end of it,
I think it's really exciting
because not only can we, you know, prevent illness,
we can have better quality of life, right? From AI solutions for your knees going out, to maybe organ transplants, you might see super solutions to that. I think it's really exciting in terms of longevity and health, and also mental health. We're seeing AI getting involved in mental therapeutics, and also drugs that support, you know, positive mental health. So, you know, at least
if we're going to live longer, let's be happier. Let's, you know, not have aches and pains and
let's learn as long as we can. I couldn't agree more. And you know what?
I'll take a thousand and one years.
That sounds like a brilliant lifetime.
I want to hear from Captain Levi, and then we can wrap things up.
Lewis, I know you have a meeting coming up, and I also have somewhere to get to at about three o'clock.
Yeah, I also have to switch from Wi-Fi to cellular because I'm about to go on transit.
I wish to connect the AI arms race to longevity, to learning at a very young age. If we ran the age statistics of all of the top AI researchers, I think we'd get an average of, I think, 20 to 40 years old. And so at 40, most people are actually at a different advantage than others, unfortunately. And how does this help the arms race? If my recollection is correct, I think the major contenders are actually the US and China, and among these AI researchers, there are a lot more Asian researchers than American researchers, in my opinion, you know. So I think what will actually help this race is, you know, starting
early, as early as possible,
getting them on board with how things work,
especially in the IT world.
And so I think that may be an influencing factor,
because most of the successful ones
actually started really, really early,
understood the concept of learning stuff and connecting it.
And it actually might help in the race.
So, of course, I think I'm a lot more optimistic than pessimistic, of course.
But that doesn't mean that there are still areas that we need to worry about.
So I think that was my final take on this.
So, Noah, why don't we wrap up?
And thank you, everyone, for attending.
Cap, thanks for those final thoughts. I didn't realize there was such a lopsided demographic in terms of, you know, the nationalities and the ethnicities of the people engaged in AI and building AI, and hopefully we start to see more of a balance. Because it is, in my opinion, the most interesting innovation of our lifetime. You know, it's weird to try to compare AI to the internet, to say, sure, the internet is the greatest innovation of our lifetime, because without the internet we don't have any of this.
So I don't like to compare. But I think that Bitcoin and crypto, it's a financial revolution, certainly, and it's going to revolutionize a lot of industries. And I think that AI is going to do even more. And, you know, they're not mutually exclusive; they're going to go hand in hand. And I'm excited to keep having conversations about developments in AI. Right? Two and a half years ago, we were just diving into ChatGPT and, you know, how cool it was, and how we wished it could pull information in real time rather than only accessing it prior to a certain date in 2021. And, you know, three years later, the topics that we're covering and the discussions that we're having are so much broader than they were just a few years ago. So I'm, again, excited for the next three years,
next year, really. I think things are going to change a lot. And even someone like me or Lewis,
I feel like I use AI every day to some extent, and I feel like I haven't even scratched the surface. There's probably so much that I'm not doing, that I'm not aware of, that I haven't been curious enough to explore.
So if you guys have any ways that you use AI on a daily basis that you think people might not be aware of
or that you think people might benefit from but it's not really mainstream yet,
feel free to shoot us a DM or just comment in the comment section of the space.
Co-hosting with Lewis, thank you all for joining.
We'll see you all on the next one soon.
Thanks, everyone. Comment and
ask any questions you want.
We're here to talk about it.