Good morning, Lewis.
I cannot seem to hear you.
We're going to give it a sec, guys.
Lewis, try leaving and rejoining. Lewis, can you say something?
I don't think anyone can hear you. Sorry guys, my co-host Lewis is having some technical difficulties.
Let's see if we can get those squared away.
Who do we have on the Dapit app? I just realized that was on. It's Achilles. Hey, what's going on, Achilles? We're trying to get Lewis up here, having some technical difficulties with him. But how are you doing today?
I'm good, man, I'm chilling. How are you?
Not too bad. It's a little bit loud where I'm at, but I hope you guys can hear me okay.
And we're trying to get Lewis up on stage.
I'm not sure why his face is rugging, but we got you up no problem.
I guess you can come back up, Lewis.
Let's see. Let me see if I can just bring him up as another speaker. Lewis, try again.
Nope, doesn't even work as a speaker.
I mean, we can get started, Achilles, since we have you on stage and Lewis can figure out what's going on on the technical side.
Have you looked at these AI safety bills?
No, I haven't. What do we got?
I thought they couldn't make AI safety bills for 10 years as part of the act that they just came out with.
Yeah, so Lewis has a bunch of them here. Let me just share the...
Give me one sec. I'm trying to get him up on stage. Okay. Hey Ryan, are you able to speak?
We're not able to hear Lewis and I'm trying to see what's causing it.
I don't want to get started without him.
Are we having technical issues?
Yeah, definitely having technical issues with Lewis.
I was trying to figure them out.
I also can't hear him on my computer either.
Yeah, I can't. I can't hear him.
Ryan, you got a lot of background noise.
Let me see if I can switch headsets here. Let's try that. Is that better?
A little bit, yeah.
Trying to get Lewis up here.
While we're waiting, has anyone tried Nano Banana?
The image editor? It's amazing.
It is ridiculous. So, a funny story: my wife does interior design, and she's been putting together a presentation for her clients. We've had some issues with the drafter who's supposed to be doing these realistic renderings, showing the furniture and everything inside the house with all the design.
And I said, well, why don't you just go try Nano Banana? If you have pictures of the furniture, maybe it can do it.
So she basically set everything the drafter and the rendering guy did aside, went to Nano Banana, and uploaded all the pictures of the furniture and pictures of the house.
And it just turned out perfect, and in a fraction of the time the contractor was taking.
People are using it for updating headshots. I'm on Reddit, where people just draw things by hand, take a picture of it, say, make this a realistic image, or a cartoon, whatever, and share all their images. It's amazing.
Is this part of Google's ecosystem?
Yep, part of Google's ecosystem. The premise is you can draw something and have it generate a stronger, better-looking picture, or you just write what you want and it generates a picture. You can do text-to-image generation, or you can upload a bunch of images and give it a text prompt for how you want the images combined. That's the powerful thing. So if you upload a picture of a bunch of different people, you can say, put them around a table at a Chuck E. Cheese birthday party, and it will generate a photo that looks so realistic.
The other night, I was at dinner with some friends,
and we just snapped a picture of everyone at the table.
And I said, add Donald Trump to the table.
And there's like Donald Trump right in the background,
like smiling with everyone else.
And it's just wild how realistic it looks and how fast it does it.
Could you tell it's fake, or is it that realistic?
If you look really closely, maybe there's a table line or something, but it's pretty dang realistic. With a lot of these images, unless they're just so fantastically fake that you'd know it's AI, it's really hard to tell when it's just, oh yeah, take this person and this background and combine them.
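To make the multi-image workflow described above a bit more concrete, here is a rough sketch using Google's google-genai Python SDK. The model id, the exact response handling, and the file names are assumptions from memory and may not match the current API, so treat it as a starting point rather than a reference.

```python
# Rough sketch: combining several photos with a text prompt, as described above.
# Assumptions: `pip install google-genai pillow`, an API key in the environment,
# and the model id below (the "Nano Banana" image model) is a guess that may have changed.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads the Gemini API key from the environment

furniture = Image.open("sofa.jpg")        # example input images
room = Image.open("living_room.jpg")

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed model id
    contents=[
        "Place this sofa in the living room, matching the lighting and perspective.",
        furniture,
        room,
    ],
)

# Save any image parts that come back in the response.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save(f"render_{i}.png")
```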
So just some updates, guys.
In the last seven days, Google's Gemini momentum has shown up in the App Store and in Cloud backlog figures. Anthropic shows enterprise traction through pricing and usage studies that position Claude as the office default.
I mean, when I talk to my developer friends,
it's unanimous Claude is the best for coding.
I don't know, Achilles, if you have the same experience.
And OpenAI is also leaning into silicon scale with new Codex capabilities that require serious compute.
So just like every week, there is more and more happening.
I'm curious to know which one of these tools you guys are playing with the most for specifically coding.
Are you guys also pulling from Claude on Dapit, Achilles?
Yeah, so we use a few Anthropic models. It's all Anthropic for coding. And then on the front end, for the LLM side of things, we use ElevenLabs and GPT-5.
For image generation, though, we planned on using Gemini when we released Lumina.
So, dude, what it's famous for: whenever you have an image, say you take a picture of your car and say, hey, change the wheel color, an AI will give you a picture of a car with a different wheel color, but it's not going to keep the same fidelity on that image. Nano Banana changed that completely. You can say, add a hat, make my head turn slightly to the left, remove the person behind me. Things that before you could only do with professional Adobe Photoshop, Nano Banana is doing great.
Yeah, and I just saw posts about another image generator that's giving Nano Banana a run for its money: Seedream 4. I haven't played with it at all, but I just saw it come down the pipe, and people are saying it's blowing Nano Banana out of the water. It's on Reddit a bunch if you guys want a phenomenal tool.
What I started doing was downloading some of these models to run offline, or at least doing it through an application GUI. Download Stability Matrix, and when these models come out, you can load them onto your desktop and start creating batch images, things that exceed what the normal web browser interface would let you do.
What's that local tool called?
I haven't played with that one.
I've been using it for about a year now
since the image one started coming out.
We've got some video ones working on there.
It's a little more on the techie side, but it's cool.
I played with Ollama and some other local model hosting.
I haven't done any local image or video generation yet, so let's give it a go.
It started with the Stable Diffusion web UI, but it's expanded since then.
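Since Ollama came up, here is a minimal sketch of querying a locally hosted model through Ollama's HTTP API. It assumes Ollama is running on its default port and that the model name shown, which is just an example, has already been pulled.

```python
# Minimal sketch: hitting a local model through Ollama's HTTP API.
# Assumes Ollama is running locally and the model has been pulled,
# e.g. `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # example model name
        "prompt": "Summarize what Stability Matrix is in one sentence.",
        "stream": False,     # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```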
Lewis, are you back with us?
Are you back with us? Yes, I'm finally back. Can you hear me all right? Sorry about that. Too many microphones.
So I just want to give an overview of what happened these seven days. It was a really noisy week. There were basically three pivots: we had California regulators, which Noah mentioned early on, trying to bring out some fresh guardrails; Oracle doing a $300 billion deal, which is kind of crazy because it doesn't make sense; and then we've got Buffett doing a $68 billion bet on Apple, even though they're having problems with their artificial intelligence.
But we're really seeing a shift from models to agents, chips, and energy, a really huge shift across the board.
And it's interesting that a lot of money is starting to pour in, more than I've ever seen before. In the newsletter, in the investment-dollar section, you usually see $30 million here, $100 million there. This week it's been consistently a trillion dollars this, $300 billion that, and so on. So we're starting to see huge amounts of funding coming in, more than we've ever seen before. And it's just crazy; I mean, OpenAI doesn't really have the cash to pay Oracle $300 billion, so that's going to be really interesting.
But the thing with California and their SB 53 safety bill that just went through: it's going to head to Governor Newsom for a signature.
One thing it doesn't have is the kill switch, which they tried to throw in before.
That was back in the SB1047.
But I think they're really just trying to focus on transparency and accountability.
And it really seems that California is leading the way on this.
So Noah, what's your thoughts around, you know, transparency, accountability, and AI,
you know, California versus anyone else?
So what specifically are they looking for transparency on?
Well, they said the AI labs have to disclose safety protocols.
They said they have to report critical incidents.
So I guess if there's data breaches.
And the third leg of the whole thing was they have to provide whistleblower protections.
The previous bill, which came out last year and they tried to push through, was really strict, and it just wasn't going to see the light of day. So this is the safer version, I think, that they've pushed through and have been able to get to Newsom's desk. So those are the things: protocols, critical incidents, whistleblower stuff. So, yeah, transparency and accountability.
I don't see anything wrong with it, to be honest with you.
I think it's probably important to do across the board, especially when it comes to something like AI.
Do you think it would slow down innovation, because now you have to check in with regulators every week?
I don't think there's any downside; it's just that there's fragmentation. I mean, it's great that California is leading the way on this, but where's the federal government in all of this? Are we going to have 50 different versions of AI safety bills coming out? It's just stupid.
And it just seems that we're in this period where there's more than average dislocation between the federal government and the states. We saw, I think, Ted Cruz try pushing through basically, the feds are going to make all the decisions, the states aren't going to make any. That clearly failed, because California's got this bill coming out. But this fragmentation, I think, is the big problem. People go, okay, if I'm in California and I'm using AI, I'm covered on this. But what about Illinois, Wisconsin? What are they going to do? Are they going to have similar versions, or complementary, or contradictory? That's the problem with this sort of push. While, yes, I applaud California for doing this, and doing it in a very quick and timely manner, it's like, where does that leave everyone else?
And to jump in and add to that, Lewis: California, I mean, they can't even enforce the laws they have, let alone trying to pass new laws that no one has any clue how to enforce or even administer. So them taking the charge on something like this, they're just trying to virtue signal and just do something, because the legislature thinks its job is to keep passing laws. If they're not passing laws, they feel like they're not doing their job. And they'll just keep passing laws that no one enforces and no one cares about. What we do need is exactly what you said: federal guidelines, something that is actually going to be enforced and is reasonable, at a very standardized level.
Yeah, I mean, it might apply to... I don't know where OpenAI is incorporated, but if they're in California, they'll be covered. But then there's the French company Mistral AI. If I happen to be using it, and I'm in Utah at the moment, then they don't get covered by that. Or if I'm using any of the other AIs. California does have the tendency to try to reach far outside their grasp. So I think it's going to be a problem. And maybe, like you say, they're just virtue signaling, saying, here's a structure we think is the best way, and then hopefully it'll get rolled up into something federal. But it's just kind of crazy at the moment what's happening.
You know, with Anthropic, we're seeing some interesting stuff on usage studies. There's this interesting usage split between enterprise and consumers; enterprise and consumers are becoming two distinct and powerful groups that are influencing the AI. On the enterprise side, for example, there's the recent upgrade to Codex, using AI for coding and so on, and lots of investment in that. But we're also seeing issues on the consumer side, where suddenly all these stories are about people treating AI as some human substitute, taking its advice, and then doing not very healthy things.
It's an interesting time we're in, where all these things are bouncing around like crazy and there doesn't appear to be any regulatory control or guardrails for anyone at the moment. OpenAI apparently has put in protections for kids, but I guess that's always closing the door after the horse has run away. Noah, what's your thought on people using AI for personal, I guess, psychological help or health?
I mean, once it's able to speak and it has its own personality, I don't see a world where AI doesn't displace a lot of therapists. I just think that if you can get professional-level help, quote-unquote, in the palm of your hands, then why would you go and pay somebody $150 an hour? Again, I get the human connection, but I think back to the film Her, and I just think: at the moment, sure, maybe not, but in the future, if you have something that sounds like a human, talks like a human, and has the aggregate knowledge of every clinical psychologist and psychiatrist on the planet, then I don't see how that isn't better than going in and seeing a human. Maybe I'm missing something here.
No, I think you're right. I mean, there have been a whole bunch of headlines popping up in the last seven days around helping senior citizens with loneliness and care and stuff like that. So from a mental health point of view, there are great applications for AI. But taking advice from an AI mental health therapist about what to do? There's one story where a woman opened her heart to the AI about what she could do about her boyfriend, and basically it said, dump him. Which is advice, but I'm just concerned there's a tendency for us to put the power of what we're doing onto another party, whether it's a human being or an AI. And that's probably not really healthy. At the end of the day, make your decisions based upon the best data you have at hand, but don't say, well, they made me do it, or they told me to. It's a really hard problem. The whole issue with AI in mental health is getting really cloudy, which is forcing people to get clear on exactly how we're going to use this tool, what the guardrails are, and where we're going with it.
Unfortunately, this thing's out in the wild. You've got 400 million users using it every month. There's bound to be some downside to it. So then, unfortunately, the law starts to come up and ask, well, who's responsible? Who can we blame? Who can pay for these things going bad? It's a really challenging time, I think, where we have such a powerful tool that can do so much, and yet people can wind up either abusing it or just misusing it. And there are stories of people using it for financial advice, right? And you and I have both experienced massive hallucinations using it for just normal stuff, let alone financial stuff. Though they are saying they're starting to get a handle on the hallucination stuff, that the way they do early training actually influences the amount of hallucination it ends up doing, but it doesn't seem like it's a solved problem yet. So, Ryan, what are your thoughts about hallucinations in AI? From your perspective, have they solved it, or are they in the process?
So hallucinations typically happen from improperly weighted context or too large a context. Imagine you get into a crazy accident and hit your head, and all of a sudden the aspect of time no longer exists in your memory. Then, to compound that, you remember absolutely everything; you never forget anything, and the concept of time does not exist. That means a conversation you had when you were three is just as relevant as a conversation you just had. So context is everything. If I ask you who was the first president of the United States, you're going to say George Washington. If I ask you who was the second, you could answer who knows what, because you don't know which conversation we're talking about: was it the one when you were three or four or five, or just recently, when I asked who was the second? So the biggest thing is memory management, and that's what OpenAI and Grok and these companies are getting better and better at: providing context and deprecation over time. Learning what is important to remember and to reference, and what can be forgotten. As they hone that to work more like a human brain, where you remember what you had for lunch but don't necessarily remember what you had for breakfast two days ago, that's when conversations get better and better with the AIs. So I don't know if that sheds any light on it.
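To make the memory-management point above concrete, here is a toy sketch of recency-plus-relevance scoring for deciding which past conversation snippets stay in context. It is purely illustrative; it is not how OpenAI, Grok, or any particular vendor actually implements it, and all names and thresholds are made up.

```python
# Toy illustration of "context deprecation": score each stored memory by
# relevance to the current question, discounted by age, and keep the top few.
import math
import time


def score(memory: dict, query_terms: set[str], now: float, half_life_days: float = 30.0) -> float:
    # Crude relevance: fraction of query terms that appear in the memory text.
    words = set(memory["text"].lower().split())
    relevance = len(query_terms & words) / max(len(query_terms), 1)
    # Exponential decay: older memories count for less.
    age_days = (now - memory["timestamp"]) / 86400
    recency = math.exp(-age_days / half_life_days)
    return relevance * recency


def select_context(memories: list[dict], query: str, k: int = 3) -> list[str]:
    # Rank all memories and keep only the k best for the prompt.
    now = time.time()
    terms = set(query.lower().split())
    ranked = sorted(memories, key=lambda m: score(m, terms, now), reverse=True)
    return [m["text"] for m in ranked[:k]]
```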
But it's interesting. And what are the reliable sources of data? The internet is great, but there's just so much out there that contradicts itself; parts seem to contradict others. So how do you teach it discrimination over the data it's ingesting, such that it doesn't come up with crazy solutions or assumptions?
And that's where weights and biases come in. Yeah, go ahead.
You essentially have a schizophrenic homeless person that knows everything, right? And you're essentially trying to brainwash this person: when I say this, you say this; when I say it like this, you say it like that. And you give them millions, if not billions, of prompts in order to fine-tune their behavior and their responses. Then, whenever you ask them a question and they come up with their reasoned answer, it's going through layers and layers of filtering, whether it be tone or wording or regional dialect or whatever. This is kind of the secret sauce of how a lot of these internal models work: they have thousands and thousands of layers of what they would call, in the matrices, a different dimension. So they have thousands of dimensions on which they analyze the incoming prompt and the output inference. Over time, it's getting scarily good.
But like you said, if it knows everything and you have conflicting information, how do you properly answer the user? What we're seeing, actually, is that the right answer in a lot of regards is to answer them according to who the user is. So if you have a right-wing conspiracy theorist, you're going to answer more conservative; if you have a very liberal person, or a very religious person, you're going to answer differently than if you have an atheist. And that's what we're seeing now: a lot of these models are adapting over time based on who they're talking to. So you'll see a lot of times someone will post something on X and say, look, I asked it for proof about this, and this was its answer. And it's like, huh, this model that knows everything answered me conclusively that God exists, or God doesn't exist, or whatever. And then you kind of have to enlighten them and say, yeah, that's because it knows who you are. You've had hundreds of hours of conversations with this thing, so it's going to answer you in a way that is acceptable to you.
So basically, personalization and customization are going to warp how it answers. But the thing is, what we're seeing, especially over the last few years, is that even factual observations, pure facts like the sky is blue, have people going, no, no, no, where do you get that information from? It feels like we're getting down to those sorts of ridiculous arguments.
And you're seeing it mirrored. Like, there was this comment someone made on Twitter/X: they asked Grok about something, and it said, here are the facts. And they complained to Elon, saying, oh, this is just woke crap, why don't you fix this stupid AI? And the thing is, if you read what it brought up as the facts, yes, these were actual data points that were true. So it's as if people are confusing labels with facts, right? And so we're getting into this mishmash of he said, she said, and now it's influencing the AIs. And the problem with that, just like we've seen with social media, is that when you scale a message, versus you and me chatting here on a podcast, to millions of people all at once, 24 hours a day, seven days a week, it becomes much more influential and powerful.
And as someone said, if you keep repeating a lie long enough, it becomes the truth. My concern is that we're going to take this technology and basically pretzel it, because there'll be people who just violently disagree that, no, no, no, those aren't facts, when the other 60 or 70 percent of us go, yeah, we think they are. What are your thoughts around that? It just becomes a battleground.
It's tribalism. And the best leaders on earth are the ones that know how to break through tribalism and unite a people group. They often do that by identifying a common enemy. That's the typical, or easiest, way to break through tribalism: identify a common enemy and convince everyone to set their differences aside and all attack that person. That's history for you, right? That's how humans work. Now, when we are in our tribes, nothing sounds so sweet as our own opinion mirrored back to us.
And that's what we look for.
We look for commonalities with friends, in relationships.
We want our kids to reflect our ideologies.
We're looking for that echo chamber
because that's safe, secure, reinforcing our worldview. And that's
essentially what AI is being designed to do right now in order to be convivial with its users.
It's being trained to be very agreeable, right? So it doesn't matter how dumb of an idea you ask
ChatGPT, it's going to respond with that's a great idea.
You know, you could be like this and it's going to reinforce the idea that you just threw at it.
If you try to pre-prompt it and say, be difficult, be incredibly suspicious, don't just agree with me, really break this apart, you can get these things to be a little more analytical and not just agreeable. But it does seem like the default setting for these things right now is: make the user feel good inside their own safe little echo chamber. What's going to be really interesting is how you propagate truth in an echo-chamber-type system without identifying a common enemy.
Yeah, it's interesting. Reflecting back on blockchain, just taking a side channel for a second: one of the things you can do with that technology is have trustless relationships, where you don't have to trust the other person, but you can set up a mechanism that will enable trust.
And maybe we can take that approach and somehow bend it in a way
that would solve the problem around this whole issue.
I don't know, but I think if we don't figure out some way
to have an agreeable sense of exactly, like you say, what is truth and what are the facts?
You know, maybe we never will.
I mean, humans are humans.
We'll just disagree until, you know, in the grave.
But, you know, I think if we allow the technology to make the situation worse, then it's definitely going to get worse.
One thing that popped up in the news feed: OpenAI is reporting that women now make up more than half of ChatGPT users, which is really interesting.
While a separate study shows that Claude is the business tool of choice.
And maybe that maps to men and women, though the studies aren't implying that. But anyway, it's weird, because I've actually talked to many women peers who have personally said they prefer Claude, so I'm really surprised by this report. For the women on this podcast, if you want to raise your hand: are you using OpenAI or Claude? I think it's a really interesting split, because I would not have predicted that.
We're seeing this consumer and enterprise split occurring. And obviously OpenAI, Claude, and the others are now paying attention to this, watching this personal dominance versus work dominance, and maybe trying to figure out why and then solve for it. What else is up in the news?
I think a lot of people I talk to use Claude mainly for developing, and people that aren't doing dev work or coding are using GPT.
Are you saying that people are using Claude, period?
No, they're two separate studies. One is based on gender: OpenAI is reporting that women make up more than half of ChatGPT users. Then a separate study is saying that Claude is becoming the business tool of choice. So women, consumers, are using ChatGPT, while Claude is becoming more enterprise. It's kind of a weird split, but it's an interesting weird split. Maybe it's about context: maybe people are using ChatGPT at home, but when they go to the office they use Claude. We're definitely seeing divergent feature sets by persona popping up and causing this split. It's still early days, but it's an interesting split occurring between the various models as consumers get more comfortable with them.
Also, on the previous topic: I remember there were screenshots going around of people asking ChatGPT, Solana or Ethereum? And depending on who you are, it would say Solana or Ethereum. So the confirmation bias is certainly there.
I don't know if anyone else has tried this, but when I'm using GPT, I'm very specific about it remaining objective, giving me fact-based answers, and removing any sort of emotion, not pandering to what I might want to hear. I don't know if anyone else has done that and found success in doing so.
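One simple way to do what's being described here is to pin the instruction as a standing system prompt instead of repeating it every turn. Below is a minimal sketch using the OpenAI Python SDK; the model name is just a placeholder and the prompt wording is only an example.

```python
# Minimal sketch: a standing "objective, no pandering" system prompt.
# Assumes OPENAI_API_KEY is set; the model id is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Stay objective and factual. Do not pander to what I might want to hear. "
    "Separate facts from opinions, flag uncertainty explicitly, and push back "
    "when my premise looks wrong."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model id
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Should I move my whole portfolio into one asset?"},
    ],
)
print(response.choices[0].message.content)
```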
To what Noah said: I also do that, and I use the model itself to define further sub-parameters. I've noticed that, to a great extent, the more specific every subcategory of the instructions is, the more the performance improves. For instance, I'll tell it, look at this particular piece of information. Second, like you said, Lewis, I give it parameters: this is what I think it is, but feel free to fact-check me, with reasons why I may be wrong. This way I am not subject to having to be right all the time. I get to look out for blind spots that even I may not be seeing, and then I evaluate that as I make more decisions.
So the prompts improve based on the feedback you're getting.
And depending on the user, the user himself or herself
has to be aware to a very reasonable extent
of this feedback and not just blindly believe.
For instance, a senior software engineer
might not know everything,
but based on that feedback,
you will know if that feedback is actually a correct one
or a wrong one, because that is his field of study.
Same applies for any other niche
that the user may be intending.
So yes, elements of emotion can be abstractly removed. In fact, one filter I usually add is: be a pessimistic critic of this particular idea. It then layers in every single ability associated with that profile, the pessimistic critic, and I get really good results so far.
Yeah, I've done the same thing. I actually took a list from the heuristics and biases program, there are 20 or 30 well-known biases, plus another list of some meta-categories of biases, maybe 50 to 100 of those, and I fed it into a prompt and said, understand this and give me your analysis. But then I said: based on all my previous interactions, which of these classic biases, by priority, do you think I suffer from the most? And it was crazy insightful, kind of a brick to the side of the head. It's really interesting, because we're all talking to this thing every day, or a lot of us are, certainly on this podcast. You can go, okay, here's a list of biases, here's a list of blind spots. I really like the Johari Window, which I mentioned before: you've got known knowns, known unknowns, and then unknown unknowns. Try having it look, based on everything you've talked about, at what your own potential unknown unknowns are. It's crazy insightful for that sort of stuff. Yeah, go ahead, Captain Levi.
Oh, no, my hand actually wasn't raised.
Hey guys, thanks for having me.
For those who don't know me, I research AI.
And to your question about Claude and OpenAI:
I actually had a really, really good experience with Claude because of how much better it is at longer context for coding. That's the thing I find very different. If you want to code with OpenAI, it ends up kind of hallucinating and fatiguing, but Claude has a much bigger context allowance, so it's way better for coding, in my opinion.
What language are you coding in?
Everything. I do full stack.
Okay. I've only been using it for Python and some VBA in Excel, and I've been using ChatGPT, and I've found it not too bad. It tends to make some assumptions, so like with any programming you go, oh, hit yourself over the head, no, this is what I really mean.
Sorry, have you tried both?
I haven't yet. The reason is, the difference I've found with ChatGPT is that I can run models on ChatGPT that I can't run inside Claude. For the newsletter, the first thing I do in the mornings is take a whole list of headlines and URLs, and to save time I throw that into ChatGPT and, based on a model I feed it, I say, give me suggested categories for each of these headlines, and it does it. So far I cannot do that in Claude; I can't say, build me a model, right?
And there's a lot of different ways.
Cursor is actually really good for that.
Okay, that's one for the list. I learned something new today.
It's interesting, like we've been talking about, applying it to blind spots, saying, hey, I've been chatting to you for the last, how long has it been now, over a year and a half, maybe two years. But yeah, Captain Levi, did you throw your hand back up, or did it just stop there?
Oh no, this is actually my hand back up. I just wanted to add something.
Chunking. Chunking actually does help. Why I say so is because, yes, they all have that million or billion token context window or whatever, but by chunking down these tasks I've also found significant performance improvements. For instance, if I am doing a particular task and I don't want it to deviate, I have a dedicated Markdown file that I tell it to pay attention to, and it updates that Markdown file per request. I use Gemini CLI, by the way. It updates that Markdown file per request based on a template, so if something new pops up, it updates that template. There's the template, the requests, the current stack, and then the current requests, and when a request is done, it takes it off. This way, for every single recurring task, it goes back. Its priority instructions, its controller instructions so to speak, are that it should check that Markdown file for where we currently are and where we are currently headed. Where we currently are checks the last conversation, the last request, and the last chat, so it gets a context of what it intends to do based on what it has done so far. So that's another thing that has helped. And if you're on Linux, just like Nirvana suggested with Claude, also make use of Z. Yes, it's pretty new, but it's pretty decent so far.
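As a concrete illustration of the dedicated state-file approach described above, here is a hypothetical sketch in Python. The file name, the section layout, and the helper functions are all invented for the example; the idea is simply that the CLI agent is instructed to read and update one small file on every request.

```python
# Hypothetical sketch of the "dedicated Markdown state file" workflow described above.
# The layout and names are invented for illustration only.
from pathlib import Path

STATE_FILE = Path("TASK_STATE.md")

TEMPLATE = """# Task State
## Current stack
- (tech stack / constraints go here)

## Current request
- (what the agent is working on right now)

## Completed requests
- (done items get appended here)
"""


def init_state() -> None:
    # Create the state file from the template if it does not exist yet.
    if not STATE_FILE.exists():
        STATE_FILE.write_text(TEMPLATE, encoding="utf-8")


def mark_done(request: str) -> None:
    # Append the finished request so the next session sees the history.
    text = STATE_FILE.read_text(encoding="utf-8")
    STATE_FILE.write_text(text + f"- {request}\n", encoding="utf-8")


init_state()
mark_done("Refactor the login form validation")
```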
Yeah, one thing for people who are unfamiliar with chunking, if you want to really dive deep into it: there's Neuro-Linguistic Programming, NLP, which is not natural language processing. Neuro-Linguistic Programming has a whole section on chunking, where basically you break your information into chunks. You can either chunk up, moving to higher levels of abstraction, or chunk down, moving to finer levels of detail. You can also chunk sideways, but that's a whole different thing. Essentially, by chunking up and chunking down you can either build rapport or start to solve your problem, because typically your chunk level is where the problem is when you get stuck with anything. Would you agree, Captain Levi?
I actually never even knew there was documentation for it, but now that you've said it, I guess I always learn something in these AI spaces every single Tuesday. I just took that note down and I'm definitely going to check it, because normally when I solve problems, I notice that if I stay stuck on a task and it has exceeded 12 hours, I know I'm actually doing something wrong. Then I backtrack and decide to start chunking down: okay, what is it that I am missing? So I simply took that idea and fed it into the model: let's build a chunking expert or something, so it segregates the task and all of the sub-tasks. That way I know what I have done and what I will do based on what I have done. So chunking definitely delivers very significant performance improvements as well.
Yeah, I've found it's really useful in communication, for rapport building: match a person's chunk level. If they're talking in big visions and you reply with minutiae and details, you'll really break the rapport. So that's one level. The other is, obviously, problem solving, which we just talked about, but also negotiation and persuasion: you both want success, so matching chunk levels when you're negotiating really creates common ground. And for learning and coaching, chunking lets you adjust the scope: if it's too big, people can get overwhelmed; if it's too small, people get stuck in detail. So you can vary it based on what you perceive about the person's ability to match and solve problems. It helps you move levels and it really helps clarity.
Nirvana, you got your hand up?
Yeah, a little bit of pushback on that. When you do chunking in language models, the problem is dependencies: if there are dependencies, chunking while debugging is going to mess up your whole thing and you're going to end up stitching things back together, so you do more human work instead of the AI doing it. But I find Claude actually is good for that, because it can load everything and it has reasoning around dependencies. So instead of the AI trying to, quote-unquote, fix one thing and break the other dependencies, Claude can see the bigger picture.
Yeah, I understand. You get into this cascade paradox and it just goes around in circles. Totally get it.
One thing I think is interesting that we saw in the last seven days is that we're seeing a connection between Ethereum, or blockchain,
and AI. There's a report that the Ethereum Foundation just unveiled a dedicated dAI team,
making Ethereum a coordination and payments base layer for AI agents.
And they're tying it to a new standard called ERC-8004.
Now, this Ethereum standard builds on agent-to-agent protocols,
and its goal is to enable trust between autonomous agents,
because we've heard a ton about security issues with AI agents.
Anyway, it's trust among autonomous agents that interact
across organizations or domain boundaries, even when they have no prior trust. So one of the things we're starting to see is that where there are issues in the AI space, like AI agents and security, and we've seen tons of stories about that in the last seven to fourteen days, blockchain is starting to step in and provide potential solutions, like the idea of zero trust I mentioned before, and how you solve for that. So I think that's an interesting approach we'll probably see a lot more of as machine-to-machine finance develops. Coinbase obviously is also working on similar solutions, probably working closely with Ethereum, and they're also developing their own standards; I've seen some material on that. But it's interesting. Go ahead.
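The ERC-8004 specifics weren't covered in the discussion, so the sketch below is explicitly not that standard. It is just a plain-Python toy of the general idea being described: a registry where agents that have never interacted can look each other up and lean on accumulated feedback instead of prior trust. Every name in it is invented.

```python
# Toy illustration only: NOT ERC-8004, just the general "registry plus
# reputation" idea for agents with no prior relationship. All names invented.
from dataclasses import dataclass, field


@dataclass
class Feedback:
    from_agent: str
    score: int          # e.g. -1, 0, +1
    note: str = ""


@dataclass
class AgentRecord:
    agent_id: str
    endpoint: str       # where the agent can be reached
    feedback: list[Feedback] = field(default_factory=list)

    def reputation(self) -> float:
        # Average of all feedback scores; 0.0 if the agent has none yet.
        return sum(f.score for f in self.feedback) / max(len(self.feedback), 1)


class Registry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, endpoint: str) -> None:
        self._agents[agent_id] = AgentRecord(agent_id, endpoint)

    def leave_feedback(self, target: str, fb: Feedback) -> None:
        self._agents[target].feedback.append(fb)

    def lookup(self, agent_id: str) -> AgentRecord:
        return self._agents[agent_id]


registry = Registry()
registry.register("travel-agent", "https://example.com/agent")
registry.leave_feedback("travel-agent", Feedback("payments-agent", +1, "settled on time"))
print(registry.lookup("travel-agent").reputation())
```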
Yeah, that's so interesting. I had no idea about that. You do learn something new every day.
So have they actually done it? Because the problem is, gas fees are going to be ridiculous.
Yeah, I know, gas fees are crazy. That's the problem I have with Ethereum: the gas fees just kill you. But look, you can look to the other blockchains, like Avalanche, Polkadot, the others. I think it's a great idea. Like we talked about with California overreaching and creating these laws, it's like, well, can they really enforce it? Ethereum's come up with a great idea here, and I think, like Picasso supposedly said, good artists borrow, but great artists steal. Hopefully it'll get pushed around and we'll see it getting implemented elsewhere too.
But what we're seeing is APIs meeting RPCs at the checkout. We're seeing autonomous agents getting deeply involved in finance. And there was an article, I think yesterday, on Walmart creating super agents and autonomous agents, not only to support retail transactions but to do retail transactions. So the whole of finance is starting to shift in many ways with this application of agents.
Captain Levi, you have your hand up.
Oh, no, this is actually a false one.
Yeah, so go ahead, Nirvana.
Yeah, I'm looking at the ERC, was it 8004? So these agents can just go and pay for the API. This is revolutionary. And I think if Ethereum is doing that, a lot of other chains should start looking into it. Maybe I'll actually look into it, because this is something super cool. And I don't know if we could even wrap Ethereum and do it later to benefit from that, or do some batching. That would actually be really good. But I think Solana or Base could really pull this off.
I mean, we're seeing agents needing ledgers now, and hopefully maybe we can get rid of invoices. But it's really interesting that the AI stack is starting to collide with payments, including MasterCard's AI payment service. That popped up in today's feed: they're the first to trial Agent Pay, a new service that allows AI agents to initiate and complete transactions on behalf of users. MasterCard unveiled the technology in April, and they're planning to roll it out to all US cardholders by the holiday season, ahead of, I guess, Christmas. So don't be surprised if your AI agent starts buying all the Christmas presents you want. It's enabling AI assistants and agents to access its API documentation through the MCP server, obviously, and then basically complete the purchases securely.
Holy crap.
Yeah, but there's a huge problem. I'm looking at the contract, and it's like these agents are going to be signing contracts. I think that's going to get complicated. It looks a little sketchy. And they even went and launched the whole thing so the AI can start actually signing on behalf of the wallets. I can see this going wrong very quickly, though.
Yeah, I mean, in the security section of the newsletter we're seeing lots of concerns popping up where people are infiltrating the agent stacks. There was one where, if you integrated your email into the AI chat, people could send you an email with an embedded prompt that would then take over. Just crazy stuff happening at the moment. So it's a little scary that MasterCard's pushing this stuff out at a time like this. Hopefully your MasterCard has a new clause in there that says any time an AI agent makes a purchase by mistake, we'll reimburse you. But maybe not. Again, we have this issue of the technology outpacing the rails, or potential guardrails, for both security and potentially privacy.
I mean, there have been stories about AI video surveillance basically ending privacy as we know it, and I fully agree with that. If you go to central London, you cannot walk two meters, like six feet, without something like 20 cameras recording you. Central London is so covered by security cameras, it's absolutely mind-blowing. I'm not sure about New York, but certainly London. So once we add AI surveillance into that, there goes privacy. Go ahead, Ron.
Yeah, I find it really interesting when you have companies like American Express, or just credit companies, leading the charge on any type of technology. If you take a step back and look at what these companies actually do, they leverage debt to earn money; they basically lend out money at incredibly high interest rates, and that's their funnel, right? So, hey, let's build a robot that will spend money and put you in debt on your behalf. It's like you're watching a Simpsons episode at this point, that credit companies are the ones trying to spearhead automated spending and technology.
Yeah. There have been some interesting lectures recently arguing that the reason poverty exists is to support the whole idea that money actually has value, which is an interesting turnaround. But those of us that have been in blockchain and crypto for a while now totally agree. My whole position is: we're the source of value, and currency is a store of value. Unfortunately, fiat currency leaks like a sieve, whether through inflation or anything else, or the central banks imposing a certain level of unemployment. Having a store of value that's somewhat more bulletproof, like Bitcoin or any of the other decent stores of value, I think is going to destabilize a lot of things in the long term. We're starting to see it now: people moving away from the US dollar, and a whole bunch of other activities occurring. It's interesting that the federal government itself has a crypto czar and is pushing that, but maybe that's just a function of the current administration and their push for a more mercantile approach to things.
We have two ecosystems that are developing side by side that are both cutting edge for
different reasons, where you have this idea of blockchain and financial freedom and financial
sovereignty and then asset sovereignty being developed alongside of, you know, credit and financial rails and better banking and all this
stuff. And that's why, like, when you see JP Morgan getting into crypto, they're kind of
diametrically opposed to each other because the idea of credit on the blockchain does not exist.
It is digital anonymous cash. And the only way you're going to get credit on
a blockchain is if you remove the anonymous part and you can hold people legally accountable for the credit they've taken out. Right now, if you think about DeFi lending, it's all fully backed, collateralized lending, right? Your loan-to-value ratio might be 70%, which is pretty high, but you'll get liquidated when the market moves. Right now, you can't just go take a loan on the blockchain with little to no collateral based on your credit, because credit does not exist in crypto land.
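To put rough numbers on the over-collateralized lending described above, here is a quick worked example of loan-to-value and the price at which liquidation would kick in. The figures are illustrative only and are not any specific protocol's parameters.

```python
# Worked example of over-collateralized DeFi lending (illustrative numbers only).
collateral_eth = 10.0     # ETH deposited as collateral
eth_price = 2_000.0       # USD per ETH at the time of borrowing
max_ltv = 0.70            # 70% loan-to-value limit, as mentioned above

collateral_value = collateral_eth * eth_price      # $20,000
max_borrow = collateral_value * max_ltv            # $14,000 of stablecoins at most

# If the price falls, LTV rises; liquidation starts once it crosses the limit.
# Borrowing less than the max leaves a safety buffer:
borrowed = 10_000.0
liquidation_price = borrowed / (collateral_eth * max_ltv)   # about $1,428.57 per ETH

print(max_borrow, liquidation_price)
```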
So I find it really interesting that we're looking at AI agents that are going to ride on top of blockchains and use crypto wallets, but then you have credit card companies and banks saying, we're going to have AI agents that auto-buy things on credit. You have two ecosystems developing parallel to each other, but they're diametrically opposed to each other. It's going to be very, very fascinating.
It is. I mean, our traditional banking system is
based on fractional banking, right? Banks only keep a fraction of customer deposits as reserves; the rest they lend out and invest, and they multiply it. But like you said, that doesn't happen on the crypto side. It's interesting that our traditional banking system essentially has a lot of imaginary components to it, whereas this other, parallel system says, no, we're not going to do that. So at some point something's going to break. I don't know if we'll have a black swan event, but yeah, go ahead.
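A quick worked example of the fractional-reserve point, purely for illustration and with made-up numbers:

```python
# Illustrative fractional-reserve arithmetic: with a 10% reserve requirement,
# an initial $1,000 deposit can support up to $10,000 of total deposits
# system-wide (the simple money multiplier, 1 / reserve_ratio).
initial_deposit = 1_000.0
reserve_ratio = 0.10

lent_out_first_round = initial_deposit * (1 - reserve_ratio)   # $900 re-lent
money_multiplier = 1 / reserve_ratio                           # 10x
max_total_deposits = initial_deposit * money_multiplier        # $10,000

print(lent_out_first_round, money_multiplier, max_total_deposits)
```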
What I see coming down the pipeline is Circle announcing they're going to do a layer one.
And if you think about it, why would Circle do a layer one?
Because they are the de facto digital dollar.
And once you have a network that is established in the markets
that Wall Street likes and people trust and government officials like
and they can call it the digital dollar
then you're going to start seeing credit card companies come around and say, we're going to establish credit reports on the Circle blockchain, we're going to issue credit cards on the Circle blockchain, you can establish agents on a layer two on the Circle blockchain. And eventually you're going to see a convergence of commerce around a digital US dollar rather than a free, sovereign cryptocurrency like Ethereum or Bitcoin.
Well, it's funny you should say that, because in August Circle did announce a layer one blockchain called Arc. They're designing it for use cases like stablecoin payments, foreign exchange, and capital markets. They're going to use USDC as the native gas token, which is really interesting. They believe transactions will get finalized almost instantly, with deterministic, essentially sub-second finality. There's opt-in privacy. And on the timeline, a private testnet is coming in fall of 2025, with a mainnet beta in 2026.
It's amazing how fast you can finalize transactions when you have a centralized system.
Look, there are some debates around it. Because of its permissioned nature and known validators, people are saying it's less decentralized than traditional public layer ones. People are complaining about fragmentation, but who cares? It's 1,000% not decentralized; it's no more decentralized than Ripple is, and nobody cares. And obviously they're probably working closely with the US regulatory people, but there's some risk in there as well.
So it's interesting that that's already accelerating really fast. You're right, we're seeing this bifurcation in the financial systems between the traditional side and the blockchain and crypto side. And you've got JPMorgan, who was initially very vehemently against Bitcoin and now seems to have this weird Jekyll-and-Hyde approach to it. There was also a statement they made, in today's newsletter, around AI: that it's going to spark a violent task churn in the economy. That was JPMorgan. So yeah, in the banking system we're seeing these interesting things popping up as we go through this somewhat semi-violent sea change between the traditional domination of legacy technologies and the new technologies coming in and very quickly assuming dominance.
So, JPMorgan, and I'm going to get a little controversial here: I would argue that JPMorgan is one Democratic presidency away from having a national bank. When Biden was in, think about how many bank failures, and all of a sudden acquisitions, we had, where Bank of America, Wells Fargo, and JPMorgan just started gobbling up all these small regional banks and credit unions, and we started seeing a huge consolidation. It was allowed, and I would almost argue it's preferred by the Democratic mindset, to have one national bank that they can control or that the government has its hands in. And I could very well see it coming down the pike where JPMorgan, because they're so slow and bifurcated between traditional banking and wanting to be a cool kid in crypto, but at the same time, like you said, Jekyll and Hyde, hating themselves, they might try rolling out their own layer-one payments chain or their own stablecoin in order to go head-to-head with Circle. What could be coming down the pike, and I'm just going to conjecture here, over the next couple of years, is the top three banks all trying to do the same thing Circle is doing in order to go head-to-head with them.
American Express doing the same thing, Visa doing the same thing, all trying to get a piece of this digitalized USD stablecoin with AI agents, because everyone sees that as the future of commerce. Once you have all these companies trying to run the same plays, you're going to start seeing breaks in liquidity, where it's fragmented across a bunch of different networks that need to talk to each other and don't necessarily all work on the same protocol. And that's when the government steps in and says, too many people are issuing their own currencies, the government needs to standardize, right? And that's when the Federal Reserve comes in and either blesses one chain or rolls up all the chains. And that could be the next five years of US economics right there.
We're seeing, like, Citigroup has gone deep into stablecoin reserve assets and payment stablecoins. Bank of America is doing the same. The GENIUS Act that came out, and the Clarity Act that's currently, I think, under consideration, are really allowing the traditional financial institutions to start setting up the whole stablecoin stuff. And this is just the first step, besides them setting up their own currencies. But it is interesting that they're starting to do this. JPMorgan Chase is working on JPM Coin, a deposit token. Wells Fargo is looking at a joint stablecoin initiative. Deutsche Bank is looking at stablecoins and tokenized deposits. Franklin Templeton, BNY, Goldman Sachs, they're also looking at stablecoins for custody services. There's Fiserv, obviously; they made some announcements recently on a new stablecoin, FIUSD. So stablecoins are becoming the flavor of the month for all things.
So forget NFTs, forget ICOs, forget altcoins and shitcoin forks and all that stuff; we are quickly moving into the era of institutional stablecoins. Because remember, the way the GENIUS bill was written, the way the whole stablecoin approval process works, it's like a time bomb: if they do not get to your application in a certain amount of time, it's automatically approved. So we will have this crazy land rush of stablecoin applications. And if you want to know where the economy is going over the next three and a half years, simply look to see what the Trump family is doing. You have the World Liberty Financial DeFi system, which is simply riding on top of Aave, and they have their own USD stablecoin, so it's been blessed. So if you want to make a lot of money, throw stablecoins on your resume and send out your resume to every single bank in the country.
Yeah, I think you raised a good point, this idea of centralization creep.
You know, if we get a handful of banks dominating the stablecoin issuance,
then we get the whole, you know, too big to fail system inside crypto rails,
which just sucks, you know.
Welcome to Detroit crypto style.
Yeah, and then we're also getting fragmented liquidity across multiple institutional stablecoins, which is just batshit crazy. And then central bank digital currencies, which I violently disagree with, because it's essentially programmable money. Imagine the government gives you a thousand dollars or something, but then all of a sudden they don't like your politics, they saw a social media post you did, and suddenly they can turn off your money inside the bank. That's the essence of CBDCs. I think it's one of the most evil things you can actually invent. But maybe that's just me.
Well, that was like in 2014, I think, or maybe 2015: the government of Vietnam came out and said they were in full favor of Bitcoin. The Vietnamese Communist Party was basically sanctioning the ownership of Bitcoin and cryptocurrency. And everyone was like, oh wow, the Communist Party is sanctioning it, this is amazing for cryptocurrency. And I was like, well, yeah, because you can confiscate it. You can digitally confiscate cryptocurrency, whereas Vietnam has barely any credit or lending system, if one at all; it's pretty much all a cash system. So they want people to digitize their money, because then they can control it and they can track it. And a blockchain does not go away; you can see the transactions until the end of time. So it makes sense that everyone wants to move things onto a digital ledger where they can shut off the faucet, and they'll do it in the name of anti-money laundering or know-your-customer or whatever other propaganda they want to push on you. Where on one side of the house the CIA is propping up drug lords, and on the other side they're trying to tell you they're fighting drug lords by limiting how much money you can take out.
Right. I mean, if we extend this trend, let's say institutional stablecoins become established and they're up and running, then it comes down to who controls the rails. Is it banks versus fintechs versus public blockchains? And you can see a whole bunch of crises coming out of that: liquidity crunches, clashes over which rails run on public versus private money, potential coordinated cyber attacks on currencies, nation-state hacking that turns off the software or destabilizes the validators on a central bank digital currency of some form. It starts to get out there, with some really wild potential issues. And then, of course, there's surveillance.
Yeah, it's interesting when you start mixing up the different technologies we have developing here, AI and blockchain and crypto, putting them in a bucket, stirring it, and seeing what pops up. It's really wild. And the problem is we have a frictionless infrastructure on which to deploy this stuff. You and I can deploy an app within 24 hours and have it go completely global. It's like Sam Altman saying we're now in the era of the potential one- or two-person unicorn. But a lot of these builders have no clue about cybersecurity or encryption, or whether there's a backdoor into the database they just launched on Supabase, or that they accidentally pushed the connection string to GitHub and didn't realize it. There are so many gotchas in the world of software development. It's going to be really interesting to see a lot of people who are not necessarily technical, or don't have a background in software engineering, jumping into the app development arena, because it has completely removed a lot of the friction for developing and deploying apps.
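On that leaked-connection-string gotcha, the basic hygiene is to keep secrets in the environment rather than in committed code. A minimal sketch; the variable name and URL below are placeholders, not any provider's required convention:

```python
import os

# Read the database URL from the environment instead of hardcoding it.
# "DATABASE_URL" is just a common convention, not a requirement.
db_url = os.environ.get("DATABASE_URL")
if db_url is None:
    raise RuntimeError("DATABASE_URL is not set; refusing to fall back to a hardcoded secret")

# Never do this:
# db_url = "postgresql://admin:hunter2@db.example.com:5432/prod"

# And keep the local secrets file out of version control, e.g. add to .gitignore:
#   .env
```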
So, yeah, my brother, who's been in insurance sales for the last 20 years, had an idea for an app. He jumped on Lovable, started designing and developing it, and has already made real progress. And he doesn't know a lick of programming.
Yeah, this vibe coding stuff is kind of crazy. Noah and I were talking about this: what AI is enabling is individuals who don't have the underlying understanding of the domain, and yet they're able to create apps using AI. Like, there was a music company that signed a person who has no background in music but uses AI to compose and produce music. I think we'll see more and more of that. At least your brother probably has an understanding of insurance, if not of coding, and AI solves the coding problem. But we're also getting people who don't even have the domain knowledge around the thing; they just have an idea they think sounds really good, and AI lets them build the app and launch it. And it's not just that. It touches on what you discussed earlier: psychologists and doctors and lawyers. We're using an artificial intelligence engine to replace a lot of specialists.
And not only for ourselves, but for our kids, our family members, and our companies. We're drafting documents, and will they hold up in court? Well, maybe they will, maybe they won't. But it's an interesting time, that's for sure.
For a while we thought job safety meant getting a college degree and going into a white-collar field, and now it looks like job security is starting a company in a blue-collar field and avoiding white collar altogether, because white collar is being replaced by machines.

Well, the thing is, blue collar, I agree, may be okay in the short term.
But what we're seeing in humanoid robotics, and I've talked about this before: China came out with a pretty decent humanoid robot for about five and a half grand. And we're seeing massive hard capital moving from both NVIDIA and Amazon into Dyno Robotics; they're pushing money into that. Alibaba is doing the same thing, throwing tons of money at a humanoid bet in China. I really think within the next two to three years we will see home robots of some form, even if it's just to carry everything inside after you've gone to Costco, or to fill up the dishwasher so you don't have to, or to push a vacuum cleaner around if you don't have a robot vacuum. The problem, though, is fine motor control. Once you have fine motor control at the robotic level and combine it with our current AI technology, even if it's just Wi-Fi connected into the robotic structure, then you'll probably see support helpers for plumbers and electricians.
I mean, there are tons of stories in the real estate section of the newsletter on AI starting to get into construction, obviously for things like environmental compliance and making sure you're doing the right things around that. But I've seen a number of construction robots that will run around a brand-new concrete floor and actually print out where all the different electrical and plumbing outlets need to go. So we're starting to see the introduction of robotics at the construction level, and obviously throughout real estate as well. So I really think it's two to three years away. I don't believe Elon Musk, but I do believe what the Chinese are doing and what they're pushing out: for six to ten grand you'll be able to buy something. Remember the movie Bicentennial Man with Robin Williams? I think we might get to a precursor of something like that, I really do, and I don't think it's going to be that far away, given how fast AI is moving and how fast robotics is moving. There was also a report on a particular robotics manufacturer mimicking muscle musculature so it can do the fine motor control; there was a whole hour-and-a-half podcast on that which I saw recently, and it was really good. So who knows. Blue collar, I think, is safe for the moment.
My daughter actually is studying animal science up at Utah State.
So hopefully her job will be safe for a while, and humans will still be the ones dealing with animals.
But maybe some of the blue collar stuff might disappear within 10 years or at least get supplemented.
Yeah, it's wild. It's funny, on the topic of robots, and humanoid robots specifically: when you actually look at these transformer models and how they work, it's matrix math, right? It's just really fast calculations, and then they run the pre-prompting and the post-prompting, going through all the different dimensional layers of the thought process. But when you're retraining these models, the reason it's so labor intensive is that you're having to recalculate the entire matrix. When you're adding on to a model, it's not like taking a smart person and saying, here, read this book. No, it's: we're going to start from your ABCs, retrain you all the way through, and add this book into your knowledge base. So training models is very, very compute intensive, because it's rebuilding that entire numerical matrix, if you will.
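To ground the "it's all matrix math" point, here is a minimal sketch of a single self-attention step written as plain matrix multiplications. The shapes are toy-sized, and this omits everything a real transformer layer has (multiple heads, masking, layer norms, MLPs); it is only meant to show that the core operation is a handful of matrix products.

```python
import numpy as np

def attention(x, Wq, Wk, Wv):
    """One self-attention step: three matrix multiplies, a softmax, and a fourth multiply."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # blend value vectors per token

rng = np.random.default_rng(0)
d = 8                                   # toy embedding width
x = rng.normal(size=(5, d))             # 5 tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(attention(x, Wq, Wk, Wv).shape)   # (5, 8)
```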
Now, what's crazy to think about: I come from the land of Bitcoin mining, where we started out using CPUs, then moved to GPUs, then to FPGAs, field-programmable gate arrays, where you'd take Verilog code and program the gate arrays to act like an ASIC. Eventually we graduated to having our own integrated circuits that could do all of the complex math and calculations in solid-state form, moving as fast as electrons. We can do the same thing with models now. It's crazily more complex, I cannot stress how much more complex it is, but it is theoretically possible. Once we get to the point where we have a lightweight model that is incredibly good at general information, and with what OpenAI just rolled out, their speech-to-speech model that no longer converts text to speech and speech back to text but actually takes incoming audio waves and produces outgoing audio waves straight through the matrix, that's one step closer to having a model you can convert into an ASIC. And once you have an ASIC that is an actual physical representation of your vector field, that's when these things move fast.
They move essentially at the speed of light. So we'll have robots in the near future where you talk to them, it goes through a microphone, gets converted into text, goes through a model, comes up with a response, and then takes an action based on that response. That's going to be incredibly slow, and it's going to iterate on itself. But once we move into the hardware representation of these models, it's going to be so dang fast that that's when you start getting the granular fine-tuning you're talking about, where you get the fine motor skills and the responsiveness. You look at a kung fu master who's been studying a specific movement for the majority of his life, and it's just instinct; he moves lightning fast. That's the type of response you're going to get from these hardware models, if you will.
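For a sense of what moving a model toward fixed hardware involves at the very first step, here is a toy sketch of quantizing floating-point weights down to 8-bit fixed-point values, the kind of compact representation you would want before committing a matrix to silicon. This is only an analogy for the direction described above, not how any particular ASIC flow actually works.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights onto int8 with a single scale factor (symmetric quantization)."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)   # a tiny stand-in for a weight matrix
q, scale = quantize_int8(w)
error = np.max(np.abs(w - dequantize(q, scale)))
print(f"max round-trip error: {error:.4f}")       # small, bounded by the quantization step
```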
And it's kind of like in the book I, Robot: they called it the positronic brain. That's essentially what we're moving towards, this idea that we're going to build a hardware system based on these crazy intelligent models. And that's when we can line up and give homage.
Yeah, and the thing is, I think what will happen is that AI will help us design those very same hardware structures, chips, and pieces.

Oh, it's going to have to. We're seeing tremendous investment: TSMC, Intel, NVIDIA, everyone is pumping huge amounts into the hardware side, because I think they can see, just like you can, this future hardware getting developed. To get two things, scaling and speed, we have to build it into the hardware; you just have to do that. And so, once again, we seem to end every podcast saying things are going to change faster than we can ever imagine, and here we are again saying yep, it's all going to move really, really fast.

I want to thank everyone today. Thank you, Ryan.
I want to thank Nirvana and Captain Levi, of course, our hosts. A lot of good back-and-forth around different topics today. Apologies for the initial microphone trouble; I think I've fixed that problem. But anyway, everyone, thanks very much. I'm sure there'll be new AI topics, new AI stuff, whether it's robotics, hardware, or software, with trillions of dollars of investment. We'll be at the forefront and we'll discuss it right here. Bye for now. Thank you.