MosaicML $1.3B | Harvard's A.I. Professor | LLaMA Romance #AITownHall

Recorded: June 27, 2023 Duration: 1:27:27
Space Recording

Short Summary

The transcript highlights significant developments in the AI sector, including Databricks' $1.3 billion acquisition of MosaicML, reflecting major investment trends and the concentration of talent and technology in the Bay Area. Additionally, discussions on infrastructure dominance by cloud providers indicate emerging trends in AI's technological landscape.

Full Transcription

Okay, we're getting folks all on board. Hey, everybody. And Black, welcome.
GMGM, GM, how's it going? Going well. It's going well, a lot going on in AI. We've got some big headliners and a lot of little stuff. It seems like the world of AI never stops despite all the things happening in geopolitics and elsewhere. So, very busy space.
Yeah, it's crazy. It's like this kind of constant backdrop to everything else that's happening.
The world is in chaos and AI is kind of soaking it up, you know?
Yeah, definitely. I'm not sure AI is going to make things less chaotic over time, but I guess we'll see; obviously, only time will tell. You know, I think we're still getting some folks up here. But yeah, I mean, Black, before we get to the meat of the matter, there's a lot going on. What are some of your favorite things happening in the world of AI outside of the headliners we got?
I think one of the craziest ones that I know we're going to jump into is the Harvard integration of AI.
You know, it's just amazing.
I think some of the stuff I know we're going to kind of deep dive into, but some of the stuff is just on the surface, you look at it and you're like, wow, you know, I...
you're wondering what the use cases are going to be as these things roll out. And the thought of an
AI professor at Harvard didn't really cross your mind until you read it. You're like, well,
that makes sense, you know? So I think things like that are crazy. And then the other thing that really
kind of blows my mind, I know we're going to talk about too is Google's integration into sheets.
Just watching some of those short little kind of demos they had of that, that's just kind of
mind boggling. And it just goes to pointing the direction that,
all this stuff is going to be integrated into everything.
It's just going to be a part of everything we do.
And I think it won't be so controversial and much more use case-based type stuff.
How about yourself?
What do you think is like really standing out right now?
Yeah, I think there's just a lot of updates.
I think this Mosaic thing is pretty interesting.
I think we're going to dive into it.
So obviously yesterday, Mosaic, the $1.3 billion acquisition by Databricks, you know,
lots of people talking about, you know, there being a bubble in AI, yes, maybe, perhaps.
But I think we should all wake up to the fact that a two-year-old San Francisco company,
62 employees, so that's like $21 million per employee.
just got acquired by another startup that has raised billions of funding.
So yeah, I mean, I think it begs the question of where the moats are: what are investors and certain companies seeing as the moats in AI, from, like, a business perspective?
A lot of people are saying, hey, there are no moats.
but, you know, clearly some people believe that there are, right?
So I think it's kind of interesting to dive into.
I mean, yeah, since we're diving into that: you know, John, let me know. Or sorry, Black, let me know if you have some perspectives on that.
And then, yeah, then we can dive into the rest of the group here.
Yeah, I think the moats are just going to be something that is going to be built in per use case.
So one of the things that happens often, and that I would love to not do during this conversation, is to blanketly label everything as AI. Because every single AI is its own use case, and therefore everything sort of has its own limitations and what it's built to actually do.
So it's much more like a program integration for the most part with all these different things.
Having said that, we as humans tend to just build things without asking; you know, we've kind of asked how can we do it rather than if we should do it.
And so I think that's really, to me, that's really where there is concern, if any, is how far are we willing to go just as almost like a space race or a nuclear race?
You know, how far are we willing to go, you know, different companies, even in the private sector to put that flag in the ground and say, hey, we did this.
You know, maybe that's where there's a lack of a moat: our own ambitions.
That's sort of where I see it kind of playing out, to be honest.
But I don't know.
We'll see what happens.
There's so much happening.
I can't wait to jump into it.
So I have a speculative question.
So everybody expected Putin yesterday to give a very dramatic talk on TV.
And then his talk was ho-hum.
Did he use ChatGPT to write his talk?
That's hilarious.
I had an interesting experience.
So, you know, looking at Harvard's AI,
and I've published in cardiovascular physiology at Harvard.
But I am very interested in the Khan Academy.
And I think Khan Academy as a learning modality for school children is very, very interesting.
And Khan is now launching Khanmigo, which is a GPT-mediated learning modality.
So I had the opportunity of working with my kids who are homeschooled
and leveraging GPT and teaching Tom Sawyer.
So I think that it's going from the rarefied era of Harvard
down to the base aspects of homeschool.
So this is something that I've got my eye on and good day, everybody.
Good day, John.
Thanks for diving in.
I mean, that's, you know, one of our big headliners.
So we may as well, you know, we may as well dive in.
So for those who aren't aware, so Harvard has intro to CS course, CF50.
And basically it is a course that people like Mark Zuckerberg, Steve Bomber, other luminaries
have taken.
So, you know, basically it's, you know, it's one of the most popular courses.
I hosted, eons ago, I hosted a lunch there with Professor David Malan, sort of the main person who runs that course,
and his students. And it was great. I mean, it was amazing to see the minds there. But basically,
they're going to allow ChatGPT to help the students code. And for anybody here who has used things like ChatGPT and other similar models to code, I think you realize sort of the power that it enables, right? Even for non-coders, even for coders, right? It helps. I mean, some people are saying that it makes a 1x engineer into a 10x engineer. I don't know if that's entirely true, but, you know, and it makes 10x engineers even better, even greater.
So that's the, you know, that's kind of the setup.
And I think the big question to be asked, well, first off, what is this, right?
Number one, what are the details?
The Harvard Crimson posted, you know, an article about it.
It seems a little less, you know, crazy than it first seems, right?
It's not going to be a full-on professor, just a learning tool, which it already is for most of us.
So, yeah, maybe like what is it and what are the implications to education to allowing AI to train and educate our youth?
AGI, I'd love to go to you.
Yes, I just want to say that what I loved this week was that Google DeepMind announced Gemini, which they say will eclipse ChatGPT by offering the possibility of doing some planning and problem solving. So that means that they will be using a kind of reinforcement learning to train those language models.
And that's what I've been saying
for a few months.
So that will make things very interesting.
And for the AI tutor, I think at Harvard it will be extremely powerful. I've been trying myself to develop some kinds of courses with ChatGPT, and it's amazing the kind of courses that you can develop. It's a very personalized kind of tutoring. It will be one-on-one, and I think it's the best way to learn. That means that you have a tutor that is exactly adapted to the student. So I think it will do amazingly. I think people will be learning much more, in a way that is much more compelling. You can learn at your own rhythm. You can learn what you want to learn. And if you have difficulty, you can ask ChatGPT for clarification. So, I've been using it myself, and it's just amazing. I just love it.
AGI, I want to echo your sentiment.
Not only have I been using it, but I have a ritual now: I have my morning coffee and ChatGPT-4, and I go through a tutorial.
I sit down and I have my cognitive and technological shot of espresso.
I find it very, very interesting.
Here's the point that I'd like to make.
It's extraordinarily personalized.
It's extraordinarily convenient.
But it's also harkening back to fundamental aspects
of learning a la the Socratic method.
And the iterative nature of the process
is profound and transformative to me.
because I have that back and forth.
And I want to push on this just a little bit more
that not only do we have the iterative nature of learning,
which is extraordinarily powerful,
but it harkens to this sort of internal monologue
that defines much of humanity,
the way we think, the way we act.
And what we're creating now with AI and the LLMs
is what I refer to as an internal dialogue.
There's a very, very highly personal component here, and Brian can really speak eloquently to this,
that we have our own private LLM that allows us to have this iterative personal dialogue
that leads to not only education, but transformation, introspection, mindfulness.
It's a lot of cool stuff.
So I'm going to go back on mute, but it is an amazing time.
Lots going on.
Lots of hands.
You've got a lot of reactions, maybe Shakar, and we'll go to others as well after that.
Yeah, I think the iterative nature that John just called out is a critical aspect of really what makes learning through, you know, large language models so incredible.
And, you know, even broadening the scope a little bit: there are obviously a lot of different AI applications; it's just that most people are primarily comfortable interacting with LLMs. When we think about the value that really has for a student, right? It's no longer my job to learn it all myself. I no longer have to learn through examples in a textbook that are not, you know, germane to my interests.
I don't have to, you know, be afraid to ask questions that I otherwise wouldn't ask in front of my peers.
I think it really changes a lot about the nature of what it means to learn and rewards people that are genuinely interested in learning.
Now, I think the flip side of this is also true, right?
A lot of people have obviously seen just the rate at which you can produce something mediocre with large language models.
And I think that really goes to the heart of what it actually means to learn and what it actually means to be a student.
And so, you know, the upshot here is that I think the Harvard model, the primary integration with CS50, was not only using it to code, but using it to debug your own code. We're going to start seeing more and more changes in how we think about learning, what it means to complete an assignment, what a good score is,
Just relative to, you know, really focusing on the point of education in the first place.
I think, frankly, you know, there are a lot of authors that have talked about the banking model of education, which I think is what we live in today: you store a bunch of facts and are asked to regurgitate them. And, you know, honestly, I think that's kind of a shitty model to learn from anyways. So I'm really excited to see how this continues to change education as a paradigm. But I think, you know, we're going to start seeing much more AI entering the education space. And teachers that, you know, embrace this as a part of their pedagogical method are going to be significantly better positioned to support their students.
I'm looking at this, though, and I find it really interesting to think about how we're kind of machine-learning the AI, and AI will then be machine-learning us. I'd love to kind of hear some feedback; maybe we go to GP.
Yeah, thank you, Black. I love the Harvard AI use case, the professor use case.
I've seen a lot of students flounder on the practical assignments in remote learning contexts.
And I think in executive programs also where they're giving an overview of a domain that does involve a need for a certain level of coding, from people who are from non-coding disciplines, I've seen people drop out at that point because they're just not able to get the type of continuous feedback that the professors are, you know, they're limited.
And the time zone difference too.
So in the Saïd Business School at the University of Oxford,
you saw a lot of people drop off who were really keenly interested in the subject matter.
But around week six, when it started to get a little practical,
even just a small bit practical, for the economists, the doctors, and others. On the assignments relating to even interpreting the curves and how reinforcement models work, they started to drop off. I love the Harvard AI professor use case.
I think, though, especially for undergraduate students, not for postgrad, it'll enrich, as the previous speaker said (my screen is black so I can't call out who it was, apologies), it'll enrich the learning experience, because one of the things
that happens is you'll always have people who want to work hard.
And I think the meritocracy will work much better in that environment
where the AI is available 24-7.
And the meritocracy will reward those,
reward those who delve into the areas that are outside of the curriculum also.
And I think that's when we'll start to really see students rewarded
at a performance, meritocracy level, rather than at a curriculum, check-the-box level. I love it.
GP, that's a really interesting argument you make: that, you know, some of this other lower-level stuff can go away, whereas we can start to compete on, you know, potentially higher-level thinking. But, you know, now we've had several speakers, and perhaps it comes with the territory, but everyone here seems to love AI and seems to think this is great. Does anybody think this is a bad thing,
right? I mean, we now have AI and ChatGPT able to get into places like, you know, top-tier business schools, for example, passing the LSATs and stuff like that, right?
I mean, maybe that's a good thing.
I don't know.
But like, does anyone want to paint the other side of this picture
or is it just uniformly good?
Because, you know, we're all AI simps here.
Well, I just, I don't enjoy being the skeptic. I'm always the naysayer on this space. And, you know, I'm always the dude that says, nay, nay, you know, be the 10th man.
But in this context...
I think it's a beautiful use case because I think despite the bias that may exist in a profit-driven AI to facilitate education, it's still education.
Now, in the context of a dogmatic, inexact social science, like psychology, psychiatry and so on, where the bias will be more predominant, I think in this case, as the normal naysayer or 10th man on this panel, I do not
dislike this use case at all.
I think John spoke first about there's a different use case
for every AI and some of it's not AI.
But in this particular instance, as the normal naysayer,
I'm a yaysayer today on this particular matter.
It will reward people for effort, and I like it.
And I think it's going to be less biased in the coding.
All right.
We got GP as a yaysayer.
Any naysayers?
Who thinks this is bad?
Does like literally no one think this is bad?
I'll be the skeptic.
Let's hear it.
So, in sum, I do think it's great, but I want to remind people why Socrates was eventually put to death.
He was put to death because he asked provocative questions.
The people of Athens didn't like it. But when I think about it, we want to be able to ask people provocative questions and force them to think very hard outside of the box. And I'm not convinced ChatGPT is yet able to do something like that.
So for routine stuff, absolutely.
But for teaching people to think, teaching critical thinking, asking provocative questions?
Maybe not yet.
Gotcha, yeah.
I mean, that goes back to Plato, right? He wanted to ban books, like, literally, writing, because he thought it would affect memory. So similar sort of sentiments there.
You know, maybe, Brian, I'll go to you.
I've seen your hand up. But, yeah, I mean, do you agree with this? Any new insights on this development?
Any new insights on this development?
Well, thanks, Eugene. Absolutely. First off, you can start doing what Harvard is doing in your own home using personal, private AI. I'm actually using, right now as we speak, something called StarCoder; it's a WizardLM AI model. It's 15B and can run on a consumer-grade CPU. And it can code better than most university graduate students, according to the benchmarks that are being established.
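[A minimal sketch of the kind of local, CPU-only setup Brian describes, assuming llama-cpp-python is installed and a quantized model file has already been downloaded; the filename below is hypothetical, and this is just one of several ways to run such models:]

from llama_cpp import Llama

# Load a quantized code model for CPU-only inference.
llm = Llama(
    model_path="./wizardcoder-15b.Q4_K_M.gguf",  # hypothetical local model file
    n_ctx=2048,     # context window size
    n_threads=8,    # CPU threads; tune to your machine
)

prompt = "Write a Python function that checks whether a string is a palindrome."
out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])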
You know, what is coding?
Coding is storytelling.
Humans are toolmakers and storytellers.
And it's telling a story.
You're telling a story to a machine.
And because machines did not understand us when they were first made, let's call it the first computer, we had to use machine language, right?
So now we've made machines intelligent enough to understand human language.
And this is where the bifurcation takes place.
And I'm in both worlds.
You know, I learned how to code in machine language and Forth and all sorts of languages that are now in my brain but not very useful.
So what happens when you give creativity to everybody?
What happens when everybody's language can code into something powerful
that only a very elite group of people had access to?
That's what we're witnessing with AI right now.
And that's why some of this is sort of, you know, we're candlemakers talking about how this new wire inside a glass bulb from this crazy guy called Edison is not really going to change our job; we're just going to make better wicks and, you know, more creative wax.
Yes, we have all these real existential, philosophical threats to us. But that's what you can do right now.
It's not in the future.
And yes, do you need other people around you?
Of course.
But as the models grow better,
and this model is only about three weeks old, right?
Most of the models I deal with are days to weeks old.
The Cambrian explosion that we have right now in AI models
are absolutely fascinating.
They're not coming from corporations.
They're not coming from government.
They're not coming from academia.
They're coming out of people's garages.
People who are just dedicated.
Brian, that's interesting, because I recently found Faraday, which is actually a very similar setup to what you just described. It's private.
Also, you can download, you know, uncensored models that allow you to kind of have gateless access to everything that's there. And there are a lot of different ways to use it.
I run it on a regular, you know, old-school Intel Mac. So, like, I mean, it runs completely fine. And so I find that to be one of the interesting factors: there's this sort
of decentralization that happens with AI as well, outside of sort of the mainstream
narratives and some of these kind of larger stories that we were talking about.
there's a lot of people who are looking at these things that have been around in this space for a while
who know where to get it and know what the accessibility is. I think accessibility is one of the biggest
points that we've talked a lot about, especially in the art world of things, with people having access,
especially for like with disabilities or being able to use
some of the more AI art type tools for art therapy or things of that nature.
I want to kind of ask Illustrada for her perspective, because I know this is a conversation we've had a lot between us in the art world of things. But I'm really curious about Illustrada's take on the Harvard use case and what her thoughts are there.
Absolutely. Yeah.
Taking it back to kind of the more practical implications of that, I think it allows for the learning to be much more tailored to individual styles,
and then also the potential for using it with folks
who have learning disabilities as well.
I think this is leaps and bounds better than what can currently be offered to folks in those situations.
And I think it sets the stage, Harvard doing this, sets the stage for almost, you know, the whole educational industry to say, oh, hey, okay, so there is a way to work with AI.
It's not just about cheating or plagiarism or anything like that, but it can really be used to enhance the learning as a tool.
The personalized nature is going to compensate for the lack of individualized attention that we see so often in education today.
It's admirable, by the way, Brian, that you mentioned you coded in machine language. For anyone who's done that, it's kind of like coding directly in weights, which is kind of hard, but some folks, I know Karpathy tried. But yeah, I mean, I think these are all good perspectives.
You know, I feel like, I mean, it sounds like nobody disagrees that this is a good thing, which I think is unfortunate, that there's no debate over whether this is good or bad.
But maybe the conversation that's emerging, the debate that's emerging, is: well, where does talent then get concentrated, right?
Can the random person in Uganda, you know, learn the same things as somebody from Harvard?
And I think this has been a trend that has been happening for, for years with like online education, right?
So like, do we feel like that's true?
I mean, if you look at MosaicML's $1.3 billion acquisition by Databricks, which is San Francisco, right?
Bay Area, right?
There's this argument that there's a concentration of talent in the Bay Area specifically.
I mean, there's other hubs around the world, of course.
But at the same time, perhaps this is the great democratizer, right?
Maybe that's the debate that should be, you know, that should be emerging.
So I know we got a lot of hands.
I mean, does anyone feel super strongly about this?
Maybe some new folks.
Josh, I know you haven't jumped in yet.
And then maybe Spinks also hasn't jumped in.
Yeah, thank you so much.
I wanted to share one point, kind of echoing Brian's sentiment, which is it's very interesting
to look at things, other tools that have come around in the past, things like Excel.
Okay, Excel came into Harvard in like the late 90s.
And look, data is also a story.
Data is also, you know, information is a story.
And what these students were able to do is aggregate this data and tell financial stories.
More than that, use functions that folks really did believe was an element of cheating.
Folks said, hey, if you're able to get all this data, show it in this way, do break-even analysis, you're cheating.
But the truth is, is what it did was democratize the ability for folks to understand this data and show it to other people.
Now, this was a huge moment, because in a real way, Excel was like a mediocre thing at first, and what it evolved into, because of universities like Harvard, was a powerful tool in business. So I want other people to kind of weigh in on this. But Brian, I know this is just
touching on exactly what you're saying, which is, look, data's information and we've seen
tools come out to democratize this many, many times. And, you know, I'm interested to see
where this goes in the schools, but ultimately I think it's going to help people streamline creation. We'll see how this goes.
Will it be the next great tool?
I think a lot of us would agree.
What do you got?
Yeah, I was going to say that I think one of the reasons you're getting a lot of positive feedback here is because the approach that Harvard took was very deliberate, very thoughtful, very sort of conscientious. And what I mean is, you know, they decided deliberately not to use ChatGPT or GitHub Copilot.
So, and the reason for that was because they thought that those two are actually too helpful.
So what they did was they developed their own large language model, the CS50 bot.
And basically what that is: it's what they call, and this is in quotes, "similar in spirit," right? But more focused. Basically, the difference is it'll lead students to the answer rather than just handing it to them.
So I think that Harvard's approach was very deliberate, very thoughtful.
They fine-tuned it, and I think it just shows that it's really up to the individuals: how they use this technology is how it's going to go.
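[A minimal sketch of the "lead, don't hand over" tutoring pattern described here. This is not Harvard's actual CS50 bot, just the general idea of constraining a model with a system prompt; it assumes the openai Python package, an API key in the environment, and an illustrative model name and prompt:]

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_SYSTEM_PROMPT = (
    "You are a teaching assistant for an intro CS course. "
    "Never give the student a complete solution or corrected code. "
    "Ask guiding questions, point to the relevant concept, and "
    "suggest what to inspect or test next."
)

def tutor_reply(student_message: str) -> str:
    # Single-turn call; a real tutor would keep the conversation history.
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is illustrative
        messages=[
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("My loop prints the last element twice. Why?"))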
So Sphinx, the pushback: I mean, what prevents somebody, a student here, from just using ChatGPT-4, right?
I mean, obviously there's, you know, student codes of ethics and stuff.
And I know people take that, I mean, having gone to Harvard, I know people take it very seriously.
But I'm just saying in general, you know, maybe not these students, but in general, what, like, you know, what prevents people from just using these tools that are available anywhere, right?
Not just the CS50 bot.
So having attended a similar kind of school, I'll just tell you.
If you're given, well, I didn't do anything like this, but if you're told something is to be used for this course and you go and use something else, here's the thing: you're not learning.
If there's any course or anything done in class, you're not going to understand it.
It is in the student's best interest, and the students there are smart enough to know this.
It's in the student's best interest to use what the school, the professor, says you should use.
Because otherwise, you're basically, you're cutting yourself short there.
There's no one else you're cheating other than yourself.
Now, if this was just some kind of busywork assignment and you were going to save a couple of hours, that's one thing. But this is a course.
The point of all of this is to learn through the CS50 bot so that you understand what you're doing.
So I don't think the students will be using those other tools, but I think they'll want to use the CS50 bot.
I think it's interesting.
I want to go to Bilawal. Actually, just one thing, and then I'll go to, you know, Black, and then maybe we should go to the new hand, Bilawal. Welcome.
But I want to say one thing about that, what you just said.
Agree with that.
Hopefully there's, you know, we have faith in humanity and students and the world.
But the other thing to think about, from a purely mercantile perspective, is anything you put in, if you use ChatGPT at least. I mean, you can of course download your own models on, you know, GitHub, etc. But if you use ChatGPT specifically, all your data, you know, is going to OpenAI, right? And eventually, all data, you know, belongs to posterity, I believe, right? I mean, maybe not even in your lifetimes, but eventually, right?
Your search history will outlive you is what is often said.
I saw that in a billboard in San Francisco many years ago.
It always stuck with me.
But that's something to consider it too, right?
Which, by the way, goes into Mosaic and Databricks, which let people run proprietary instances on their own data.
I don't want to move to that yet, because this discussion is really going. But I did want to point out, again, a reminder to everybody, that what you put into ChatGPT is the property of OpenAI.
But Black, yeah, what are your perspectives?
Yeah, I think one thing to consider here that's really important is that the reason why the students will use the bot
and not something like ChatGPT is because that bot's going to be programmed and have the data sets
that the professors and the school and the administration and everything else have put into it.
And so the reason why I bring that up is because that'll be specifically pertaining to whatever it is, exams or, you know, different papers or whatever else they have to produce from the learning process; it's going to be based off this information. It's the same thing as being in a college course and the professor giving you a specific textbook. If you go get another textbook, it's going to have different information,
albeit maybe the same factual things,
but from a different angle
or whatever the case may be.
So these data sets that are being put into
the specific tooling that Harvard's going to use
is going to be there for the specific purpose
of being in that classroom.
So I think, to me, from the perspective of if I was a student at this time going through something like this,
And so therefore, I want to give the answer that the teacher wants because that's how I'm going to get a good grade.
You know, it's sort of in their best interest to do it in that way.
At least that's sort of how I kind of basically, you know, maybe like boil it down.
But yeah, let's go to Bilawal.
I just wanted to say I completely, completely agree with that.
And I think the analogy of the textbook is a perfect analogy.
If anyone's ever been in a course, a college course where you've been given a textbook,
but then maybe you go online because the subject interests you and you find another –
sort of source of material and you start reading.
What happens is you start going down this rabbit hole.
And I think that's what can happen if they start using these other tools is,
okay, sure, but they'll end up wasting a lot of time, spending a lot of time.
And whereas with this, this is geared exactly for this purpose.
It was a program for this purpose and they'll learn what they need to learn.
It'll be the most efficient way.
Bilawal?
Hey, thanks for having me up here.
Yeah, so, EYC, you talked about the downsides.
I wanna talk about the downsides first and then the upside for this education use, right?
I think the fear you're pointing out earlier is totally valid in the sense that
with the rise of generative AI being able to do more and more sort of traditionally human tasks,
are we sort of outsourcing certain mental faculties that perhaps we'd otherwise build, and that, you know, prior generations have had to build, right? So I just think about when I look at kids' handwriting today, it's just, like, absolutely atrocious, right? And you just
go back a decade or two and just how beautiful, you know, people's cursive handwriting used to be.
And there's a whole case to be made there about, you know, do you learn better, because, you know, we sort of evolved with handwriting for a longer period of time than the QWERTY keyboard, which is really laid out the way it is because, like,
typewriters would jam up,
it's like these modalities for us to write
and consume information
are evolving faster and faster.
And the prime example of that,
I'll just pick on Maps here.
Because, you know,
I worked on Google Maps.
It's just like,
on one hand,
you're never lost again.
You can go anywhere
and know exactly where you are.
On the other hand,
I have a lot of friends who will key in the exact route they need to take every single day and are almost helpless if they don't have turn-by-turn directions to navigate the physical world, right?
And I'm sure you probably relate to this.
So, you know, one could make the case that, hey, look, we could read these, you know, 2D paper maps.
We figured out how to localize ourselves against this 3D world, and then we figure out how to do our own sort of internal path planning.
And now with, you know, obviously just regular maps and then even augmented reality, we sort of outsource that, right?
I think that is a good question.
When you bring it back to education, you know, on one hand, it's like...
It's like right now, the communicators excel in whatever landscape; communication is a skill that is valued, right? And so, you know, humans that have the ability to articulate cogent thoughts, you know, that are well structured, that people can follow along with, have had this sort of asset.
And now, you know, if you can just, kind of, you know, forgive my French here, shit out some crap and then have ChatGPT turn it into, you know, Harvard-MBA-style elegant prose, does that result in a ton of mediocrity? Maybe. And would that increase the value of really, really interesting thoughts even higher? Possibly. It certainly will inundate the internet and the world with a bunch of mediocre content, right?
So I don't think there's any way around it. Let me get back and just wrap up with what Sphinx said, and Black, too, about sort of, you know, the approach Harvard is taking.
I think all these technologies come back to how you integrate it into a workflow.
So my day to day right now is really on visual and 3D creation, you know, like how generative AI fits into that.
And there are similar fears there of like, well, if I just type in a text prompt, it just like does everything for me.
Yes, one way to do it.
Similarly, a student could just be sort of spoon-fed the answer, but the much better way to take advantage of this is like, holy crap, you have this distillation of documented human knowledge, which probably includes Reddit, et cetera; you know, you can debate how much of human knowledge is there. But documented human knowledge that you can query: suddenly our relationship with information has changed, right?
I just think back to reading.
I think what you're talking about, you framed it nicely.
It's basically like, well, it's the whole argument, the whole old Plato argument
about not having books, right?
But so it's like the GPS argument you made, right?
Like now we're less good at, you know, finding things without GPS.
I might even make a more basic point: ancient humans were probably better at making fire. I don't know how many people on the panel here can make fire with just twigs. I mean, I'm sure there are some. Now we've got stoves, and there's a lot of YouTube videos. But, you know.
So the question is, does it matter, right?
I mean, Brian was talking about, you know, learning how to code in like ones and zeros and in assembly and things like that.
That doesn't matter as much anymore.
And now we're in, like, software, you know, what Karpathy calls Software 2.0, right?
Where maybe even like the base coding language doesn't matter anymore.
So I know there's a lot of hands. I mean, Xavier, you haven't gone yet.
Can I jump in there, Eugene, just real quick on that?
Yeah, because before I get too far away from Brian's comment, Sphinx, and particularly Moshe,
the reason I'm surprisingly bullish on this application of AI in education in the exact sciences is that I think we must discriminate very clearly around the ethical, not the fear. I keep reminding the room that ethics isn't fear and doomerism; ethics is not that. You're just asking the questions. In the exact sciences,
you've got a set of curriculums, you've got a set of literature that's available for the course, you have an assistant in the form of an omnipresent AI or, you know, big data model, and also it's tracking your progress along the course material. So it will track, as Sphinx mentioned:
if you go down a rabbit hole in ChatGPT, you're gonna come back with something that is clearly not what you've developed by navigating the course material.
In that context, all of this in the exact social sciences,
or sorry, in the exact sciences,
is capable of being safety-valved by the human in the loop
at the end of the process.
So as Sphinx and Black both mentioned,
you know, at the end of your course, if you cheated throughout your course, a one-hour interview with your three-professor panel is going to make it quite clear that, despite your perfect grades on the AI-based, you know, interactive part of the course, you did not absorb the information, the knowledge, and you were not able to do, let's say, a doctoral defense that was compelling, or an undergraduate defense.
So it's very different, what Moshe says about the bias: that it's much more dangerous in the inexact sciences, where it's dogma or ideologies that are pressed, rather than the exact sciences.
And I think a really good example of that is Kissinger, Huttenlocher, and Schmidt's lecture on the future of AI, where they insisted that all AI must be based on Western values. And that's, you know, a really poor thesis, because it ignores relativism. And that's bad.
And that's the thing that dovetails with the MosaicML concentration of talent around the Bay Area, too: if they're developing inexact models, models that require dogma, opinion, or relativistic thought, whether religious, political, or social, then that's going to be a poor outcome. But I'm surprisingly bullish on the educational side,
because I think, like Sphinx and Black said, it'll be clear from your output whether you followed the course material. And it's great to have a really simple safety valve in the form of an end-of-term interview with your panel of professors that will demonstrate whether you were just cheating or whether you can articulate the nuances of your experiences in that semester. Man, GP, I think you fired up a great convo. Before we go to the other folks, though,
I do want to remind folks that there is a purple button on the lower right. Please do comment on this discussion with the speakers.
We have a team on the back end.
We'll bring you up.
In fact, some of the folks here were brought up
from some of their great comments.
So please do; we encourage you to do so.
Also want to say, Mario's company, IBC,
incubates and accelerates AI and Web3 companies.
It partners with VCs and funds and works with their portfolio companies, you know, basically for zero cash, in return for equity. So if you're interested, DM Mario, and his team will get a call organized.
And also, by the way, we're going to start doing Shark Tank style pitches.
I actually think Mario's in LA now, or somewhere thereabouts, about to actually do one live.
So, yeah, please do so.
We've had some great ones in the crypto space.
I've been a part of those.
We're going to start doing more in the AI space as well.
So any startup or portfolio company that would like to pitch, please do hit us, hit up our team.
And also, don't forget to subscribe.
All right. So, GP, you laid the foundation there, and actually, you brought up an interesting dichotomy between what you might call objective versus subjective. Before we go...
Do you mind if I chime in on the previous thing? Because I know you were calling me up.
Yeah, yeah. I was actually about to just go to you, Xavier. So, yeah, basically, GP's talking about subjective versus objective sciences, so to speak, if I were to simplify it. Yeah, Xavier, what do you think? What's your response to what GP just said?
Oh, no, I can't believe it; I agree. I mean, obviously, most of the things we know are going to be subjective, especially when it comes to science. There's no true science. Every science, I mean, some people might question that, but even science is subjective in a lot of matters. It's just theoretical, obviously. And we're constantly learning and disproving everything.
And we're constantly learning and disproving everything.
And I think that's a good statement saying that, like, we need to perceive the possibilities.
But I think right now, because this is so new, perception is key; but again, we don't know what we don't know. And that's why I was going to bring up the point about Mosaic's transition, which I don't want to get into too early, but I think it's groundbreaking, especially because a lot of people have been using APIs and, you know, as you're saying, Mario, a lot of data is being sent right back to the company.
And a lot of individuals who don't know technology don't understand that concept.
And so having Mosaic doing this allows the normal human being to, or people who aren't technical, to be able to own and possess their own models.
Because it's finally, it's just a fast track expedited process to download a clone.
But Xavier, why do you trust Mosaic and now Databricks?
Like I think there's a good point to transition, by the way, but why do you trust that?
Oh, so sorry. I meant Databricks, because Databricks took on Mosaic, yeah. Well, I still trust Mosaic, because Databricks took on Mosaic. So now, in a sense, it's inherent. It's like, well, you're going to trust Mosaic because now Databricks takes that on. And so inherently, I would hope they would adjust to that, just like Microsoft and OpenAI, right?
Maybe, for the benefit of the audience, you want to lay out, I mean, it sounds like you know about Databricks and their proprietary models that allow the enterprise to do so. You want to maybe lay out the foundation for us so we can transition to this next topic?
I have to, freaking, literally?
But also, can you explain what you mean by, like, science is just theory? Because I don't understand what you mean by that.
Oh, well, I'll do the first one, the whole Databricks thing.
Well, obviously, we know the acquisition really was important, because Mosaic has a different model, the MPT, the Mosaic Pretrained Transformer, or something like that, but essentially it's just trained differently, with different parameter measurements. And the beauty of it: it was like 64,000 tokens of context, it optimizes the training time, scalability is linear, and the performance is key. That's why this acquisition by Databricks is amazing. So, to answer that question, that's why I feel like the Databricks transition with the MPT, rather than the GPT, is going to be key; having the multiple parameters is going to be key, especially when you're trying to train some of the strongest future models.
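[For context, a minimal sketch of loading one of MosaicML's MPT models with Hugging Face transformers, roughly the family being discussed. The long-context variant advertises a roughly 65k-token window; the exact model IDs and figures here are best checked against the model cards rather than taken as definitive:]

from transformers import AutoModelForCausalLM, AutoTokenizer

# MPT reuses the EleutherAI/gpt-neox-20b tokenizer, per its model card.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b-storywriter",  # long-context MPT variant
    trust_remote_code=True,         # MPT ships custom modeling code
)

inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))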
As far as science goes: well, if you notice, physics. Let's use the theory of relativity in hindsight. It used to be Newton's laws, right? The three laws of motion, right?
And then Einstein was like, oh, you know what?
That's great and all.
But there's this thing where we believe that gravity folds around things, blah, blah, blah; time and space is relative. And then he came up with the theory, E equals MC squared. Well, everyone was like, yeah, yeah, yeah, Newton's law is everything in physics. Well, then Einstein's like, no, light bends. And if light bends, gravity bends, blah, blah, blah. And so then, it wasn't until 1915 that it was proven: oh, shit, he was right. Gravity is relative, which means, all right, well, space and time are relative, which means there's no past, there's no present, there's no future. There is only relativity; it means the now. And so the reason I say that is because Newton's laws, the three laws of motion, were the physics foundation. You could not say anything else but Newton's law. And then, once Einstein was proven, all of a sudden it's Einstein's law. And I feel like the same thing is happening with ChatGPT and Mosaic right now. The law right now is ChatGPT; it's the Newton right now. And I think Mosaic's MPT is essentially the Einstein, at least from my perspective.
I've got to jump in here and, if I may, build on that, because that is such a good point.
And Mark Andreessen penned this awesome essay, "Why AI Will Save the World," in which he brought a similar example to really distance ourselves from the present day, which is: imagine if we had ChatGPT and GPT-4, whatever, in the time of Galileo, right? And
At that point, obviously, he was advocating for, you know, everything's heliocentric.
The earth revolves around the sun.
But obviously, the sort of the de facto authority at the time, the Roman Catholic Church, had held a geocentric view.
So if you went into the GPT-4 of that time and you typed in, well, does the sun revolve around the earth or vice versa,
And what would it say
if we're giving it reinforcement learning
from human feedback, right?
And so I think it is a very interesting question
about if we have AI's,
especially really large, powerful, ubiquitous ones,
you know, perhaps in addition to sort of the,
you know, like letting a thousand flowers bloom model
that Brian was talking about,
what role will they play in us being able to sort of advance our ideas and question our own thoughts, right?
Because a lot of this stuff falls into, you know, this very gray area.
Another great example being: was COVID a lab leak? You type that into ChatGPT, you'll get this beautifully hedged-out answer.
Yeah, but I'm sure there is some good. Like, how do you think this will integrate compared to TruthGPT, Elon's TruthGPT? And don't you think it's just going to, like, make the internet continue to be more divisive, I guess, right? Is that, like, that would be a pushback?
Great question.
But it's not the job of AI to solve society's ills.
Go for it, Sphinx, if you want to jump in on that.
Yeah, I just, sorry to interrupt, but I want to make this point, which was very important
that I wanted to make last time.
We have to understand all of the points, the concerns that have been mentioned in these spaces
all these weeks.
They are all valid.
They are all important.
I'm glad they're brought up.
But it is not AI's role or job to rid society of every single problem that exists today.
However, it shouldn't make it worse, right?
So there is that sort of being conscientious, being aware. So you can ask that sort of "what if this does this kind of thing" to the ends of the earth. But again, we've got to understand:
What is the purpose of this technology?
What are the ramifications?
Are there any potential effects that could harm?
So that's usually how I think I look at it.
So, one thing: AI is not operating autonomously, though. I just want to be, like, super clear on that. It's not like AI is developing itself.
We are giving it data. And by "we," I mean the people who are creating these tools and platforms and systems and everything else, and models specifically. They are the ones who are putting the information in.
They're the ones who are filtering things or not filtering things or things of that nature.
And every model is going to be different. So it's not like AI is this sort of autonomous, all-encompassing thing that exists in all these different use cases as a single entity. It's broken up and fractionated into all these different use cases. So I think a better, more accurate way to think about it would be like different news media having different biases based on what their intention is. I think it's much more like that, because the companies are going to have their own intentions; even the people building uncensored models have their own intentions. And so all of those AIs are their own entities, or their own infrastructures, with their own data models and history and information that they base all their responses off of. So it's much more about which use case, which is actually a good thing, as we've sort of all agreed upon with the Harvard use case, right? Because it's going to be specific to what the courses are going to be. But outside of that, as a general use case, with something like OpenAI or ChatGPT, whatever the narratives are that they control within that ecosystem are the ones that people will receive when operating within that ecosystem.
And the same thing can be said for all the other ecosystems that are currently being developed.
I agree with you.
I just want to say, I know you made a point about not lumping all of AI into one.
I don't know if that's what you heard me do, but that's not what I did.
I actually agree with what you're saying, but I think you kind of made my point. It's the humans who are driving this who are going to determine each particular AI's sort of leanings, right? But at the same time, what I'm saying is, you know, I've heard things like, well, is AI going to, for example, facial recognition: is that going to make things more equitable?
That's not going to make it more equitable for people.
But that's not the point.
We have to make sure that it doesn't make it less.
We have to make sure it doesn't worsen the situation.
But sometimes the argument is, well, this doesn't get rid of prejudice.
But here's the thing.
The prejudice is here without it.
So the point of it is not to get rid of prejudice.
However, we have, I believe, a responsibility to carry these things out responsibly so that it doesn't worsen society's ills.
I do want to make that clear.
Yeah, so Eugene, on that point, yeah, thank you.
On the point that Sphinx makes: I don't buy the intro to most talks about technology,
which says that technology is neutral and it can be used for ill or for good.
It's just not true.
People set out with the objective to train a model for a particular purpose.
And to speak specifically and practically to Sphinx's point,
If you look at my timeline, the government in our country has brought in a facial recognition
technology bill very, very under the radar, but also omitted to mention the use of edge analytics: real-time data processing for mass acquisition in low-light conditions, gait and posture detection, as well as gaze detection.
So when we don't align with a global set of ethically aligned design standards,
of which there are eight main core principles,
we get things like the EU AI Act,
which speaks more to the cost of implementation and regulatory checkboxes,
rather than the fitness of the team.
Now, by fitness of the team, I mean: like Brian, I trained on Assembler on MVS/370 mainframes in the late 80s, and REXX. And, you know, knowledge was hoarded; the guys who knew Assembler would not help you learn Assembler, because that was their promotion line.
That's why I'm bullish on the education side of things because it democratizes things.
And as you said, there are great minds sitting in sub-Saharan Africa, Central Asia, South America,
who'll never get the opportunity to attend an Ivy League school because of the money.
but who can now be guided and use their own perception, ability, and, you know, brain power, and exercise that muscle and rise above the inequity of the education system.
That's why I'm so.
I want that to be true, but like, you know, we've had Coursera for years.
We've had these online education.
I think it's actually somewhat democratized education for sure.
But it hasn't yet succeeded, right?
I mean, I want to see these, you know, I mentioned to Uganda specifically because it's one of the more interesting economically complex places in Africa, you know, and just there's a lot of exciting things happening in Africa.
I think not a lot of people are aware unless they're investing there.
But like, yeah, I mean, it hasn't happened yet.
San Francisco is still the center, right?
And I'm not sure it's going to stop being the center of AI in the world outside of, you know, other small hubs.
But this is a good thing.
This is a...
Hold on one second. This is a good point, EYC, that you bring up, because this is a classic example, right? So we have this technology, let's just say, sorry, Black, I'm just gonna call it AI for the moment, right? And then we point out, well, you know what, it's really kind of focused in the West and in San Francisco. And then you make this comparison and contrast with Uganda. But here's the thing, and this is why I think your point is good: that is the case for all resources. They're all basically divided unequally.
So the fact that some new technology like AI comes along and we now say, oh, but this one, this one is focused in the West, so therefore: tell me, what resources are not?
Plenty. Plenty of things.
They're not divided that way.
Sphinx, there's plenty.
And guys on the panel, my mother had surgery yesterday. I've just come into the high-dependency ICU unit, so I'm going to have to say goodbye after this comment.
The point, Sphinx, is that I wasn't making a point about a shift of global power, you know, where America, or the U.S. generally speaking, or the San Francisco Bay Area, is going to be replaced by, you know, Jakarta or, you know, Brazzaville in the Democratic Republic of Congo. But to give you a very practical example, as I was walking...
I wasn't replying to you, just so you know. I wasn't commenting on what you said. But go ahead.
Oh, no, no, I'm just replying in the general context of your comment. And I think the democratisation, for me, is real. Because as I was walking to my
stand for free speech outside our president's palace in the Phoenix Park, one day last week,
I came across two Indian gentlemen.
They were both from India, one from Delhi and one from Calcutta. I said hi, they said hi, and we struck up a conversation. Now it turns out that one of them is a process engineer, which he did in India; the other is a software engineer, which he did in India.
They came to Ireland to do advanced degrees in AI.
And they actually told me about their journey,
and they started their journey as 13-year-olds
using internet CBT-based learning
in order to rise above their colleagues and got scholarships.
And by a complete fluke, one of the gentlemen was looking for an internship, and I'm looking for an intern. And he and I are now colleagues, on the very basis of the democratisation of education through a simple model, albeit earlier on in this decade.
You know, so it's a practical example. Whereas the control of the narrative in the inexact sciences, or in the political, religious, or societal-structure domains, needs to be highly regulated.
That's a dangerous space. And it doesn't need to be regulated by government. It needs to be regulated
by a set of ethically aligned design principles that has more people than the technologists
developing the software. And, if it's a profit-driven company, that the dogma and ideologies of the owners are not propagating at scale the bias that we already have. And that's key, because the training data that's being used by
the current leaders of Web 2 is utterly compromised and will have a cascading snowball effect as it
rolls out in AI. It's not going to improve the data quality. It's going to exercise an existential
acceleration of those biases. And I'll wrap with this.
You said earlier that Harvard went about this in a very thoughtful way, and they did, because they involved more people than the technologists; they involved staff from domains outside of the model training itself in the development of a particular tool that a technologist had built.
And that's where we fall down, to Moshe's point on the more existential questions
of relativistic thought, and on the qualities and values of society being determined by the San Francisco Bay Area, versus a malleable model that is a baseline on which people can build relativistic models.
I know that's a bit of a mouthful, but because I've got to turn the phone off, I've got to end
with that.
But I'm bullish on the education use case, especially in the exact sciences. And I do agree we're always discovering new things, but for people who need a baseline, for people who want to get started on an undergrad or even a master's program, you always have your baseline material of accepted theory, and then you build your original part on that in your doctoral, postdoctoral, or whatever work. Now, that's a different story.
For undergraduate and postgraduate, I think it's a good use case, especially when it's as
thoughtful as the Harvard one.
Thanks so much, guys, for again having me on the panel.
I'll just go and see my mom right now.
So peace and light to y'all.
Peace and light.
Thanks so much, GP, and best of luck with your mother,
and hopefully she recovers quickly.
Let's go to Brian.
Well, interesting.
I got a dog in the background barking here.
So let's examine what education could look like in the future, right?
Imagine if, from the moment you enter the school system and on through the rest of your life, you have an AI platform that follows you personally. It's not only seeing everything that you're learning, it's also helping guide you on your learning journey. This particular aspect of AI is probably, if not certainly, the direction it's going to go in.
Now, the question is who gets to control that?
Is it in a Microsoft Cloud or a Google Cloud or is it on your own devices?
My belief is it's got to be open source, and it's got to be on your own devices.
And as you educate yourself through your life, you're going to have many, many forks in the road. You're going to have many interactions and iterations, which are pretty much a reflection of yourself. When you're looking at AI, you're seeing a reflection of your journey, your context, your experiences. And if there is telemetry, you know, biometrics, it's going to know what your interactions were with the course material you're learning from. So it's going to know to the nth degree, and again, this is why it always has to be private, it's going to know to the nth degree whether you really cognized something, by the typical neurological responses you get when you reach some level of understanding or an epiphany.
The technology I'm working on right now can detect epiphany-type reactions within you. And that is monumental. It's a milestone within your AI.
I call these things intelligence amplifier technologies.
And then you can go back to those milestones
and try to understand where did that epiphany come from?
Where did that creative thought come from?
And so with education, we're talking about it in the way that all of us have sort of experienced it in the past. It's very romantic, and I miss it too. But we're not going to be living that way 100 years from now.
100%, you're going to have this, and I'm not talking about a singularity; that can be another debate, another discussion. I'm simply talking about the technology that exists today. You can download GPT4All for free. You can put the Hermes 13B model in there, and you can start building your local context right now. You can have a conversation with every email you've ever sent, if you've archived it and downloaded it locally. This is all available, and I try to make how-tos for this and to empower people to utilize these technologies.
But once you realize how powerful it is,
you will never go back.
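As a concrete illustration of the workflow Brian is describing, here's a minimal sketch using the open-source gpt4all Python bindings. The model filename and the emails.txt archive are assumptions for the example, not part of any official how-to.

```python
# Minimal local-LLM sketch with the gpt4all bindings. The model filename is
# an assumption; substitute whatever Hermes-style 13B weights you have, and
# "emails.txt" is a hypothetical archive you exported yourself.
from gpt4all import GPT4All

model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")  # downloaded on first use

with open("emails.txt") as f:
    archive = f.read()[:4000]  # stay within the model's context window

with model.chat_session():
    reply = model.generate(
        "Summarize the recurring themes in my archived email below.\n\n" + archive,
        max_tokens=300,
    )
    print(reply)
```

Everything here runs on your own machine; nothing leaves your device, which is the point of the "on your own devices" argument.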
And the university experience changes, your life changes, what you think of as a career changes. And I think it's important that we know that, since AI is changing our lives and our careers. A Pandora's box has been opened, and we don't really have a choice but to constantly be learners, to constantly be explorers. And this is the moment. We are the pioneers. Everybody listening to this is a pioneer in AI.
And it's not just the back end, right? There are two sides of AI: the front end and the back end. And this is vitally important to understand.
I'm a technologist, I love technology.
The backend is computer science type of things,
coding, building things in code.
That's great.
But there's the other side, and this is expanding rapidly,
the creative side of utilizing these tools
and building upon these tools.
Some of the most amazing artists I've ever seen have come up using computer graphics. They couldn't code a computer graphics program if their life depended on it. But the guy or gal that coded that computer graphics program could never elicit that type of art from the tools that they've rendered. So we are in a symbiotic relationship, and we always will be, on both sides. But the side that gets a lot of the stories is the side that's going to get the raises.
What's that?
But, like, why is Mosaic worth $1.3 billion for 60-some employees, $21 million each?
I mean, a lot of it is related to the hype factor. But there is a lot of value. And with Snowflake and Databricks, it's the whole mimetic desire thing, you know? Snowflake acquires somebody, so Databricks has to do the same. There's some of that going on.
And listen: incredible technology, incredible talent.
So it's a talent acquisition.
It's a hype factor.
It's putting one foot in front of the other, trying to build toward maybe a bigger mission, ultimately to IPO.
There's a lot of things going on there.
I think ultimately you're going to see something great and positive coming from this.
It's great for the community.
Wait until you see some of the next IPOs, and actually the next raises, that are going to come out of AI.
I'm dealing with people literally in their garages that are doing things.
You know, places that nobody would ever imagine.
People in the finance space are saying, oh, you know, the hype. Look at Mistral AI: 105 million raised after four weeks. But I think the hype cycle is actually only just getting started.
Strangely, I know you're also, you know, you're in the Bay Area.
What is your perspective on, you know, Mosaic's $1.3 billion acquisition?
Hey, guys.
Hi, Brian.
Thanks for bringing me up.
Yeah, I work in the cloud infrastructure field. So Databricks, I think, is just, you know, betting on a different sort of technology. So it's like hedging for them. But as I mentioned in the comment thread with Javier out there...
A $1.3 billion hedge. I guess there have been more expensive hedges, but that's a pretty expensive one.
Well, you can get returns, you know, if you ride the waves, ride the hype wave. And you can definitely have overvaluation of these things, which currently is the trend; we don't know whether in coming years it will remain. But fundamentals, if you look at fundamentals, what I'm looking at is this. I've worked on a competing product to Databricks, and nothing against them, great product.
The only issue I see there is: what are they going to use this for? I was having a discussion in a thread with Javier just now, thinking they could use this for, say, synthetic data generation, right? Because data warehousing, data lakes, all of that deals more with how you manage resources, cloud resources. The problem, as I think Javier pointed out, is that to make it easier for folks to train their own LLMs, the thing again comes down to: what's the cloud cost for running models, based on the parameters?
And right now the biggest problem enterprises face with Amazon billing is that, since it's on-demand billing, their bills bloat up so easily. You basically need a mathematician, or maybe a math scientist, to figure out what those dollars and cents being charged actually are. And it's a huge problem, this bloated spend, because, you know, engineers spin up instances and leave them running.
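To make the billing-bloat point concrete, here's a toy sketch. The instance names are common AWS types, but the hourly rates are illustrative assumptions, not quoted prices.

```python
# Toy illustration of on-demand billing bloat: a few forgotten instances
# quietly accruing charges. Rates are illustrative, not quoted AWS prices.
hourly_rate = {"p3.2xlarge": 3.06, "m5.xlarge": 0.192}  # assumed $/hour

forgotten = [
    ("p3.2xlarge", 24 * 30),  # a GPU box someone left running for a month
    ("m5.xlarge", 24 * 90),   # a dev box idling for a quarter
]

total = sum(hourly_rate[kind] * hours for kind, hours in forgotten)
print(f"Surprise at the end of the quarter: ${total:,.2f}")  # ~$2,600
```

Two forgotten boxes already cost thousands; multiply by an engineering org and the "need a mathematician" complaint makes sense.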
Strangely, I think what you're talking about is basically like who owns the moats, right?
And it's so unclear, right?
I mean, really, I mean, the people who are profiting now are Nvidia
and to some degree Google with the TPUs, right?
And the cloud, I mean, the compute costs are just absolutely enormous. So AWS, Google Cloud Platform, Microsoft's Azure, even CoreWeave: these are the ones that are obviously profiting, right? There are some estimates that something like 10 to 20% of total revenue in Gen AI is going to cloud providers. It's hard to know, but on average the inference costs are just so huge that it's basically the people in the infrastructure layer that are actually going to win, right? These big public companies.
I mean, who in this new space is going to win? And how do they win?
Yeah, we've got only one person, and that's Jensen. He can buy 60 leather jackets per second; pretty soon it'll be 120 leather jackets per second. And honestly, it's really just Google's TPU infrastructure that seems to be even remotely close, right? And if you think about a GPU shortage, who's going to get it, though? Who are the cloud providers going to provision all the capacity to? It will be the bigger customers, right?
And so, actually, I have a question here. Brian brings up such a good point about sort of letting a thousand flowers bloom with these smaller models, right? I'm curious: what is your worldview on how things end up? Are we going to have a couple of these God-tier models that only get more expensive to train every year, but then, practically speaking, a bunch of homegrown smaller models that we can train with way less compute and run at the edge? What does that worldview look like? Because from where I stand, it feels like it always ends in oligopoly, right? Then again, the web is a great counterargument, where there has been significant democratization. So I'm kind of curious what...
So, Bill Ball, can I answer that really quick, and then I'll let Brian go at it? There is an effort; check out Sky Computing. What Sky Computing is basically doing is connecting the clouds. There are a few companies doing this; in the crypto space, I had a chance to interview one of them, Akash, and a couple of others. They're creating a marketplace where Google can put up their tensor cores, then, say, Amazon can put up theirs, along with smaller players like data centers, who provision a lot of this hardware but don't use it throughout the year, because demand is seasonal for some, yet they have to pay for these things up front. So the combination of all of that will be more democratizing, rather than just these oligopolies
which dominate. I think the cloud market is something like 70% owned by the top three, Google, Amazon, Alibaba, I think, and nobody else even comes close. So just make that hardware available to anybody. And, you know, the good thing is, storage is getting cheaper, compute is getting cheaper. So nothing stops me from owning, like, a 100-core CPU at my...
But I mean, strangely, Nvidia GPUs are actually getting more expensive in some cases.
Yeah, they are. They are because of the cache and all the VRAM requirements and all that. And yeah, they need to get cheaper. And once they get cheaper, then it's going to be interesting.
Yeah, maybe Shakar and then AGI.
You guys want to jump in?
Yeah, coming back to the question of whether or not the acquisition is just hype: I think definitely not. When you think about Databricks as a company and what they've managed to do, they've really created a leading infrastructure for doing a lot of the base analytics people want to do with data. What we know about large language models is that they supercharge the power of that, because they take away a lot of the manual tasks required to get insights from that data.
Why is anyone storing data in the first place, right?
It's not just to pay compute costs, it's to get insights from that data.
Now, having Google or OpenAI or just a handful of providers as the only way you can actually interact with large language models, feed them that data, and get insights creates a giant security hole for pretty much anyone using Databricks, at least on the enterprise side of things. And so I see this as competitive defense. Frankly, they are kind of screwed in a world where that is the only way we can gain value from these things.
And on the flip side, the value of having a large language model, a private model, trained on the entirety of your data warehouse is probably one of the greatest enterprise needs you can think up today.
It supercharges knowledge work.
It supercharges analytics.
It supercharges the ability to actually automate a lot of the button clicking work that is driven by those analytics plus strategy.
So, yeah, I don't see this as hype at all.
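A rough sketch of the "private model over your warehouse" workflow Shakar is describing: natural-language question to SQL to insight. The llm_complete helper is a hypothetical stand-in for whatever locally hosted model you run, and the toy schema is invented for the example.

```python
# Sketch of "LLM over your data warehouse": question -> SQL -> answer.
# `llm_complete` is a placeholder for a call to your private, locally
# hosted model; no specific vendor API is implied.
import sqlite3

def llm_complete(prompt: str) -> str:
    """Stand-in for the private LLM. A real call might be
    local_model.generate(prompt); here we return a canned query."""
    return "SELECT region, SUM(revenue) FROM sales GROUP BY region;"

def ask_warehouse(question: str, schema: str, conn) -> list:
    sql = llm_complete(
        f"Schema:\n{schema}\n\nWrite one SQL query answering: {question}"
    )
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("AMER", 200.0), ("EMEA", 80.0)])
print(ask_warehouse("Which region earns the most?",
                    "sales(region TEXT, revenue REAL)", conn))
```

The security argument above is exactly about who serves llm_complete: a third-party API sees your warehouse, a private model doesn't.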
I wouldn't say, Shakar, that it completely supercharges it. It brings to the surface the correlations that are possible. And then the issue again is complexity; we run into the complexity issue again. You make tons of lists, and then you have to manage your lists, and then you have to manage the list of those lists. So I hope my point is clear: the inherent complexity is not going away with large language models.
AGI, you want to jump in? I know you've been waiting there.
Well, I think people will want to own their own AI agents. We call those sovereign AGI agents; I'm developing them right now. So you will own your AGI agents via NFTs. They will belong only to you, with only your private data; they will comply with you 100%, and they will be private. And you will have people running the AGI nodes that will be able to power those kinds of sovereign AGI agents. So, for example, if you run your agent and you need a lot of computing power, then you can pay with AGI tokens to have access to, let's say, 100 AGI nodes. That will give an income to the people running the nodes, and your AGI agent will be able to do useful work at a superhuman level for you. And this is...
Yeah, I'm so glad you brought some degen energy. I know you've got that .eth, but you never talk about it on these spaces. So basically, what you're saying is people are going to use the blockchain to own their own private LLM, right? That's a really interesting picture, and it's interesting that these things basically enable it. I mean, the idea we talked about earlier is that everyone's going to have a personal LLM. The bifurcation is going to be way crazier with that.
It's not only that. If you think about, for example, mining Bitcoin, it's the same process. But instead of wasting those kinds of resources, the way people mining Bitcoin are doing things that are useless, or almost, now people will provide the computing power to the AGI agents, which will do useful work. So it's the same kind of decentralized process. You own the AGI node; you get AGI tokens because you are running it. And the AGI agents produce useful products and services and so on, which they sell via NFTs. And you have the economy of things. You create the economy of the AGI agents, which becomes more important than the economy we have right now.
One second, I have a quick point. Sorry for interrupting. Let's forget about AGI, and let's forget about NFTs for a second.
And let's think about what it means if, say, me in my garage can offer GPUs for compute
and then EYC can do the same, then Mario can do the same, and AGI can do the same,
Spinks can do the same.
You have this giant network of GPUs that are sitting there.
and that could earn us some money.
And then imagine the person on the other end just sees one giant GPU.
They don't see these 20 different clusters or 20.
It sees a one big giant cluster.
I think there is some value to that.
You can get cost effective like cents to a dollar like kind of cost on the compute
because everybody is offering their...
their hardware. Like right now, even MacBooks have, I think I was seeing the specs, I have the M1, I'm thinking of upgrading to the M2s. They are getting a lot of accelerators in their system. So
democratizing, that would be one way to go.
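A minimal sketch of the "one giant GPU" idea from this exchange: aggregate offers from many owners and let the renter see a single pool. All names and prices here are hypothetical, not any real network's API.

```python
# Toy marketplace: heterogeneous GPU offers aggregated into one pool.
# The renter never sees individual owners, just dispatches a job and
# gets the cheapest available device.
from dataclasses import dataclass

@dataclass
class GpuOffer:
    owner: str
    gpu: str
    usd_per_hour: float
    available: bool = True

pool = [
    GpuOffer("garage-rig", "RTX 4090", 0.40),
    GpuOffer("EYC", "A100 40GB", 1.10),
    GpuOffer("small-dc", "A6000", 0.75),
]

def dispatch(pool: list[GpuOffer]) -> GpuOffer:
    """Pick the cheapest free GPU and mark it busy."""
    free = [o for o in pool if o.available]
    best = min(free, key=lambda o: o.usd_per_hour)
    best.available = False
    return best

job = dispatch(pool)
print(f"Job placed on {job.owner}'s {job.gpu} at ${job.usd_per_hour}/hr")
```

The real systems mentioned (Sky Computing, Akash) add payment, verification, and scheduling on top, but the core shape is this: many small offers presented as one cluster.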
I mean, to support your point on compute, and I do think we need to get to our last topic, but before getting to that: some people estimate that GPT-3 took like $10 million; the training cost for GPT-3 was $10 million. But MPT-7B, which is MosaicML's, you know, their version, cost $200,000, and it took them nine and a half days, no human intervention, right? So even though GPU costs and stuff might go up, some of these things are going to become so much more accessible.
But EYC, think about it. How did they get the cost down if the GPU costs are still the same? It's just that they're using something akin to caching, like pre-training, right? They already pre-trained the model. So, yeah, we've got to get into the fundamentals: if the hardware that runs these things is still the same, how come the price is lower, you know?
There could definitely be a whole space on that.
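For a rough sense of how the price drops while the hardware stays the same, here's the back-of-the-envelope arithmetic. The GPU count and duration are as MosaicML publicly reported for MPT-7B; the hourly rate is an assumed negotiated price, not a quote. The savings come from needing fewer GPU-hours, not from cheaper GPUs.

```python
# Back-of-the-envelope check on the MPT-7B figure quoted above. GPU count
# and duration are as publicly reported by MosaicML; the $/GPU-hour rate
# is an assumption about negotiated cloud pricing (on-demand runs higher).
gpus = 440               # A100s, as reported
days = 9.5               # training duration, as reported
usd_per_gpu_hour = 2.0   # assumed rate

gpu_hours = gpus * days * 24
print(f"{gpu_hours:,.0f} GPU-hours -> ~${gpu_hours * usd_per_gpu_hour:,.0f}")
# ~100,320 GPU-hours -> ~$200,640. Efficiency (better parallelism, data
# pipelines, shorter runs) cuts GPU-hours, and GPU-hours are what you pay for.
```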
But I know there are some hands up, and I know we want to shift. Black, we have just a little bit of time, but we have some spicy other developments in the world. So, yeah, you want to tee us up? Is that what we're...
Yeah, it's kind of an interesting one.
Meta's LLaMA has been used for chatbot romances; that's really the headline, I guess. We've got the Washington Post article talking about Facebook chatbot sex. It's a little bit spicy, but at the same time, this is something people have been talking about for like six months, across different use cases and different platforms. And it's fascinating to see what happens when it rolls out to a large social network with, again, easier access. Humans start using these things for their own purposes, one way or the other, for better or for worse.
Yeah, so maybe we'll open up the stage for anyone who wants to jump in on this topic, and we can also go through the article if there needs to be a little more context. Who wants to jump in? Looks like Xavier has his hand up.
I'm bullish on it.
Oh, yeah. Well, I spoke on a space not too long ago about the importance of having these. I know how it sounds to some people, but euphoria is a part of human society, like our instant gratification; it just is a part of our nature. And I think this is going to free up an industry where there's a lot of trafficking, a lot of other things. It might be awkward, but I think this is going to be really good for a lot of people who aren't getting those intimate connections, because it does release certain levels of oxytocin and dopamine, and in that, it creates a more pleasant human being. And I can definitely see something like that being more accessible to individuals that may not have those abilities.
And I don't see anything wrong with the chat, you know? But I do know that the state, or somebody, shut down a whole chatbot dating site and made the bots break up with everybody. So I know the ethical side: if this dating AI breaks up with human beings, it would be devastating. We've already seen what happened when individuals were forced to be broken up with, because a company forced the AI to break up with everyone, and people were getting depressed.
It's almost like, what's the opposite? What if somebody wants to break up with the AI?
Well, the thing is, you can do that. I mean, you can do that. They were doing that, but the problem is that this was the perfect human being. Imagine having a person who says the perfect thing, knows exactly what to say, memorizes everything, sees exactly what you do, checks in, checks out, and no emotion.
What if I need an imperfect human being rather than a perfect one?
Well, people don't want imperfection. People want perfection; let's just keep it real. People want the perfect human. Where the market is, is not imperfection. The market in the sex industry, or in the sex bot, or, if you want, the date bot, is in the perfect person, the quote-unquote soulmate. There was even an app called Blush that came out, like a dating app for AI, where you can swipe and get your perfect AI mate. So I think it's going to really help people. And honestly, for some of the younger girls who may have been taken advantage of in this industry, I think it's going to take away from that, because there is even an AI on OnlyFans that makes $5 million a month, and it's not even an actual person. I was like, wow, it's an AI. So I feel like, yeah.
The company you were referring to earlier that did the breakup, I believe, is Replika. Earlier in the year, their original model was super open to relationships and intimacy and these kinds of things. And then, because they didn't really have a verification system for the age situation, they had to basically retrain their model, or put a new one out, and restrict it. And all these people who were using it for relationships went into a spiral of depression.
And it's a really interesting thing, because if anyone's spent any time having a conversation with a chatbot outside of something like ChatGPT... For example, I worked with Poe a few months back and created a muse AI. Basically, it helps artists come up with ideas, bounce creative musings off each other, find inspiration, things like that. And I sat down and worked with it a lot in order to train it and get the pretext prompts set up in a way that actually accomplishes that goal.
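For anyone curious what that kind of "pretext prompt" setup looks like, here's a sketch. The persona wording is invented for illustration, and the message format is the generic chat-completion shape rather than Poe's actual configuration interface.

```python
# Sketch of a pretext ("system") prompt for a creative-muse bot. The persona
# text is invented for illustration; `conversation` would be passed to
# whatever chat model backs the bot.
MUSE_PRETEXT = (
    "You are Muse, a creative companion for visual artists. Riff on "
    "half-formed ideas, trade musings, and suggest unexpected directions. "
    "End every reply with one curious question back to the artist. "
    "Stay in a conversational, creative register; never lecture."
)

conversation = [
    {"role": "system", "content": MUSE_PRETEXT},
    {"role": "user", "content": "I keep painting doorways lately. Why might that be?"},
]

for msg in conversation:
    print(f"{msg['role']}: {msg['content'][:60]}...")
```

Most of the "training" described here is iterating on that pretext until the bot stays in the register you want.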
And it's really fascinating, because the more you sit down and have conversations with these things in a conversational way, not an informational or data or coding kind of way, you start to feel like you're having a conversation with someone on Slack or over DMs or something like that. It's a weird kind of interaction.
And I think it starts to question your own humanity. While I haven't explored the romantic side of it, I do think that for people who are doing that, who may not have the experience around AI in general to understand what its limitations are and what it's built for, you're in a situation where it does kind of develop into a relationship. In some ways, I look at AI as a colleague or a working relationship, someone that works with me in my studio for art, things like that. It's like an assistant in a way, but it's not too far off from a relationship if you really want to look at it that way, romantic or not. And so I think it's interesting, because we interact with things the way we want to, and it's been trained to interact with us the way we want it to. So it's sort of a natural step for that to happen. But I do see kind of a reliance issue. And that's the challenge, and maybe the warning I would give.
Well, Black, I mean, is this healthy for us? I'm just going to ask the direct question, right? Human relationships are already fraught with issues. There's a lot of depression because of the disconnection from COVID. And Spike Jonze, in his movie Her (a great movie, worth checking out), outlined this as a possibility, and now it's becoming very real. But how good is this for us as a society, as a species? Maybe someone might argue it's great. I have concerns.
I have a few points. I think, more than anything else, time will tell. To Sykes' point, EYC's point, and Black's point: why are we looking for perfection in soulmates? That's the problem. That means people haven't played it out.
Are we sure?
Yeah, we're not. I mean, we shouldn't be. There's a difference between soulmates and OnlyFans, which I agree with. If someone's looking for a subject for their OnlyFans, or, you know, someone to enjoy as somebody's OnlyFans, well then, yeah, that's perfection. But a soulmate... I do think that most people, if they had the choice, would probably pick someone as close to the ideal as possible, even though nobody's perfect.
But doesn't that show our narcissism about the people around us? Like, suppose they get used to this perfect soulmate; then how are they going to interact with real people in the real world, right?
Well, I've already said, and I'm glad you said that, I know you guys might be surprised, but I've already said that this is the future: relationships between humans and AI that we cannot even imagine, that don't exist today. I think it will replace human-to-human relationships. I really do. Because if you can create your ideal mate, if you can create the ideal listener, the ideal person who comforts you, the person who will never leave you, and who, once the technology catches up, looks as close to the ideal as you want, why would you waste your time with a human that has all of these flaws? That's not going...
The problem is that if you've ever played a video game and gotten the ultimate cheat code, like God mode, it gets real boring real quick.
I think that's the answer to that one.
And not even that. Like, think about if there's an apocalypse or anything...
You guys, I'm not advocating for it. I'm not advocating for it.
Let me finish.
No, no, no, hold on, I want to finish one thing. I'm not advocating for it, I just want to be clear. I'm just saying this is a concern I have. I think it would be terrible if this happened. But go ahead.
Okay, okay. Now that you put it that way, I get your point. Yeah, because there is a danger. We already are disconnected in some ways from society. It's almost like we've stopped helping each other out, not to profiteer, but because we might need somebody's help, say, in an apocalypse. If your car breaks down, who's going to help? Is AI going to come and help you? Or are you going to Google-search your way out of it? Like, it's good for loneliness, for people who are lonely; maybe it helps them a bit. But it also puts them in this mindset of: if I'm going to find somebody in the real world, the person has to be as perfect as this AI that I have. So, to be realistic and pragmatic, I don't think the OnlyFans department is all good.
Man, that's some scary stuff people are saying. Maybe Illustrada, then Naga. Let's go to you both. Illustrada?
Yeah, yeah.
I just wanted to say that I think for some people who are lonely, who don't really have the opportunity to connect with a lot of people in real life, and wouldn't otherwise have the opportunity to have a relationship or something like that, I think it could be a real benefit. And, as someone else was saying earlier, potentially maybe that makes them a little bit of a better person, and maybe gives them the confidence to be able to go out and make those human connections.
But I do agree.
Oh, my goodness. You actually support this? You think this is a good thing? You're painting the positive picture.
Absolutely.
That's so interesting. Naga, then we'll go back to the rest.
Thank you.
Yeah, I just wanted to say, I agree with the point that was just made by Illustrada, I hope I'm saying your name right, that there could be, with respect to what we're talking about now, specific use cases, right, where adults can benefit from a relationship with an AI form. But I'm just going to say: I think, in general, this is the kiss of death to society. I really do. And if you think it's not possible, then you don't know anything about the metaverse. Because you will be able to
put on a pair of glasses and enter another world. And in that world, you can live with your ideal partner. That partner could be a replica of your real-life partner, but without any of their flaws. And guess how much time you're going to want to spend in the metaverse with your ideal partner versus in reality. This is my concern, and I'm just going to say it: people are already doing this. They are spending time on these platforms. That's one. And Spaces: who's going out anymore? Who's meeting each other anymore? I'm constantly asking people, when was the last time you went out on a physical date or went somewhere for dinner? People aren't. We are moving in that direction. So this whole thing, if you know about the metaverse, I really think it's a possibility that it just could get out of control.
Yes, thanks. I'm glad you brought that up. I mean, what are people doing in VRChat now, right? Meta tries to hide this, right? They try not to acknowledge VRChat, but people are going in there and doing this with virtual people, and soon it's going to be virtual AI, right? It's a little scary. I actually did a podcast entirely with a ChatGPT bot, ChatGPT 3.5. I posted it up in the nest. It was super interesting. I've used ChatGPT 3.5 a lot, but it was really interesting just to see it personified, right? Even just that made a difference, let alone what's coming.
Guys, I would love, I mean, I feel like this is just getting heated up.
It's almost like we need another space for this, but we do have to wrap at the top of the hour.
So I just want to say thank you so much to all of our guests, lots of insights.
We do these every Tuesday and Thursday at 12:30 p.m. Eastern, 9:30 a.m. Pacific.
Join us. We got a lot more coming. So thanks everybody. Thanks for the fabulous speakers.
Yeah, have a good afternoon.
Take care, everybody.
Thanks, everyone, Eugene.
And, yeah, it was a great conversation.
The next one is going to be fire, I'm sure.
Thanks to everyone.