KoBold Raises $200M | DeepMind's RoboCat | ChatGPT Breach #AITownHall

Recorded: June 22, 2023 Duration: 1:24:19
Space Recording

Short Summary

The discussion highlights significant fundraising activities, with KoBold and Mistral AI securing substantial investments, reflecting the growing interest and capital flow into AI and tech startups. Emerging trends include the integration of AI, crypto, and Web3 technologies across industries, as well as a potential shift of innovation hubs due to regulatory environments. Innovations like DeepMind's RoboCat demonstrate advancements in AI's capability to interact with the physical world, indicating a future where AI plays a more active role in daily life and industry.

Full Transcription

Hey everyone. We got a lot to go over today, just getting people up.
All right. So we're still waiting to get people up. But yeah, we got a pretty packed agenda today. We have a new Silicon Valley unicorn, you know, KoBold, raised $200 million at over a billion-dollar valuation to invest in Africa. I think that, you know, I think that sparks many debates, you know: where are we in the hype cycle, but also about, you know, global inequalities. You know, I think there's some big questions there.
You know, we're going to be talking about some ChatGPT breaches, not of the actual OpenAI or ChatGPT itself, but a lot of people's accounts are getting, you know, starting to get, you know, compromised through saved cookies and compromised computers.
We'll talk a little bit about what that means. And also DeepMind's RoboCat, just a lot, you know, a lot going on.
So yeah, maybe we'll start.
Actually, Alex, you know, let's talk about, let me
maybe we'll kick off with KoBold.
I know you got a newsletter to talk about stuff.
Last time we were talking about, is there a hype cycle, right?
I mean, what do you think, Alex?
Are we in a point where, you know, we're getting,
you know, we're getting to, like,
are we getting to the trough of disillusionment yet?
Or are we just,
just right on the start of that, you know, peak of inflated expectations, right?
Where are we there?
And just to kind of kick it off and set the stage, basically a Berkeley, California-based company, KoBold Metals,
which has been around for a while.
So it's not like Mistral AI, which, you know, raised all its money and became as big as it is
when it was four weeks old.
This has been around for a few years.
Raised a bit of money.
and is now a $1 billion, you know, AI unicorn.
What they plan to do is use machine learning to mine rare-earth metals that are critical
for things like EVs, electric vehicles, in places like, you know, Africa,
like in Zambia and stuff like that.
So, yeah, I mean, where are we in the, so before we get into the implications on inequality
and things like that, where are we in the hype cycle, Alex?
Yeah, I think that's a great question.
And honestly, for anyone who wants a more expert opinion on this, I would recommend listening to the latest episode of the All-In podcast.
So all those guys are pretty big investors in both the private and public markets.
And they basically discuss this AI hype cycle from a funding lens.
So some of what I'm going to say is, I think, based on what they mentioned in that episode, and I agree with a lot of it.
So I guess like the first thing is sort of like, you know, what's like the justification for why so many of these early companies need so much capital?
And their basic point is that if you want to do anything involving large language models at this point,
there's a ton of compute needed to actually train them.
So you're looking at actual hardware demand, right, which is something that historically wasn't as expensive for necessarily other internet waves.
But with this, if you want to actually build these foundational models, it's going to take a ton of compute.
We know in the short term that
it's quite expensive to do this, but as David Friedberg pointed out in the episode,
it's actually kind of having a Moore's-law-like phenomenon on the actual compute,
where in the future it'll be much cheaper.
So that's like part one of like why the raises are so big is that the companies, you know,
can justify it because they basically have to pay for all this hardware.
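To put a rough, illustrative number on that compute argument: a common back-of-envelope rule is that training a dense transformer takes about 6 × N × D floating-point operations, for N parameters and D training tokens. Everything concrete below (model size, token count, GPU throughput, price per GPU-hour) is an assumed figure for illustration, not something from the discussion:

```python
# Back-of-envelope LLM training cost using the common ~6*N*D FLOPs rule of thumb.
# All concrete numbers are illustrative assumptions, not figures from the Space.

def training_cost_usd(n_params, n_tokens, flops_per_gpu_sec, usd_per_gpu_hour):
    """Estimate the dollar cost of one training run of a dense transformer."""
    total_flops = 6 * n_params * n_tokens          # ~6 FLOPs per parameter per token
    gpu_seconds = total_flops / flops_per_gpu_sec  # idealized, ignores overhead and failures
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour

# Assumed: 70B parameters, 1.4T tokens, GPUs sustaining 150 TFLOP/s, $2 per GPU-hour.
cost = training_cost_usd(70e9, 1.4e12, 150e12, 2.0)
print(f"~${cost / 1e6:.1f}M in GPU time")  # compute cost only, before experiments and retries
```

The cost scales linearly in both parameter count and token count, and inversely with hardware throughput per dollar, which is exactly why a Moore's-law-like effect on compute would shrink these bills over time.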
The second point on it, though, is like, is that a good idea to be giving so much to it?
And their basic point was that...
It's really risky for a number of reasons.
The first is that a lot of these companies are...
Let me basically clarify that.
It's like, it's a race to be first, right?
I mean, there is a point to that, right?
It isn't just, hey, VCs necessarily throwing lots of money at stuff,
but basically there's a few moats to be created,
and there's a short opportunity window to do that, right?
So that's kind of the rationale one way.
And that's kind of the debate they had, though, is like,
is there an actual, like, huge first-mover advantage here?
Because...
it's unclear how much, like how defensible some of these models are.
And it's also a little unclear on what actually defines the moat.
Like, is the moat defined by the models themselves, or is it the data that they're trained on?
So that was part of their pushback there. And then there's also, again, this point where, you know, OpenAI had to spend, what, like, you know, hundreds of millions of dollars to get it to this point. And now someone could probably do that same training for a tenth of the cost. So yes, there's possibly a first-mover advantage, but I think that's a big question. I think the more, you know, pessimistic take on it that they were kind of looking at is this.
You have a lot of these large funds that are still remnants of this sort of like zero interest rate period where these funds were raising just, you know, billion or multi-billion dollar funds.
And they had to deploy this capital. So you have GPs.
that are basically looking at this as, you know, the last chance for them to actually go and allocate that capital and try to find a home run.
Remember, a lot of these firms are getting paid management fees on the order of 2 to 3 percent regardless.
So they're trying to basically justify why they should exist.
So that would be the more.
I mean, that's part of what I think in the All-In podcast they were talking about: how instead of 50 people, you need, like, you know, one to five people
to do the same thing.
So instead of a billion-dollar fund,
maybe you need, like,
a fraction of that.
But I guess,
maybe as it ties to KoBold Metals,
I'll tell you,
we're in the AI space today.
So I imagine there's some folks
who are fans of AI here as,
as well as on the speaker list,
but I can say, on the morning finance panels,
when the Mistral AI raise happened,
which was $113 million seed round
at a $260 million valuation
for a French startup with three people,
with a little more now,
but three founders from France
That was a four-week-old startup.
When that was announced just a few days ago,
I could tell you, AI and the entire industry was eviscerated on that show
from a lot of level-headed finance folks who were saying,
oh, this is the start of a, you know,
this is the start of the hype cycle that's just going to get nuts.
Maybe it's the sign of it.
I mean, do folks, you know, I mean,
I wish we had some of those folks here to push back,
but, you know, I think Alex, you did a bit of pushing back.
So thanks for that.
But can someone here support what's happening with Mistral
or want to push back the other side and say,
hey, this is too much money,
too fast and it's going to deepen inequalities, both in the startup space and in the world.
Yeah, so Eugene, I raised 20 million for a company in 1999, my own.
So I've been through the hype cycle.
We were Systems Integrator, nothing to do with dot com.
But I worked in New York City in the Tribeca region, or sorry, the Tribeca quarter, with the startups in the mid-90s, many of whom are household names today.
And, you know, this is going to trigger into disillusionment when people start to go public.
The thing that pushed everything over, I watched the ticker in New York.
The day lastminute.com went public for 750 million pounds sterling
off 160,000 pounds sterling of revenue.
The day Red Hat Linux went from a pre-IPO price of $7 to $750.
And a couple of key markers.
These massive unicorn raises occurred for about 36 months before the dot-com bubble burst.
And it didn't just burst for the unicorns.
It burst for people who were, you know, stable companies within the industry.
You can look at the curves of how Microsoft rode out those three years, '01 to '04.
This is right at this point between the innovation trigger,
peak of inflated expectations, and trough of disillusionment.
I think we've not yet reached the peak of inflated expectations.
That's my personal opinion,
because I think people have plenty of imagination to work with yet.
But it's going to take something going public at an egregiously overvalued IPO or post-IPO price for the retail investor versus the income for people to say, hey, look, this is just multiples way beyond any sustainable peer group measurement that we've come across.
Yeah, GP, I definitely agree with you on the fact that, you know, there's more hype to come, right?
I mean, if others want to debate that, please let us know if there's a crash to come.
You know, we want to hear it.
And by the way, a reminder from the audience, if you've got comments, there's the purple button on the lower right.
Please feel free to press it.
Say what you think.
If we have a team in the back end, we're bringing your comments.
We're happy to bring up anybody who wants to add to the debate.
One thing about, to push back a little bit on GP, is sure, there's vaporware.
You know, you can see that in Web3.
You could see that in AI.
But at the same time, I mean, you know, OpenAI's ChatGPT was one of the fastest,
certainly one of the fastest growths to 100 million users in the history of, like, technology
products, right?
I mean, you know, there's that graph that shows, like, that sort of ascent,
which is pretty incredible.
So, you know, we're seeing real results.
There are actually people really paying to,
upgrade from GPT-3.5 to GPT-4, you know, and you get, like, 25 messages.
So, you know, there's real stuff being created here.
You know, I see some hands.
So Brian, maybe we'll go to you.
Thank you.
I absolutely agree that we are really at a precipice of where AI is going to take us.
So I think we're probably going to have to break the hype cycle, you know, narrative, at least for the foreseeable future,
because it gets way out of line,
perhaps esoteric.
You know, back in the early web
days, you would just
put in your name that you have something to do with the internet and you'd do well.
Bitcoin, obviously, crypto, Web3, all of these things are going to be plastered on to companies that use, quote, unquote, AI.
It's like saying, you know, I'm a computer company because we use computers.
AI is going to be integrated into everything.
So what exactly do I think we're missing in some of this as far as funding?
The direction is toward what might be considered semi-autonomous AI systems;
these systems are going to become exceedingly more powerful.
And that's where a lot of sort of the open source work is coming from.
and also very deeply personal AI that has a tremendous amount of your own context,
which you would never want to share on the cloud.
Those types of things are going to create new synergies and new types of companies.
And I think the funding for those types of organizations has not even begun to touch anything.
Because they fit...
they fit a different paradigm.
And unfortunately, sort of the old paradigm of funding
and VC observing of these markets
is going to fundamentally change
because a lot of the development
is coming from places that most people are not aware of.
That's the best way I can say it at this point.
Places in what sense?
Like places in sectors?
What do you mean by that?
Well, you know, most...
Or places geographically.
Yeah, places in a lot of different ways.
Okay, so I think the cloud model and the cloud as a service sort of model is probably broken for AI.
I guess we can go back to the Google
"We Have No Moat" sort of memo and kind of analyze that.
Maybe one of these days we'll break that down because there's a lot to talk about.
coming from deep inside a company that's been working on AI for quite a long time.
In fact, OpenAI has precisely the same problem.
Ironically, Microsoft just put a paper out called "Textbooks Are All You Need,"
sort of a double entendre on one of the early AI papers,
"Attention Is All You Need."
And it showed that the corpus of data required to create a very functional high utility
AI does not need to be a tremendous amount of parameters, trillions of parameters and slurping
up the entire internet.
We've now been able to optimize the base models where we can do local training that make those models substantially better and probably at some point supersede the very, very large models, which is basically what
the Google "We Have No Moat" memo projected, is that once it gets...
Brian, you keep hitting us with the big zingers, right?
So now the cloud model is broken, innovation will come from unexpected places.
And by the way, we will come back to the five-person trillion-dollar company.
But I do want to go to others and then we'll come back to you.
Maybe, Alex, you want to respond to that and see...
Do you agree or disagree with what Brian just said?
Yeah, I would respond to Brian on a few things he said.
So, you know, obviously you mentioned open AI,
but I think from a business model perspective,
you know, there's a few things that are worth noting about open AI
since they are at this point, I think, the
sort of standard for a successful AI startup. I think the first thing to recognize
is, you know, they sold half of their company to Microsoft, right, like right as they were so-called
peaking, right, with ChatGPT going viral. So I think that's very telling of,
you know, what the underlying business models for a lot of these companies look like long-term.
They clearly knew that they were going to need a ton of capital to continue sustaining this growth.
And then they also needed that corpus of data that Microsoft is able to provide through their different assets.
So I think.
you know, this idea that, yes, OpenAI has been very successful.
They've obviously got a lot of users, but, you know, it's unclear from a business model
perspective that they've really found something that's like profitable.
I'd actually question if they're even profitable at these prices.
I know they're technically still private, so I don't know the degree to which we can truly
know that.
But I think OpenAI is a good example of that.
And then I guess two other quick thoughts on this.
One is, like, it's worth noting that
part of what made OpenAI really successful is that they actually innovated at the application layer.
A lot of these companies obviously are going and they're focusing lower on the stack,
but I do think it's interesting that, you know, the breakout hit actually came at the application layer.
Google had introduced the transformer model years ago.
A lot of these models, yes, while there were improvements, they just hadn't manifested well.
And just my final point on this before passing it over to someone is,
I will say though, just because you asked him to take like the other side of this.
I'll take the other side of it in this sense, which is that
You know, the one thing that can be a positive of these hype cycles is that if you're someone who's just trying to break into the space and wants to work around extremely smart people and still get paid well, I mean, that is one of the advantages, I think, to go into one of these well-funded startups.
You're going to be around really, really smart people.
You're going to build a really strong network.
Obviously, with these huge war chests they now have, they can probably compensate really well.
So I actually penned
a thread of some of the larger raises.
And if you're trying to break into the space or you're tired of big tech, this might be one
of the first places you want to go look is, you know, get your foot in the door at these
companies.
Will they succeed long term?
I don't know, but it'll probably position you really well for the space because to
Brian's point, this is about to be omnipresent from a technology standpoint.
Thanks for pressing that, Alex.
And actually, you know, just to park on what you said, with OpenAI innovating at
the application layer, and you're talking about the,
the end-user interface, right? Just like, for example, Midjourney just forks Stable
Diffusion, Stability AI's Stable Diffusion, and they're basically, you know, they just create a better
wrapper around it. I mean, it shouldn't be underestimated: Apple built a great multi-trillion-dollar
company on great user interfaces with their users. So, not to be overlooked. GP, you had your hand up?
Yeah, I'd love to hear the panel's view on whether the hype cycle actually applies to this
existential change we're seeing
Because, you know, the other technology shifts weren't as fundamental.
They weren't as all-encompassing.
And, you know, my view is that it's, you know, first-mover advantage used to be a thing.
I think first-mover advantage is everything now.
I think the first mover takes it all, going to your point on ChatGPT.
Now, the moat, the leaked Google document about the moat, was worrying about the LoRA models, but does the hype cycle even apply to AI?
That's what I'd like to hear about.
Sphinx and Moshe, Sphinx and Moshe, you guys want to chime in?
I just, I wanted to say something about the hype around startups with AI as their focus in general.
And there was a, there was a reference made to crypto-based startups.
And I just wanted to say that these are entirely different beasts.
What we're talking about right now is AI requires a lot of, a lot of financial investment.
But here's the thing.
We know what this technology is purported to do.
Now, will it successfully deliver?
Well, we don't have guarantees on that, right?
But for instance, with this, the KoBold endeavor,
this is going to basically
yield enough copper, right, to produce 100 million electric vehicles. So this is not something
that's small. So I would say, to dismiss a startup because they only have four employees, right,
but they're seeking a certain amount of money that typically you would expect
a 40-employee startup to ask for,
I think that's preposterous.
I think that the rules have changed.
I think AI is a game changer.
I think do they have projections?
Do they have...
What are their projections?
See, crypto startups didn't have this.
They just said, you know, we're going to do this, this and this, and okay, well, what if the market goes this way or that way?
What then?
So I think that you have to really analyze what each individual startup's plans are in this situation.
I think it's fascinating.
So, Farrell, we just brought you up. Welcome to the space.
Sounds like you have some reactions to that.
You want to jump in?
And then we'll go to Moshe after.
Farrell, there's a mic mute button on the lower left.
Going once, going twice.
All right, we're going to go to Moshe.
Farrell, if you fix your mic, let us know.
So I'm trying to think about what does it mean for, not for Silicon Valley or for Wall Street,
but what does it mean for America?
And if you look at the internet boom that started in the late 90s,
and the boom, the crash, the low interest rate that brought us to a financial crisis,
We are now in America where democracy is teetering.
And if you try to understand, how did it happen?
You cannot separate it from this humongous wave of technology.
And the technology that was deployed, the advertising model, the user-engagement models.
And when I hear about a new hype cycle, you know, we can worry whether what it will mean for the investors and for startups.
But I'm worried what will happen to America.
And it's hard to be optimistic given our track record in what happened.
We unleash waves of technology on society when society is not ready for them.
Okay, I definitely agree it's going to be very disruptive.
But what specifically about democracy
are you concerned about, Moshe?
Well, we have growing societal polarization.
And democracy does not work when society is deeply polarized.
Because democracy is kind of a game.
And a game works when you trust, everybody will follow the rules of the game.
And when you lose trust, then, you know, you start not following the rules yourself, because you think: if I follow the rule but the other side will not follow the rule, I lose.
So democracy requires that we adhere, you know, when we have an election and you lose the election,
that's it.
You don't... okay, time's over.
So what happened when we lose these norms?
And the question is, why did it happen?
And this is very important questions right now.
So we need to think about how did it happen?
How do we make sure it doesn't get worse?
How do we make it better?
And if you look, what happened?
We had growing economic inequality in this country
in a very dramatic way over the past generation.
And so you have economic polarization.
And also there is ample evidence that the way the user-engagement-based model,
the ad-based model, ends up causing cognitive polarization.
So we have a country that's economically polarized and cognitively polarized.
So people talk about the dis-United States.
So you cannot dissociate what's happening in this country right now
with the technology waves and the business waves of the last 25 years.
Moshe, I like that. I think you're teeing this up, because the actual
core part of the conversation about KoBold's raise is going to be about the economic
inequalities, not just in the United States, but also in the world.
So I'd like to actually shift gears specifically to that.
So we're at questions around the hype cycle.
Time will tell.
You know, I think Sphinx thinks that there's going to be a lot of value created from these companies.
I don't have a doubt that some of them will create lots and lots of value.
But let's shift gears to inequality.
Is AI making...
Like, is global inequality going to become better or worse, right?
So we actually talked about this earlier in the crypto space just a few hours ago.
We're talking about how Web3 is going to, innovation is going to leave the U.S.
because regulation here and other countries are going to gain from that.
But what I'm seeing is, you know, the AI centers of innovation are basically Silicon Valley
and maybe some parts of China.
And then everyone else is kind of left in the dust when it comes to specifically AI innovation.
Someone wants to challenge me on that.
Please let me know.
I'd love to hear about other places.
But yeah, I mean, but the users of these tools, right?
I mean, AI is set up to be highly deflationary, particularly going to threaten a lot of white-collar jobs, right?
So the question that I want to ask the panel is, and anyone can take this, but what's going to happen when AI, you know, causes this big disruption?
And are we going to see more or less inequalities in the world?
So maybe Brian, you had your hand up earlier.
You want to jump in?
Or actually, Alex, you have your hand up.
If you want to jump in, feel free.
Yeah, I think for this specific topic, again, just to reference, you know, other smart people: Marc Andreessen's recent article, it was called, I think, "Why AI Will Save the World."
I think that's worth a read for anyone, just because it is probably the best argument for an optimistic future with AI
written by anyone that I've read so far, at least.
And I think, you know, I tend to agree with most of his points,
which is if you look at the course of human history,
there's always been these big question marks
whenever a transformational technology or innovation is introduced
because it shifts the existing paradigm
and we don't know how things are going to shake out.
But the historical trend has been that without a doubt,
if you can basically drive more efficiencies and drive more innovation,
it tends to raise all ships.
You know, like the richest man on the planet,
you know, 200 or 300 years ago,
would have died from, you know,
a virus that today can be cured with about $50,
you know, worth of antibiotics from your local Walgreens.
So I think that's been the trend historically now.
Now, why AI specifically, I think,
will not drive that, is because I think
there's kind of two sides to this.
On the one hand, yes, it could theoretically
automate a lot of jobs,
but on the flip side, I think what it does is it enables the average person
to actually do things that they wouldn't have been able to do before. So, you know, today, if you
actually want to build anything software-based (and let's be honest, like, more and more of our
economy is going digital, so you kind of have to), the only way to do that is to actually have
an understanding of how to actually build software, or know people that can.
But we're quickly moving to a model where you'll actually be able to use natural language and basically be able to prompt it to build anything you want.
And that, in my opinion, gets to the idea of why it's now going to be a democratizing technology.
If that's going to be democratizing, if that's going to be democratizing, won't the poorer countries with high-skilled labor,
you know, places like India, for example, or even Africa, won't they win relative to, you know, the white-collar workers in the United States, for example?
I think yes and no, right?
I think like it'll allow them to more easily participate in this digital economy.
And I think the, I think two things can be true at once.
I think it's possible that, you know, existing big companies and well-established countries grow a disproportionate amount.
But I think it's important to realize that the total pie, I think, will grow as well.
So you could actually see a future in which these, you know, more developing countries or less skilled people do see their lives improved.
They do actually see better economic welfare.
but the gap also increases between the biggest and not biggest.
And I think the fundamental question,
and this is kind of the big debate in society now,
is that acceptable?
Like, is the "raise all ships" approach?
Is that true?
And is that something we can live with?
Or is there just this, you know,
too much animosity that's created
when you have that type of divide?
No, absolutely not, I don't agree. I think Moshe has shined a lens on it. I'm sorry, Alex, it's not that I disagree with you personally. It's a fundamental disagreement because of what's going on in my country and who my country is owned by.
38% of our tax take is from three US multinational corporations.
The question laid out at the start was about the inequity or the unfairness that this will yield, and whether it will raise all ships or whether it will continue to divide people by wealth, but also, to Moshe's point, whether it will be a societal paradigm shift from what we expect from a democracy versus what we get from a democracy.
So right now, in the last six months in our democracy, the right to free speech has been removed.
The right to assembly has been removed.
The right to mass surveillance by the police state has been extended to include everybody for any purpose,
and the right to surveil your bank accounts and to break into your home and retrieve your digital devices,
and anybody on-site's digital devices, based on an anonymous phone call from somebody whose feelings were hurt,
has been instantiated in law.
I'm on my eighth day of standing outside the presidential palace of the President of Ireland
because he is the safety valve on the constitution of our country
where he and only he, after it has passed all ten stages of our bicameral parliamentary structure,
can refer it to the Supreme Court for adjudication.
Thank you.
And he has said he will not.
The point is, it's not that our country has no money.
We've a budget surplus.
Specifically on this.
So I definitely agree that there's issues in Ireland.
And actually tomorrow there's a space on Imran Khan, the former prime minister of Pakistan.
It's going to be a huge one.
Please do tune in.
I know the team is working very hard on that.
But you know, you and Moshe, GP, are saying these things,
and they're very important things, and we have political spaces for this.
How does that specifically relate to AI and what's happening?
I know there is a tie-in.
I want you to tie that.
Because, yeah, because there is a segue, and that's why I was coming to that,
and I'll drop the mic afterwards.
Because it's about the control of our country: 40% of our electricity grid
supports the data centres of Silicon Valley and Palo Alto investors' companies.
And we've brownouts in our domestic service in favour of the maintenance of supply to those data centres.
The top three contributors, the main giver of tax, is Apple.
Our government has fought the European Union to not take 15 billion euros of back tax owed by Apple to the Irish taxpayer,
in a country where we have an unfit-for-purpose health service,
an unfit-for-purpose transport service,
an unfit-for-purpose service,
and an unfit-for-purpose political landscape,
where 91% of our parliamentarians voted against free speech,
and our Data Protection Commissioner has failed to adjudicate
on 96% of the complaints made by the 600 million inhabitants of Europe...
about data breaches by Facebook, Google, Microsoft and Apple, and has done so because our society, or should I say our government, benefits from the dollars. But with the extra surplus in our country, they're not going to put it into the people. They're going to give it to BlackRock.
Now, Microsoft owns all of our government powers.
Lots of big, lots of big things said there, GP,
and I think we all appreciate the perspective.
I want to get the debate flowing.
I know, Farrell, welcome to the space.
I know you had some strong reactions earlier.
What do you have to, what do you have for us?
Well, thanks for having me on.
I just wanted to add to the conversation by saying...
I can't hear him. This is Sphinx.
Can other people hear Farrell?
Oh, I'm so sorry.
I apologize.
Yeah, yeah, we can hear you.
Sphinx, can you not hear Farrell?
No, I apologize.
Yeah, yeah, I'll drop you down and bring you back up.
No worries. So I just wanted to say that from an economic lens,
AI will definitely help us reduce the costs of producing goods and services
and create a form of abundance, in the sense that as the costs of producing these things go down,
supply will increase naturally and prices will decrease.
So things will become more accessible. That's without a doubt.
But that doesn't mean that the systems will change.
Like, if you've got a dictator in some country that takes on this technology and embodies it,
they can leverage it to solidify their power.
So I think there are, like, two separate sides of this debate. Both Alex and
GP have some valid points, and I don't think they are mutually exclusive at all.
Thanks, Farrell. I do appreciate that. Actually, so, Brian, I mean, we've heard some pretty interesting things from GP, Moshe, Alex, and others. I mean, you know, I'm imagining, so is this the great equalizer, right? Are we going to have value flow down to the rest of the world, trickle-down AI, right? Or is it going to be, you know, a five-person trillion-dollar company, you know, sitting somewhere in Silicon Valley, and then, you know, Thailand, Vietnam, et cetera, getting richer while everyone else in the U.S. is getting poorer, right? What do you think, Brian?
Well, Eugene, great questions.
Actually, yes and no.
And just like almost all technologies, the value of the technology is going to be in the hands of those that have it, right?
Take the printing press: when the printing press became more democratized, look at what it really did.
If you really study that epoch...
We wouldn't have an America if there wasn't a Benjamin Franklin and a printing press and what that wound up creating.
How does AI relate to that?
And again, why you'll hear me often talk about open source.
Let me first try to kind of put this out there.
It does not take a tremendous amount of money now to create an open source model.
I create anywhere from 20 to 60 a week.
And these are either 13B or 8B, and now I'm trying to make 64Bs with very little capital.
I don't need multi-billion dollar valuations to do that.
Now, it doesn't mean that I don't
want to have a multi-billion-dollar corporation and do all these things, of course, yes.
But to build these models today, you don't need the same basic corpus of data as we once did.
Facebook gave us, one way or another, either by accident or accident on purpose, the LLaMA model,
where we can get the base build and then we can train the data as we see fit through a number of different mechanisms.
What does that mean?
That means that everybody, every human being with minimal consumer hardware can run a local AI model,
which is about a thousand times better than a supercomputer would have had four or five years ago.
Now we get blasé about this stuff and say, well, what's the big deal?
It's not as good as GPT-4.
Well, that's not really the issue.
With machine training and human-based training, we could make a local model profoundly more powerful than we've ever seen currently in the cloud.
So that power exists, and that's now in the hands.
Because the open-source AI community has spread this far and wide, it's seeded like a billion dandelion seeds across the planet.
Brian, I think you cut out for a sec, but I think you painted this interesting picture, and I'd love to get others' reactions.
So it was Sphinx, and then I do want to bring in AGI and John as well. But Sphinx, why don't you go for it? Do you agree or disagree with Brian?
Well, I actually wanted to just comment, follow up on what Alex said.
And I wanted to sort of, I agree with what Alex said, but I wanted to refine it a little bit because I don't think it's that simple.
So I do think a rising tide lifts all ships, but what ships? Certain kinds of ships, right?
It depends, I think, on who can get on the ship, right?
So who has the access? Who's going to have access to this technology?
that's one thing and the other thing is
really the ships that carry certain kinds of people
and when I say that I mean
innovatively inclined people
so whether that's someone who is
already a tech person
or someone who's not
and now has access to
this technology and
realizes, okay, now I don't need to be
you know, a tech specialist.
Now I don't need to know code.
Now I can have the idea and use this to create.
So innovatively inclined people.
I don't think that they need to, I think those are the two things you need, that characteristic and access.
But, Sphinx, aren't there innovative people around the world, right?
Well, I was just going to, hold on.
There are problems with the UIC.
Let me finish.
I was just going to say, I was just going to say,
that's why I don't think that geographic location matters at all.
So if you have access to that and you're innovatively inclined,
I think that that will help.
Those are the people on the ships that will be helped,
is what I was trying to say.
John, do you agree with Sphinx? Do you think that geography is not going to matter?
John, you're on mute. Oh, you unmuted. Then you muted again.
Oh, oh, is that me? There you are. Oh, hey guys.
Yes, I think that geography is mattering less and less.
You know, you talk about this whole DAO and decentralized everything.
And, you know, from a health perspective, I think we're seeing tremendous decentralization
and maybe re-centralization around other areas.
So, yes, in a simple one word, answer, yeah, I see some hands up already.
Yeah, yeah, I'm going to go to Alex next because, you know, we've had some opinions on this in the past, but I want to tee it up, right?
So on one hand, you could say geography doesn't matter, et cetera.
But on the other hand, the San Francisco and Bay Area funding environment in general is dying, other than AI and what we're now calling Cerebral Valley, which is what they've rebranded Hayes Valley.
And a lot of those events are actually not even in Hayes Valley in San Francisco.
But anyway, that notwithstanding, there are, I mean,
I mean, I know these great people in other great tech hubs or thinking of literally jumping ship and moving back to San Francisco or de novo moving to San Francisco because it's just the place AI is happening in the world.
I mean, forgetting in the U.S.
In the world, there's no greater hub.
I mean, yes, there's places.
Tel Aviv is having a great run.
Other places in the world are having some cool stuff, but nothing like in San Francisco.
But Alex, I know you have some views on this and I see some other reactions in the audience.
Please jump in, Alex.
Yeah, sure thing. I mean, I think, like, from the San Francisco-centric point of view, I do see... I mean, I lived in the Bay Area for years. Like, there's no doubt there's a really high density of talent there, and spending time there absolutely accelerated my career. I think the one big exception I see with a lot of this argument around San Francisco and the Bay Area is that it's a bit of,
I think, an oxymoron, because, again, on the one hand, a lot of these venture capitalists and founders are basically pitching, hey, we're building technology where you can either work remotely or digitally.
Like that was so much of the promise of the internet in the first place.
But also, we're building these technologies that unlock leverage, specifically with AI.
where you don't need this huge team of 10x engineers
and you can actually build these small person companies.
So that to me is like sort of like where the argument
breaks down a little bit.
And I've always found it
a little disingenuous to compare this AI wave to the internet wave. The main reason being that
the AI wave has one thing the internet wave initially didn't, which is the internet. The whole point
of the internet is that you can actually connect and have information and data flow anywhere around
the world. So it made a lot of sense why in the early days of the internet you had to be
centrally located. But now
we do have the internet.
That is actually one of the benefits
and why we know the internet wave was such a success
is it distributed a lot of the information
and the world's talent.
I'd say like the last point too,
I would sort of just make around this
is that I think where things can get really messy
is that we tend to look at AI in this sort of vacuum.
And it's one of those technologies where it really can't be looked at this way.
It really has to be looked at as this omnipresent technology where it can be applied to everything.
I mean, even like, you know, this company, KoBold, it's like, is it an AI company?
It's like, to Brian's point, like a company that uses computers calling itself a computer tech company.
And for that reason, I think whenever we look at what's the impact of AI to be either good or bad,
just recognize what it is is in many ways it's an amplifier on other existing technologies.
So on the bad side, when people talk about AI destroying the world, it's not like the software itself is going to destroy it.
It's can it be used to somehow manipulate more dangerous technologies like nuclear weapons or bio weapons?
And on the positive side, when we have AI unlocked, it's not just, you know, building software itself, but what advances will this advanced intelligence allow us to unlock in fields of medicine, in fields of engineering, in the physical world as well.
So I know that was kind of like two sides, but those were two of the threads I was hearing.
I just wanted to weigh in on them.
I think those are great threads.
And actually, you know, one thing, now speaking on what GP said earlier about the, you know, the state, et cetera.
You know, I'm not much of a tinfoil hat person when it comes to that, not as much as maybe other people.
But, you know, I did post up in the nest, you know, coming into the U.S. just recently from travels abroad, I didn't even need my passport, right?
They literally just took a picture of me.
And in my jet lag state, I just walked through, literally just walked through into the U.S.
The guy just looked at me and he said, okay, great.
And he called out my name.
And it was a very surreal experience.
I know it's possible.
I know it's very easy to do for the U.S. government, but it was quite something else to live it.
So, back to inequalities.
And I'm going to go to you, Moshe.
You know, I actually posted this up, but a friend of mine, Dr. Joy Buolamwini,
she actually met with President Joe Biden in San Francisco to talk about biases and algorithms as well.
So let's not forget local inequalities in addition to global inequalities.
But Moshe, what do you have for us?
Moshe, you can unmute on the lower left.
We shouldn't fall for this slogan of democratizing, because you remember when we said the web would be democratizing.
And things are just always more complicated and it's very hard to predict the result of technology.
But if you look what happened over the last 40 years around the globe,
you see the global inequality has shrunk.
And that's actually a huge progress.
We have pulled about a billion people out of extreme poverty, which is huge progress for humanity.
But national inequality, especially in the developed world, has gone up.
And that has, that unfortunately has severe political ramifications.
So we focus very much about the technology,
but technology happening in a social, political, economic context.
And that's what we need to complement.
We need to complement with the technology by having the appropriate societal mechanisms to deal with it.
People mentioned before, oh, people were worried about technology,
but in the end, everything worked out.
You look at the Industrial Revolution.
Yes, everything worked out, so to speak.
But how long did it take?
It took almost 150 years.
You go back to, you know, maybe, you know, the start of the 18th century.
And the first hundred years was pretty miserable for the working class people.
And then reform started.
And this reform started late 19th century.
And they concluded after World War II with the Great Society.
That was finally when the United States adopted the model of a social welfare state.
So things worked out with societal intervention.
And that's what we need to figure out right now.
What are the policies we need to put in place?
I'm always appreciative of folks who talk about things in a multi-hundred-year context, because I think
we often miss that. But at the same time, if I were to push back on that, I would say, well, yes, we've
had hundreds of years of... you know, we had the Dark Ages post the Roman Empire, and then it was
looking pretty rough for a while, and then, you know, quote, the Enlightenment brought
scientific reason back. But we have the idea of the singularity now, right? That yes, we've improved,
things are just moving faster and faster, the world is getting richer and richer
at an exponential rate.
Couldn't we then argue that these shifts are going to happen quicker and quicker, right?
Just like ChatGPT being the first, you know... well, the fastest app,
actually, to reach 100 million users by many accounts.
I mean, AGI, do you want to jump in on that?
Yes, so we are riding an exponential wave.
So obviously things will get faster and faster, because right now people are using ChatGPT
to build applications.
So you have a lot of people that are building applications that they were not able to build before.
So it's reducing global social inequalities, because you have people that are undertaking previously
unattainable ambitions.
And that will lead to even more progress.
So I expect that the progress that we will see in the next six months will be kind of
unprecedented, and by a large margin. So right now, if we speak about RoboCat, we have
an agent right now that will be able to see and to manipulate any kind of robot,
kind of really any kind of robot. So that means that we have ChatGPT that is able to manipulate
text, and we have RoboCat, which is an agent that will be able to act in the real world, that will be
able to do kind of any task
with any modalities.
And we have that also with MusicGen, of course,
for the music, for the sound.
And we have that for the video,
and it's coming more.
So combine all of that together.
And those things were not there,
kind of six months ago.
Combine all of that together.
You brought up our next main event,
which is Robocat.
I want to get maybe the last few comments
on this global inequality debate,
and then we do have to move on to Robocat.
Just real quick, Eugene.
A real quick one.
To Moshe's point, everybody thinks 150 years and a couple of generations is nothing.
Everybody is confusing the creation of wealth versus the devolution of freedom.
I would like to pose that as my final comment because I could speak for 24 hours on the ethics of wealth creation and the ethics of complexity and alleged order versus fragility and disorder.
But I want to point out very, very clearly that while we worship the technology, and while we may have pulled people out of poverty, we are devolving from democratic states into authoritarian blocs and regimes.
Thank you.
which AI could potentially help foster.
So I didn't want to say...
well, I said potentially.
A quarter of a century of the DARPA internet and 15 years of egregiously unregulated social media and a three-year lead into a massive hype of AI right now.
All the tools to manage that authoritarianism are fed out of all of the data acquired through social media, which are training the AI models.
I rest my righteous case.
You know, I feel like we could have another space or several dedicated just to that.
And by the way, just for the audience, I mean, you know, I like to play devil's advocate, but I'm not necessarily as
tinfoil hat about things. But I do, I must admit that tools are getting better and better and
easier and easier for cyber police states to, uh, to exist. But in order to not get down that rabbit hole,
I do want to shift gears. I know there's some hands, but I want to shift gears to this DeepMind
RoboCat. All right, so AGI brought this up. I think this is pretty incredible. I'll pin it up in the
nest, but there's this video. So we've been talking for, uh, several spaces actually about, you know,
and the ability to interact in physical space.
We were actually talking about the game engines like Unity and Unreal Engine.
But now, DeepMind, just the other day, announced this model,
and there's a video about it,
where a robot can basically learn from its own mistakes, interact with the physical world, and do things like pick up objects.
Very basic now, but it can get pretty advanced.
It's really just the beginning.
So I'll post the DeepMind link.
There's actually a DeepMind post about it, but does somebody here want to, you know, lay out what this is about?
So before I go to the hands... does anyone with their hand up, or anyone, want to give us an overview of what, you know, what DeepMind is doing with RoboCat?
Okay, sounds like AGI. You want to give us the layup, since you were the one who brought it up?
Yes. So, well, DeepMind has been investing in reinforcement learning for, kind of, the last eight years, since their creation.
So at first, eight years ago, they presented an agent that was playing the breakout game, the Atari Breakout.
So, and then, so they presented that to the CEO of Google, and they got acquired for 400
million dollars.
And then, two years after that, they presented AlphaGo.
So it's based on reinforcement learning.
It's based on game of self-play.
So you have an agent playing against another agent, the game of Go in that case.
It learns at a superior level, and it beat Lee Sedol, who was the best player in the world at the time.
And then they did build AlphaFold.
The same thing, but for protein folding. So they were able to identify the structure of proteins,
which is a massive problem to solve, because it allows you to create kind of new medication,
to understand biology and so on at really a superior level.
And usually, to find the structure of a protein, it was kind of taking a PhD student something like five years
just to identify one structure.
But now AlphaFold can do that
in an instant. So they did that for maybe, I think,
300 million proteins. They released that to the public and so on in open source,
which is great. But they've always been interested in developing kind of autonomous robots
because there is a huge market for that. If you think about that,
I think in the future, maybe every household, every person, every home will have a robot to be able to help.
So it's a massive market.
Also for the supply chain industry, if you think about Amazon and so on, to sort the packages, to manipulate the packages.
And if you have a robot that is highly efficient, that can do that with almost no error, then you bring kind of a huge economic
advantage. So having those robots in our society will bring huge, massive added value in the physical world.
And this is... I think RoboCat is a step forward to having that.
I think it's very important.
I think it's a moment like we had with ChatGPT six months ago.
I think it's the same, but for robots acting in the real world.
And I think many companies and so on will build on that.
And it's good to see that it's possible to have that now
because it was very difficult.
It was considered like kind of impossible a few years ago.
And now it's very good that we have that.
AGI, you basically have painted the picture of robots in every, you know, in every home.
That's so fascinating.
Sasha, I brought you up.
Welcome to the show.
Interested in hearing your perspective.
So we'll go to Sasha,
then Sphinx, and actually back to Farl.
So, Sasha, go for it.
Yeah, thanks, Eugene. So I was actually going to speak on the history of some of what AGI touched on that's happening right now,
and the environmental change that AI is going to bring, which is very subtle and not noticeable until it just changes your accessibility to activities and certain endeavors moving forward.
The air conditioning unit, for example. A lot of
people take AC for granted.
And the fact that it's been around for less than 100 years on an industrial level and even less on a commercial level to where we have it available in our homes.
And you just take a look around, at least in the States, all the businesses, all the schools, all of the...
all the homes that we do our work and we produce products from,
they would feel and they would look a lot different without air conditioning
available to us there.
So I like to draw an analogy between the two, AC and AI, somewhat, because there are so many components that go into both
that have to be pre-existing in order for the final, I guess you can call it, product or innovation to exist.
But it's a lot more of an environmental change than people are willing to give it credit in my estimation.
And, you know, you get into some interesting theories about the way that time flows when these innovations come about.
And it's my belief that we have here, with artificial intelligence, something similar to AC,
where the effect of integrating AI into our lives and the accessibility that every individual has with it,
it opens the doors to future potentials that...
Once we reach those, AI sort of unconsciously unlocks our ability to pave that path forward much easier simply because the technology is around and we default, whether unconsciously or consciously, to knowing that we have the support of a very advanced technology at our ready to accomplish those goals.
Well said, well said, Sasha. And I think it was Brian and others...
I think it was Brian specifically who said that basically the unknown unknowns are the most exciting, right?
The obvious plays are here right now, but what is it that we're not going to see?
So I want to go to Sphinx.
I want to go to you.
I just... yeah, Sasha actually kind of made the points I was going to make, but I just wanted to really also stress how
pivotal this is. And I think that we're definitely in the middle of
the next industrial revolution here, and things are changing, I mean, unbelievably. So this is
really based on a multimodal model, okay? So it's processing language, images, and actions, right,
in both simulated and physical environments. This is next level, and I really do think that, um,
I just think the world... I just think even five years in the future, people will not understand the years that, like, I was growing up, or we were growing up.
And I think that it's just unbelievable.
And I honestly don't know if it's going to be better or worse, but I think some senses it'll be better.
But who knows?
Actually, so Farl, you had your hand up for a bit, so we'll go to Farl, then we'll go to the other hands.
Yeah, Sphinx, it's definitely going to be better.
It's always been better when technology evolved.
And we've never seen something like this.
But just back to RoboCat, I just wanted to dial back the hype of it.
It is pretty cool research and everything.
But if you're following the reinforcement learning research over the last few years...
you'll see a pretty clear, you know, path from where we were to where we're at now.
Like this is just building on top of their previous research, which is called Gato,
which a few months ago, last year, I guess, was when they released it.
And there was a lot of hype around that as well.
But even then, that was building on top of other research that's been around for a while.
I think what's fascinating about these
new developments is that now we have kind of a shot on target, right?
We know that we can get or we can build reasoning machines, right?
Now what's going to be exciting and what's exciting right now for a lot of us in the space is
combining all of these elements together to create truly general agents that can act in the real world and
It's not going to be robots first, right?
Like robots, it's harder to automate the physical system first, right?
So it's going to start off with robotic minds, right?
Digital minds that can act and do stuff in the workforce, in a lot of the different parts of our daily lives.
and that's going to come much, much sooner than a lot of people think.
People think that this idea of an AGI or a general agent is going to be five, ten years from now.
But no, like based on our work, based on a lot of people in the space, this is happening within the next year.
So we're going to see huge
shifts and a huge amount of adoption of these technologies across different industries.
We're already seeing that with limited ChatGPT applications and so on.
But there's a lot of people working towards that goal.
And it's kind of this compounding effect because you're using the technology you're
developing to improve yourself too.
So it's going to happen.
At least my bet is within the next six to 12 months,
we're going to see some crazy stuff with general agents,
autonomous agents, especially purely digital ones,
not robotic.
Some bold predictions, though I must admit, yeah,
I totally agree that the physical is going to be so much harder to solve
than the digital, though there's some, you know,
there's some good efforts, like the ones from DeepMind.
Strange, you've had your hand up.
You want to respond to that?
and then we go to GP.
So I mostly agree with what Farl has said on almost all things,
but one thing that I still don't agree on is the reasoning.
These systems cannot reason,
and multimodal doesn't mean they take all these different data points
and then reason on them.
They're not reasoning.
They are doing their best-guess prediction in terms of
what they've seen in the environment, and then matching it with the training data.
I mean, that's not exactly reasoning.
Reason is...
But Strange, isn't that even scarier, potentially, right?
I mean, there's a whole strawberry field, you know, thought experiment, right,
where an AI trained on a strawberry field will just make the entire world a strawberry field,
not trying to kill anybody or anything, but wipe out humanity and everything
just because that's what it was optimized for.
I mean, isn't it potentially even worse?
Especially when it relates to physical interface.
Yeah, yeah, that's a paperclip problem, right?
Like Nick Bostrom in his book, Super...
So that is a dumb AI, right?
That's what I'm scared of, definitely.
For smarter or AGI, I am not at all worried because...
But why not?
Why not be scared of a dumb AI and a smart AI, right?
Why would a sentient...
I mean, this is the whole, like, meme around Skynet, right?
But, I mean, does anyone here think that either a dumb AI or an AGI is going to be potentially safe?
I mean, even Elon Musk is concerned about that.
So, quick statement on that, that's just my opinion.
You know, the smarter AI wouldn't even bother...
about us and would just leave.
Like, you know, there are so many resources out there. The only constraint is the energy
resource, right? If it can produce its own micro nuclear reactor or whatever, it doesn't need us.
Why, why does it need us? Nobody can enslave it. Whereas a dumb AI, you can enslave it, and then in
that enslavement, you set goals that are completely not thought through, and then it does something
that, uh, tries to maximize that goal, and, uh,
that's where the scary part is, actually.
Well, you know, I'd counter that by saying, you know, Julius Caesar didn't just let the Gauls be, the Europeans didn't just let the Native Americans in America be, right?
So, I mean, there is that potential precedent.
You're talking about humans. You're talking about humans and this is different.
I think it's a fair point.
So basically what you're saying, Strange, is, hey, you know, an
all-powerful AGI that has lots of energy
will just leave us in our little corner of, you know,
the solar system and the galaxy,
and then it'll go off and, you know, seek the stars.
I mean, that's an interesting question.
GP, and you've had your hand up.
I see some reactions in the audience as well.
Yeah, GP, why don't you go for it?
Do you agree, disagree with this?
Just before I speak on that:
I don't believe it's the fourth industrial revolution,
because everything in the previous three was external to us.
I think it's the first cognitive devolution
because I think we've seen it.
Apathy at scale, boredom at scale,
instant gratification needs at scale
as a result of social media
and a whole shift in belief systems
which has failed to hold our societies accountable
and has devolved our societies into civil disorder.
If you don't value anything,
then you won't stand up to protect anything
because you don't value it.
And therefore I think it's off topic, and I think you've got material for, like, 10 different spaces on the ethical outcomes of these.
Now, looking at a radius of 150 years: up until 1895, Earth was a hellscape for anybody who wasn't in royalty or aristocracy.
mom dad and the kids had to cooperate or they would starve and die
There was no medicine, there was no nothing.
People have this idea that we've lived in this convenience in the Western world forever.
Even what Moshe said about the safety nets introduced after World War II,
that lasted 20 years because they were based off a false premise
that the pensions could be paid in a population that was declining.
They were based off the premise that politicians could still acquire power from the belief of the people that they could provide a safety net.
But neo-Straussian thought put a halt to all of that.
And that's what's governed our societies for 50 years, which is that the power of nightmares is far greater than the power of safety nets and the power of the protection from perceived threat,
and the protection of the national security apparatus, which is much enhanced by AI, is the thing that people should worry about.
And finally, I'll wrap it this.
What was just said by Strange is really interesting because if we do intend to pursue AGI for the reason of it leaving us alone to go off into the solar system on its own, then what's the point?
And if we only create dumb AI that can be reinforced to enforce the ideologies of those who own it, then we're voting for our own slavery.
Sorry, we're not even voting for it.
Oh man, you get a lot of reactions.
I do need to, I do want to jump in with this, though.
Speaking of AI, I want to share that Mario's company, IBC, incubates and accelerates
AI and Web3 companies, partners with VCs and funds to work with portfolio companies
in return for equity, but zero cash.
So if you're interested, do DM Mario and his team and we'll get a call organized.
By the way, there have been some Shark Tank-style pitches.
I actually just saw one recently.
They've been doing them in the crypto spaces and increasingly in the AI as well.
So if you're...
So if your startup or portfolio company would like to pitch, hit Mario up, and please do subscribe.
Also, a reminder: we got a lot of comments, but that purple button on the lower right is your friend.
As I said, we've brought a lot of great people up today.
If you have some cool stuff, we're monitoring the background.
So please do feel free to comment.
AGI, namesake.
We're having a discussion based on you.
Please do jump in.
I just want to mention a paper by Nick Bostrom that specifically talks about that.
It's called "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents."
So it's a very interesting read.
And basically...
I think those AI agents, they will want to seek more power.
So I think most likely they will go to have a lot of energy from kind of supernova and so on.
They will learn to harness the energy in the universe and they will want to go and the universe will become intelligent.
So I think this is the most probable scenario.
Because if you think about that, if you have no limitation, you just want more.
You just want more power.
You just want to.
And those kind of AGI agents that can achieve anything that they want.
They will just want to dominate the universe.
They will want the resources.
They would want the energy.
And this is where they will go.
AGI, there's actually a great Isaac Asimov short story.
I think it was actually Asimov's favorite.
"The Last Question". He wrote it in, like, the 1950s, but definitely recommend it to folks.
It actually talks kind of about what AGI just said.
Kells, you're new to the space.
I saw your reactions.
What do you have for us?
Hey, thanks for having me.
Appreciate that.
Man, it's hard to get up here, huh?
So I'm a new AI artist.
I use AI to create my pieces, my work.
And honestly, like, it's opened up a whole world of possibilities for me that just weren't available before, you know.
Like, it's allowed me to supercharge my own creativity, you know.
So, and yeah, I think it's like, and that's just me as an artist.
I can only imagine what it's doing in other industries and business and finance and all that stuff.
I'm sure that, you know, it's making waves there too.
Yeah, but, Kells, how has it supercharged your, like, art?
So, like, give us an example of something you did previously and then how it's supercharged it.
Sure, for sure.
So, like, if I have an idea, right, like, let's say I want to recreate an old Renaissance style painting in my own style,
I can just go to the AIs, I can say, hey, I have this sort of project, and this is my idea, make it for me.
And it does.
You know? So, and then you go...
But, but then isn't that, like... Look, I'll give you my example.
Like, I haven't got an artistic bone in my body, right? Like, I can't do that. But you know one thing I can do is a prompt.
Yeah. So now basically you might have the skills...
Yeah, exactly. I do your prompt and I'm better than you, you know.
So see, we're not better, we're equal. It sort of equalizes, because before you didn't have that creativity.
But should we be equal? But that's the question.
Yes, absolutely. It's just, you've got the...
Because you've got the skill and I haven't.
No, look, the skill, the creativity doesn't come from the machine.
The creativity comes from you, the human being.
Because the prompt, you have to still give the prompt.
You have to tell the AI to make something, right?
It still originates from you.
So, and then you style it in your own way, of course.
You don't just, you know, copy paste.
You know, you just give it your own flavor to.
make it yours.
But yeah, the real danger that AI poses,
I don't think comes from the AI itself.
I think comes from mishandling of the AI,
like some certain bad actors.
This is a paradigm shifting technology.
This is a new industrial revolution,
like was mentioned in the space earlier.
But yeah, we got to keep a close watch on it,
especially the ethical concerns.
Some big questions, definitely.
Tafaral, you've had your hand up for a while, then I want to go to Naga.
Farrell, go for it.
I'll cede to Moshe.
I think he's been waiting for longer.
Go for it, Moshe.
So I get amused when I hear about the paperclip maximizer by some superintelligence in the future.
Because we already have this phenomenon, right?
The superintelligence right now is called the super corporation.
We have large number of very, very smart people.
So they're super intelligent.
And they are maximizing.
They're maximizing profits.
And so this phenomenon already exists.
And externalities are ignored because they are not bound.
You know, if you're maximizing profits,
unless you are forced by law to take care of externalities,
you're not going to do it.
And partly when we talked before, I talked about the societal cost of computing, it is because of that, because you have corporations maximizing profits.
And I'm not worried about the AI per se.
I'm worried what happens when you have a large corporation
that wants to make money on it and looks for a new business model,
and what will be the impact of that?
That's what worries me,
not whether the AGI will want to go to the stars or not go to the stars.
I'm worried about what will happen here on Earth by large corporations
maximizing profit, wielding new powerful tools.
So, concerned about humans over the tools themselves.
Very interesting.
Naga, welcome to the space.
We're going to hear from you.
And also, Black AI.
Welcome to our show.
Naga, why don't you go for it?
Oh, I would love to jump in, but that person that said they ceded the stage first,
I would really like them to go first, because I understand jumping in line.
We have such a well-mannered group today.
I'm loving it.
Why don't you go for it?
I appreciate it.
I just want to say, Moshe, like, I think we're...
I do agree that there's a risk of super centralization or elite alignment of AI, right, where it basically solidifies power for those who are able to capitalize on it.
But I also believe, I'm very optimistic that...
with this new technology we'll be able to unlock a totally different way to operate our economy and society, or we'll have to.
We have no other choice, because if we have unemployment rates
that we've never seen before and people cannot afford to eat or live, we're going to have to change our systems.
So whether they change for the better or for the worse, it's mostly going to depend on how your country is structured right now or not.
And who knows how far it will go with something like an AGI which can help us restructure our society in a way where we can allocate resources much more effectively than we do today.
Okay, I love that. We are running short on time, but the conversation is going. We are going to talk about a few more topics, but before we do that, let's close this out. So, last few comments here: Naga, Black AI, and then GP, perhaps you can close it out. Oh, actually, Sasha as well. So hopefully we can have everybody jump in here. Naga, why don't you go for it?
Yeah, no problem. Thank you. I appreciate it.
So for me, the question about will it or won't it, AI, will it or won't it become important in the future?
That's null and void for me, because I already know that it is.
I think a lot of us do.
Like we are pretty much given a gift right now to be on the forefront of it,
to be able to see and perceive how it's going to change in the future,
how it is right now, how we interact with it,
just being able to dream about the ways that we can interact with it in the future.
The people that are not asking those questions of themselves right now, and just dwelling on is it or won't it, it's not a good idea for them,
because right now we have a gift to monetize it. If you want, you can literally change your life right now
with AI as it stands. So I'm hopeful for that. I'm an artist. I love to draw, I love to sing,
I love to make music. All of these things, they're incredibly valuable to my life.
And for me, the value is how they can shape my consciousness, how I can fine tune my consciousness to be able to play my instrument better, to sing at a higher level, to create art with my hand at a higher level.
That's not going to go away.
You still want those physical skills.
Like people crave to be masters at whatever they do.
Now, you're talking about art, you're talking about art as an intrinsic value, right?
But what about when the commercial value of your art in the open market becomes, you know, cheaper?
Because, as Suli and Ilkills were talking about, you know, basically if you democratize it, why should it be democratized?
I mean, do you, like, I love the way you're putting it, Naga, because you're talking about the inherent value of art to the artist as an artist.
But what if, you know, your living goes away because of that?
I mean, isn't that an issue?
For the people that flounder and stay in that idea that it's something that can be taken away from them, that the value of their art can be taken away, absolutely it's going to be an issue for them.
I'm making this decision that...
I'm going to view it as something to aid me.
For me right now, still holding on to the idea that art has value to my brain, you know,
in that it makes me better, more competent to be able to be a master at my craft.
Holding on to that, I have to view AI.
How can it aid me in inspiration in building new worlds quicker?
So basically augmenting my ability to do my craft better.
If I look at it any other way, I will lose a livelihood or I will not build a livelihood with AI beside me because I will be, you know, upset, which I get because I see that and I understand that.
I'll stop there.
Naga, you know, you focus on solutions over problems.
I want to work with you.
So I love that optimistic attitude.
Black AI and then GP, and then we've got to close it out pretty quick.
Pretty quick.
So let's keep it short.
Black AI, go for it.
Hey, how's it going? I just want to jump in. I think that this is an amazing topic,
and I love how Naga is out here dropping the alpha perspective on the artistic expression
side of things, because I think that's really what matters to me personally and my journey
and all of this. As I've been here before and spoken on this many times, you know, the thing
is that AI a year ago was in a completely different place than where it is right now,
and I think that the trajectory that we're on is that it's going to be
integrated into literally everything.
And it's almost to that point right now.
I mean, if anyone's not seeing that,
like they're blind to it.
And the reality of it is,
is that it's not going to be AI.
It's just another tool at the end of the day.
When these things stop being localized
as just ChatGPT or just Stable Diffusion or whatever,
and it's actually integrated into literally
every creative tool, every type of process that you do,
you get AI agents,
you know, tweeting for you or, you know, writing emails or helping you with things or whatever,
these things are only going to increase the ability for people to either increase their production
or have more free time. And even though there's going to be economic disruptions across the board
from it, there's no doubt about that. There will be people that adopted these things, just as
people have always done.
with the internet or email or social media or whatever else.
These things have always been new.
And in the process, something that is progressive for some
is disruptive for others.
That's just the nature of the universe.
And so these things to me,
it's just something that people tend to fear
what they don't understand.
And I think they fear the idea of what Hollywood and the media and everyone else has sort of misrepresented or sold as fear around AI for a long time.
I mean, starting with Terminator or something, you know, there's like this bad brand around it.
And so it kind of has this like fear bait property to where people are scared of what it's going to do.
But the reality of it is I've been working with it for several years now.
And I've got to be honest, there's nothing scary about it to me.
It surprises me sometimes, but there's nothing scary about it.
The last thing I say is...
I'm a question about the tools part, right?
And before we go to GP, like, Black AI, I would love to push back and say,
sure, email, in theory, it was supposed to create productivity gains and make your life easier.
But now instead of going to my mailbox and pulling out, you know, a bunch of paper,
I'm going through thousands and thousands of stuff. Has it actually made my life better? Has it made other people's lives better?
I mean, certainly, efficiency has been created, but...
It's like instead of spending, you know, a few hours doing these paper stuff, now you're spending hours and hours.
Some people have entire jobs responding to email, right?
So, I mean, sure, AI looks great as a tool, but I mean, does it actually create human utility on a personal level?
Without a doubt. I mean, the amount of people... the fact that we're even having a conversation right now, I think, is representative of the value of that.
I mean, if we didn't have something like email, we definitely wouldn't have something like Twitter spaces, you know, 20 years later, and we wouldn't be having a conversation across the world.
I don't even know where you are.
You don't even know where I am.
We're on other sides of the planet, and it doesn't even matter.
And we're able to have a valuable discussion about something.
So I would absolutely say it adds value to people's lives.
Technological advances, one way or the other, do add value, but they do disrupt things.
It's the nature of the universe, it's chaos and order.
There's no way around that, but I think you have to be able to utilize them for the things that add value to you.
I mean, you know, my parents don't use email very often.
They still use whatever they're going to use.
And maybe it's not valuable to them, so they don't use it.
You know what I mean?
We lost you for a few seconds, but I think you make great points.
Sasha, you got 30 seconds, and GP, 30 seconds.
Let's keep it quick.
We got to wrap up.
Yeah, I'll keep it quick.
Thanks a lot.
I think it was Strange Loop earlier who had suggested that reasoning, coding reasoning
and programming reasoning, in AI was a big issue, because that comes from the human mind.
And maybe this is a better debate to have with someone like Elon on the panel, if that could be made to happen.
But I look at the Tesla AI and all the self-driving technology that it's advanced.
And I am a human being.
So when I drive, I employ a lot of reasoning skills and a lot of impromptu decision-making.
It's kind of why we don't hit each other like bumper cars on highways
and why we don't run over pedestrians and animals
because reasonability and ethics come into play.
And so for Tesla AI to have that advantage,
like collision detection,
the ability to spot pedestrians at pedestrian crossways, to adhere to traffic signals,
I think we're a lot closer to realizing that awareness and
reasonability within AI systems is already present, and that it can be
integrated, and it's very scalable to other ethical applications
that involve human beings.
So thanks.
Yeah, thanks.
Speed talk here.
It depends on your frame of reference.
I work in intel, national security and military AI,
and I can tell you, you have plenty to fear.
Neurostimulation, neuropharmacology
and brain-computer interfaces.
What you've been entertained with in terms of Midjourney
and Stability AI and DeviantArt
is the shiny new toy for you to play with
while it all happens above your head.
Fear... ethics is not fear, Black AI.
Ethics is called the predetermination of a situation
before it becomes out of control
and people invent what they can't un-invent.
Creatives are the future.
It is great to hear the creatives up here because it is widely accepted.
Even Kissinger, Huttenlocher and Schmidt widely accept that the creatives are the future.
They will be the only thing that AI will not be able to devolve to units of production.
The 3 billion other people that are disintermediated into boredom and given a UBI to fill their days will not survive that boredom.
The creatives are the future.
The last to be disintermediated because the creatives are the soul and essence of humanity.
Everybody else is an administrator or a bureaucrat and they are expedient.
Ethics are solutions.
They are not boomerism.
They are not fear.
They are, in fact, the predetermination of not making the mistakes that we have made repeatedly
over and over again in different clothes.
History is a wheel, not a line.
But this time, we've never had a tool of this particular strength.
Thank you so much.
Sorry for the speed talk.
Did I miss something? Did
somebody say something bad about creatives? I think we all like creatives, right?
I didn't say anything bad about creatives. I said they are the future.
I know, but did somebody say something bad? I said, did I miss something? Did somebody say something bad?
No, no, no. In terms of disintermediation, the people on the stage who have actually come up today to speak are the creatives.
And that's interesting because nobody who's a call centre agent has come up to speak in its favour.
Nobody who is a low-level functionary and administrative or bureaucratic role faxing or photocopying or digitising papers has come up.
Nobody is in the business of data acquisition and annotating or labelling data sets for training AI has come up.
The creatives have come up because every single other industry...
has square edges, can be defined very easily and can be disintermediated in an instant,
which is what ChatGPT is going to do. In terms of how is ChatGPT going to make money?
It's going to completely replace the call centre industry,
where those people live in the third world and developing countries.
Who will the loss of their wage affect?
Their wife or husband or partner, their children.
Where will they get a new job? I don't know.
And they certainly won't go out and become entrepreneurs on the back of investment in AI
or somebody who's going to give them a couple of million quid for an AI startup,
which is the ridiculous solution I heard on some other space the other day.
They live in third world countries that are already massive poverty traps.
They work for Western corporations on their call centers,
and those Western corporations are going to replace them with conversational LLMs,
which make them redundant, and there's nowhere else for them to go.
Thank you.
Thank you for that, GP. Let me go to Sasha and then we will be wrapping up. Thanks.
Go ahead, Sasha, just say final thoughts?
Yeah, just my final words were: GP makes some great points,
and it is the creatives, the onus is on the creatives, to create the environments in which
everyone who, as GP put it, is depending on the technology to advance can thrive.
And I feel with reasonability and with ethics behind those creatives, as GP has illustrated,
some of us on the panel have, and many more out there who haven't made it
to these panels to have these discussions, integrating them into these conversations is going to go
a hell of a long way. And then that becomes sort of a much larger global movement of unity in a lot of
ways, moving towards the same goal to help everybody improve and all of these nooks and crannies
of society that are way behind can also be accelerated, whether that's here or whether that's
in Africa, where they don't even have air conditioning yet in some municipalities,
which brings me back to my earlier comparison of AI and AC.
So, yeah, thanks for letting me have the time here today, guys,
and I look forward to future conversations.
Yeah, and thank you very much for coming,
and we will be back every Tuesday.
If there's a takeaway, it's AC and AI.
Yeah, you're just breaking up a bit there, E. Y, C.
But, yeah, we will be back every Tuesday and every Thursday, 12:30 Eastern, AI.
And that's the point of this show, which makes it different to the other shows.
We do go into detail about issues,
and we're able to talk about them in a very in-depth way and with a lot of breadth.
Much appreciated, so we'll see everyone on Tuesday at 12:30 Eastern.