ChatGPT Leads Church Service | #AITownHall with @BrianRoemmele

Recorded: June 16, 2023 | Duration: 1:26:21
Space Recording

Short Summary

The transcript covers significant fundraising activities in the AI sector, with companies securing substantial investments to advance their technologies. Discussions also highlight trends in AI media dominance and regulatory challenges, reflecting broader industry shifts. Additionally, the lack of regulatory clarity is noted as a factor in the crypto market's struggles.

Full Transcription

Sully, do you want to accept the invite?
Can you hear me guys?
Yes, we can.
Yeah, it sounds great.
All right.
How does it feel to be a speaker, Sully?
Bro, it doesn't matter to me if I'm the host or the speaker.
I am the show, bro.
And you're late, by the way.
That's why the show is late.
I apologize, guys.
I'm sending out the invite.
Man, it's going to be a fucking mental space next week with the guests that we have.
So there's guests that we had, the special guests that we had.
For me, Mario, I'm not just hyping it.
Like, I know you don't agree.
But for me, it is the biggest guest we've ever had.
You can't say more than Elon, man.
Elon is more famous, but he is the biggest guest because, when you exclude the fame,
you include the situation that's occurring.
You include the exact timing that we've got him.
No, that would be, bro, you're saying it's more important than Joe Biden or Putin?
No, if we got, we haven't got them, though, have we?
I said it's the biggest guest we've...
Oh, sorry, I'm an idiot.
What a stupid response.
Sorry, I'm inviting speakers.
That was a dumb response.
It's okay, we're used to it, but this is the first time you realize that it's a dumb response.
I've sent out all the invites.
I've got two more.
So, I have something to tell you, Sully.
I think I already know, but go ahead.
No, you have no idea, bro.
You have no idea.
By the way, guys, so Eugene and Alex just check out the requests that we get.
I don't know, so make sure you vet them and invite because you're deeper in the space,
and Sully's not co-host, so we're relying on you on this.
No, man, you don't know.
I had a joke, but it's kind of dead now.
Yeah, so it's going to be a good space today.
We've got...
We just finished a crypto space, the crypto town hall, which is the best space we run on a daily basis, our best daily space.
Thank you.
Yesterday we did a nightly show.
We talked.
You had, bro, yesterday's space was lame.
Your topic, your panel, everything was lame.
I saved your ass.
Ignore the fact you didn't sleep.
It was a lame topic.
It was a dead panel.
It was a shit show.
And then Mario found a thread that the team sent in the group,
organized a panel, lit it up, and it became an incredible LGBTQ+ debate, which always —
I give you that.
I give you that.
That's true.
It's the first time you admit it though.
I always admit it, you're just wrong most of the time.
All right.
So let's kick it off.
Let me see the panel.
It's good.
The invites are up.
Let's do this.
Alex, Eugene, what's the panel today?
What's the agenda?
By the way, guys, we do have a WhatsApp group in the background so we can check it there
for any messages, any questions, anything at all.
If there's any interesting news, so we just check out WhatsApp group, Alex, Eugene.
And yeah, let's kick off the discussion.
Guys, what's on the agenda today?
Sure, yeah.
How are we going to dominate the AI media landscape?
That's topic number one.
So what's our strategy?
How are we going to dominate AI media?
Well, I think first off, being here in Cerebral Valley — where I'm hailing from, in San Francisco — is not a bad start.
I mean, there's just so much going on here.
There's like 44 AI events in like two weeks.
So, you know, I think the landscape is changing so quick.
The strategies around how to invest
are changing really quickly.
And number two, the strategy of how to dominate the media landscape, I think it's going to be real interesting.
It's going to evolve very quickly.
I think we want to kick it off with actually just some news as to like what's happening.
So, you know, there's a lot going on.
But, you know, Alex has a few things.
I know he's got a great newsletter.
Go check it out.
But yeah, Alex, what are some of the top three things we got going on this week?
Yeah, definitely.
And to answer Mario's question, this actually isn't even Alex.
This is an AI agent that's been trained on basically petabytes of my data.
So that's how we're going to do it.
Alex is out doing work right now, and I trained this on ElevenLabs.
You know what's concerning, Alex, you know what's concerning?
You didn't pull it off properly.
You should have trained yourself.
You know what's concerning?
I was just sitting there checking messages.
When you said that, I immediately paused everything.
I started really focusing to figure out if you were accurate or not.
Because we had someone pull a prank on us, did this.
Justin did this a few months ago now, a couple of months ago.
Yeah, continue, man, continue.
Actually, Mario, on that, though,
like, seriously, all the data we're spewing up
from Twitter spaces, I mean, not too long from now,
they're probably going to be just pure AI Twitter spaces
based on just these recordings, right?
Man, the amount of content we have,
you can easily make a new version of me
that's better than me, for sure.
It'll be different.
Yeah, you're the first to get deep faked.
Sorry, go ahead, Alex.
By the way, Alex, just jump in.
Like, if Sully interrupts, you ignore him.
Go ahead, Alex.
What's the news?
Yeah, so definitely. So we have a few topics today. I think, you know, me and Eugene will just kind of bounce back and forth here on the first couple. So the headline story, which we'll talk about in more detail, is just talking about this concept of regulation. The EU is...
Once again, out in front, we know they're always very aggressive when it comes to policymaking.
Just look at GDPR in terms of data protection.
So I'll pin like a thread that gives some context.
So we'll be talking about that in depth and basically debating like, is it too early to be regulating this space?
Are they doing it right,
you know, are they being smart and getting out ahead of it?
So that'll be one topic.
And then, I don't know, Eugene, if you want to just hop back and forth for a few more,
and then we can kind of dive in.
I had, sorry, I've got a question on that one.
So, people find the regulation discussion,
when we're talking about regulation, very, very boring.
It's probably the reason why crypto is just struggling right now,
is that lack of clarity when it comes to regulation.
But there's one interesting thing that was set on the stage
by one of the regulators in our crypto show a few days ago.
He's like,
I think it was Bruce Fenton.
He goes, regulation for AI was done 10 years ago.
And it went through the same process we're going through now.
And now crypto is going through it?
So then my question was, my question to you now is, is that true?
Because I didn't know there was any talk about regulation.
Whatever, five, 10 years ago.
I think he said about eight years ago or so.
Is that an accurate statement?
Because as far as I'm aware, AI doesn't have regulatory clarity or was something done
eight years ago.
Well, it depends on what you're talking about because there are multiple initiatives, basically.
I mean, there's the initiative — the Global Partnership — there's the United Nations initiative, there's the AI Act.
So there's a lot going on.
And some of them have started, you know, did start these initiatives years ago.
But like, for instance, the Global Partnership on AI didn't start until 2020.
The EU AI Act has its roots in a directive from 2008,
as does the European Human Brain Project.
With respect to the EU being aggressively regulatory, they are.
However, there's a massive difference between being aggressively
regulatory and actual enforcement.
But why does it matter?
So, Alex, Eugene, why does regulation matter right now when it comes to AI?
Yeah, I mean, I think it's going to have a serious impact on the EU's startups.
I mean, like Paul Graham from Y Combinator said, basically every EU AI startup should probably leave the EU, right?
So it's going to have a huge impact on, like, businesses and how they operate, startups and how they operate.
So, but regulation is important because, I mean, people like Elon are saying there's existential risk, right,
from AI and eventually AGI.
So I think that's kind of like the big,
that's like the big, you know, 800 pound gorilla.
So I'm just talking about Elon, by the way,
he said today a few hours ago,
he said that there should be a pause
on the development of AI and that the AI sector needed regulation.
I think he said it needs regulation.
It came out on Reuters about
an hour and a half ago,
and he was giving a speech,
he was doing an interview about two hours ago.
Mario, he made that statement,
he made that statement eight weeks ago,
he made it when he met with Chuck Schumer,
he's made it constantly — yeah, correct.
And him meeting Schumer is,
is a, you know,
is a testament to the importance of regulation.
But, you know —
Does that mean self-driving cars are not going to be a thing?
I mean, there's a contradiction there, right?
They'll never be a thing because, frankly speaking, he's got a moral dilemma that will never be solved,
which is whether the vehicle owner, the manufacturer, the vehicle, the software developer,
or the owner of the infrastructure is responsible for the moral decision in an unavoidable collision
between two sets of humans as to whose life is valued most when the car is, the vehicle is in autonomous mode.
Tesla shareholders have been led down a path
of expectation for autonomous driving
that will never happen on the road infrastructure
that exists in our societies today.
And massive amounts of research
from Tesla are going external,
benefiting other entities.
Actually, this is worth a discussion.
I want to push back a
bit because, you know, I appreciate a lot of what you said, but, you know, I'm walking around San Francisco.
There are self-driving cars right now, right? Like, literally, one almost hit me, like, in, uh, in the
Haight, and, uh, you know, it did stop — and a weird thing, it didn't drive like a human, but it drove
fine and it didn't, you know, hit me. So, like, does anyone disagree with GP, out of curiosity?
Is there anybody who thinks? Yeah, I, I, I,
So my earlier point was more along the lines of, like, the contradictions in these statements — that we should stop the existential risk of AI,
but at the same time, you know, the same companies are promoting it — is what I'm kind of confused about.
But yes, EYC, I do see the cars driving here, but there are mostly humans sitting in them most of the time, because they're training them.
I'm in San Francisco as well.
So Cruise does it a lot,
but GP's
point is totally different,
which is also another can of worms.
So yeah, the
Yeah, I mean, there's plenty of autonomous vehicles.
The idea that — he said 10 to 20 million autonomous vehicles on the road
over the next 24 months.
Now, you know, I mean, I was behind several Teslas today.
They always have a human present.
If you're a human in autonomous mode, when the moral machine decides to knock down the two
elderly people with a dog versus the eight-year-old child with a school bag,
then who's liable, the insurance company...
the driver, the car manufacturer, the software developer who programmed the code to say this is the selection choice, based on the recognition of the facial features and the age, ethnicity, and so on of the people?
These are facts Elon never speaks about, and I'll wind on this.
He constantly talks about existential risk.
But we suffer from the Oppenheimer effect and we will not have a pause in AI development because you cannot un-invent things that you have invented.
That's why technologists are so dangerous when they're not accompanied by adults in the room from other disciplines, especially when it comes to AI.
And that's why the EU AI Act is ridiculous, because it has four areas of risk: unacceptable risk,
high risk, limited and minimal.
And in all of them, they are poorly defined.
In all of them, they fail to declare
what are the differences between real time
and near real time acquisition of images,
for example, for facial recognition.
And also, let's not forget, GDPR failed utterly.
So did Safe Harbour and Privacy Shield, which were deconstructed by one man, Max Schrems, starting in 2011 —
the egregious exchange of data between the European Union and the United States via Ireland.
GP, sorry to interrupt. I want to stick with cars. I think you made some great points there.
So, I mean, this is a debate, and you teed it up very nicely for the audience.
So I want to make sure that we get both sides of the debate.
The car thing, the way you frame the moral questions are actually really interesting.
And I'd love to see if anybody thinks the other side.
to kind of recap and to kind of add color.
Yeah, one second, Sphinx.
Let me, let me tee it up, because the question to ask is, sure, like the car —
who's going to be liable?
The engineers, you know, the car itself, the whatever, right?
Like, who has the liability?
The legal question is interesting.
But the problem, right, is going to be what if it actually is safer to have self-driving cars,
So the overall utility of humanity goes up because there are fewer deaths, right?
So that's a question I have to ask.
So I know Sphinx, then Strangeloop, but yeah, what do you guys have to say
in response?
Yeah, I just wanted to say,
while GP brings up some very interesting
and worthwhile,
greater philosophical issues or moral issues,
the questions he's posing
and the ones you just pose,
these are legal issues.
These can be resolved.
These can be, you can have studies telling us exactly whether these self-driving cars are safer or not.
The driver can sign — I'm sure they will be signing — a waiver when they buy the car that if they're in a certain mode, they take responsibility for any accident.
I mean, some of these come down to being practical legal issues.
And we may not have all the answers yet because the technology is so new.
But don't be, but, you know, rest assured, those things are not going to stop self-driving cars.
So just to jump in here, I'm curious if anyone wants to play the other side of this.
So, you know, obviously we've heard from Sphinx and a few others that this will just be solved.
The regulations, you know, basically not going to hold it up.
But I'm curious if anyone wants to take the other side of this or if someone has a really strong opinion one way or the other.
Because I think as Mario brought up before, regulation can absolutely impact an industry.
We're seeing it with crypto now.
So I don't know.
I don't know, Strangeloop, I think you had your hand up; then Brian, we haven't heard from you yet, so it'd be great to tag you in, too.
Sure, I'll be brief.
So, yes, so the other side of it is that, okay, some of the aspects of self-driving, you know, that are automated, they do save lives — like, you know, instant braking, where humans cannot make that split-second decision.
So, you know, I'll still put that in augmentation.
And to GP's question about the, you know, the morality aspect, that's a big...
you know, big kind of... it's a gray area where, you know, legal and all will always have issues.
But then my question also is to the community as well is what are the possible solutions.
I mean, technologically, we are not there yet.
We are not in the Jetsons era where we push a button and sleep and the car drives us there, right?
You've seen limited demos from Google and other companies, which they do in limited cities and under limited conditions.
And so the world is much more...
wild out there, you know, in terms of the variables that can happen. So yeah, I don't know if that's
a counterpoint, but it's slightly counter in the sense that these things, I'm not a Luddite,
and I would say these technologies will help us and assist us, but there's still going to be the
human in the loop for a long time to come, in my opinion. I like that, Strangeloop. It's an interesting thing.
Brian, I know you have some thoughts on this. I was just hearing you speak in the past. Do you have
anything to add here?
Well, thanks for asking. So let's look at it from a backwards kind of view, right?
Humans have used machines for quite a long time. We're now trying to assign intelligence to these machines and some sort of agency and sentience.
And thereby trying to regulate it, right? So...
Regulators are going to regulate.
That's their job.
They look at anything new, and that's what they salivate over.
And that's what's going on with these new laws and rules.
The impact...
You know, I can make it very finite if you want to look at self-driving cars.
The person responsible is going to have to be the person that's responsible.
Who is that — the owner of the car, the person behind the wheel,
or the person sitting there having a gin and tonic in the back seat and relaxing?
They're going to have to ultimately hold...
responsibility. Now, lawyers are going to litigate and sue each other and do that, of course, and
at some point the courts will figure that out. My deeper concern is not whether the AI algorithms
are running over the older woman or the baby in the stroller. I mean, ultimately, those things
are actually not going to be calculated in my view. I work with a lot of AI people. I write AI code myself.
those things aren't being calculated. What's being calculated is probably the likelihood of the least amount of damage in the situation.
And it's human damage, yeah, that might be calculated, but it's not going to go down to that point, at least not to my knowledge anytime soon.
So we can contemplate that, but that's kind of quite a ways off.
The bigger issue is...
One second, Brian.
I think that was interesting.
And by the way, a reminder to the audience,
there is a purple button on the bottom right.
I was noticing some thumbs down in the audience.
If you have some ideas and you have some thoughts,
please press the button and tell us your thoughts, right?
And if we like what you say and we think that you have something to add,
we will bring you up.
So, yeah, please do that.
So Brian, please go ahead and others who want to respond.
So let me just finish my thought.
Anytime I hear anybody in the industry from Sam Altman to Elon Musk to regulators,
I want the word defined.
What do you mean by safety?
What is the fear and what's the risk?
And let's break that down into its proper categories.
This is something every single person listening to this should build their own discernment on.
We've got to stop with these generalizations.
What are you afraid of with AI?
What is it exactly?
Put it in sentences, list them in order, and let's address them one by one.
But if you throw a general term...
you know, we need rules for safety.
What are you saying?
You know, I've had this debate now for some 35 years.
And I started basically trying to hyper-define this over the last 10 years.
And I have, frankly, found very few people can be articulate enough
to really raise a lot of things other than your standard dystopian, you know, Terminator Matrix type thing.
But, you know, throwing the general out there like we're seeing in the EU and we're probably going to see in the U.S.,
what's going to take place in the U.S. is going to be actually much
worse than what we're seeing in the EU at this point.
Brian, I see a lot of hands based on what you just said.
Josh, in particular, seems to be champing at the bit.
Do you want to jump in?
Yeah, no, I really appreciate it.
Brian, you're making some really good points.
And I do want to note for everybody, AI has been around for, I mean, half the century in use in different ways.
And, you know, I do understand with...
AI safety, the inclination to go to self-driving cars.
And Brian, to hit some of your point and answer some of your question, I think a lot of people are concerned about the scale of deployment and how incredibly powerful the AI is now.
And just to harp on that, you have things like deep fakes, misinformation during political campaigns.
Let's focus on each one of these things because these are always the things that are thrown up.
What exactly about the scale are we concerned about?
Well, with the Internet of Things —
Well, if we're not — Brian, I'll just say this to you.
Brian, sorry to interrupt, but maybe it will be —
One at a time.
Yeah, this is really bad because this space tends to do this frequently.
And I, you know, that was the most patronizing statement I have heard in quite some time from Brian.
Because it betrays exactly what's wrong with technologists.
What's wrong with technologists is they think knowledge of the technology gives them the moral authority to demean the other sciences that must play a pivotal role in ethically aligned design of artificial intelligence.
Now, also, they play fiddle to this idea that we're in the fourth industrial revolution and that we've always had machines and these are just different kinds of machines.
That is untrue.
That is fundamentally untrue, and Brian knows it's untrue.
We have had machines external to us for mass production, utility, specialisation and automation.
They were tools that changed our cognitive ability because we interfaced with them externally.
AI, weaponized AI, neuropsychology, neurostimulation, and brain imaging, and all the other things that travel with AI are existential risks because we are messing with us, not things external to us.
Therefore, this debate must have philosophers, psychologists, everyone from the inexact social sciences, anthropologists, medical people.
Everybody must be included in this.
It's not about weights and biases.
It's not about your LLMs.
It's not about your fancy different models of AIs.
It's about the existential risk to society represented.
by too much power in the hands of too few technologists
with low emotional intelligence
and low understanding of the difference
between absolute truth, complex truth and relativism.
And to Sphinx's point about the legal dilemma...
I want to let Brian respond because you said some great points.
do you want to respond directly to those?
We will go to related,
by the way.
So first off,
anybody can follow me.
I'm hoping GP might want to try to do that.
I promote personal AI over network AI,
almost exclusively. I work with open source communities. I champion that. I program. I donate a great deal
of my time to build the code to make sure that happens. You know, again, when we start talking about
existential risks — and I do appreciate what he's saying, I absolutely agree; that's the funny part about it —
I come from a psychology background. I come from a
neuro-linguistic programming understanding — hypnotics, all of these different things that are taking place.
Subliminals have been in our advertising for pretty much 65 years.
I understand these impacts, but when we start throwing all of these things up in the air and we don't identify them and we don't articulate them, first off, we're actually doing a disservice
to that particular line of thinking, because we're not defining it.
We're not actually giving it its due justice for consideration.
If you really want to honestly look at the way this is playing out in the news cycle,
The agenda of the news cycle is actually the existential risk that journalism and journalists feel that they might face, because AI could write something better.
So you can see where those weights and biases come from, from somebody that might write that.
That is untrue, actually.
Actually, journalism is going to become a far more valuable occupation.
So many things in the creative arts are going to become far more valuable if you understand how to use the new tool that's being given to you.
As far as, you know, general artificial intelligence and what that really means,
You know, look at some of my Twitter feed — the animal videos I've been running for the last seven years.
Most people don't understand why I do that. I don't do cat videos. I show intelligence in other species.
It's very important to understand that we're not alone on this planet, and there are much more intelligent
forms of life around us than we could possibly realize.
My most recent one, we have,
I think it was some sort of bird dealing with,
you know, some ocean creature — I'm trying to remember exactly what.
Hey, Brian — you bring up a lot of great points. I want to harp on
one thing you said, which is related to the creative industries, right?
Like, I know this is a controversial topic.
I actually had this on my Twitter feed just this morning, where people are going back and forth.
Josh, I want to bring you back in, actually, too, because I know you were in the mix here
with GP and Brian. But perhaps a teed-up question, which is: do we think the creative arts
and the artists who are currently creating are going to benefit, as Brian says,
from the rise of gen AI?
Or do we think that there's going to be disruption?
I'm, by the way, on the disruption side in the near term.
Like, near to medium term, probably within, you know, most of the ensuing years. But Josh,
I'd love for you to jump in.
Yeah, look, I think that there is a massive amount of disruption coming.
If you can generate long-form text where you can't tell whether it was written by an AI versus written by a human.
And same with images.
There's obviously a disruption element there.
But I also see a whole different curve happening where the folks who are trying to take ideas to impact may be limited by their physical capability, or by a skill capability
such as coding.
A number of these things, AI, the tools that are around now
are far more than just language models producing blog articles.
These are tools now capable of creating entire digital infrastructure.
And with that and then also with artists being able to, you know, imagine a photo and then try to get there through AI rather than their paintbrush.
There's some interesting ideas of what we could see there.
And so to me, I think, yeah, it's absolutely disruption, but I also think there's a whole new curve of art coming where we can really get even deeper into the human mind, because we have tools to kind of generate massive amounts of art that don't need to be expressed through the paintbrush or pencil.
So just some thoughts there.
Yeah, it's interesting.
I used to work at Pixar.
It took a thousand people and $100 million to make a movie.
So, a 100-minute movie —
a million dollars a minute.
I suspect that same movie in a few years will be able to be made with 50 or even fewer people.
So like two orders of magnitude less.
And then you have more content coming out, right?
So you have more creators being able to produce more things.
And statistically, you're just more likely to have more good stuff.
Yeah, which is why the content is declining in value. Even if you look at some of the top 10
largest musicians in the world — which, I'd assume, all 10 of them are in the United States —
many of them decided that the future value of their catalog of music would be worth less,
so they sold it to private equity companies. See Taylor Swift, see Justin Bieber.
There's a number of others that I can name.
I'm sure there's been multiple transactions on the Beatles catalog.
Some of the catalogs are held by estates or high net worth individuals or family offices,
but everyone agrees the value is declining.
And then look at what streaming music did to the revenue stream for musicians to sell their music.
you pretty much can't even enter the market as a musician thinking that you're going to be able to sustain your career off of music.
Even art — like, hopefully the museums and the art institutions never allow somebody to use natural language to speak to a computer system that goes through a database of images and
concocts shit. I think that would be lower than the way that they used to abuse street art in the '60s and '70s.
But again, wider availability means that there is a decrease in the value.
So even if people can access the...
Christopher, what you're saying is really interesting.
I guess the question is going to be about this.
And what you said, I think you've hit on the excellent point.
Because if you look at the example EYC gave,
so yes, you have these movie productions —
let's take animation as an example.
And so, yeah, the quality, one could say, seems to be better.
And I accept that there are others who go, like, for example, let's say, people who hand draw, right?
Like, someone like Miyazaki, because he's better than everybody else,
he's obviously going to have some kind of niche.
But anybody who's good, but not at his level, is probably going to —
they're going to lose out.
You're going to lose an entire industry.
So is that not a concern in terms of AI?
So someone like Miyazaki, the top guy, he's going to be there.
Because, you know, he built his niche and he built his reputation.
And will the next Miyazaki even get an opportunity?
I think it's not, sorry, and it's not just drawings and having these digital assets. I mean,
you have to tell a good story, right? Because you're making it for humans and we often tend to
forget that. And we say, okay, these tools are great. They are still tools, in my opinion,
humble opinion. You know, these things still serve us. And we
are not at that point yet. I think the regulation — bringing it back to the regulation, what Brian was saying earlier — I think they have been very specific this time around as to, you know, what they will be regulating in terms of AI and the emergence of ChatGPT: this mass disinformation where you could create fake videos of something and, you know, content that is fake — and how do you verify that?
You know, it's pretty specific, in my opinion, this time around. Right, yeah. And, uh, lastly, um,
uh, I had a point I kind of forgot; I'll come back. So, Strangeloop, actually, to build on that, and then we'll go to the
next: um, you know, I think what was just said is, so in entertainment — like music, movies, etc. — the power
law, like, meaning all the value, or the majority of the value, accrues to the top artists, you know,
is more powerful than ever, right? Especially in this kind of digital age, right? So if you're,
you know, the best local golf player and there isn't the internet, you know, you might be, like,
marginally better. You might make marginally more money than, you know, the person in your local
country club. But with, like, a global, you know, base, where lots of dollars are flowing in, it's like
the top, top people, like the Miyazakis of the world, are going to get all the money and all
the value. The question, I think, is a good one.
Because of gen AI,
Will the new folks, the new Miyazakis, even get the opportunity to do their 10,000 hours and become the next great players?
I'd love for people to jump in on either side of that debate.
That's a good question.
Yeah, I think my thought on this is that there is definitely a huge benefit to taking the time to create something, right?
As you learn a lot about the work that you're creating, the subject matter, right?
It's like, for example, if you're trying to understand a feeling that you have: you go to, like, a therapist, or you have a long conversation with a friend, and you kind of need that long-form process to really
kind of build that out. And I think that as we go, we're still learning how to use these tools.
And it's one of those things where, yeah, you can spend all your time like on your phone,
talking to people on the internet or something like that, but that's not the best way to engage with social media.
And I think we're going to learn in a similar sense that that's a similar approach that we have to use with AI.
where AI is going to facilitate really fast creation.
And that's going to be great because a lot of people who have great ideas,
but maybe haven't developed or put in the time to generate the skills
to then bring those ideas to life can now get in on that process.
But I think that at the same time, great creators will try to find that perfect balance
between, okay, I want to create this quickly and effectively,
but I also want to take the time to really develop and, you know, scope and, you know, articulate this idea.
That sounds good. I think I had GP or Brian.
Do you guys have stuff you want to say to that?
I'd just like to say thank you to Brian for taking my sort of ranty comment on what he said.
I was, well, a little bit worked up there as a result of other things going on.
What I will say is that the debate that Brian speaks of — write it down, line by line — that has been done, Brian.
And the problem is that in the forum of spaces, it is impossible to keep it on topic.
So it needs a one-to-one debate with an audience and then a Q&A.
These existential risks — nobody's a Luddite, nobody's talking about a dystopian panopticon,
nobody's talking about Terminators roaming the streets.
What people are talking about — well, what I'm concerned about specifically — is people like Dr. James Giordano,
who calls himself a neuroethicist and a bioethicist.
Now, he's responsible for the militarization of neuropharmacology,
neurostimulation and brain computer interfaces,
all of which are riddled with AI.
The difficulty with the regulatory landscape is that the regulators regulate for broad brush areas of interest,
But that betrays the lie that the people in the populace understand how they're being manipulated
directly and indirectly by the implementation of these technologies, regardless of the regulation.
And that is, to wrap up,
very well illustrated by Dr. James Giordano's comment, as he asked for the applause to be paused while he went to the podium at West Point, at the invitation of the Modern War Institute, when he said: please don't applaud,
because I feel too much performance pressure,
but more importantly, applaud at the end —
or, better still,
I'd prefer to hear the slamming shut
of your sphincters with fear.
Now, he went on to say that he will indirectly and directly tell the audience through his lecture in the following 90 minutes that they will, in their military and civilian lives, come across covert influencing using AI and neuropharmacology and the neurosciences
and they will not be aware, and this will be deployed
through hybrid, non-linear warfare and the three doctrines —
or the Three Warfares — of the CCP's doctrine.
The difficulty is, regulation or no, experts or no, us or no, this space or no: there are 8 billion people on the planet, and about 7.9 billion of them will be utterly unaware of the impact of this subliminal, invasive, remote influencing of their thought patterns — not through subliminal messaging, but by direct interfacing with our neuropsychology using AI.
GP, that is such a colorful way to put it.
I think a lot of the audience will remember this.
But speaking of AI, I do need to jump in.
So we have a sponsor here on this show.
And IBC incubates and accelerates both AI as well as Web3 companies.
And they partner with VCs and funds to work with their portfolio companies.
in return for equity, zero cash.
So if you're interested, DM Mario,
and then his team will get a call organized with you.
And by the way, we're gonna be doing Shark Tank style pitches.
We have actually been doing them on the crypto,
as well as the AI spaces.
People seem to like them.
So if you are a startup or portfolio company
that would like to pitch, hit up Mario and the team,
and we'll do the process.
And don't forget to subscribe.
All right, so we're gonna shift gears.
We had a wide ranging discussion here, but
Let's go ahead.
I want — Alex,
let's go to you and let's talk about —
let's tee up the next
kind of set of conversations, right?
We've had a lot of interesting funding news.
There's something I'd like to say.
Go for it, thanks.
So I do think because Mario brought up a really important point at the beginning
because he compared this, the AI regulation, to the lack of regulation in crypto.
And I was here for that.
And what we had in 2021 was everybody making millions of dollars running around elated.
And what do we have now?
Well, we all know what we have now.
So and everyone the whole time was saying, this is the wild, wild west.
That's what people in crypto were calling it.
And some of us were saying, no, it's not the wild, wild west.
Crypto exists in the real world.
And so does AI.
But I think it's important to distinguish between technologies like self-driving cars and, for instance, AI surgeons.
Yes, they're on the way that can physically harm people and tech that can make, create fake content of real people.
So I think that to say — to think that regulation is not, uh,
required in any way, I just think that's ridiculous. Because in that case, that's fine,
but what you would need is the tech companies, once they create a product, right, to then wait for the counter-product.
Meaning, if you can alter someone's voice
to make them sound like they said something they never said.
You have to wait for the technology that can detect that for the average user.
Now, I know someone's going to say, oh, but it's there.
But it's there for industry.
It's there for corporations.
It's there.
But it's not for the average user. It's not like a Shazam on my phone where somebody says, look at what
someone said,
and I say, hold on, let me AI-Shazam it.
And then I can say, nope, that was fake.
Until then, it would have to be up to the tech companies to be responsible, and they're not.
So I'm very pro-AI in some ways, but at the same time, I'm seeing similar trends that I saw on crypto.
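(A sketch of what such a consumer "AI Shazam" could look like under the hood — purely hypothetical, since no such off-the-shelf tool is being described in the space. It assumes a labeled dataset of real and synthetic clips already exists; librosa and scikit-learn are the assumed open-source libraries, and MFCC features plus logistic regression are a deliberately crude stand-in for a real detector.)

```python
# Hypothetical "AI Shazam" for audio deepfakes: train on labeled clips,
# then score a suspicious recording. Crude by design -- a sketch, not a product.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as its mean MFCCs -- a classic, compact audio fingerprint."""
    audio, sr = librosa.load(path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Placeholder file lists -- hypothetical labeled data you would have to collect.
real_clips = ["real_0.wav", "real_1.wav"]
fake_clips = ["fake_0.wav", "fake_1.wav"]

X = np.array([clip_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 1 = synthetic

detector = LogisticRegression(max_iter=1000).fit(X, y)
score = detector.predict_proba([clip_features("suspicious_call.wav")])[0, 1]
print(f"P(synthetic) = {score:.2f}")
```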
So I thought that was a good analysis. Thank you.
Thanks, thanks.
Josh, are you worried at all about, you know, about fake people, fake bank callers, scam
callers calling with other people's voices?
Yeah, I just want to add on to Sphinx's great point.
I'm just saying that you have to create infrastructure to keep up with these problems or
the damage will get out of control.
And that's what all industry regulation is about.
I think Sphinx is hitting it right on the head by saying, look, until I have a way to reverse
engineer that or the opposite of that tool.
I think there's some legitimacy there,
but I think we can even do it before then.
I think laws and rules,
and groups that can look into this type of stuff.
Those types of folks need to be put in order and organized before you let AI kind of run things like finance, run things like accounting, start building massive amounts of things on its own will.
We need to learn how to check all this.
We need the infrastructure to do that, both governmentally and privately.
So, Sphinx, I really appreciate your point.
I just wanted to harp that we do need this infrastructure, in my opinion.
So, Josh, you know, we have a lot of AI FUD on the space, perhaps rightly so.
Is there anybody who wants to play the opposite side?
Who here is not that scared of AI and its near-term or long-term risks?
I'll answer that.
Should you have a valid fear of AI as it exists today?
Absolutely.
But what is that fear? Define it.
Don't make it undefined.
We just lived through half a decade of undefined fear.
Let's kind of slow it down and not project that onto AI.
And I think that's a lot of what's going on.
As far as fakes, well, we've been living through that probably for the last half decade.
We'll realize that
probably in another 30 years — what was actually not very real. The artificial intelligence technology
we have access to right now, others have had access to it. Let's just say what we are accessing
is already about 25 years old in some regards. So that horse left the barn. As far as
regulation and law, I really love regulation and law when it actually can have an impact.
I don't like it when it just creates more bureaucracies and it actually doesn't solve the problem that we're trying to fix.
For example, there is already the ability, on a local computer, without any company's involvement, to fake somebody's voice.
That horse left the barn.
You can make laws.
You should criminalize it.
Yeah, right?
You should criminalize that somebody is doing a scam.
Whatever tool they're using, a voice changing system or whatever, those laws are already on the books.
But somehow trying to stop AI because...
big companies are going to do something and will regulate it in some big EU commission
where there's 90 people sitting around the table and none of them really drive a car anymore
or go to a grocery store — and they're going to somehow regulate the real world that we live in?
Now, I don't know if that's going to solve the problem at all.
So understand that any time that we're trying to do something on one side of the balloon,
it's going to expand on the other side of the balloon, meaning...
If you are taking the tools away from the open source community — which, I would like to say, is really normalizing this to a greater extent for the average person — if we take those tools away, the EU's rules as they stand right now will impact a lot of the open source community by
overtly putting regulation and burden on them that they could not afford — hundreds and thousands,
if not millions of dollars — if they develop something.
What's going to happen to that technology?
You're not going to stop it.
What we need to do is lift
each of us listening here up to a higher level of discernment.
You know voice technology exists, you have friends and family.
You have to create a code word — it's really that simple.
I already have it with my family, right?
And that code word should be something that is obtuse and that only the family knows.
And then you're not going to be fooled by that.
You see a URL in your email that looks like it's your bank.
Well, I don't suggest that you click on it.
I suggest you do something else.
Now, there are rules and laws that protect you from malicious emails,
but have they stopped malicious emails?
So we're sitting here wanting to, it's like if you've ever been in a homeowner's association,
where people have minor power over others,
they start just creating all sorts of ideas of regulation and how to fix things.
And they really don't fix things.
They make everybody's life horrible.
And so that's kind of my thing.
There is an opposite.
Hello, what happened?
Can everyone hear me?
Yeah, the room muted.
No, no, someone muted the room accidentally.
Maybe it's Alex or Mario.
All right.
I had a counterpoint.
Can I make it?
So thanks,
Sully, and thanks —
I think Sully raised a point, and Mario earlier.
And to Brian's point,
I agree that,
you know, this is out of the bag, like the email effect, or like having somebody's voice modulated.
You can't control it.
It's very hard to — you can put regulation around it and deterrence.
It's almost like a deterrence, like penalizing people who do such sorts of scams, but it cannot be stopped.
I think what regulation is good for is the military applications — you know, having the folks in the military not have this sense of, hey, we want to give full control to these autonomous agents.
Because it's very similar to — and I'm going to quote, like, the old WarGames, you know, scenario — where it could easily happen with the not-so-intelligent AI, because you have these systems fully controlled, just fully autonomous, like even drones.
You remove — and again, I'm going to say it — the human in the loop, to really make those unique decisions that humans are capable of.
And we still criticize humans in that loop, but at least...
you know, there is something there to rely upon.
So I think if we talk regulation in terms of military use,
that is where I think it's the most important.
And then comes to these social issues where people should have
at least critical reasoning skills to at least determine,
okay, this could be fake, this could be like spam,
you know, make the decisions for themselves.
Hey, everyone.
I'm going to hop in here real quick, real quick, GP, just because.
So we're about an hour into the talk so far.
I think we've hit a lot of the good points on regulation.
I think we've heard both sides here, both, you know,
in terms of the importance of having, you know,
guard rails in place, but also, you know,
what some of the downside risk is here if we get this wrong.
So what I wanted to do was pivot us to sort of like the second,
a big, like, news topic here, which is sort of two in one.
So that's kind of this idea of the resurgence of sort of builders and companies being driven by the AI space.
So just to get into some specific news.
One second, Frank.
One second, so basically on the actual funding news and stuff, there's two really big
announcements we have to say here.
So the first is that there's a company that's based out of the EU that actually just
raised 105 million euros in startup funding.
And they've only been around for four weeks.
And then additionally, we've seen big raises like Synthesia, which raised $90 million.
We saw another company raise $20 million — a platform for freelance financial experts.
So I'd like to shift the combo off of regulatory concerns here
onto building and the actual companies that are moving the space forward.
And so kind of two topics here.
And I know Eugene's got a lot of opinions and I'd love to hear other people chime in.
So one, like, is AI actually leading this comeback in terms of startup funding,
bringing the economy back — can it overcome that?
And I think a second piece of that is, where will the resurgence be?
Is this all going to come down in Silicon Valley like we saw with the internet,
or is it going to be more distributed around the world?
Well, the Mistral funding of 105 million — it may be four weeks old, but look where the founders came from.
And it's a good segue, it's an excellent segue, because the difficulty is not the regulation, so to speak,
and I just would like to say that the military is utterly exempted from all of the EU AI Act,
and any of those programs that run under black budgets or on a national security basis are not subject
to that particular regulation — so the regulators tell me.
But Mistral was funded.
It's four weeks old.
Look where the founders came from.
The difficulty we have right now is that
In an era of alleged disintermediation and disruption, the same players who reigned over 15 years of egregiously unregulated social media are moving their domain to AI.
People can say, yes, you can take the ChatGPT API out from under Microsoft,
but they still own the platform.
People can say that the LLMs from Meta and from, you know, Bard and others
will allow third-party app developers to build on top.
Yes, they will, but they still own the platform.
We must not, and this is the key point,
not keep using the word fear when we're talking about ethics,
And I think we really need to get away from this idea that there's a doomerism when we try to analyze the ethical outcomes of AI plus the ethical outcomes of the migration of the dominance of big tech from Web 2 to Web 3 AI blockchain and the other emerging tech.
And that's an excellent segue
because Mistral is four weeks old,
but the dudes who founded it
are all from the big tech outfits
and how does big tech get to reign over a new industry?
Create another alleged new company
and have it funded,
have it catch the headlines,
have everybody say,
wow, it's disintermediation,
but it ain't.
Actually, the other day, I was at the office of one of the VC firms that was actually part of this round.
So it was really interesting to kind of hear.
I mean, it was the European team that did it.
I was in SF, but it was interesting to hear.
I mean, one of the things people said is there's only 75.
I mean, this is what people say.
And they said only 75 to 100 people exist in the world, including these three Mistral founders who could do what they do.
up for debate.
AGI, we haven't heard much from you yet.
Let's go to you.
So, well, I wanted to talk about how to dominate AI media for the first question.
I think it's very important to know where things are going.
So I was listening to Greg Brockman and Sam Altman,
and they were asked,
you have no moat around a large language model.
Well, they don't have one,
but what they have is that they know where things are going,
and this is what is very important.
So if you know where things are going,
then you will dominate the AI media.
And same thing also for the regulation.
If you are able to know where things are going in terms of regulation,
for example, if you will be able to have a superhuman AI agent
debating the actual regulation that are happening right now
and let them recursively improve until they can't improve anymore,
then you will be able to know where it's going,
and then you will be ahead,
and you will be able to navigate that regulatory landscape without any problem.
Then, also, for the last question:
I'm not afraid at all about AI and AGI, obviously.
I'm extremely excited.
And I see a lot of, of course, I agree that only 75 to 100 people can build that kind of technology that will impact the future.
So a few of them are at OpenAI.
We have a few of them at deep mind.
Google and other companies.
And I think it's very exciting time.
I think we can create a lot of value.
And I think it's just,
it's what we are living right now,
the opportunity to build wealth,
the opportunity to build the future of humanity,
to solve a lot of problems for humanity like disease and so on, to create a lot of value.
I mean, it's happening right now and it's very exciting.
So I think, about the regulation — most of the people that are working on regulation are unfortunately people that are not coding the AI themselves.
So I see that there is a disconnect there in between people that do regulation and the people that are building.
And I'm sorry to see that.
It's so unfortunate.
I think it will be very important that we have a real discussion between the people that are really building the technology and those kinds of regulators.
And I think, if we want to have real protection — if we're not...
I think people are good.
Like, the people that are building the technology.
AGI, can I ask a question?
I think you just laid out what could be, I mean, it's just a fascinating way to frame the discussion, right?
A lot of people say, oh, AI is overhyped because, you know, 105 million euros in a four-week-old company.
But you just affirmed, because I haven't heard this from investors, only 75 to 100 people can do what the founders of Mistral AI can do.
That's what you're saying.
I'd love for you and others to comment.
But the question is because let me frame it in this way.
People are saying, oh, you know, the incumbents, the Microsofts, the Googles, the, you know, the Facebook's metas of the world are going to be the dominant players, right?
Even OpenAI, a relatively new company, is going to be dominant.
But at the same time, you have people at these companies and only a few of them, right?
Let's say there's only 100 of them in the world.
They go off and start their own company.
I mean, what keeps them?
I mean, data is the moat, right?
There's a data moat.
I think it's worth discussing, right?
I'm not sure if data is the moat.
Because you can use the physics of this world.
You don't have to use data, I mean, human data at all.
You can train a model, an agent at a superhuman level,
that will understand the universe without any human data.
Then you will be sure to comply with regulation,
because regulation is mostly...
about human data.
So that's what I'm doing,
certainly.
So you can build even better.
Will there be more?
Will there be more of it? I mean, like, so what's holding it back?
I mean, it can't just be that these hundred people are just like literally just superhuman.
I'm sure they're superhuman smart, but there's got to be more than 100 people who can do it.
Is it just the training, et cetera?
Can you describe for the audience here?
Why that number is so low today?
And what's your projection for how fast that grows?
Well, that depends on the level that you want to reach.
For example, I know from experience that in
reinforcement learning — that means the way that you code the AI agent, and this is where things are going —
not many people are able to do that. But it's not that it's complicated at all.
I mean, if you care about it, you will be able to do that. It's just a matter of
caring about it. Certainly people can do that, and
if I were to put out a tutorial on how to train, how to deploy that,
and if people really care, they will be able to do that.
I guarantee that.
There is no complication at all, but people have to put in the hours,
to dedicate the time to do that.
The resources are there.
The code, all of the code is open source.
You can study that code.
Do you care enough?
It's just — it's the people that want it most that will succeed.
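(As a rough illustration of how small the core of "coding the AI agent" can be: below is a minimal tabular Q-learning loop in Python — a toy stand-in for the reinforcement learning being described here, not any particular lab's method. The corridor environment, the reward, and all hyperparameters are invented for the example.)

```python
import random

# Toy environment: a 1-D corridor of 5 cells; start at 0, reward at cell 4.
# A hypothetical stand-in for "coding the AI agent" -- the same loop
# (act, observe, update a value estimate) underlies much larger RL systems.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy points right everywhere, toward the goal.
print(["->" if q[(s, +1)] >= q[(s, -1)] else "<-" for s in range(N_STATES)])
```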
I love the way you frame that.
But some people, I actually wanted to get.
But what I did observe — a trend that I did observe —
is that usually, most of those people are connected to one individual.
And that is Geoffrey Hinton.
If you look at Ilya Sutskever — he is the one that
created, kind of, ChatGPT at OpenAI.
He did his PhD, starting in 2006, I think, under Geoffrey Hinton.
Hey, AGI, I want to jump in.
I think you said some great points.
I want to move.
I want to actually get Brian's perspective on this because I want to see the follow on there.
I saw a lot of, I mean, maybe it's also, let me know if anyone disagrees.
But actually, Brian, why does...
Why, like, in five to 10 years, do those hundred people become — I mean, first off, do you agree with that?
Number two, do those hundred people become 1,000, 10,000?
You know, I'm curious about your thoughts on that.
Great question.
First off, let's do an examination.
The device that you're working on right now, I don't care if it's an iPhone or an Android device.
It's all core open source.
It's a Linux-based or Unix-based open-source platform.
We are using open-source right now by shutting bits over the Internet packets.
All of it's open source.
The open-source community actually created that no-mote Google memo.
That memo is actually quite phenomenal.
So let's look at this. Really, the shots were first fired in November when GPT3 became, you know, available. A lot of us have been working on this for a while. It was a big moment, but it wasn't as big because we've seen how it's grown. But it's grown at such a level. It's beyond exponential. So what's going on with the open source community? Well,
I'm really a proponent of putting AI on your hard drive.
So there are a number of projects.
I encourage everybody listening to me to download GPT4All and start experimenting with it.
It's like buying an Apple I.
I'm not going to wow you with the idea that you're going to do the stuff that a mainframe could have done when you could have bought an Apple I from Steve and Steve out of the garage.
But you are a pioneer, and everybody listening to this can be a pioneer, to start learning what this AI is doing.
Now, why is the moat not available to companies that have endless amounts of servers, endless amounts of energy — much more energy than
Bitcoin is using, by the way.
That might be a surprise to a lot of people,
but we're already getting to that point.
Why can we get a model — let's say a 13B model,
13 billion parameters — into an 8 gigabyte hard drive file
and get a pretty darn good response?
Well, because we found out that all of the trillion —
potentially a trillion — parameters that are out there
in a GPT-4-like model
are actually not very useful.
And it's sort of an 80-20 rule thing going on here.
There's sort of a question-answer
pairing that takes place, and a fine-tuning thing going on here.
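(A back-of-the-envelope sketch of why a 13-billion-parameter model fits in roughly an 8 GB file, assuming standard 4-to-5-bit quantization of 16-bit weights; the numbers are illustrative, not a spec of any particular model format.)

```python
# Rough storage math for a 13B-parameter model at different precisions.
params = 13_000_000_000

fp16_gb = params * 2.0 / 1e9    # 16-bit floats: 2 bytes per weight -> ~26 GB
q4_gb = params * 0.5 / 1e9      # ~4-bit quantization: 0.5 bytes/weight -> ~6.5 GB
q5_gb = params * 0.625 / 1e9    # ~5-bit quantization -> ~8.1 GB, about the 8 GB file mentioned above

print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {q4_gb:.1f} GB, 5-bit: {q5_gb:.1f} GB")
```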
But right now, I can load onto my computer
some 275 different models.
Now, some of them are Llama-based, some of them aren't,
some of them are taking a different open source type of situation.
So this comes back to that question.
Are there just 100 people on a planet that are doing this?
No. No, that's maybe 100 people in commercial space.
But out of the open source space, you're going to see the next apples.
You're going to see the next Googles.
Because, actually, in a sense, Google was a product of academia to a certain level,
but it was really a product of open source and sort of a side hack project for a CS student at Stanford.
What you're going to see coming out of this community are the new applications and the new use cases that none of these larger companies can think of. And they're not going to be burdened by the regulation, or the ridiculousness, the utter ridiculousness, of every sentence, before you get a response out of the AI, saying, "I'm just a large language model and I can only do this." That's utter ridiculousness.
If you're at a prompt and you're interacting with AI, you are not five years old; you don't need to be told that you're dealing with a large language model.
But that's what these folks have boxed themselves into.
And they've boxed themselves into a limited capability of questions and answers, or prompts.
So when you get a local AI, and I personally like the Hermes model, I might change tomorrow, because there's a new model coming out at this point about every three or four hours.
Right. So the Hermes model is a 13B model, and it is performing, on most capabilities, at about a GPT 2.6, you know, not quite 3. But for most use cases, it's phenomenal. And you can ask it any question. You can get any response.
And there's nobody telling you what you can or cannot do.
You're just getting a raw response.
Hey, Brian.
A quick question on that, Brian, too, because I think this plays in well to the overarching subject of like funding and where things go.
Like, do you feel that there's sort of, like, an oxymoron happening right now? Where, on the one hand, right, we're being told,
hey, these tools are more powerful than ever.
You don't need hundreds of engineers to build a company now.
You can do it with four or five.
These models are open source.
Anyone can do it.
How do you juxtapose...
You know, and this question for you and everyone on stage,
but how do we juxtapose that sort of narrative, right?
The company's building this saying that it unlocks anyone with this other side
where it seems like there's these companies raising these massive, massive rounds,
everyone's saying, you've got to get to San Francisco,
you've got to get to these hubs to be around the engineering talent.
Like, is it fair to say that maybe the future model
is going to disrupt what we're seeing now,
where you don't need 100 people,
you don't need 100 million dollars to build it?
So why, like, I guess why are we seeing these contradictions?
Alex, that's a great question.
I put out that the first five-person
trillion-dollar company, those people already exist on the planet, and it's going to exist in the next 10 years.
We're going to see a trillion-dollar organization, in sales, that is run by only five people utilizing AI.
The force multiplier that personal, private AI
has for an individual has never been seen in history. It's a lever that is just absolutely phenomenal.
It's a liberation for individuals. So what does that mean for raising money? The old model is broken.
The old model of Silicon Valley that you need to go here and this is where the mindset is, that's gone.
Whether we love it or not, it's pretty much over, and the
Silicon Valley Bank failure is sort of that marker.
Historically, we're going to look at that as the breakage point.
The venture capital industry has not really caught up with it, not all of them.
I deal with a lot of venture capitalists, and I'm guiding them on what the new paradigm looks like.
And it doesn't look like Web 3 and it doesn't look like Bitcoin funding or anything like that,
because the models are fundamentally different because the power structure has shifted.
Knowledge is where the power has always been in society.
When you liberate this knowledge in a way that's easily accessible to an individual,
they can execute on concepts without needing an expert.
For example, I know of two founders right now that are not technical founders, in the sense that they can't or don't want to code,
but they're coding something exceedingly complex using GPT-4 and some local tools.
And there's a new model that came out that basically has every API call that you could ever think of
built into it, and you just put your sentence in there and it'll start building those API calls.
Is it perfect?
I think you just painted us such an interesting picture.
And strangely,
I'm going to go to you because I saw some,
some interesting emojis kind of come out of you.
But I do want to tee it up.
I want to tee it up a little bit.
You just said there's going to be a five-person trillion-dollar company.
If that were to be the case, if that were to be the case, and that's in current dollars, I take it, right? Not overinflated, whatever, wherever we're going.
Yeah, I don't know what inflation is going to look like.
A trillion dollars might be a low price.
Don't hold me to that.
So, but let's just say it's in today's dollars.
So we are already having problems with rising inequality in the world; what we call the Gini coefficient, right, is going up in economics.
I mean, the kinds of inequalities that could be generated by something like that.
I think even GP was alluding to that.
So I look forward to going to him.
But I do want to go to Strangely first and then perhaps have it back and forth
because it looks like Strangely might have some different opinions.
And then let me know and I can speak and reply to Brian, please.
Sure, go for it.
So, Strangely, then Sphinx.
You know, I'm going to sound like a broken record here, but, you know, I've been in the space for 20 years and I do code. And, you know, all this narrative around "only a hundred people can do this":
basically, guys, anybody with a good knowledge of statistics and some Python is able to do what these models are doing.
The real stronghold of the companies lies in running these models on hardware, because,
you know, as much as we have software innovation, we are still in the dark ages of hardware, where
the hardware is not optimized to run these. And so we have giant server farms that only big
companies can afford. It's as simple as that. You know, it costs upwards of 20 million to run these
models. Some are saying it'll cost less, but if you think about the physics of it,
you know, the larger the model... I mean, if they can compress this somehow, then maybe it can run on a smaller rack, which is 10 million or 5 million, but it's still going to cost any entrepreneur a substantial amount.
So there is, like, this myth of genius that's created in Silicon Valley, and I'm part of Silicon Valley, I've seen that mythology being created every time, and I think it's simply not true.
And also, I would like to add: all these companies getting funded, you also have to look at what they are proposing. Are they proposing the same paradigm that existed? I mean, throwing 20 million, 40 million, 50 million, it doesn't matter. It doesn't solve the underlying problem.
In AI, we still have problems of common sense reasoning, you know, how it analogizes or strategic planning, you know, reasoning engines. We don't have generalized reasoning engines yet.
You know, that can reason over your data so that we don't need massive amounts of data.
So I would look at these things as cautionary.
Like, you know, yeah, companies get funded 100 million, and then it doesn't lead us anywhere.
I mean, they get acquired.
Yeah, somebody makes money.
I agree with that.
There's a lot, you know, I mean, there's a lot of hype in the valley, right?
But some of it's real, a lot of it's not.
Sphinx, do you agree, disagree?
I actually wanted to comment on what, yes, I do agree, and I wanted to comment on what Brian said.
So if you could provide a bit more clarity, I would appreciate it because I hear this a lot.
I hear, but it's open source.
So here's the thing, Brian.
Just because something is open source doesn't mean it's not subject to the same vulnerabilities as other things.
Let me explain this.
If an application is open source, it still has security issues.
In fact...
it's open to more security issues.
It still has to deal with quality assurance.
It still has to deal with accountability.
It still has to deal with legal, ethical standards
and sustainability.
So when I hear, you know, we're having these discussions, and then I hear, "But this is open source, it's open source." That's fine, but the argument on the open-source side is that regulation is going to kill creativity. No one's saying kill creativity. The discussion is what kind of regulation would be most appropriate. But this feeling of "but it's open source, and therefore these rules don't apply", I just don't buy that.
Brian, what you got to say?
Okay, so I've unfortunately been around when the Internet was being formed.
And when the Internet was being formed, it was a very unusual time.
Let me just try to explain what it looked like.
I had to say to somebody that...
So you know, I was around at that time, too.
It's going to be a bit of a...
Give me a second here.
So I had a...
Explain to people that packets are going to be sent through various routers around the world, and you will never really know whether that packet is touching a friendly router or an unfriendly router.
I happen to have been in a payments industry at that time.
So I had to try to explain to bankers that I'm going to route transactions encrypted.
through a new thing called the internet.
This was, you know, the early to mid-'90s.
And that somehow that transaction is going to get back and there's not going to be harm done.
And we're at the same kind of moment now with AI.
And again, I had to explain to them, nobody owned the internet.
Nobody owned even necessarily the path that those packets have taken.
But somehow everything's going to work out.
In reality, it did work out.
The fears that came about were unwarranted fears.
They were very... well, let's just put it this way.
They ultimately killed the possibility of payments being embedded into Marc Andreessen's browser.
And we would probably have had a different thing than Bitcoin at that point.
So getting back to open source: open source is not a magic wand that I wave over things.
What I'm saying is, I would much rather
have a billion people have access to a model, all of them working on it so that nobody has a moat,
so that we can all have access to this in an egalitarian and democratic type of way,
than to force regulation and to force scrutiny on individual developers, who are really the people who are creating the advancements.
The advancements that we're all really
taking advantage of right at this moment were really on the back of the open-source community.
There was some innovation in these companies.
There was some optimization.
But if you extract the people working until 4 o'clock in the morning, not shaving for 10 days, to put out some code, because they are just in love with this idea of building,
yeah, if you extract them out of the thing, we wouldn't have it.
Can I just jump in for a second on that?
Because I've had my hand up for quite some time.
And I think the...
Before you do, let me just, you know, one minute, GP.
I want to remind the audience quickly,
Guys, if you are a founder or a VC with a portfolio of companies, of AI companies, hit us up.
You could DM me, the team will reply to you.
If you want us to incubate your project, we do it for equity, or if you want to come on the show or join the Shark Tank pitches that we're going to start doing.
Same way we do for Web 3, we're going to start next month for AI.
Hit us up.
You can DM me on my profile, and the team will reply.
But yeah, go ahead, GP.
Thanks so much, Mario.
You know, Sphinx makes a great point about open source and, you know, Brian makes great points.
Everyone's making great points.
But here's something that I think we're all failing to see.
Everything is built on a set of data that's been acquired over 20 years in an environment that is defined by misinformation, disinformation, malinformation,
fake news, opinion dressed as fact, in a post-truth society.
Now, it's my personal view, having written the Penrose Tabula Rasa theory,
that if we're going to have any sort of an attempt at a democratised, unbiased, global conversation regarding this,
then we must not let the incumbents have the benefit of the mass-scale data acquisition,
and the bias contained therein, on which to build the models.
While it will slow down progress, it will lead to the ability, that I think Sphinx and Strangely also alluded to earlier regarding AGI, which is the identification of the source of truth of a particular piece of data, whether it's video, image, text, voice,
or otherwise. This essence of a tabula rasa allows you to build that into the system from the ground up,
as opposed to trying to stick bolt-ons onto an already corrupted model.
And as we all know, in the field of computer science, any business transformation project, or pivot in business,
that tries to do so around the anchor of a startup acquisition, or by bolting new tech onto legacy architecture, fails miserably,
mainly at the data-cleansing layer.
So I'd like to just plant that seed maybe for a further space.
I would posit that the problem we are propelling forward is with data quality.
GP, I love the planting of the seed,
and I look forward to that on the future space.
We do have a main event, and we'll have a few minutes to do this, though.
So I need to switch gears.
Maybe Alex, could you tee up the church service?
And then we love to hear from Pat.
Welcome to the show, Pat.
So we're going to go to you.
But Alex, please let us know: what has ChatGPT done to religion?
Yeah, so just to bring it home, you know, obviously the title of the space is that ChatGPT actually led a church service this week, which is obviously a very interesting idea to hear, right?
Just on the surface it sounds so absurd.
But the basic facts behind it was that there was like a German Protestant church.
And they had a theologian who basically typed out their Sunday service, and then they had an AI avatar projected up onto the screen that actually preached about leaving the past behind and overcoming the fear of death.
And just for context, I think there was also a rabbi in New York a month or two ago that had done
a similar thing.
And I think the reason why this topic in particular is just so, I don't know if I want to call it triggering, but just so attention-grabbing, is because I think it gets to this root of what it means when you get beyond just, like, the actual bits. You know, what are these other aspects of life, whether it's creativity or spirituality? I know I can get really high-level, but, you know, I think it's making a lot of people basically ask: what are the limitations of what AI can do, and can AI be a spiritual and a creative lead?
And so maybe we can just take it home on that topic, obviously, like really attention-grabbing,
but there is probably a deeper philosophical point there.
And I'm curious to hear what people think.
I think it's excellent.
Pat, you're new to the space.
What do you have to say to that?
So regarding religion, I did some experiments about, like, my faith; I'm Muslim.
And for context, I've been playing with GPT for
nine months now. I'm working on what people call AGI, but it's not really true AGI.
But my point here is, if you go to straight GPT, it will hallucinate.
However, if you use techniques like vector-search embeddings, I think you can steer the model to give you more accurate responses.
Now, could it be a spiritual leader? I don't believe so.
But yeah, I mean, that's my point in a nutshell.
But I guess only time will tell.
I mean, I'm at EYC.
I mean, I know what you guys mentioned, but from my perspective, I just don't see it as a big deal, because, let's be clear, I can only give my own experience.
I have been in the, like, philosophical, theological field for a considerable amount of time.
Like, most of these preachers are basically relaying the same point all the time anyway.
They are basically ChatGPT, most of them.
So what difference does it make?
Meta, what do you think?
Meta, yeah, what do you think of that?
Yeah, no, I mean, I think Solomon makes a good point.
One of the ways that I would describe this idea,
and so for context, I'm what I call meta-religious:
I believe that all religions are different interpretations of the same thing,
call it God, divinity, the universe, whatever you want to call it.
Obviously it's a very big topic, and so I'll just kind of leave it at that.
But the point that I wanted to get at, that kind of works well with Solomon's point, is: if you think of the truth of reality like a 3D object, like a statue,
a lot of times what people represent to each other
is actually one-dimensional, right?
Like when you're telling a story,
there's a start and there's an end.
It's a single line.
And usually what you're trying to describe
is a 2D representation, right?
It's like a picture of the statue.
That's your point of view
on the reality or the truth of the idea.
The problem that I think a lot of people have when they're talking about things like divinity or just truth broadly is they are just regurgitating like two-dimensional points of view, right?
These different ideas or sometimes they're even just regurgitating the one-line explanations.
They don't really understand the complexity of the idea at that three-dimensional state.
And I think this is the problem with ChatGPT and AI:
in order for you to experience what I think of as the divine, or divinity,
I think you actually need these kinds of senses beyond the normal senses
that we would attribute to machines as being capable of. Like, we have,
I believe,
a spiritual intuition.
And so it's very hard to get at that kind of deeper level of complexity and spirituality
without that.
And I think it's very dangerous to kind of default to, or use, AI to represent or to give spiritual guidance, because I think you're really missing out on these different aspects.
But to Solomon's point, I think that there's a lot of spiritual leaders who are not doing a good job of that, who themselves kind of lack that spiritual connection.
And they're just regurgitating things anyway.
Meta, I appreciate what you're saying, but, you know, isn't it kind of, I mean, to kind of support the last part of that, isn't it kind of true that, I mean, it's just so rare to get, like, effective religious leaders, right? Or just leaders in general, I would say, right? But, like, spiritual leaders come around... I mean, you have tiers of them, right? But, uh,
obviously the ones that found religions come around once every hundreds, if not thousands, of years, right?
And then you've got, you know, the people who are effective, and then you've got the Martin Luthers of the day, et cetera.
So it's almost like, well, does ChatGPT...
Is it just kind of like the McDonald's for... You know, I mean, there's billions of people around the world of varying faiths, right?
So like, can it provide tools?
I mean, I think it's an open question.
I'm not saying yes or no, but it's an open question.
I know we're short on time.
So I want to go to Xavier,
and then Brian, you're probably going to sort of end the show.
But Xavier, I know you had your hand up for a while,
why don't you let us know what you think of this church and religion topic.
Yeah, earlier I heard some great points. I was 100% in accord, like, all day. But yeah, the point is, that's exactly it: they're regurgitating the Bible. And, as many of the people who actually took the time to read the Bible know, the preacher is not telling you any ideology. It's actually the Bible;
the preacher speaks through the Bible.
So if you think of GPT,
especially if you're using the APIs,
and you fine-tune that bad boy
and you just feed it the Bible
and different versions of the Bible,
it doesn't matter,
it will speak it,
because you fine-tuned it.
And even if you use one of the open-source models
and you train it purely on the Bible, well, then it's going to give you exactly what the Bible is
intending. You can even fine-tune it to understand what are called parables, aka analogies,
of the Bible. Like, what is it truly saying? Because, like you said, in church, and I grew up in church, a lot of
times they'll regurgitate the same scriptures,
just said differently.
But with GPT, it would utilize, from front to back, the Quran, the Bible, you name it.
And I think, honestly, like I said in the message, I think this is going to open up a new,
maybe a Bible AI, or even an LLM that you could call a theological LLM,
you know, where you're just feeding in different parameters from different Bibles.
And I think this is going to be an opportunity for a preacher in everyone's hands,
every religious person's hands.
Oh, I was going to say, you just gave me an idea, like, what if we had, like...
I mean, I don't agree.
I was just chiming in there.
I mean, Javi, I agree with you to one extent, but I don't think ChatGPT will solve the problem.
I'm more with Meta on this point, I think, and he can clarify if I'm not.
So, essentially, what I think ChatGPT will do is almost give you the exact same thing as an individual,
because whatever perspective you apply, or whatever prompt or whatever you want to call it,
you are going to use when you come to
looking at the Bible, looking at the Quran,
it's going to give you a perspective,
and whatever that is.
In my view,
it's that innate ability of consciousness
to be able to
analyze information from a specific perspective that maybe an ordinary person won't.
I'm unsure, but I'd like to see
how it would be able to do that from within the spiritual aspects of religion
and apply that to hermeneutics, or textual analysis, or whatever it may be.
I mean, humans are flawed, and when we think about...
You know, Suleiman, that's an excellent point because it...
I don't know.
GP, we can hear you.
GP, we can hear you.
Go for it.
Going once, going twice.
All right, GP, we'll have to come back to you.
I mean, he made the best point anyway, Solomon did.
GP, I think you're going in and out.
Okay, we don't have much time, but yeah, if you want to close this out.
Yeah, I was just going to say, Solomon.
GP, we're going to have to bring you down, I think.
Once again, you said the most important point, so, I mean, I'll pass it over to you.
Okay, all right, we're going to have to go.
It's breaking.
Yeah, the audio again is breaking.
So, Brian, I think you're going to have to close us out.
We got to make it quick because we don't have much time.
Yeah, thank you.
Can you hear me, guys? Just real quick,
can you hear me?
Yeah, okay. Suleiman, the problem we have right now is defined by Kissinger, Huttenlocher,
and Schmidt's recent video, which said that all AI must be defined by Western values.
This is where relativism is not taken into account in the philosophical debate,
as you know, through the philosophy of mind, and through the relativist, moralistic, political, social, socioeconomic
differences between regions of the world.
if you were to develop an AI that was capable of taking the good books from all the different religions
and exercising the weights and balances against the permutations and combinations of each,
it would take you a very long time.
So you will have a devolution of theological thought, in my opinion,
if you give it to AI.
And a hot take also:
how is it going to navigate these new anti-free-speech laws that are all being entertained
right now, in terms of what is defined as hate and what is not, that you might unintentionally
find popped out of your AI model?
Interesting points. All right, Brian, take us home, and, uh, yeah, we'll have to wrap right there. There are other hands.
But here's what I find very interesting: the more time I spend with
AI, and this is just a phenomenon with me, the more you actually start becoming spiritual, if not
religious.
The bottom line is when we are peering into large language models, we're peering into the corpus of human knowledge, and we're also peering into the part of the brain that invented language.
And we're also peering into the connections of how life grew on this planet that actually invented language.
Why did we invent language?
How was it used?
And how did it make humans become who they are as opposed to other primates?
And so when you're diving into these models, you're actually seeing in some way the face of God.
And I find this to be a fascinating conversation to have with a lot of AI scientists who were previously atheists.
So that's a good segue, I think.
Look internally.
Brian, on that divine note.
We are going to end the space.
Thank you, everybody, for joining us.
We're going to be doing these every Tuesday and Thursday.
Thank you.
12:30 p.m. Eastern time.
Please tune in.
And thanks to all the great speakers for contributing.
See you on the next one.
And a couple of thoughts, too, final things to add.
As Mario mentioned, I think if you are an AI company,
make sure to reach out to either one of the co-hosts, Mario or Suleiman, and we can, you know, get you on here for the Shark Tank-style pitches.
Be sure to follow the speakers too.
There's a lot of really good AI speakers on this stage right now.
It's great to have them on.
If you know anyone who, you know, is kind of like a subject matter expert in this space,
send them over.
We'd love to have, you know, sort of a rotating panel here, getting a wide variety of perspectives.
And then lastly, shameless plug.
I have an AI newsletter called Big Brain.
I know Mario and the Roundtable will be launching one soon.
So I don't know, Suley, is that something we can plug now, or is that still in the works?
But there'll be some AI stuff in that, too.
So just make sure to check out Mario and Suley's profiles as well for updates on that.
I love it.
Until the next one.
Thanks, Alex.
Thanks, everybody.