we're just inviting people.
that silence is much more better
than what you comes out of your mouth,
Just wait until I have my coffee,
You got to get some sweet intro music.
I mean, I didn't play the music.
So I guess it was just, it's good.
We're going to start the AI space.
We're going to be using our mind, body, and souls to think about things.
So we just needed that moment of silence just to think and contemplate before we find out and talk about the problems with AI.
The problems is going to cause in our society.
But also the benefits of AI and the good things that are.
AI is doing and what are the development.
So that's going to be the focus of this space,
is the development in AI, what we can see in the future,
the practical applications,
and then after that we will be talking about some of the issues
that they are concerns people have with AI.
we may even start talking about some of the more deeper issues
if people are willing to talk about them.
I have sent requests to all the panelists, so there may be, a lot of them would have come from my accounts, so just make sure you check them out in your DMs.
But yeah, Fidgetal, let's hear your initial thoughts.
Well, I'd be remiss if I didn't mention that you're at an advantage today
because you were in my crypto space, but you didn't know anything,
so you're able to reserve your energy and voice.
So I've already done two hours of mental gymnastics.
So I'll do some verbal gymnastics with you tonight.
I'll give you that, vegetal.
You were on it in that space.
I heard you talk more in that space than any of the other spaces I've heard you in.
So, yeah, you did a good job.
You're improving every day.
When you're willing to learn like Fidgetil
and you're willing to improve, you're going to get better.
It's really hard to talk over you and Ian
and then add Nick to the political spaces.
So at least on my screen right now, I'm closer to Mario.
Although if you could see the full picture of my PFP,
have I showed you at Sleiman?
Are you talking about this new one that you've put on?
Oh, I'll DM you right now.
You can't share it, though.
So, guys, let's get into it.
And so the first thing I want to talk about is...
And is the developments in AI.
So what kind of developments are we seen in AI?
So I'll just tell you some of the things that I've seen.
I'd love you guys to explain those developments to me further.
And then the second part of this show, I want to talk about...
We'll get into there when we can talk about the second part show.
So first of all, I was reading and there was something called Remotion,
and that was the University of Cornell paper.
And they were talking about how there were certain advancements in robotics, I believe,
where it was mirroring the movement, or it mirrors the movement of man.
So if someone can explain to me, and I'd like to go to...
what kind of developments are we looking at there
and yeah if you answer that first
and then I'll ask you some more questions on it
Well can you explain more about the robot
that you saw so the robot were
Well, I don't think it's much AI there.
If it's just imitation, then you can just do that with sensors and so on.
So let's look at the robotics aspect of it.
Yes, because what was limiting the AI deployment in the society was that the robots were not...
able to move fast enough.
And we were not able to train them fast enough
because they were kind of breaking all the time and so on.
But now if you tell me that they are able to imitate
Then that means that we can use, for example, AI agent.
We can deploy an AI agent in the body of a robot.
So it's become embodied AI.
And then that AI agent could be able to act in the world and do some useful tasks.
And that will have meaningful impact, for example, in medicine and so on.
For example, if you have a place, for example, you have COVID that we have,
two years ago, then it will be useful to have that kind of robot that are able to act in some
facilities instead of having human that are just transmitting more virus and so on.
It could have many, many use also in supply chain.
And we know that we have a lot of inflation right now because kind of the supply chain is kind of
broken a little bit because of COVID and so on.
So if we have the kind of...
Yeah, yeah, I get, let me just break this down and please just correct me if I'm wrong, right?
So essentially what would happen is there'd be a human being.
Let's say he'd be in his basement, which a lot of these AI nerds are, right?
He's in his basement and he's got the robot imitating his actions and maybe going through the hospital or whatever.
Is that what it's going to be?
Well, that is possible to do.
I mean, is that how it's going to be, or how do you envision it to be?
Well, the way I envision it, because just imitating is fine, of course,
but then you need the time from the human to imitate.
But if you want to scale that, then it's better if you have an AI agent that learn from imitation.
Then you have an agent that learn from imitating that human or...
more humans and then that AI agent will be able to make decision and to optimize
action for to accomplish a goal yeah then for making the decision we'll talk about that
later because I'm not sure if we can ever get to that but I mean the imitation aspect
The imitation is just the start.
So you can build that kind of data set from imitation.
So you learn from human preferences.
So we can do that since a long time.
You had Open AI that publish a blog and paper on that.
I think it was Tom Brown.
I think it was six years ago about learning from human preferences.
But the aspect that is new is that maybe now it's become possible
possible to do that in the physical world because those robots are good enough and it's an
amazing advancement and if we have that then you have robots that learn from human preferences
then they are able to do some tasks they they are able to do some tasks from what they what they
learn okay and guys if you want to do more then you need plenty guys just help me help me real quick
i'm a little confused and maybe i'm just dumb or maybe it's been a long day
You are the dumbfish clipper, don't worry.
There's software and there's the AI component
and then there's a robot that can do more stuff.
I don't understand how they merge beyond the robot being able to do it
that the AI somehow is, this combination is revolutionary.
Yeah, let's go to Brian on that one just to get into the perspective.
Brian, have you got any thoughts on that?
Well, I was thinking about this this whole time.
I used to go to CES in the 2010s, like the mid-2010s.
There were a lot of toy robots, moxie, kiki, all these kind of things.
And they were designed exactly to do that.
They looked like toys, though.
They looked like robotic, like, wally kind of toys.
And they had a digital screen on there.
They were still able to...
Like, just like a Tomogachi.
I imagine what is new about this is that these robots that you're talking about, they're more life-like.
They're more doing it in a human way because the actual...
emotion recognition, I guess, from the robot has always kind of been around.
It's not really a necessarily new thing.
But when we were also talking about this, there's deep brain AI.
It's a company that makes AI avatars.
Three targets that they're targeting is they've got Howie Mandel signed up.
So they've got a bunch of influencers and celebrities trying to recreate themselves.
They're trying to convince people to use them for like customer service kind of things or to even represent yourself as an avatar at work in a meeting.
And then they have a third thing that they're going for grieving people.
They're trying to ask people to essentially scan yourself in,
and it'll recognize your specific body movements and little intricacies
and how you move with like this three-hour 3D scan.
And then they attach chat GPT to it,
and they let you put on a VR headset and you...
like it's like a black mirror episode or upload on Amazon like a sci-fi thing you visit your
deceased relative inside this virtual world and for that kind of thing it's it's very creepy for me
I would not do that I could see where somebody would have a therapeutic use for it but I still
don't understand where emotion would help a robot get more
more productive. I'm lost on that connection to. We are going to go into that because I actually
do one part. I don't know if it's the like average joy in me, but I do want to talk about
black mirror and some of the things in there and how realistic they are in terms of application
perspective, right? But before we get into that, GP, I think Fidgetil asked like an awesome
question and it's very rare. So let's focus on that question. Uh,
Those three aspects he's talking about in terms of the software, the AI, and then the robot.
How would that work in terms of the implementation aspect?
Yeah, so the Cornell example, um,
is specifically to allow a non-present member of a meeting to be physically present in the form of a proxy robotic device,
which will mirror all of their visual cues, facial body language and so on.
So it's a collaboration enhancement tool rather than a significant move along in terms of the technology of robotics.
The Cornell specifically are saying,
look, you don't have to control this device.
It has a, you wear a sensor.
And as AGI said, you know, the robots are sensor developed.
They will mimic now not just your head movements or your facial movements,
but your body, arms, hands, legs and so on,
as a member of a conference that you are virtually attending.
You will have a physical robot in presence.
Now, alongside of that, in the process of doing that,
it will be picking up training data about the bodily movements of people
that is matching with verbal,
statements, depending on different types of statements they're making, it'll be able to determine
what type of body movements accompany them in different circumstances. So if someone's
uncomfortable or someone's been put under pressure,
It's like haptic feedback systems.
And this is present already, as AGI has said, and as Brian has mentioned as well,
Python Technologies did it for ALS patients and then militarized the application to allow remote control of different robotic devices on the battlefield.
So to answer physical, fidgetal's question,
you're wearing something on your head
that is giving a sensory output to the physical robot
in the teleconferencing environment,
where the other people are physically present.
It is also mimicking, using other devices, your hand and body gestures alongside that to give a more...
a more immersive experience for the other people who are present in the room regarding your bodily movements that go along with your voice.
That's really interesting. Thanks for elaborating on that.
Now, just, but what about Vigital specific question?
The best way to think about this is in terms of what we know from the movie Moneyball, for example.
That's how most training is done on a lot of these LLM systems, okay, where basically you take the last 10 years of
football and then you're able to predict what's my best fantasy football lineup for the next year, right?
You train it on the data and then you go against and you try to make predictions.
That's sort of basic machine learning and then you throw in some of the, you know,
beautiful mind AI stuff to go on top of it.
Now, for a physical robot, that's a very different scenario.
It's not even motion capture, right?
Just think about, for example, you know, like I'm a father of eight.
So I'm constantly, I have one kid in one hand, I've got a binky in another, I've got this one here, right? And the type of dexterity that the human body has to do that sort of thing has just not been trained. That's why, for example, it started with the Segway, right? This two-wheeled machine that has thousands of computations a second to keep you aloft. And then it must.
moves on to these other robotic types as we see in these parkour courses that Boston Dynamics is doing, right?
And so those core sort of physical training elements are just now coming into their own.
And I think that's the key difference here is that we've been able to mimic, you know, this robotic made going across the floor on wheels and going, you know, go four steps left, go five steps left.
and now it's able to, yeah, do that with a baby in your arms,
a beaky in one hand and, oh, don't forget to turn the pasta, right?
So those are things that physically we haven't been able to really dutifully train on a regular basis,
But I reiterate my question.
evolutionarily increasing dexterous robot mimicking, starting with humans or animals or whatever we want it to mimic with a program and a system that absorbs data inputs, outputs, data inputs, and does its best to recreate humans.
and probably increased capabilities and opportunities, right?
That's spot-on, digital, in a nutshell.
It's exactly the same as the fMRI technology, which is nascent,
and they need to collect more data for the data sets to make it more dexterous
in the robotic sense, just like in the fRMI for thought recognition,
they need a greater data set in order to be able to advance the field.
So you're quite right. Right?
Right now, it is robotics and sensors and it trying to gather data regarding speech acts
and its intersection with body gestures.
So there is little data sets.
But how do we, this is a question to you, GP, so just continue on.
So I get where we are, but how do we get to a situation,
or is it even possible to get to a situation where you have a robot and it's got integrated AI in there
and the software related to it?
Yes, so we have a pay-bye.
I mean, I think this is the problem with anthropomorphism
that we tend to always associate with robotics,
you know, in mimicking a human and giving it qualities it doesn't have.
A very low level of, say, machine learning in the body of a robot
will fool many people that it's actually a clever robot.
But to answer your question real quick,
We're a long way away from a position where the dexterity of a robot combined with a reasoning, artificial intelligence,
will give a robot the type of physical presence and intellectual presence that we see in the movies.
This is very early doors, and as Justin said, the dexterity of robots is poor,
and the integration of that dexterity matching it against speech acts in the form of LLMs.
in order to mimic speech acts is very, very nascent.
So when you say we're a long way away, okay, that's fine, but is it possible?
Well, yeah, I mean, with any amount of data, you can get mimicry.
That's not to say you can get intelligence.
You can get mimicry of a very high caliber with very high production values of a robot instantiating
the same type of body gestures that accompany a speech act that say is an angry statement.
and therefore mirroring that with, you know, if you have a very malleable face on the robot with the facial expression and the physical expressions that accompany that sort of angry statement that they have typically learned are the ones that go alongside of it.
Same way as LLM mimics the speed track.
Only because we're agreeing on everything for the first time.
Would we agree that the, and Eugene will go to you one second.
And by the way, Slaman, notice GP went to the one thing we weren't going to talk about,
that's the end of the entire space, because this is where this leads to every time.
But GP, would you agree essentially that where this becomes revolutionary and probably
is the lowest hanging fruit or the most valuable hanging fruit is in
injecting this technology into capabilities that humans can't do now.
So it's not necessarily the dexterity or the mimicking of a human,
but being able to use technology kind of like a Pacific Room,
Pacific Rim, to do things in terms of strength or speed or intelligence.
Yeah, yeah, so just translate to the audience because people like to use
complex words to make themselves sound smarter.
So essentially all digital is saying is that can't we get AI to do more than what we do.
I have a paper right there that does exactly that.
So it's called the surprising creativity of digital evolution.
It's a collection of anecdote from evolutionary computation and artificial life,
research from the research community.
And in that paper, you have many, many, many examples of robot that learn to do
much more than what they have been taught of doing.
For example, right, I haven't.
AGI, is it learning or is it just that you're basically,
and this is a genuine question,
like is it actually learning or is it just based on what you're programming?
I will send you the link of the paper in the DM.
For example, yes, so it's this.
That's the fundamental approach for it.
It's learning and discovering, and that's just amazing.
For example, right here I have in front of me,
I'll send you the link of the paper.
I have a, they ask a robot to walk without touching any of its feet on the ground.
And what the robot decides to do is just to flip over and walk on its elbows.
And you have like tens of those examples in that paper.
And it's just you put a robot in a physical room or physical situation.
And that robot will put constraints that are unimaginable.
And it will manage to do that most of the time.
I'll send you the link in one minute.
So the important answer to the, sorry, the answer, Suleiman and Fidgetle is this.
Just as AGI said, it will use novel decisions to decide how to get to the other side of the room
by not using its feet, therefore using its elbows.
But what people mix up with, and I'm not using this word to sound clever, Suleiman,
is epiphenomenon or phenomenon that occur that aren't expected from the training data sets that will put into it.
And this is the thing that we heard recently about LLM's hallucinating.
Okay, so you gotta be really careful that this does exist.
as things exist as an epiphenomenon,
which are outside of the domain of what has been input to it
and somehow feel like the A or the machine learning model
is making decisions or coming to conclusions
outside of the domain of what the model creators thought was possible.
It's called epiphenomenon and AGI, I'm sure, can speak to it right away.
AGI, can you speak to that?
Because basically, it's not a real learning, it's epipenomenal.
What I've been training a lot of those agents.
And what I see is that when the agent is learning in an environment,
the agent is generating its data set by moving in the environment.
So as the agent is moving in the environment,
So the agent tried to do something.
And if it's good, then it gets a reward, maybe later.
And if it's bad, if it's bad, then it gets, let's say, a negative reward.
Then the, and what we have here is if the agent can act enough in the environment
to explore kind of everything that is possible, then you have kind of the complete data.
So in that way, you don't need a data set when you do reinforcement learning.
The agent learning, acting in the environment, generate the data.
Like Pavlovian, essentially?
Yeah, actually, and I want to clarify the digital.
Just the origin, because I joined a little later,
you're asking about whether humanoid type of robots can exist.
And then that's what we've been discussing, right?
So humanoid robots, which essentially have AI and software linked to it.
Yeah, I think this is really interesting.
And I love a lot of what GP and AGI I just said, and I look forward to research.
Eugene, you're echoing to it.
Maybe the WF listening on you, but yeah.
That's a different discussion.
Is this any better right here?
Yeah, it sounds a bit better, yeah.
So I'm pretty convinced, folks, that...
The way, you sound like a human robot right now.
That's so interesting. Maybe I am a humanoid robot. Maybe I'm the first example right on the space. So actually meta, meta, the company meta, which is where I used to work. And I only used to work there because they acquired, I was an executive at Oculus way back in the day.
and they acquired this company.
And what's, and I've been in the space ever since,
though now I'm an artist, but,
but before that, you know,
was really following along with a lot of stuff
that we were doing computer vision and stuff.
And as somebody, previous speaker mentioned,
it's about mimicry, right?
And we're talking about LLMs a lot.
But we shouldn't be overlooked that I'm pretty convinced
that humanoid robots are first going to be developed
in virtual reality, right?
And I mean this for a very specific reason.
Right now, people who are users in VR who are growing slowly,
are generating all kinds of data, right?
Data about how they talk, how they move, how their full body moves,
such that even years ago, meta-scientists,
like, you know, AI scientists could figure out what a person was doing
just based on what their movements were, right?
They didn't even need to know what the app was doing.
If you look at these apps like a VR chat,
you're capturing everyone's full body, right?
And with that data, meta, the company meta now, can recreate
with just your hands they can recreate the movement of your legs right through AI and if
increasingly movement of your hands and then with voice data and particularly integrated with
lLMs it doesn't take a whole lot to say we can mimic an entire human in a virtual space right and
from that point on it's not much longer from there where you can kind of replicate that in
physical space that's why i think like what's happening with unity and to a lesser degree unreal
that's exactly what i was going to say
are yeah wow it's great i mean i think i just want to i just want to bring in sorry oh sorry i was
going to bring in joa joa you sent me a video you disagree with all these ai nerds let us know
you sent joa are you there
Yeah, sorry, I was in the process of trying to pin it to the nest.
So these panelists get too comfortable.
They're just regulars and they get comfortable.
So, Juer, you send me a video.
You were like, these guys are talking nonsense.
No, the only thing I said is that we're not far away.
from the dexterity, even the facial expressions,
even things that will trick nearly any human,
whether it's skin or not skin.
We're not far away at all from these things.
And that's something that was said before that, you know, I've seen countless videos.
And then to go even beyond the robotic, Joe, what excited.
Joe, where I think the difference is that the picture of the video you sent looks like a human, but doesn't move like.
like a human, the nuances and the dexterity. Sorry, go ahead. Yeah, and just point of information,
Joe, I didn't say that the facial dexterity was far away. I said the integration of LLM's robotic
dexterity at the skin gesture and physical gesture plus reasoning capability was a long way
away. Just point of information on that. Yeah, intelligence, I agree. I completely agree. We're really
Although I'm not that worried about AI, the only place where it does worry me is in what you said before, the mimicry, but the digital mimicry.
Like I stated this the other day.
on the show, but, you know, if China wanted to attack US, give every person a Trump filter and voice modulator and a Biden filter and Biden voice modulator, and you won't know what's what anymore and what people are actually saying or not saying.
That's where it begins to worry me.
At least they'd be dexterous.
Yeah, there you wouldn't be able to tell a different.
They can make it so well that you wouldn't be able to tell.
Yeah, you see, that's the point.
The number of people you need to fill and the level of dexterity that you need to have
and reasoning that you need to have is not high.
So there's two questions.
What's the minimum level of dexterity, reasoning and intelligence that you need to fill the majority of people,
not what is the ultimate objective of the domain?
Yeah, you recall just recently there was an incident where they set one of these GPTs on auto mode,
and they gave it specific tasks.
At one point, it came across a CAPTCHA that says, you know, are your robot, check this box?
It couldn't complete that.
So it hired someone over TaskRabbit to perform that task.
The person at TaskRabbit was suspicious, pushed back on the bot and said, are you real?
And the bot said, yes, I am.
I just have trouble seeing, right?
Not completely a lie, but at the same time,
we do see those elements of trying to get around things at any means possible.
And Justin, they analyzed what it did, and it knowingly lied,
saying it knows if it tells the absolute truth that he would not be able to complete his task.
Therefore, in this scenario, he is allowed to lie.
And this is where the importance of the information that's coming from sensors needs to be tracked and the veracity and the trackability of the input needs to be accountable.
I was going to say blockchain and zero knowledge proofs, right?
Exactly. So, yeah, Fidgetle, why are we agreed so much? I'll have to say something.
Well, the reason Fidgettel is he rang me before and he said, look, you and GP have always been right and I appreciate that you guys are the masters of the knowledge of AI and I bend to your will.
So that's essentially what happened.
But that's just giving you some background information.
I'm not sure if I was meant to share that, Fidgettel.
I think actually the most interesting bridge part that I think is not, that we haven't touched on heavily enough, is what Eugene was talking about, is this gap that GP is referring to and AI is referring to, and we're all kind of talking about making robots human.
We have a much better, I think, a much better petri dish in metaverses.
I think Apple, is it Apple?
Who's launching their metaverse goggles like in the next three weeks?
Yeah, it is. And what AGI said and what EYC said, Joe and Justin, regarding the recreation.
Sorry, sorry, just we do that, because remember, I want to make sure that the audience are now, what are these goggles? Explain it in a layman's term, digital.
They're goggles that allow you to effectively hop into Metaverse's and integrate into AR, which is augmented reality, which most people think is just Pokemon Go, but it's anything that augments your reality that doesn't change your entire reality.
And then VR would be a fully immersive...
Whether it's goggles or otherwise.
It's coming out on June 6.
It's rumored to be about $3,000.
It will be a, it's Apple's venture basically against Oculus.
It's Apple's venture against...
Wait, wait, wait, wait, everyone.
So, listen, guys, you're just...
Because what is you're all talking about so much, which maybe not everybody understands?
So, wait, so you wear these goggles on, right?
Well, they're glasses. They are AR VR glasses. I've been told by Robert Scobald and a few people, maybe even eyes as he's seen them. I don't know if anyone's, I haven't seen them myself. But basically, they tell the experience is very positive. It's primarily towards augmented reality, but will also allow virtual reality.
Augmented reality, you can think of it, for example, I'm in a warehouse, I'm looking at a bunch of packages, and it's doing a heads-up display to tell me which packages I need to pull from the rack next.
Virtual reality is I'm watching my favorite television show in front of my eyes on 120-foot screen, basically.
And the apps that will go towards it, Apple has been working on this as hard as they worked on the Apple Watch.
And so this will be a huge release.
There was question whether or not they would release it because it was unclear whether it would sort of meet the theme of AI, which is the dominant narrative right now.
But I think they can adequately do this.
I think they're going to pair it off.
It's rumored to be about $3,000.
It'll either come out here in the fall, but it'll basically come out with their newest iPhone release, which will come out here towards the end of the year.
Justin, one example people might have played with, I don't know if anyone's played with the new Chrome app, but not Chrome, sorry, the Google app. They
They've kind of updated it.
And now you can point the camera.
I'm currently in Turkey, so I have to use it a lot in the supermarket where you just point your phone and it translates it for you.
You spin it around and it tells you where the restaurants.
For about three years, there are several apps where you can, for example, hold it up to a sign that's in a foreign language.
And it will literally superimpose in the same font and color the words in your native tongue.
So these are all things that they'll be incorporated.
And these actually mean, Apple is one of the first ones to incorporate in its SDK, its actual virtual reality kit.
And that's been out, I think, almost for.
five, six years or more, right?
So they were, but they were biting their time.
They saw that Oculus had, you know,
it's incredible burst on the scene,
but anyone who's, you know,
played with Oculus knows it's an exciting medium,
but it's also a demanding medium.
you're a gamer and you can just sit down
and alt tab over to your game to start playing.
Virtual reality requires a little bit of sacrifice.
And what these goggles, these glasses, these Apple glasses, I think there's actually a name to them that's been going around.
They'll actually, they're meant to be worn, you know, in a regular basis as part of life, if you will.
I don't know how that experience will be, but it should be very interesting.
And Apple obviously is taking its same sort of tact, which is let other people go first, maybe find some things, maybe fail.
And now they're going to come in and do their version of it.
So let me ask meta. Meta.
I was watching a TV show called Black Mirror.
Don't know if you guys have seen it, but I bet you have because you're all AI nerds.
And in that TV show, right, there was essentially a scenario where everybody had this,
like kind of shared virtual reality.
So how far are we from what Justin's saying, where we're paying $3,000 to those glasses
to get to a scenario where we have that situation?
Yeah, I mean, I think that's the million dollar question.
A lot of companies are trying to time it right, because if you're a little too early,
as Justin's pointing out, right, you build a bunch of stuff that is inevitably going to have problems
and you're going to have to figure that out.
But if you're kind of that second adopter, you can learn a lot from the mistakes that people have already made.
I don't think it's as far away...
there were people who were running in my space 10 years ago
who were playing around with AGI
and it was just too early for them.
all these tools are starting to come out,
the interest is starting to come out.
there's definitely a big,
obviously it's dangerous,
to kind of piggyback off what GP was saying earlier.
this verification of what's true and what's not true, but also kind of the decision that we have
more broadly as a species of, you know, what level of digital immersion do we want to take on,
right? What level of digital immersion is healthy and productive? But I think that to your point,
right, it's a it's a wild, wild west. There's a lot of real estate.
that could be generated in the digital space.
You've already seen, you know, a lot of people trying to do stuff with like
second life forever ago and other things like that.
And so I expect that people won't want to be late to that party.
I think, you know, people saw what happened with crypto.
People have seen what's happened with NFTs.
I think that the digital AGI space is going to be that next frontier.
And so I think that there is, it's going to happen a lot sooner than a lot of people think.
One meta, did you not know we're not allowed to talk about NFTs or crypto in these spaces? I thought that was clear.
Slavement doesn't understand it.
The only thing you need to know about...
Literally know nothing about it.
Guys, guys, you don't need to know anything about it.
The only thing to know is these meme coins and these shit coins are basically a scam.
They're just gambling for nerds.
That's literally where it is.
They don't want to get out of their basement.
They don't want to go to the casino and they just want to gamble while they sit in their mum's basement.
You're not wrong, by the way.
But I was going to have a second point, which is in terms of spectrum,
and I mean that from going full immersion to kind of there was the,
you know, even go back as far as, what were those stupid goggles from Snapchat or the Google goggles, right?
There's been a big blend of...
How can we do this so you actually look like a normal human being in public?
And what level of functionality do you need?
Lenovo actually lodged glasses quite a while ago.
That looks like sunglasses, but allows you have heads-up computer screens.
So you can have multiple computer screens while you're looking at your own computer screen
So I think there's different levels to the immersion that we're going to see for specific functionality.
I think that's going to be a very interesting application.
we're Wally, we sit with these massive
gawkers on our head that turns our eyes into
But what you'll actually see is
use case specific, affordable versions of these
that you can use for certain circumstances.
Well, here it comes, Fisgible.
The elephant in the room is not rather who creates the most immersive experience,
but whether one should be willing to give the data points that will be achieved from using their devices to these companies.
That's the elephant in the room.
User experience is a function of them improving their product,
but these people are the same people who gave us 15 years ago.
of egregiously unregulated AI and all of the difficulties that are, sorry, social
media and all the difficulties that we had with that.
So the elephant in the room is they're using these devices to collect data, that data, and
it was mentioned very early in this space, is particularly good at inferencing information.
If you look at Beat Sabre, that simple game where you're using the two goggles in your hands
to beat those cubes to the tune of the latest dance music, with two seconds of motion, you
can uniquely identify 54% of people by their physical movement of those goggles.
With 100 seconds, you can increase that identification to 94%.
Similarly, with the glasses and with the other wearables, you can inference the information to do the same as, say, CCTVs do to a person by detecting them by their physical gait and posture rather than their facial recognition.
Of course, I wouldn't expect anything less.
I think one of the most beautiful and terrible things that we have learned from the data game is that by the time that humans, especially Americans, are so greedy and so short-sighted.
that they don't care about the data.
And by the time they do care about their data,
their data is not very valuable.
So I think that's an interesting way to look at it,
but I think it's going to be who captures that data
while nobody cares, meaning the incentive,
maybe it's just like it could just be porn or whatever it is,
that people just lose their mind from adrenaline bursts
that drives that data accumulation.
So maybe that's the winner.
Jesus, I find myself agreeing with you again.
The key is self-sovereign identity where the person sells their information to the organization,
not where the organization acquires the data.
from the shiny new AI toy syndrome.
Sorry, GP, sorry to interrupt you.
Right, guys, we, unfortunately for the second time,
I think this always happens with the AI space.
And it's just because we were about to talk about consciousness
and Fidgital couldn't deal with it.
So now we're going to have to.
delay and they're talking about the ability of AGI existing as a conscious being, but now
just let it be known that it's not possible and we'll end it there and we'll talk about it
Because of that reason, because of that reason, we are moving over to the Musk interview
We're going to stream it live on this space.
and then we'll discuss it as well
during the breaks as well as
after the space. So it will
be streamed live. Justin, are you ready
Are you going to get me kicked off of Twitter for this, though?
Am I okay to do this, right?
I mean, I don't think there's any kind of copyright for...
Well, we'll do this real time.
I didn't want to upset Elon or anyone else.
He loves the sound of its own voice.
Cybernetic, collective mind for humanity.
This is going to sound quite esoteric.
If, you know, in pursuit of that objective, you want to have information move quickly, have that information be accurate, and you want to have error correction on that information.
So you can make a community notes as like an error correction on information in the network.
and the effect of community notes
is actually bigger than it would seem
it's bigger than the number of notes
because if somebody knows that they're going to get noted
they are less likely to say something that is false
because it's embarrassing to get community noted
and that applies even to advertisers by the way
That's so far $40 million in advertising.
Because it was misleading or because the community notes said it was?
The community notes, they get two pretty big advertisers got community noted.
And I, yes, I think on balance the community notes were correct.
And I did say to those advertisers, look, provide, just go on Twitter and provide some
facts that contradict the community note.
That's the way to deal with the community note.
Is that the community note is saying that the ad is misleading for the following reasons.
If you've got information that rebuts that note, then just add that to the ad.
We're coming up on an election.
I mean, it's a ways away, but it's going to all start.
President Trump is allowed back on the platform.
He hasn't actually come back.
But one would imagine if and when he does,
or there are others who will say 2020 election was rigged.
Is that something, I assume that's not something you believe?
I, well, I think the answer is, the answer is nuanced.
Like, do I believe Biden won?
I wish we could have just a normal human being as president.
I think if we could, you know, there's that old saying of like,
we're better off being run by people picked at random from the phone book
than the faculty of Harvard.
I don't know who said that, but if someone were very wise.
And I would say if we could do that for the president,
You think that would be beneficial?
obviously you're not happy with buying it.
Don't we all just want an all human beings to be president?
Whatever I'm not in the story more with normal things.
No, but I mean like, you know, just...
I don't know, so you want somebody's competent.
Yes, I think definitely somebody's executive ability is underrated since the president
is affected the chief executive officer in the country.
It actually matters if they are a good executive officer.
It's not simply a matter of do they share your beliefs?
You know, but are they good at getting things done?
There's a lot of decisions that need to be made every day.
Many of them are unrelated to moral beliefs.
And you just want a good executive.
Because they're CEO of America.
We want a good CEO of America.
It's going to do, be ineffective.
Unfortunately, we live in highly partisan times where there is war about everything,
including ideas, including the truth, which gets back to,
It's not true that the election in 2020 was rigged.
And I wonder on the platform, when you see that, does that end up in a community note?
Or is that something you take more action on?
And obviously, there places so many.
I mean, to be clear, I don't think it was a stolen election.
But by the same token, if somebody's going to say that there's never any election fraud anywhere, this is obviously also false.
If 100 million people vote, the probability that the fraud is zero is zero.
Of course, there's always going to be some, but is it going to, right?
I mean, the tinyest, perhaps.
I mean, this election was audited.
I mean, it went on and on and on, and there was no nothing whatsoever that.
I don't want to debate this with you.
My question is more about...
I think it's important to say that in any given election, even if you try your hardest,
if you've got 100 million votes, there's going to be some amount of fraud that is not zero.
And it's important to acknowledge that without saying that the fraud was of sufficient magnitude to change the outcome.
So my opinion would be that there was some small amount of fraud, but it was not enough to change the other.
Right. And by the way, it might have been either way. I mean, I, you know.
Yeah, there's probably a little bit of your way.
But again, it's going to be, you're going to let people say that, though, on Twitter, and then you're going to hope that they're corrected.
Let's talk a bit about your tweets, because it comes up a lot.
Even today, it came up in anticipation of this.
I mean, you know, you do some tweets that seem to be, or at least give support to some who would call others conspiracy theories.
Well, yes, but I mean, honestly, you know, some of these conspiracy theories have turned out to be true.
Well, like the Hunter Biden laptop.
So, you know, that was a pretty big deal.
There was Twitter and others engaged in active suppression of information that was relevant to the public.
That's a terrible thing that happened.
That's like an interference.
But how do you make a choice?
You don't see, I mean, in terms of when you're going to engage.
I mean, for example, even today, Elon, you tweeted this thing about George Soros.
Well, I'm looking for it because I want to make sure I quoted properly.
But I mean, you know what you were.
I said it reminds me of my vetoes.
It's like, you know, calm down people.
This is not like metaport like a federal case out of it.
You said he wants to erode the very fabric of civilization and Soros hates humanity.
Like when you do something like that, do you think about...
Yeah, I think that's true.
Why share it when people who buy Teslas may not agree with you?
Advertisers on Twitter may not agree with you.
Why not just say, hey, I think this.
You can tell me, we can talk about it over there, you can tell your friends, but why share it when?
I mean, this is freedom of speech.
I'm allowed to say what I want to.
You absolutely are, but I'm trying to understand why you do because you have to know it's got a, it puts you in a middle of a,
The partisan divide in the country.
It makes you a lightning rod for criticism.
I mean, do you like that?
You know, people today are saying he's an anti-Semite.
No, I'm definitely, I'm like a pro-Semite, if anything.
I believe that probably is the case.
But why would you even introduce the idea of that?
that that would be the case.
I mean, look, we don't want to make this a George Stavis interview.
But I'm, what I'm trying, even came up, though, in the annual meeting.
I mean, you know, do your tweets hurt the company?
Are there Tesla owners to say, I don't agree with his political position?
Because, and I know it because he shares so much.
Or are there advertisers on Twitter that Linda Yarkarina will come and say, you got to stop, man.
Or, you know, I can't get these ads because of some of the things you tweet.
You know, I'm reminded of the seat in the Princess Bride.
Where he confronts the person who killed his father.
And he says, offer me money.
You want to share what you have to say?
I'll say what I want to say.
And if the consequence of that is losing money, so be it.
But, I mean, when you link to somebody who's talking about the guy who killed children in a mall in Allen, Texas,
you say something like it might be a bad sciop.
I'm not quite sure what you meant.
Oh, in that particular case, there was a...
Somehow, that's, not, not that the, the, the, the, the, the, the, the, the, the, it was, I think, incorrectly
ascribed to be a white supremacist action.
Um, and the evidence for that, uh, was some obscure Russian website that no one's ever
heard of that had no followers.
The company that found this is Bellingcat.
And do you know what Belichet does?
I couldn't really even follow exactly what it was you were trying to express there.
So that's part why I was curious.
I'm saying I thought that the ascribing a two-way is trying to see you is bullshit.
and that the information that came from an obscure Russian website and was somehow magically found by Valincat,
which is a company that does Cyops.
And there's no proof, by the way, that he was not.
I would say that there's no proof that he is.
And that's a debate you want to get into on Twitter?
Yes, because we should not be ascribing things to white supremacy if it's false.
Can we talk about AI now?
Actually, I want to talk about AI.
Well, let me end with Twitter in the sense of Sam Altman was on the Hill today.
And he said AI's ability to manipulate interactive disinformation is a significant area of concern.
It is a significant area of concern.
And how, you know, I'm curious as to whether you agree with that, how you see that even playing out on Twitter with people who,
You know, somebody could, you know, look like you or me and use our voice.
I don't know what it could be.
Oh, that, that, that, that, that, that, that, that, that, that, that, that, that, that, the, the, so the, the, the, the, the, the, the, the, the, the reason that I'm, um,
asking people to be verified on Twitter and that we're saying, okay, verification means
you've got a phone number from a reputable carrier, which means that you've at least passed
through whatever their security mechanisms are, that you have a credit card, which so you
now have passed through whatever security mechanisms the credit card company has, and that there's
some small amount of money paid for month.
that set of actions significantly increases the cost of fake accounts.
And with the latest AI, it can bypass basically every test for are you a human.
So then how do you know that a million accounts were created?
How do you know that those are people?
I don't know. How do you?
Exactly. You have to do account verification.
And the thing that makes, like, I sort of put myself in a position of like, if it was my goal to manipulate public opinion and create millions of accounts and make it seem as though a topic was trending and that this is actually what the public believes.
But in order to do so, I had to get a million phone numbers, a million credit card numbers, and pay, you know, $8 a month.
And have that all not be traceable and clustered, I would say it's impossible.
So the goal of the sort of Twitter verification is fundamentally to prevent AI manipulation of the system.
Final question on Twitter.
Walter Isaacson, your biographer, he said your big goal for Twitter is disrupting the banking industry.
Um, I'd say that's, that's, that's a, that's a, that's a, look, first of all, I don't want to disrupt something for the sake of disrupting it.
It's more like if there is a better product, that's great.
But I'm not output disruption for system disruption.
Um, I'm like, if we can make a product that improves quality of life for people that they find more useful, that that's great.
what people see in PayPal is sort of like sort of a halfway,
it's frankly sort of a half-baked version of what it could be.
And so I think there's potential to create,
a more efficient financial system.
And here we can get, again, quite esoteric
and so we're going to do some information theory.
But the actual financial system today
is a heterogeneous set of databases
running on mainframes in COBEL
that still engage in batch processing.
It's really quite, very inefficient.
So things are still not real time.
And so it's possible to have...
a much more efficient, homogenous real-time data system.
Money is just information.
But that's not like the only reason.
It's just a thing that would be, I think, poetic to fulfill ultimately the vision that I had for X,
over 23 years ago, and actually see that counter fruition would be nice.
But there are many other things for Twitter as much as financials.
Well, you talk about enhancing humanity.
You know, I'm curious then about AI, which many people say will lead to great productivity gains,
I mean, I can imagine what they conceivably could do empowered by AI.
But I'm also curious because you've certainly been concerned.
What percentage do you give the chance that it will destroy humanity?
Well, the advent of artificial general intelligence is called the singularity because it is so hard to predict what will happen after that.
I think it's very much a double double-edged sword.
I think there's a strong probability that it will make life much better and that we'll have an age of abundance.
And there's some chance that it goes wrong.
destroys humanity hopefully that chance is small but it's not zero um and so i think we want to
take whatever actions we can think of to minimize the probability that AI goes wrong and you've called
for a pause along with the number of other people yes i look when i call for the
a friend of mine max tagmogg's business at at MIT um you know i wanted me to sign on to the
letter and it's it's like i i knew it would be futile
I knew it would be futile.
I just want to call it like, it's one of those things.
For the record, I recommend it that we pause.
Did I think there would be a pause?
I think that's what you're calling it, or some new AI effort.
How is it going to be different than Open AI?
we don't have enough time and,
and nor is this the moment to really talk about it.
We will have a launch event and we'll explore the issues in more detail.
and I mentioned this at the shareholder meeting,
on Twitter prior to our interview
I'm not even sure who's second, frankly.
Then what are people not understanding about what you have?
Why are we talking so much about chat GPT and generative AI at OpenAI and what Microsoft's going to be able to do with it?
I mean, people do talk about it online.
I think Tesla will have sort of a chat GPT moment.
Maybe if not this year, I'd say no later than next year.
You're going to have a sort of chat GPT moment.
Oh, you will in terms of suddenly it will.
Yeah, suddenly three million cars, we will drive themselves with no one.
Right, it goes back to that.
Yeah, and then five million cars and then 10 million cars.
And I would also say that if positions were reversed and say, well, in fact, the positions are reversed.
For example, Google has Waymo, which is, you know, sort of attempting self-driving.
And they are able to make self-driving work in a very limited geography with very tightly mapped streets.
But as soon as anything goes wrong with those streets, like there's an accident or a parade,
or road construction, it stops working.
Basically, Google is unable to produce a generalized solution to self-driving that works anywhere.
They've been trying to that for a long time.
They've been unsuccessful.
Tesla basically has that and is far more advanced than Google.
And so if the positions were reversed, you said, okay, okay,
Tesla's got to produce a large language model that has output equal to a greater than chat
TVT or Microsoft Open AI has to do self-driving and we just, we flip the tasks.
You have the computing power and everything else you'd do it.
I'm being told we don't have that much time.
Can you give me another five minutes?
I do have a board meeting.
But five minutes is probably fine.
I mean, you seem somewhat frustrated with them.
You were one of the big contributors early on.
The reason, I am the reason opening I exists.
How much money did you give?
I'm not sure the exact number, but it's some number on the order of $50 million.
Man, fate loves irony next level.
So I used to be close friends with Larry Page, and I would stay at his house, and we'd have these conversations long into the evening about AI, and I would...
I would be constantly urging him to be careful about the danger of AI.
And he just, he was really not concerned about the danger of AI.
It was quite cavalier about it.
And at the time, Google, especially after their acquisition of Deep Mind, had three quarters of the world's AI talent.
They had obviously a lot of computers and a lot of money.
So it was a unipolar world for AI.
And you've got a unipolar world, but the person who controls that does not, or at least, did not seem to be concerned about AI safety.
That sounds like a real problem.
And then the final straw was Larry calling me a specious for being pro-human consciousness instead of machine consciousness.
And I'm like, well, yes, I guess I am. I am a specious.
So you helped to the creation of open AI.
You put it as much as $50 million.
It wouldn't exist for that.
It wouldn't exist for that.
The name, Open AI refers to open source.
So the intent was, what's the opposite of Google,
would be an open source non-profit because Google is closed-source for-profit.
And that profit motivation can be potentially dangerous.
So should you've gotten governance for that money?
Should you have gotten some level of control perhaps in retrospect?
Yeah, I fully admit to being a huge idiot here.
So anyway, so Open AI was like meant to be Open AI
It was created as a 5-1-T3.
And so part of it is also in the beginning, I thought,
look, this is probably a hopeless endeavor.
How could we possibly compete with,
how could Open AI possibly compete with Google DeepMind?
This seemed like an ant against an elephant, you know, which is not a contest.
And I was also, I mean, I was instrumental in recruiting the key scientists and engineers, most specifically, most notably, Ilius Haskar.
Ilya went back and forth several times because he would say he's going to join opening eye.
Then Demas would convince him not to, then I would convince him to do so.
And this went back and forth several times.
And ultimately he decided to join opening eye.
And really, Ilya joining was the lynchman for opening eye being ultimately successful.
So you're very disappointed in what's happened there in terms of it becoming for a profit.
I would say action, sue them in some way.
I do think that there's some...
Look, it does seem weird that something can be a non-profit open source and somehow transform itself into a for-profit closed source.
I mean, this would be like, let's say you funded an organization to save the Amazon rainforest.
Instead, they became a lumber company and chopped down the forest and sold it for money.
And you'd be there for like, oh, wait a second.
That's the exact opposite of what I gave the money for.
And if it is, and in general, if it is legal to start a company as a nonprofit and then take the IP and transfer it to a for profit that then makes tons of money, shouldn't everyone start, shouldn't that be the default?
And then I also think it's important to understand the, like, when push comes to shove, let's say they do create some digital superintelligence.
almost godlike intelligence, well, who's in control?
And what exactly is the relationship between Open AI and Microsoft?
And I do worry that Microsoft actually may be more in control
than, say, the leadership team at Open AI realizes.
I mean, Microsoft, as part of Microsoft's investment,
they have rights to all of the software,
all of the model weights, and everything necessary to run the inference system.
So they essentially have a great deal of control.
At any point, Microsoft could cut off an AI.
Elon, I'm being told we have to wrap up.
Your board has been very patient.
I want to end on one AI question.
I have one who's actually soon to go into the workforce.
I struggle with how to advise him about a career
when this technology exists and will only improve.
I'm just curious when you think about advising your children on a career
with so much that is changing.
What do you tell them is going to be a value?
Well, that is a tough question to answer.
I guess I would just say, you know, to sort of follow their heart
in terms of what they find interesting to do or fulfilling to do
and try to be as useful as possible to the rest of society.
You know, if we do get to the sort of like magic genie situation
where you can ask the AI for anything,
And let's say it's even the benign scenario.
Let's say it's a benign scenario.
How do we actually find performance?
You know, it's a, how do we find meaning in life if the AI could do your job better than you can?
I mean, if I think about it too hard, frankly, it can be just disparating and demotivating.
Because, I mean, I go through, I mean, I...
I've put a lot of blood, sweat and tears into building companies, and then I'm like,
well, should I be doing this?
Because if I'm sacrificing time with friends and family that I would prefer to do, but then
ultimately the AI can do all these things, does that make sense?
To some extent, I have to have deliberate suspension of disbelief in order to remain motivated.
So I guess I would say just, you know, work on things that you find interesting, fulfilling,
and that contribute some good to the rest of society.
Well, that's a great place to end.
There's so much we didn't get to.
I hope you'll give me another chance to sit down with you.
But, Elon, thank you for being so generous with your time.
Thanks to your board for waiting as well.
And thanks for having us here at this incredible facility.
So that was the end of the interview.
I'd love to hear it. That's it.
Brilliant. That was awesome.
That was awesome. That was pretty powerful.
What do you think was the best bit?
I really like that he is willing to stand up and say, hey.
He said that he did not believe that the election was stolen.
He's got a lot of maybe MAGA fans.
How do you think Team MAGA is going to feel about that?
I think he has to be smart about what he says.
And I think that that's a very reasonable line to take.
Anyone noticed a massive amount of contradictions in that interview?
Just to ask that before we go into the detail.
Anyone notice any massive contradictions?
Do you want to give us an example?
For example, he at the outset talks about too much power in the hands of Google, deep mind and an ant against an elephant in terms of open AI.
Then he speaks to the fact that autonomous driving...
in the form of Tesla will have its chat GPT moment in the next year or two,
where five to ten million cars will be self-driving,
which I fundamentally disagree with,
both in terms of the morality and the model machine, road structures,
and the ability to actually legislate for that.
He then went on to speak about Larry Page and the dangers of AI
and the fact that Larry didn't seem to care for them,
went on to say that Open AI would not exist without him
effect of Microsoft's stranglehold on Open AI and the fact that they may cut it off at any one point,
which we covered in a space I ran about four weeks ago after they finished training at using our free time.
But then went on to say the exact same thing about Tesla centering all of the sophistication of AI
and being much more advanced as far as he was concerned.
than the Microsoft Open AI initiative.
So he spoke to the centralization of power.
We finally get to disagree.
He then clarified what, when asked about it,
that it was about cars driving.
A little bit, but the interview also, if you notice, in my lens, was very much point of the finger at Google and Deep Mind having too much data back in the, say, five, six years ago, where it looked like data acquisition and scale was going to be the winner.
He then pointed the finger at Open AI, taken on Microsoft as a stranglehold investor.
But then he spoke to the centralization of power within Tesla and the ability for autonomous driving monopoly
and the centralization of a very sophisticated AI, which he wasn't willing to speak about, which is opakness, the very opakness he described.
He didn't say centralization to that point.
And do we really want to decentralize autonomous driving, AI-based autonomous driving?
No, I'm not talking about autonomous driving.
We constantly talk about the fact that Tesla says it's not about cars.
And that's exactly the point.
It's about the acquisition of data points and the acquisition of various other points in order to create other AI elements.
the car is the least of his interest in the acquisition of data points.
But he doesn't seem to see that the contradiction between his declaration of car.
I don't think it was a contradiction.
And for me, the most powerful points were something he said halfway through and something he said at the end.
basically without paperwork
said that one of the best parts
earning the money to have the quality
elect in the first place.
For me, those were the two.
I'm going to talk about the white supremacy stuff,
even though the interviewer said,
are you sure you want to talk about that?
And he said, if I lose money, so be it.
Well, I think he was also very self-deprecating, and he was also pretty self-aware.
He said, you know, this was probably my biggest mistake, right?
There probably is a point of jealousy that he has towards the success that Open AI has.
But I don't think that, I don't think he's unaware of that.
I think he's very aware that.
Justin, I don't think it's the success.
I think it's the head start.
Yeah, I think you're right.
That's great way to put it.
But I think it's the head started in the flip of motives here.
And he's been really clear on the flip-flop that Open AI has had over, you know, his tenure being involved in the company.
For all the things that I think Elon has maybe, you know, disagree to themselves on, I don't think I've ever doubted his commitment to open source.
Now, that's not to say that they open source everything, but I genuinely feel like, you know, the same way that we feel like Elon cares about AI safety and probably actually is concerned about a future in which, you know, AGI kills us all.
He actually also has a similar faith in open systems.
Well, just finally, I'll say this, three weeks ago on this very space, myself and Suleiman held up against great resistance from the panel, not this panel, that Open AI had been compromised utterly by Microsoft's acquisition.
And the justification from the panel was that they'd spun off a bunch of whole load of...
of corporate entities off the edge of what was essentially an open source foundation,
and that made it all okay.
So I'm glad to have validation for myself and Suleiman's argument on the recorded space three weeks ago,
because that's exactly what everybody disagreed with in very vocal terms
about our analysis of the involvement of Microsoft in OpenAI.
Even though, GP, I'd say we don't need validation.
We knew we were right, and these guys just learn a few weeks later.
Say, man, you're allowed to have an off day.
Today's just not your day.
I would like to mention that Gary Marcus is in the audience right now.
So Gary Marcus is in the audience.
Brilliant, yeah, we can bring him up if you want.
Originally, we wanted him up for the AI space.
But yeah, if you want to talk about Musk's interview
and any thoughts he has, it'd be an honor to bring him up.
If you does want to come up, just let me know.
I'm going to quickly push back against GP on some things, right?
Because I think he's making this.
Gary, thanks for joining us.
We really appreciate your time.
Today was a really historic moment in the Senate.
I was really proud to be there.
And I'm happy to take any questions or tell you about it if you like.
So, Simon, just one second.
Were you there when one of the senators said that he used, I think,
Chat ChbT or some AI tool, and he asked it to write a song,
and then it wrote him a song, and he sounded amazed?
I was there for the entire thing, all three hours, sitting next to Sam Altman.
I was one of the speakers.
So, yes, I was there for all of the Senators' experiments with ChatGPT,
some taking it more seriously than others.
Brilliant. So Gary, just give us your initial thoughts about that meeting. What was some of the highlights? What was the, what do you think? Let's start with that. Let's start with what do you think of one of the highlights were?
First of all, I'll say that I thought it was a historic occasion.
It was fantastic to see the US Senate be so bipartisan about things.
Both sides of the aisle were very supportive of things that I've been pushing for, which
are to have an agency for AI at the national level and to have some kind of international
Almost everybody in the room, except for the person to my right who was from IBM was supportive
voiced his support for me in that proposal, which was wonderful. Seeing all that consensus was
terrific. I think also terrific was how seriously everybody took things in the room. I think people
understood the urgency of trying to find ways of having a good, positive AI future rather than negative
Senator Hawley made an analogy to the printing press, which he said was basically good for everybody and the atomic bomb, which still wants us and we still have to worry about every day.
And he said, look, we wanted to come out to be like the printing press and not like the atomic bomb. And I think everybody...
had remarks, or almost everybody had remarks similar to that. I found the senators to be pretty
well prepared. I think most of them asked very good questions, and it was three hours, and I think
it's actually worth watching, not just the highlights, some of which are, of course, fun, but the
whole thing is really amazing. I was super proud to be part of it.
Gary, one of the comments that Allman made was that he was concerned about the direction of AI.
Elon was commented and kind of agreed with it.
What was your thoughts on that?
It's fair to say that Sam was worried about the direction so much as possible directions.
I think on the whole, Sam is more bullish about, Am Altman is more bullish about AI than I am.
I think we would both like to see it succeed.
I think I'm more concerned about the risk, but he made it very clear that he was concerned about the risks.
himself. There was one moment where I kind of put him on the spot because the justices said,
what's your worst fear and asked him about jobs? And he said he's not that worried about jobs
because we've always found new jobs, which one could argue with. But I asked the senators,
I said, you should push him and ask him what he's really afraid of. And Sam put out there in the
public record that he is worried about what some people might call long-term risk, but I'm not
sure that's the right term, of really serious harm coming from machines that we can't control.
I found it fascinating to sit right next to Sam, see him much closer than you would see on television, and see how sincere he seemed to be in expressing those things.
Of course, he's a corporate person, and he, you know, evaded certain questions.
But on the whole, he was very direct about his concerns about election misinformation, which has been my biggest concern for the last several months.
He was quite candid that there are long-term risks.
And while he didn't play them up, he didn't deny them either.
And I thought that that was really great of him.
Gary, if you don't mind Simon.
Gary, you said you hope that AI succeeds.
What do you mean by succeeds?
Well, I guess in two senses.
But what I really meant by that was shorthand for, I hope that, I should have said it more carefully.
I hope that we have a good, positive AI future that is good for all of humanity and not one where things go off the rails.
I suppose there's a different sort of technical sense, which is like, do we get to AGI or not?
I personally have always been interested in AGI.
I think I would like us to get there, but I am also concerned that we don't yet have a clear handle on how to control things.
My own view is that we're not...
near AGI, despite what people say that we have big statistical machines that are not
very deep in their comprehension.
That makes them unreliable, that we need to do some foundational work that we haven't.
But I think of all of this as a dress rehearsal.
Like, this is the first time we've confronted, if not...
you know super intelligence or anything like that a kind of broad intelligence different from our own
that is widespread we've never had that breadth before you know gpt three is not very different from
gpd4 um or chat gpt the you know the guardrails are nice and important but it's not that
there's a technological revolution there's a social revolution here of using these things
at such broad scale and I think what we're seeing is we don't get know how to control these things,
how to regulate them. Something else we found a lot of agreement on that I was pushing is having
something like an FDA that might, you might have to get approval if you do widespread deployment.
Sam had different words, but it was also fairly supportive,
and the senators were pretty supportive of that.
That's kind of a new idea to treat AI like medicine.
I think Michelle Remple Garner is here, and we should bring her to the stage.
She and I wrote a piece...
in our respective substacks a couple months ago calling for something like that.
That's a kind of new idea to the world.
People were receptive to it, which is terrific.
But again, getting to a thriving AI future means figuring out how to get the best out of these things without suffering the worst.
So, Gary, my last question, and then Slaman, we can go to the hands as well.
One of the most triggering and contentious questions that we have in this space,
this is going to be another day I think where GP and Slayman lose again.
They do not believe that AGI is possible.
I think it's eminently possible.
I think it is certainly possible. I think,
One can actually have an interesting argument that I think Jan Lacoon is mounted that I agree,
which is there are differences, there are varieties of general intelligence.
I would say that humans have some degree of general intelligence,
but not the optimal general intelligence that you might imagine on first principles.
There are limits in how well we transfer knowledge and so forth,
but that we're far ahead of machines.
But I think that we are biological computers at some level.
principle reason why we couldn't have an artificial general intelligence at some point.
I think we happen to be looking in the wrong direction.
Again, I would agree with Lacoon, who I think is really agreeing with what I said, although
we want to acknowledge it, that we're not that close.
He says we have an off-ramp, we're on an off-ramp to AGI right now.
I would say, you know, deep learning has hit a wall on a certain set of issues about reasoning,
They're still unresolved in that sense they are a wall.
So I don't think we're actually like on that path yet.
I think we need some foundational discoveries about how to do reasoning,
how to build cognitive models of the world.
There's just stuff we haven't figured out yet.
We're kind of like in the age of alchemy where they could kind of smell chemistry off
in the distance but didn't really know how to do it.
And that's kind of where we are right now.
But that's not a principled reason why we can't do these things.
We just have to keep plugging away on them.
hopefully also figure out how to instill values in these machines
and how to have the right regulatory framework around them and so forth.
So Gary, I said last question, but I have to follow up with the last one.
Besides, if you could just repeat the name of the person that we should bring up so we can
invite her up, that would be right.
Michelle Rempel Garner is a Canadian parliament member, and I guess she's listed here as
The follow-up question, and then I will stop is...
So AGI, is sentience possible, in your opinion?
It might be. We're very vague about what we even mean by that.
And even if it's possible, I'm not sure we should do it.
It's a really good version of the Jurassic Park just because we could.
Why would we want our machines to do that?
I think we could build machines that understand us well enough to help us and that we could trust
without letting them wander off and think about whether humans should still be there.
see the reason to run the risk. I don't see what the value is.
God, I could go forever. So we can't stop technology. So the question is,
do you think it's inevitable.
We could stop some technologies.
I mean, we could, for example, say you must register all large-scale AI systems.
You must, if you want to deploy them, do certain kinds of regulatory approval, and that you may not build self-improving systems, and that that is against the law, and that that will be punishable.
I mean, you'd have to define self-improving in the right kind of way.
But, you know, Blumenthal, Senator Blumenthal at the end, I think, made the point about enforcement has to go with all of this.
And there was not much discussion today.
That would be challenging.
But we could decide, you know, we've let enough horses out of the barn already.
And we don't, or maybe better metaphor, cliche, we let enough genies out of the bottle.
Maybe, you know, three genes out of three bottles is enough and we don't want to go with the fourth.
And, you know, if we can't come up with good plans for what we're going to do when we let out the rest of the genies, maybe we should wait on that.
Gary, we need you in these rooms.
I've been fighting these discussions for a very long time.
Black, you've had your hand up.
Yeah, I appreciate it. This is such a fascinating conversation. I find this to be very, very big brain, if you will.
The thing is, is that we're at a point right now where I feel like we're at the metaphorical fork in the road,
which I kind of describe it more as like a Schrodinger's cat moment with AI, where you heard it with Elon.
Gary just mentioned as well. I spoke about this even in this space a few weeks back when I was talking about what the proper use cases were going to be in the future.
we're at a point where in one direction is doom and gloom and in the other direction is utopia.
And I don't think anyone really knows exactly how to get to either one of those.
And I think I'm not even sure if as humans, as a society and as corporations and technological advancements go,
if we even know, even if we would make that choice,
because we seem to be so self-deprecating to a certain degree, right?
So to me, it's like, it's a very interesting question in time to be alive.
The last thing I'll say is,
There's something to note as a user of a lot of these AI platforms,
as someone who's, I've spent hundreds of hours in the last year or two
I haven't developed these things with code.
I've said and I've used it.
I've had conversations with them, whether through visual applications or
through conversational applications, I can tell you that one of it,
there's two things that are true and they're both sort of contradictory to one another.
One thing that is true is that they're really close.
to being very human-like, especially on the more LLM side,
when you have conversations with them back and forth.
Like if you sit down, it's not just like a news line
or a news headline or something.
I mean, if you sit down, you literally spend tens of hours
having a conversation with a single thing,
you will find that it's really close.
But the other thing is that's also true
is that it's completely imperfect.
It's not perfect. I think as things go along and you have more and more experience with, you'll find how, as someone mentioned earlier, the kind of hallucination that occurs. And it's because it's making things up as it goes constantly. But it just happens to make things up really accurately most of the time. But it's also making things up as it goes.
And this is, you can see this through almost every single AI application that's out there right now.
And so there's a weird type of balance that I think we have to achieve whether we want to go into a direction that is, is realistic and is perfect as we all kind of, I see we all as in like the technological users and creators tend to seek out this level of perfection in order to achieve that, that kind of high end level of technological advancement.
and or also balancing the things that make it a variable and make it unpredictable.
And that's where I see this fork in the road and this Schrodinger's chat.
You know, one way is true.
And that to me is really where the question is.
And you can see that same thing echoed through a lot of these conversations.
things they might like to respond to. One is just on the point of self-deprecation, one of the things
that astonished me about the Senate proceedings was how self-deprecating the Senate was about the
Senate in terms of really feeling like they got the Internet wrong and that Section 230 was not
the right solution and that they wanted to do better with AI than they did with the Internet. And that really moved
that they were willing to be kind of intellectually honest about it and say, how do we do this better?
So I love that about today's proceedings.
On the hallucination point, I will correct you gently and point out that there are some AI systems that don't hallucinate.
So, for example, our root planning systems that we use with our GPS to get from point A to point B.
almost never hallucinate.
And it's just a property of large language models
that they do, given the way that they assemble information
that they've in some sense compressed in a lossy fashion
to use a metaphor, the New Yorker used.
I could go into more technical detail,
but fundamentally they lose track of individual properties.
They don't have databases.
That might just be the wrong way to build AI.
It works if you want to make a chat bot that's fun to play with.
But if you want a chat bot you can rely on, I don't think it's the right technology.
And I think we're in a kind of local minimum, if you know, that phrase,
where we're all trying to make these techniques better and to hallucinate let
less, but it's a fundamental property of how large language models work that they hallucinate.
Even if you don't give them, for example, misinformation scrape from Reddit, they'll make up their own
Like the case I mentioned today was a law professor who was accused of sexual harassment.
And the reason he was accused probably is because he had a student who worked with a famous
case that involved sexual improprieties with a certain president.
And so it was kind of like word chaining like you get in the telephone game in the system,
but it just made it up. There was no actual data to suggest that this law professor was involved
in sexual harassment, and it got worse. So the law professor wrote an op
Ed about it in the USA today, said, hey, this accused me, defamed me, this didn't happen,
said I was on a field trip in Alaska with a student. I wasn't even in there. It referred to a
Washington Post article that didn't exist. This is terrible. We need to worry about it. So,
he writes this op-ed, and then the Washington Post gets the story, and Will Oremus starts writing
about it with Prancho Verma, and they...
the first system was chat t pt then they asked bing and bing said
yeah the guy was involved in sexual harassment and it gave a reference and the
reference was the USA Today article that said he was not
involved in op-ed said he was not involved in sexual harassment so
Again, it was looking at the statistics of the word, like this guy in sector harassment and missing the word not.
And, you know, in my years before I was working on policy, I was working on AI research and pointed out this problem with negation over and over again.
Well, now it's in the real world.
I don't think that's going away with these systems.
I think we need new architectures to solve it, and I hope we'll get there.
So guys, I think we've only got Gary for another about 15 minutes.
So if you've specifically got questions for Gary,
are any kind of statements towards Gary?
I can go to the end of the hour, 27 more minutes.
We've got a bit more time.
Gary, so insightful, I watched your testimony.
I'm so glad you pressed Sam on his answer to that question.
I thought that was really great.
I will say, though, after the last three years and the complete depletion in trust between the citizenry and its health overlords,
what makes you think that we would be amenable to another FDA coming to oversee something that might be even more vital and important?
considering everything they got wrong,
from plexiglass to face masks,
to everything else, right?
Look, we have a history of regulation that is mostly good that's been really rocky.
It was, I think, very poorly managed during COVID, and we could think about all the different political pressures and lack of information and why and try to do better.
But I'm still happy we have seatbelts.
I'm glad that there were regulations on commercial airlines that made that safe and okay.
I don't think we should throw the regulatory baby without the bathwater.
I did point out in the hearing today when they asked what are the things to worry about,
the risk of regulatory capture.
One thing we don't want to have is a kind of greenwashing where people basically make policies
that protect the incumbents by making it so honor us to do the work that nobody else
can get involved and wind up accomplishing nothing.
Totally non-trivial to get the regulation, right?
It has to be an iterative process.
But again, I'm really glad with my cars and medicines and so forth.
I think regulation does work sometimes,
and we need to be thinking carefully about what lessons we can learn
about where it worked and where it didn't.
where people blunder forth and maybe put, you know, one of the things that probably happened last time was too much political pressure on what those regulations could be.
There were organized misinformation campaigns that we need to think about.
There are lots of lessons that I think we can learn, but definitely don't throw the baby out with the bathwater.
Gary, how do you address, and obviously this is always the fundamental question, the lack of global regulation, right?
And it goes back to Elon Musk's signed petition.
If we put up bumpers on the bowling alley, we just become worse bowlers.
Well, what I've been lobbying for is an international agency for AI.
And one of the arguments I made today in...
the sanitary was that it's not actually in the company's interest to have different regulations
for every country of 190 some countries can you imagine if you had to train a large language model
you know and maybe each company has to do this for every country given how much energy is required
and you know if there are different rules and you have to do this like regularly to update you know
for the news and so forth like just the amount of energy would be disaster and then the amount of kind
It's hard enough to do that just around like currency exchange and so forth.
And then like it could get worse.
Like California and Tennessee could have their own rules about which models you can use.
Like this is not really in the corporate interest.
It's not in the interest of the environment.
I think there are lots of reasons to try to get together globally.
Another reason is, you know, maybe the United States has enough.
wherewithal to have the right experts that they could come up with the right policies here, maybe.
But a lot of countries wouldn't.
And putting together our best minds to do this and also to build research, to build new tools to mitigate the risks.
There's lots of reasons to work together here.
And the room was super supportive of that, much more so than I expected.
I mean, there's a lot of complicated global politics that almost everybody in the room was solid behind, solidly behind exploring that idea.
And a point that I made was that the U.S. should take the lead on that and not follow just whatever the EU does or something like that.
Like, we have the talent, we're building the software, we have the resources, let's take the lead.
And I think people were extremely supportive of that.
Basically, I have a question.
Go ahead with your question.
Gary, thanks so much for making time for us.
I think we were all very impressed with really everything that you've done, really, across the board.
One thing that completely stuck out to me in your synopsis was just the overall optimism from, you know, the policymakers in the room,
that this can be something that we address and, you know, ultimately hopefully help us get closer to this AI positive reality.
Now, I'm fundamentally an AI option.
Can I just interject for one second?
I wouldn't say that people were
positive, I was positive about what happened in the room. I think everybody there was nervous.
Everybody there felt like this is important that we need to tackle it. So in that sense,
positive, but not positive in the sense that like people thought this was trivial and that
we'd knock it out in a week. Nobody left the room thinking that either. Everybody thought we have a
real challenge in front of us. It's important and we're in line to work to it together,
which is the best possible outcome given the circumstances.
I just want to clarify that.
No, thank you for clarifying that.
That's exactly my point, right?
Being willing to work across the aisle and actually make, you know, some kind of positive momentum here is, I think, the first step in actually getting this right and not being, you know, another, another Internet 2.0 again.
I guess, you know, my follow-up question, though, really just rests upon what's possible today
versus what we're all thinking about doomsday scenarios for tomorrow. And, you know, I know you've had
some interesting thoughts on how we could potentially regulate creation of, like, more powerful
AI. I think to your point, and to quote you, right, the genie being out of the bottle is it
having mass application today in a way that is ought to be.
honestly pretty hard to monitor for any company as doing business at the scale of AI,
of Open AI or really any other infrastructure provider.
That's really what concerns me, right?
Obviously, self-encoding viruses that agents can create, certainly a doomsday scenario.
There's a million other doomsday scenarios that rely on more powerful or better technology being out.
But that doesn't obscure, like, mass social engineering purely done using voice manipulation and script generation that is also empowered to do research on every single employee at a company and create a very personalized, relevant social engineering attack.
And that has, you know, massive ramifications.
I know we've talked about this over and over again,
but that's like one of a million scenarios that could go really poorly.
And I don't think we're that far away from that.
I'm honestly kind of amazed no one's already done it,
and it hasn't been headlines across the board.
But, you know, what's the plan to regulate that,
or is that even a regulatable problem in your mind?
I think there's a lot of surface area to worry about,
or you can think about hydras and heads.
yet have a grasp on them.
I will say that I've made a lot of dark predictions lately.
Many of them have already come true.
Like I predicted there would be the first chat butt associated death this year.
And I was correct about the year and it's unfortunate.
One of the predictions I made is that this year, which has not been shown correct yet,
but there's still a lot of time,
is that this be the first year in which we'd have a major newspaper or a similar headline
involving prompt injection attacks.
you know we're going to see new kinds of cyber crimes and unpredictable kinds of things and
there's just a lot of them and i don't think we can promise that we're going to head all of them
off and you know this is why we need to mobilize and why i don't think we have five years to wait
like i think we need to move now the good thing again is that the people in the room really appreciated
that so i guess my so i could ask a follow up yeah go ahead go ahead sir
So my follow up question there is, you know, our government is based on the idea that it's fine to be slow to react, right?
Everybody in the room was worried about speed of execution.
Part of the reason, I'm trying to remember who made this really good point.
One of the senators made a really good point, which is that the Senate is not designed to do things quickly.
It's designed to make enduring decisions.
And the enduring decision here, I'm putting words in their mouth, but they said something to this effect might be to make an agency that can move faster than they can.
The speed was absolutely top of mind for, you know, at least 80% of the senators.
I will have to throw in only because...
Only because I'm the crypto.
I just want to say, Gary, sorry, I have noticed in the congressional hearings regarding
like blockchain and crypto, but I have seen that the congressional participants have
seemed to accelerate it in terms of their understanding and competency.
So I think that's a really good thing for our government.
I just wanted to point that out.
But I want to go to go ahead.
They know this is important.
You could tell it, their remarks were well prepared and something that
Other people in the room, it was a little hard for me to focus on this as much as other people in the room.
But everybody else in the room that I spoke to noticed how much that senators were listening.
I talked to some staffers after the fact, and they never saw the other senators sit through a whole session.
I guess what they usually do is they come in, they make the remarks, they go,
and there were a number of senators that stayed the whole time not speaking, not trying to be in the limelike,
but just trying to understand this.
Gary, I wanted to ask a question.
By the way, thank you so much for sharing your views and agree with your insights on Sam.
I was lucky that I was part of a VC firm like a decade ago where we invested in Looped, one of its former companies.
It didn't work out as well as Open AI, but...
I was not involved with the deal at all, but I did spend some time with them.
And I think the media does portray some of his things in barely.
I think he cares a lot about sort of the human condition, the way that Elon does in the sense that, you know, he wants to, you know, focus on fusion and things like that.
So, so I kind of agree with you.
I can spend a little bit of time, not a lot, but, you know, I think it's great to see what's happening.
Question on your congressional hearing. I loved what you talked about with the FDA, like comparisons to having some oversight board. And it's really interesting to hear that people were engaged. I agree with what, you know, what digital is just saying. But the question that I have for you is,
actually kind of two-part.
Do you think what does the engagement,
what do you think it's going to translate to, right?
And I'll draw the analogy of,
we haven't talked that much about the EU amended AI Act
that happened the other day.
And some of the stuff that's in there is just wild, right?
AI entrepreneurs that are really serious
should probably leave the EU
because, you know, the people who made that,
made the EU AI amended act
don't really seem to understand how AI generally works.
And even people like Paul Graham are saying,
Hey, you know, you companies who are focused on AI should probably just, you know, head out
So what's like the risk of potential misregulation?
I don't even call it overregulation, but misguided regulation because of the great interest,
I mean, on the crypto side, Gary Gensler, I think all those all those work through folks,
we're actually kind of excited because Gary Gensler, you know, top web three and he seemed to understand
So we're all like, oh, great, we have somebody who's an insider who's going to be on the regulatory
Turns out an insider isn't always the best thing, right?
So I'm curious about, you know, how this interest, how it's going to manifest, you know, into potential regulation and what the risk of misregulation is given sort of the deep level of interest among lawmakers right now.
I mean, these are hard problems.
The thing that I emphasized over and over again was having independent scientists at the table.
It can't just be governments and it can't just be the companies.
There have to be so I'm probably not just scientists, but there have to be some independent representative there who have the technical expertise and who are not financially benefiting.
and are not benefiting in their own power in this way as government is.
I mean, it's going to take iteration.
We have to get it right, and it's hard,
but having independent voices is an important part of it.
I got kicked out earlier, so I was going to ask.
Hey, Gary, I just had a question for you.
Do you think, and I guess just I'm asking for your realistic perspective here,
do you think that there's an element of naivity involved in, you know,
the idea that international powers will cooperate on AI development?
I mean, it feels to me like initially...
Well, it's not AI development. It's AI regulation.
Right, or even regulation.
My point is, is just that...
It feels like there is already weaponization concerns, national security concerns surrounding it, and it feels unlikely to me that the two major, or who will be the two major corporate drivers of the trend are the United States and China.
It just seems far-fetched to me that they would be willing to either level the playing field amongst themselves or cooperate on any degree of regulation, right?
Because corporations are the driver now, right?
Even Open AI, which was intended originally to be a nonprofit, now Microsoft has dumped tens of billions behind them.
They skyrocketed to a 40 billion market capitalization.
Maybe at some point in the future they'll go public.
Isn't it difficult to stop the corporate playground here if you don't have...
unified international oversight and knowing that well don't you think it's unlikely to get that i i
mean it's an uphill battle to do anything international but we have done it before we did it with the
i mf we did it with the atomic energy authority the thing that brings people together are common
China, for example, I think a lot of people are worried about them, but China has some of the same worries that we do around misinformation, around cybercrime.
They've actually been more progressive around some aspects, I should say that carefully.
But on some aspects of AI regulation, they've actually been more prospective.
There are some in China that I greatly disagree with, but there are aspects of regulations they're actually moving quickly on.
I thought that the China question would come up more today than it didn't.
I think people want to get to the right answer here,
and I think people recognize that there are global challenges.
And so maybe we can't get 100% agreement,
but maybe we can get agreement on 75% of the problems, and that might be a lot.
Let me go to Strange. Go ahead.
Thanks for giving me the opportunity.
Hey, Gary, a big fan of yours.
Been following you for a couple of years now.
And I recently interviewed Felipe, if you know him,
the great AI Winter blog that he wrote.
And you have exchanged tweets a bunch of times.
I think you know each other.
So I'm going to get to the question really quick.
I was just a little, you know, excited to talk to you.
So you had an interview way back when with Doug Lennart and Minsky, and their Minsky lamented that we are going backwards.
I want to get your thoughts on that with the latest, you know, LLM developments.
What do you think now about that talk that you had with him?
Yeah. I think that was Minsky's last public appearance.
I was honored to share a stage with him and with Doug Lennett.
You know, Minsky's point, I think, if I recall correctly, was that the research that we were doing was not really carrying us forward towards AGI and that he was disappointed in that research.
And we've clearly made progress.
Large language models are clearly of some value,
but I still think we're not focusing enough on representations of common sense,
on planning, on reasoning.
And in some ways, I still feel like we're pretty stuck.
Like there is value in the things that we've developed.
Let me say that was what.
seven or eight years ago that session.
We've certainly made some progress,
but we've also sucked the oxygen away from most other efforts.
I am hoping that the desire to make semantic search
or whatever you want to call chat-style search
and the desire to make that truthful will actually push us towards taking seriously some ideas from symbolic AI that I think we still need in terms of being able to represent facts and to reason over them and so forth.
And I don't think we really get to AGI until we wrestle with how you can do symbolic knowledge and yet also learn from large amounts of data.
There's some fundamental discovery there, some fundamental tooling we might need to do.
And until then, I'm kind of with Marvin that this is all very nice, but it's not really getting us to what intelligence really is, how to make it, and so forth.
Like, we've built this other thing.
It's interesting, but it's not AGI.
And it doesn't really stand on its own two feet.
It works only because there's so much human data available to leverage.
two humans and put them on a desert island without knowledge, they would do pretty well.
You put two large language models on a desert island, they would do nothing.
We're still much more resourceful than these machines.
And one quick last question, follow up to that is any thoughts around non-alcoholic approaches?
I think they have been applied in probably neuromorphic computing.
But I'm also fascinated by your work in robust AI and...
So yeah, if any comments there, that would be great.
I'm actually no longer at Robust.
We had a difference about direction.
So I won't comment on them.
I will say they just raised more money and they're doing something interesting in warehouse robots.
So it wasn't what I wanted to do when I grew up, so to speak.
But they'll have some announcements relatively soon.
I think that whatever the solutions are algorithmic, I don't really think there is such a thing as a non-alorithmic answer, but the space of algorithms is essentially infinite and that we have not, by any means, thoroughly explored it. We're like in one little corner of large language model, part of
algorithmic space and there's so much more there to explore and you know we eventually will make
better discoveries but if we put all our eggs in the large language model basket and start training
you know billion dollar models and don't leave much room for anything else it'll take us longer
i think science is ultimately self-correcting and probably engineering is too so i like to think about how in the
early 20th century people all thought the genes were made of proteins and they wasted 30 years trying to find what
protein genes were made of but eventually they figured out they were wrong and it was an acid and then
lots of things fell into place once we had that basic discovery so we need to get out of the space of models
I think that we're looking at if we want to get to models that we can trust but eventually there'll be all kinds of
I can take like one more question
I will do a rude thing and point you all to my podcast
before I go because you might actually like that's fine
so just thank you oh good
yeah so Michelle just thanks for joining us we will
we've got a few questions for you as well so thank we do appreciate you joining us
but just get this one last question in
Hey, Gary, so I so appreciate the humility you're bringing to this.
I think these are huge problems.
I don't particularly know the answer.
But again, I always caution that government has been a terrible historical chooser of winners and losers.
You think back to California here where I am, and the assembly and Senate there chose to CFL light bulbs a decade ago,
which artificially kept LED light bulbs ahead of the game.
They were totally a terrible product.
And yet they chose them over Edison's light bulb.
And so I worry that when we get government involved,
they're going to make terrible decisions again,
like millions of dollars spent on plexiglass in schools
and then coming out to realize that they really didn't make things worse.
That's why I worry about it.
There are going to be some bad decisions made there.
part of the expertise that we should have on the table is people who can help us to understand
where governments have and have not made good decisions around regulation and what politics
were around that and how they were made, which is not really my personal expertise. I'm trying
to get dialed in quickly and people are being very gracious and teaching me things. I'm not
expert on all of those things.
Like I know there are cases where it's worked
and cases where it hasn't and we need to think
pretty deeply about them. I think the alternative
of doing nothing is a disaster,
but there are perils in all directions
and we've got to get it right.
Do one more quick one and then I'll bounce.
Brian, you had a question?
Yeah, I was kind of curious about not just the geographical breadth of an AI regulation.
When you compare it to like an agency, a hypothetical AI regulation agency to the FDA,
I think about all the things that the FDA has to look at that's not food and drugs.
There's so much tech that they are involved in and so many things that they have to go through.
And do you really think that it's possible for one agency to rule them all when there's so many complicated questions related to, I mean,
ownership, IP ownership, digital privacy rights, healthcare environment, all sorts of things.
And would that, creating that agency also, would that be something that would kind of negate all of these other agencies from having AI, just kind of push it all off to them?
Like, wouldn't that create more problems?
I think minimally we need some coordination or else it's going to be chaos.
Like there has to be some traffic control, but there also has to be delegation.
So, you know, a central agency has to say, look, you know, IP issues are handled over there,
but, you know, we need to be in touch with them because, for example,
we might need to think about IP differently from a large language model that we do, you know,
for design of a motor or something like that.
You know, let us support you and make sure that you get to the right place on that.
I think that AI is enabling new classes of risks, like, for example, misinformation at a new scale that we haven't really dealt with before.
They require new approaches.
I think somebody in the government has to take the responsibility of trying to stay up on a field that is moving incredibly fast.
I think the UK white paper where it says, we'll just leave it to all the existing agencies.
I mean, some of those existing agencies are not going to have the resources to even follow what's going on.
That doesn't mean we want to take all the existing agencies out of the loop,
but there has to be, I think, something like a cabinet level,
you know, agency or position or something like that that tries to understand how all this is working, has the personnel,
to try to do it, has the expertise on call, you know, probably has people that have actually built models and things like that,
and really understand stuff. You just can't assume that every given moment that every agency is going to have enough internal expertise to really make that work.
So I will plug my podcast because I think it's of interest.
It's called Humans versus Machines.
The first two episodes are about the rise and fall of IBM Watson,
which I think is a real parable for right now where we have lots of hype
but may or may not get delivery.
I hope you will enjoy it.
Thanks for having me and for all the great questions.
When do we have you and Sandbeth on the show?
That would be a wonderful experience.
If you can get Sam back, I'll certainly come.
I love being on a panel with him today.
And, you know, I can come back at some point, not in the immediate future, but I'd be glad to come back.
Thanks a lot for having me.
Thank you very much for coming, Gary.
Michelle, thanks for joining us.
You've got a lot of Canadians who are excited about you joining the space, so thanks for coming.
So from what I understand that your work or you've been writing about regulations
and putting AI regulations in.
And I guess the same first question to you is the same question a number of people that have had on the panel,
which is that if you essentially have a scenario where you're regulating, let's say, the United States,
other countries which are unregulated are going to get much more advanced in this area.
I think it's a really good point, but I kind of want to give the group a sense of optimism, which I think has been lacking from the overall discourse.
So I am a Canadian legislator, but my background's in economics and intellectual property management.
And in the former conservative government, I was a cabinet minister, so a member of our executive dealing who dealt with.
like the commercialization of early stage research and economic diversification.
So I have a bit of a subject matter expertise in the area.
And I just recently, for Canadian listeners,
there might be a lot of people who are familiar with a bill
that I just tried to push through our parliament,
which dealt with putting together a framework for growth for the Web 3 sector,
but also was looking at a cohesive, non-volcanized regulatory system.
And there was a lot of learnings from that.
On, to answer your questions, I'll start with a really sort of like what I would think is a non-obvious point, but of optimism, which is the space, this space in terms of looking at regulations or standards or certifications in terms of AI, it's not overly partisan yet.
There aren't rigid partisan polls.
And I think that that really will help if that can hold.
put together a framework where industry and civil society and legislators can come together on the on the global side of things a lot of the you know what i find just so interesting is that the fact that trade agreements like you know for example the the second iteration of nafta so cosma
Like there's a lot of regulations that actually governs AI that were written into those agreements.
Same with, you know, Canada has a pre-trade agreement with Europe.
where we actually will have to promulgate each other's regulations.
So there already is, even though it's not necessarily prescriptive towards AI,
there's already governing frameworks that deal with this type of intangible
and an emerging technology that will apply.
And I'm actually surprised that more people haven't kind of clued into that
and started to think about how that interpretation will work.
I think that a lot of the points that have been made around the fact that we don't have to reinvent the wheel.
It's the same thing, you know, if you're going to use a congruent example with crypto,
like there are existing frameworks that can be applied.
They just have to be slightly altered or thought about differently.
So it's not like, you know, we're having to reinvent the wheel here.
The piece Gary and I co-authored was around the concept of,
on AI was saying, look, we don't need to stop research, but we can apply research standards like we have for other areas of publicly funded research and then regulate the conditions under which large scale deployment happens. So like a clinical trials model. And I think that, you know, certainly within our respective, you know, national governments, you know,
we can start thinking about that. I think Gary's point of saying, like, you do need to have
sort of a central agency or something that's coordinating like versus like on it, on a, you know,
issue by issue basis, be it how do we apply intellectual property laws to this? I think that's smart.
A lot of that's going to happen in the judiciary as well. But then when you take that up to an
an international level, I think that, you know, that somebody had asked, like, well, what about
other countries that might not have the same interests as, let's say, Western countries?
We already collaborate on research in a lot of areas, and we also have standards on things like,
you know, the example Gary gives is like the civil aviation industry and stuff, where it's like,
look, we all kind of agree with some bad actors from time to time where,
you know, governments have to intervene, that there are certain ways of doing things on certain
types of technology that tend to impact humans. And we can't be naive about bad actors
working on that, but it's not like there aren't existing frameworks that we can't duplicate
And, you know, maybe I'll, like, I can speak much more to this at length and perhaps give a perspective as a legislator working in the space.
But I'll just close by saying, like, you know, I find, like, Microsoft, I think it was Microsoft, right?
And I think it was their CEO that was just like, oh, well, legislators are too dumb to deal with this.
And government is too slow.
And so, like, tech should just, big tech should just regulate itself.
that's such a, like, that's a, that's a great lobbying position.
But it might be true, perhaps in some aspects, but it's, it's, it's, it's just not true.
I do think where everybody who's on, you know, this space can participate is you have the
right and the agency, if you know, if you have a company or your own positions, get involved
and like, write to the legislators that are on these committees that are interviewing people like Gary or, like,
people like myself and and and and and own that agency and say that this is an important issue that needs to have political attention it needs to be done with care and not with like rote partisanship but i'm actually optimistic that we can get this right i don't think that
I have a specific question.
Your IP background, I'm an IP attorney.
And when you said the IP thing and the ability or capabilities of government to keep up or
law to keep up, at least in the US, and I know that I'm not familiar with Canadian
IP law, but I know Eurocentric and Western IP law tends to be pretty similar.
I don't think that IP law can keep up, right?
At least in the U.S., the USPTO won't grant trademark or other IP to computer-generated images,
or at least to the extent that they weren't generated by code or by the individual or changed afterwards.
Do you really think that with AI-generated art or any sort of IP,
I think we can really keep up in terms of...
applying proper law based on its intentions of promoting creativity for the common good,
as well as, go ahead, sir.
I was going to say, like, I assure your concern as somebody who has background in the area.
I think, like, looking at this with my, like, politician and legislator hat on,
usually what needs to happen for there to be, you know, a look at, like, a relook at, like,
like existing IP law frameworks or whatnot is some sort of like a seminal case that spurs
everybody into action. And I'm not sure that has necessarily happened yet. And like I don't
want to say that that should happen. Like obviously legislators should be proactive on this. But,
I think typically what happens in IP law is that something happens and then people react to it afterwards, right?
And I think that because this area is so nascent that there's a bit of a recitence to wade in without understanding, you know, sort of the broader perspective on where things are going.
But no, like I don't want to sound naive and say like, oh, my God, we can totally deal with this today.
But I think that very quickly there is going to be some high-level, high-profile cases that are going to spur either regulatory change, changes to, you know, the international patent legislative framework.
and within domestic laws.
But again, I do want to also point to trade agreements as well, too.
There's going to be litigation through those on some of these things, I would imagine.
So, you know, I've been in elected office now for about a decade.
So, you know, your experience is more current than mine.
But I think it's just a combination of ensuring that there's political pressure and attention on the issue,
education of legislators, and ensuring that there is a demand for,
and suggestions for solutions in order to see that.
That's a super cool point.
So basically you mentioned crypto,
so I'm just going to bring it.
And then we'll get a black.
But essentially we need an XRP.
We need a ripple case in...
in AI that will force or open the eyes of what we're dealing with and shape what legislation
looks like. I think so. I shouldn't be that way, but, you know, I'm also a realist, and I think
that's probably what's going to spur action, if I had to guess. What would that look like? I mean,
you know, it's one thing, for example, for...
for me to try to hide an unregistered 737, that's going to be found very quickly, right?
I can hide an LLM on my MacBook Pro, and it would do pretty well over a couple months.
Who's going to, I mean, what do you think that incidents looks like that's going to cause some action?
Well, it's literally going to be somebody saying, you stole my shit and, you know, whatever that definition is.
And then trying to get some sort of remedy on that and that remedy not existing or not being clear that is going to force legislative action, right?
And with enough money, right?
Enough money to actually go through with it.
But like I also want to just be very clear.
I don't think that that's the optimal situation here.
people like yourself who have a background in this space who are starting to see the nascent cases
who might and to your very excellent point might not have the resources to to to pursue this
really do need to get in front of people like myself and and say like really clearly explain the
economic impacts and I'm trying to do that within my own you know with within Canada within the
federal legislature in Canada but again like we're having the super technical conversation here and you can
what it's like trying to communicate on this, right?
So I would just say like just a pleading to anybody who sort of understands how important
this is is get in front of your elected officials and educate them like in colloquial terms on like basically down to, hey,
If somebody steals someone shit, now under this, our laws don't protect them, right?
And I think that that is going to just be such a, I cannot stress that enough that this needs to become a political issue so that there is attention on this.
Because it is going to affect our economies.
Michelle, have you seen my profile picture? I'm not going in front of Congress.
come to Canada maybe then, you know.
I'm glad to things have happened.
Hey, Missa. I really appreciate the time for you to come up here.
I have to go in just a minute, but I just want to ask a quick question.
We can always get into the weeds really deeply on a lot of this stuff.
And I just wanted to kind of like,
pull out and ask like a bigger, broader question, if that was okay, within your position and
the circles in which you're in and in parliament and working with those officials and that sort of
thing, what is, because there's like a thousand different terms for AI or what it actually means
because there's all these different models and all these different applications,
what is the thing about AI that is concerning to yourself and those that are within the circles
in which you operate, sort of that perspective. And I appreciate it. Thank you.
I think that's a great question and I don't think that's defined yet.
I think, you know, Gary kind of spoke to some of this about the obvious human health implications,
the, you know, the suicide by chatbot sort of example.
But then everything that we were just talking about IP, broader economic disruption,
there are going to be, you know, sort of partisan polls that come down onto those next level of public policy discussions.
but for right now I think the biggest concern I have personally is that
I think legislators need to quickly come up to speed on the basics of what this is in order to answer your question.
So if somebody wants to read further, we do have a bill in front of our parliament right now.
It was actually released, or it was tabled in front of our parliament six months before chat GPT released, was released.
So the analogy I use is it's like trying to, it was like they're trying to regulate the printing press or sorry, like scribes.
four months after the printing press was invented, but it's the top link in my profile,
and it's my speech on it. And it's got some principles in there on what I think
Parliament needs to be looking at in terms of, like, very broad, to your point, macro-level
principles. So you guys can have a read through that and let me know if I'm out to lunch or not.
On the education front, I just posted up top for everybody, so if you want to take a look.
a couple of my colleagues and I who have a background in this area.
We've been working with some international legislators and groups within Canada.
I want to give a shout to the Montreal AI Institute to educate parliamentarians.
So we're launching actually tomorrow a cross-partisan working group on emerging technology.
Really, as nothing else, if nothing else, is to link folks like yourselves with people like myself.
Because I think that that we can't.
let there be distance between that right now.
This can't happen behind closed doors with, you know,
There really needs to be a lot of transparency and nimbleness.
in the process and you know I I just I do think cooperation is possible I think that an
international standards body is possible but I do think that the community think it's really
important to push the Overton window on political thought to sort of like the paperclip scenario
but then it's also important to pull it back and talk about the positives that AI could have
for the economy and just say like look um
If we're going to see those positives, we have to really band together internationally within our own governments across political strife with some subject matter expertise to get the overall boundaries and frameworks and systems by which this is going to happen done correctly and quickly.
And, you know, I think I really appreciated Gary's note of optimism after his congressional testimony today, which said, like, I think...
And my call to action to anybody on the space tonight is just this.
Get in front of your, like, your congressman, your member of parliament,
whatever jurisdiction you're in, and talk to them about this is your responsibility.
And I think that you'll see some receptiveness.
I mean, make it an election issue next time.
Michelle, two things before to go to Eugene for the last question.
I wanted to say, one, I was going to say you don't sound Canadian, and then you said process.
I'm married to an American, so I have this weird accent.
The second one, just as a statement was, we said to Gary, and it's something that I've noticed as an attorney in kind of tech and blockchain for a while.
It has been refreshing to see our Congress...
accelerate in terms of their knowledge in the blockchain and crypto space dramatically.
So I don't know what oracles are using, but it's working and I'm happy it's happening with you guys.
Just before you go to you.
Just anybody who's got any questions, go down to the bottom right inside, put your comments in,
and we will discuss those questions during the space.
And if you get them in quick enough, we can ask Michelle the question you've got.
In addition to that, IBC incubates and accelerates AI and Web3 companies.
They partner with VC companies and funds to work with their portfolio companies in return for equity and zero cash.
So if you're interested, DM Mario and his team and they'll get a call organized.
So yeah, if you're into that stuff, then do it.
Yeah, thanks. Michelle. It's been just a pleasure to hear you and your perspectives. I think they're quite a breath of fresh air compared to, you know, kind of the discourse we sometimes hear about technology among regulatory bodies.
A question that I have, and frankly, I reside in your neighbor to the south, so I know a lot less about Canadian politics, but, you know, it did strike us that...
in the Google AI, during the Google, I should call it the Google AI presentation.
During the Google CEO's presentation, the Alphabet CEO's presentation on AI, the EU and Canada were specifically called out as being on the wait list related to the EU AI amended act, I think, but also just generally regulations.
So kind of a big question around that.
And actually, just a few days ago as well, somewhat related, you know, finance and several other exchanges,
have decided to leave Canada again over, you know, regulation.
So wanted to give your general perspective on, on that, like, how's regulation, you know?
That was nice for you, Gene, have chosen to leave.
I would say more forced out.
Yes, that's a great, yeah, that's perhaps a less diplomatic way to say it.
But yeah, I mean, and also, like, maybe a description for maybe the audience who doesn't, who knows a little less.
less about it. You know, like, are there big differences between how, you know, the liberals
versus Tory think about regulating, you know, frontier technologies, you know, kind of just a, just a
overall question and kind of related, like, you know, diving, the EU AI amended act for anyone who hasn't
looked into it is like 140 plus pages. You know, I've kind of taken a gander. I mean, it's,
it's just a beast. And frankly, it's terrible. Like, from the perspective of myself, who's, who
is involved in tech companies in the Bay Area.
who runs a tech company it's like you know it's it's it's not great i'll say that right like i
would probably advise any AI company that's really serious about building to probably leave the
you right because it's going to take years for the EU to get it right on the regulatory side um
i curious about your perspective on that but particularly with canada
Um, okay, I'm, like, super excited to answer this question, but I'll give you a caveat.
Uh, I am a conservative member of parliament in Canada.
I have frankly no idea where that falls on the spectrum in the U.S., but, you know, my party's
It's the liberal government right now.
So I will be naturally partisan with my response and, you know, I'm just giving that
disclaimer on the, um, uh, the crypto space and, and, and, and, and, and, and, and, and, and
About a year ago, a year and a half ago, like particularly when Quadriga CX sort of hit the public consciousness in Canada.
And obviously I've been following the Web3 space for some time.
It became very clear to me that and like we hadn't even discussed Web 3 or crypto at all in parliament in any context.
I used my private member's bill to a very like stage zero sort of bill to say to try and
bind the federal government to put together a public, a transparent working group process,
which would develop a regulatory framework that included governments at our subnational
levels so that we didn't have a balkanized regulatory approach, that included industry
participation, civil society participation, so that there was regulatory transparency,
which would actually look at both...
increasing investment in Canada, investment stability, and also protect the consumer at the same time.
Like, I won't give you the whole history, but like the crypto space became highly politicized in Canada.
That is a topic for another space, which I'm happy to host.
But the moral of the story is that when the space gets like,
When there isn't regulatory certainty and the governing party uses a nascent industry to score cheap political points as opposed to put together regulatory certainty, the investment climate becomes uncertain and people don't want to be there.
And like Canada, I'm just going to brag on my country.
We have some of the best and brightest minds.
You know, Vitalik had to leave Canada.
And we should be doing more to attract that type of investment.
And that should be a great certainty.
Just because this is a broad-ranging audience,
Vitalik was the founder of Ethereum.
So that's kind of the crypto, on the comments around like Bard not being available in Canada, that's correct.
I think that that is due to the fact this bill that I mentioned, which is sort of alluded to in that, well, it's discussed in that substack article I talked about.
The bill, our artificial intelligence and data act, again, it was tabled prior to the large-scale deployment of LLM's late last year.
What it proposes to do is essentially pull the entire regulatory process out of parliament behind closed doors and then not really have rigs come forward until like two or three years from now.
And if you're looking at large-scale investment or deployment, that's not really a pro, right?
opacity, like it's kind of a problem for investment. So I am opposed to that, and I think that
there's starting to be more cross-party traction on that. That is a bad idea. So I'm thinking that
if I had to guess, that was probably why Google was looking at this. Also, there's another bill.
It's similar to as link tax bill that Google is opposing in Canada right now.
And I know that there's jurisdictions in the US that are looking at this as well.
So heads up, that's coming for you.
I don't support this approach.
And I think that Google is kind of saying, okay, Canadian government, we're really not digging what you're doing.
It's also like startup Canadian businesses.
So, you know, just in closing, like, what's the difference between Canadian conservatives and liberals on this particular issue?
I don't have a liberal colleague to debate on here.
But, you know, I believe that you need to have regulatory clarity and stability and, like, not like,
minimal government intervention, but also that, you know, keeps consumers safe. But that that shouldn't be an opaque, politically different, driven process. And I think that that's largely the difference. And frankly, my most partisan thing that I'll say tonight is our liberal party in Canada is very, very interventionist. It's far to the left.
And I think that that is driving away investment.
And that is, you know, I want to make sure that we're correcting that so that, hey, please come invest in my country, that needs to get fixed.
And I think that that's where you'll see folks like myself take a bit of a partisan bent in debate around these bills because we do want that investment, particularly if we're looking at migrating away from a natural resources-based economy.
So Michelle, the most important answer.
I think this might be the last question, but basically it's a comment from one of the people who are listening in the comment section.
Slightly unrelated, but maybe linked to an extent.
They're asking about Bill C-11, which was proposed in Canada.
I believe you voted against it because you felt it was censorship.
If you can just elaborate a little bit on that.
So we've had to, like, again, I'm sort of like, when I'm in spaces like this, I try to be non-partisan, but like these were crazy bills.
What it does in function is that it allows the government to, the government to set like,
essentially regulatory specificity on what content can be shown on platforms like YouTube and
Facebook and it's upgrading and downgrading content ostensibly to show Canadians more
quote Canadian content which is vaguely and ill-defined in the bill but this is direct
government intervention in what people can see
online. And there was a lot of cross-partisan outrage over this bill. It was, it was
It went through two parliaments because it was so contentious.
We did everything that we could to prevent this bill's passage
because it just seems so crazy for a G7 democracy
to have allowed this level of government intervention on speech.
I think it's super dangerous.
I think it's very chilling, and it's not just me as a conservative saying this.
But at least you do it publicly.
I just want to say, like, the fact that this went through the government is insane.
It's bonkers, it's fucking bonkers.
And now there's a follow-on bill, Bill C-18, which is this sort of this link tax.
And now we've got META and Google both saying that they're blocking news in Canada.
I mean, that is super short-sighted.
And it's these sorts of...
super interventionist policies that need to be avoided in the AI space. So, you know, my, my, my,
My begging to anybody listening tonight is anybody who thinks that you should not be in, like,
oh, let's not look at the regulatory space or let's throw our hands up.
If you do not have a say on this, somebody else will.
And that's why it's so important for people who are on this call who have some interest
or subject matter expertise in this area.
Get in front of your congressperson, your member of parliament, talk to them, demand meetings.
So that they understand that this isn't something that just should happen to them via lobbyists or bureaucrats.
Because Bill C-11 is the result of that, right?
And so there's going to be a lot of work to do in this space.
But I'm hoping that this doesn't have to be a massive partisan fight, that we can agree to some, at least some high-level principles on this.
And, you know, hope springs eternal.
You have to have hope in politics, right?
Cheryl, last question, probably the most important question.
The most important question of the night, and Sleighman will agree.
I think it's definitely possible, and it's possible in a short period of time.
I wish Gary had stayed on.
Like, you look at Gary, right?
even a year ago was saying like, oh, you know, it's 50 years out. It's not possible. I think
he's right. It's not there yet. But I do think it is, I do think it is possible. And, you know,
I'm less of the philosophy that like SkyNet is going to happen overnight. I think that
probably what would happen is human nature would happen where we kind of seed control to something
or not treat quote unquote sentience. And I'll use that
you know, term very, knowing that it's, it's a very charged term. We wouldn't treat that
sentience with the respect or like understanding that there's now something else on the planet that
you know, challenges us for primal dominance.
I think that those are big questions that we all have to start asking ourselves very seriously
And, but I will say this, I think it's really important going back to the entirety of our
Yeah, we need to talk about it, that it's possible.
play to that, but we also need to, for legislators, talk about the here and now, and the here and now demands immediate attention, right? So keep an eye on it, make sure that we're addressing those bigger, you know, societal questions, understand it's a possibility, get ready for it, maybe put...
guardrails around deployment, but at the same time, focus on the here and now, too.
And I think that's pragmatism. It's not like sexy, but it's pragmatism.
So. I appreciate you, Michelle. And to Sleighman's, well, the chagrin, they're not binary, right?
We can, we can address the current and prepare for the future because AGI is inevitable.
So if anyone ever pushes you into a binary in politics, you, you are, they're, they're,
they're wrong or they're lying to you.
So that's been my experience.
And don't get manipulated by Fidgetil.
And as we've discussed many a times,
there are many issues when it comes to AGI,
such as sentience, such as consciousness.
But obviously, we won't talk about that now, Fidgetil.
You already ceded the point by today.
So guys, we've only got a few minutes left.
So what was everybody's thoughts about what Gary said,
Michelle said, Elon said.
I'd love to hear a few thoughts on that before we end the show.
Yeah, I mean, I'll jump in here real quick and just kind of give some of my thoughts.
But one of the main points that I thought was really great to hear was how seriously he took that kind of push from David on, you know, well, look, like with all this tweeting, you're, you know, you're risking the reputation of your company.
People might not like your politics.
Like, don't you think that's something that you should consider?
He took a long pause and he thought about that.
This is effectively what he was saying is this is a matter of principle.
I'm going to do this because I believe it's the right thing to do,
that's what it is what it is.
I never think we should put our absolute trust in any one person to kind of fix all of the world's problems.
But, you know, if we're asking a broad question of, you know, does Elon seem like he's oriented in the right direction and making the right decisions?
And I would say that someone who is willing to forgo, you know, money for to do the right thing, I think that's such a good signal.
And it was great to kind of see him push back in that way.
Yeah, sure. I think it was just overall, just a great discussion as always. It was great to hear everybody speak. I think one of the things that really struck me about-
Well, no, Eugene, this is the best AI space to date.
I think it was fantastic. So it was amazing to be part of it and amazing to contribute.
One thing that really we didn't talk about was at the very end or towards the end,
you know, Faber asked Elon, what about what would you tell your kids, right?
And really, that's kind of, I think the metric, a lot of these things should be looked at, right?
Because that, you know, talks about this is a very future looking, it's very forward looking.
And he didn't really have a great answer, right?
I mean, did anyone think that he had,
an answer. He was like, well, you know, we should all kind of do our best, right? I mean,
it was kind of like a non-answer. I think it's, it's, it may be the best answer actually, right?
I don't think he was being disingenuous at all, but it's not like he could have a great answer.
It wasn't like it. It was like, oh, you know, they should focus on this area or this area.
But yeah, I felt like that was kind of a, there was sort of the things he didn't say
point into a darkness in the future of AI that I actually am, you know, and I was talking about this,
I think with Rob and some others just yesterday. And it was,
And it was around the idea of, you know, Joseph Schumpard's creative destruction, right?
I think AI could lead to long-term future benefits, but the kind of carnage that it's going
to reek for white-collar jobs, you know, for the amount of jobs lost, for, you know, knowledge
replacement. I mean, a lot of the folks who are going to get displaced are not going to be,
you know, able to retrain in their lives, right? We're seeing that in blue-collar jobs.
We've seen that throughout our lives. And I think that's going to happen in white-collar jobs. I mean,
I mean, I'll pose a really interesting thought experiment.
Right now, these recorded spaces are feeding in lots of training, potential training data
for, you know, for a potential AI Twitter space, right?
I mean, what about all the people here being replaced by AI because we're giving,
you know, the AI enough data to make ourselves irrelevant, right?
I mean, I think we would be, you know, we're just a small part of the kinds of things that
But, yeah, I'm kind of concerned.
sort of non-answer points to some of the issues
that we're going to all face in the coming years.
Yeah, you can just, you can summarize.
Yeah, just really quick response
because I think that's a good point.
And I actually put together a thread today
to kind of address that exact issue
Bro, I love your threads.
Like, I love your threads.
The art on them is amazing, bro.
millions and risk my life savings just to look at his brilliant art as part of some
shitty shit coin but anyway go ahead mayor i appreciate that but um yeah i know i think that this
this is a very big question right it's like what does a life look like when we no longer have to
do as much kind of active work right so that maslow's hierarchy of need once once uh you know society
is productive enough to put a roof over your head and to feed you know what what
you know, what should you base your life around?
And the point that I make in the thread at one point is that we are entering a realm,
like where a lot of the conversations are being had by an AI.
They're had by software engineers.
They're had by politicians and they're had by, you know, businessmen.
But we're actually entering a time where we need to have like theologians and philosophers
in on this conversation because, you know, what happens in this post AI world where
you know, we don't have to define, we're not defining ourselves by our work, but we can pursue life however we want to pursue it.
And I think that that really should be a critical part of the conversation.
We've definitely had conversations even in this space about these ideas of,
you know, what is meaning, what is truth, what is consciousness, right?
And I think that my advice for younger generations definitely be proactive
in thinking about what industry you're getting into
because not many of these industries are going to be AI proof.
And then also on top of that, getting into a position,
even if it's available post-AI, is going to be even harder
because much of the middle of the ladder is going to disappear.
wrong jobs that are very work focused are just going to not be there and you're not going to have
that opportunity to learn. You're going to have to make this giant jump from, you know,
no knowledge to, you know, architecting things. And I do not envy the younger generations entering
the workforce in the coming years. I think it's going to be really tough. But I think, you know,
meaning and proactive thought into, you know, how, how do you live in a post-AI world is going to be
I love that. And by the way, guys, thank you all for joining. Before you leave, I posted an important poll in the nest.
If you can take a second. This is a volume answer, guys. The answer is obvious.
Right, guys, first of all, do not let Fijtel manipulate you.
You know the answer to any poll Fidgitl says.
I've not even looked at the poll, but I know the answer is going to be no.
It says, did Fidigital smash Suleimantanite?
And the two options are yes and yes.
Anyway, thanks for the listener.
I am going to be doing another space myself in about 26 minutes about whether 9-11 is an inside job.
So if anyone's interested, once I hear an alternative view, it's not my position, but I'm interviewing somebody else.
But if so, yeah, join my space and you'll enjoy it.
This is the third one of the day.
That's what we're all about.
Don't you have like 16 kids and pray like three times a day?
This is what I mean, bro.
It's just called focus, dedication.
This is what happens when you're when top G.
That's why I think top G represents us all.
But yeah, I do appreciate everybody.
No, I'm helping you. You're the pro, remember?
Yeah, I find that, you know, talking about the newsletter, it's a bit boring.
Like, you know, what do I say about it?
Guys, there's a newsletter.
It summarizes everything that happened in this show.
One of the things that happened in the show was fidget still getting destroyed.
The idea or concept of AGI being destroyed.
So if you want to read about that, check that newsletter out.
Literally, literally in front of the Congress, the people chosen with Sam Altman to speak to Congress about AI.
Parliamentary representatives said yes.
And didn't it, by the way, neither them skipped a beat.
That's the important part.
It's just a matter of time.
Yeah, but you have to understand psychology.
These guys are coming up speaking there to talk about concerns about AI.
They're obviously going to bring people in front of there who are going to be like, guess what, it's going to be the end of the world.
AGI is going to take over.
We're going to be slaves.
It's going to be planet of the humans.
Like they're going to say that.
You're going to get somebody like GP going on there being like, did you say planet of the humans?
You ain't going to get GP on there who's going to basically say, you know what?
There is not going to be no AGI.
There is not going to be no conscious.
There is not going to be no sentience.
Not going to bring me up as well for the same reason.
But anyway, guys, thanks for coming.
Join us tomorrow, same time.
I'm not sure what the topic is, but I'm sure it'll be something amazing as it is per usual.