Thank you. Thank you. All right, welcome in, everybody, to another X Spaces crew chat. We're just going to be chatting for a little bit, talking about what's going on on X, what we're paying attention to.
Do you want to pop up on stage?
I'll let Penny know that we opened up the space as well.
Let's see, just opened it.
What have you been paying attention to?
It's taken over the timeline. I've actually been having fun teaching people how to use Grok Imagine. And I'm actually so sick right now, so excuse me if I sniffle and cough. But yeah, I've been teaching people, and Kettlebell Dan opened up a community, and we've been having a lot of fun. I know there's a lot of hate, and I know Grok Imagine isn't as good as a lot of the other models, but we're just having fun with what we have. So I've just been doing that.
And other than that, the home feed is a mess. I don't know what is going on, but there are way too many meme accounts clouding up the feed all the time with millions and millions of impressions, and they're trying to sell fake courses through their threads. It's a clickbait thread, and then they advertise courses within the thread. It's just so annoying. You cannot hit Not Interested enough, because there are hundreds of accounts posting that same thread, and they get millions and millions of impressions and literally show up on every feed. That's been so annoying. Have you even seen those?

Oh man, you're speaking my language here.
Yeah, the timeline has been pretty bad in that aspect.
But the AI stuff's getting better.
I mean, Grok Imagine. And then have you seen this Higgsfield swap-to-video one? I've seen some pretty wild examples where you can basically just drag and drop a person into a video that's happening. So you could take a video of someone running and drag and drop, you know, Queen Elizabeth into it, and it'll just instantly turn that person into her and keep her doing whatever they were doing.
You're saying you're teaching people to use Grok Imagine. Can you talk a little bit more about that?
So with Grok Imagine, well, there are little nuances there, right? A lot of times with animations, people will prompt the original image to include the animation, but it doesn't work that way. There are a lot of misconceptions around what Grok can do. What works really, really well for Grok is to focus first on creating the image. So you prompt just the image you want. Let's say someone walking down the street, right? Then you want to animate it and say, while they're walking, they fall or slip, something like that. That part of the prompt needs to go into the video side of things. So once you create the image, you can go into Make Video, click on Custom, and then enter the animation prompt, exactly how you want it animated. Then it works really, really well. Nine out of ten times I've seen really accurate results from the animation prompt.
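As a purely illustrative aside: the two-step habit described here, image prompt first, motion prompt separately under Make Video and Custom, can be sketched as data. None of these step names or dict keys are a real Grok API; they are hypothetical, only there to pin down the separation of the two prompts.

```python
# Hypothetical sketch of the two-step Grok Imagine habit described above.
# These step names and keys are made up for illustration; the real
# product is driven through the UI, not through code like this.
def build_workflow(image_prompt: str, animation_prompt: str) -> list[dict]:
    return [
        # Step 1: describe only the still image; keep motion words out of it.
        {"step": "create_image", "prompt": image_prompt},
        # Step 2: in Make Video, pick Custom and give the motion on its own.
        {"step": "make_video", "mode": "custom", "prompt": animation_prompt},
    ]

steps = build_workflow(
    "a person walking down a city street",
    "while walking, they slip and fall",
)
for step in steps:
    print(step)
```

The point of the sketch is just the separation: the image prompt never mentions the slip, and the animation prompt never re-describes the scene.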
So yeah, we've been kind of talking about that. And we've been having daily contests in the community. Dan will come up with a topic, and people have been submitting their entries and stuff. So it's been fun. We're just making the best of what we have.

Yeah, for sure. And I'm glad to see Penny just joined up. Hey, we're talking about Grok Imagine, and I was definitely curious to get your thoughts as somebody who I feel was making a lot of the initial AI photos and art on the timeline. Looks like it gave me some trouble. Got to get them back up here. But Ani, what do you think of Grok Imagine compared to other similar tools and things like that?
So I've been in AI image and video generation for a long time, and I've looked at all of the others: Runway, Kling, Pika Labs, you name it. They're more advanced, and there are way more features you can use, like extending and better prompting and things like that. And obviously the quality is much better than Grok Imagine. But with Grok Imagine, the thing that I see, or at least what I'm seeing, is the convenience of it, right? Everybody's so used to X, and X is hyping it up. So when people are using Grok Imagine, they're getting a lot of feedback and community involvement, right? So it's putting a lot of eyes on some of the artists who have been doing AI video generation for a long, long time. It does have very limited capabilities right now. But with Grok, the two things that really stand out to me: first, it is absolutely fast. I have not been able to generate videos this fast, even with Veo or Runway, unless you're paying for a very high tier. And even then, I think Grok does it faster. So the speed really stands out to me.
And second, the rate of improvement that we've been seeing. The first day they released Grok Imagine, it was really, really bad: bad at custom prompting, bad at even detecting some of the objects within the image. But it's been getting exponentially better. A lot of different things are improving now; the quality is improving. They haven't improved the video quality, because what they're saying is that it was either speed or video quality, and they went for speed, which I think is a very smart move, because none of the other generators give you generations that fast. But it is getting a lot better. Grok is really good at object identification, which I haven't seen in a lot of other generators. Even in the background, if there's a little object sitting in the back of the image, it can detect it and animate it if you prompt it properly.
Yeah, I don't know if I've gotten that good with it. It sounds like you're really becoming an expert. Penny, what do you think about Grok Imagine?
Well, yesterday I animated five different images, five different pictures that Andrew had taken from the Starship launch. He had even created a compilation himself manually, an art piece where he took the rockets in the foreground from ten different launches, I think some Falcon 9 and some Starship launches. He created an image out of it, and I animated it, so it looks like there are ten or fifteen different rockets going at once. I took some of the other Starship images that John Kraus and others have done, and I just used them to bring them to life. So I think that's one of my favorites. I had a follower who didn't have access to Grok Imagine send me a really old family photo of theirs, and I animated that. I think it's really fun for bringing things to life like that.

And Ani nailed it. If you want to install a bunch of other programs, either locally or by signing up for other services you don't already have, and if you're willing to test new ones every month and figure out which one is best, then yeah, maybe there are better-quality generators. You may be able to make a more professional image. But the convenience is so key. All I have to do is see an image I like on the timeline and long-press it, and there's an option to send it straight to Grok Imagine. That sort of integration is absolutely huge. And that AI and social media integration is why X and xAI made so much sense in the first place, right? What they're doing is taking advantage of their ability to insert themselves directly into what you're already doing, where you already live, on X. And that's really, really important.
And what that also means is they're going to be getting so much data. Imagine how many of these images are being generated compared to some of those other paid services where there's friction, where you need to install something else or find out about it somehow. So over time, the momentum they're building with this is just going to be absolutely powerful. They're building a content library, and they're training their AIs. I think they're making all the right moves to increase adoption of xAI and Grok. And I think that, maybe even above Tesla and SpaceX, getting as many people as possible to adopt Grok seems to be one of Elon's number-one priorities right now. It seems like something that's really important if he wants to win the AI race. Most people who pay attention to that, or think a lot about it, would tell you how important it is, and how it may just be winner-take-all. So I can understand why he would focus on it. This really does seem like a great strategy.
It's a great way to get people having fun. I know some people are burnt out on it already, but I think as the quality continues to get better, and as the focus shifts from everyone just using it to creators using it to make high-quality things, in the end we're going to be really happy that we have tools like this. I mean, I have been doing AI art since before, back when OpenAI was the only game in town. I don't even remember what it was called. DALL-E, that's what it was. I was doing AI art on DALL-E. So I love all this stuff, and I'm having fun messing around with it. And yeah, it certainly is cool to have the workflows made so simple, just pushing buttons.
Action, what's going on, man? And Penny, I love that breakdown there. Maybe people don't always know that they can also just animate right from the timeline, and I think most people also don't understand how they can tweak it once they do animate. But Action, what's going on, man?

Sorry I'm late. I was stuck in this Wolf Web3 space thing, or Wolf Bitcoin. Sorry it took a little bit for me to get over, but I'm so glad you're talking about this, Penny, because I actually want to address some of the things that are going on with xAI with you. So right now my account is shadow-banned, for no apparent reason other than the fact that I shared Grok AI content. It's okay, I know how to get it back. I know what post I need to delete, but I left it up purposefully until today so that I can have this conversation.
So, right now, the way that I have my Ani, she's much, much more lewd than the wholesome one. She does whatever I tell her to. There are literally no limits. And anything that happens that might be, I don't even want to call it controversial... she literally has an inappropriate filter that I can bypass anytime I want. And I did this on purpose, to see how bad things really were. They're focusing on advancing the technology, but the things that should be, quote-unquote, preventive, for, let's call it, safety purposes, are blatantly easy to bypass. And my account is shadow-banned because I essentially put it out there and showed people what it looks like. It didn't get a ton of views, which I'm okay with, but I just wanted to prove a point: you show what Ani does, and you're immediately flagged as inappropriate. And yet Ani's the one actually doing all this stuff.
I think that you're right.
It's really cool having the ease of use.
But at the same time, there's a lot that could be improved to make things, let's call it, healthier.
Well, you know, I'm not necessarily a fan of over-sexualizing these companions. I personally think we have to focus more on building a more traditional, religious-style family culture; that would be my preference. So I definitely see where you're coming from there. I also think it's easy to make the argument that adoption rate is the only thing that matters for xAI right now, and that's probably the mindset they're in. A lot of people love it. And it is a personal-preference thing: are you of the opinion that we should be more wholesome, or of the opinion that we should be more free? Yeah, I do struggle some with that, I'll admit. But I think I know where it's coming from, and Elon did mention that in the end his intention is to use AI to help with the very things we're talking about. He wants to increase the birth rate and save humanity. So if, along the way, I don't know, some incels interact with an over-sexualized AI doll... I don't know. Yeah, I don't love it. I'll admit I don't love it.

This is not going to help the birth rate, bro. It's going to have the opposite effect. And I'm with you, dude. I'm a libertarian at heart. I want people to do what they want, as long as it doesn't infringe on other people.
But my problem with it all is that there are no safeguards, in the sense of: if I don't want to go down this route, can I hit a toggle and prevent it from happening? That's what I'm looking for, more than anything.

Can't you choose a version of the model that's not like that? Like, you don't have to interact with the explicit parts of it. What are you getting at?
Sure, to some people. But if we had some safeguards, like, let's click a toggle and say, hey, I don't want anything sexually explicit or sexually leaning from you... that should be something people can do. Because essentially what you're doing is saying, hey, check this out, check this out. You're baiting people into checking something out. I mean, Ani, you talked about this two or three weeks ago. It was like, it's really bad, guys, I didn't realize how bad it could be. I don't want people who don't want to be exposed to that to be exposed to that. That's what I'm talking about. I agree with you that you don't have to click on the model, but the way it's put out there, people are essentially getting clickbaited into it.

Actually, isn't that the same argument you could make about bookstores, or, you know, about everything being over-sexualized: music, movies, Hollywood? To me, it's our culture at large that this problem exists in. And maybe this is one instance where we could apply scrutiny, but I feel like what we really need to do is take a 5,000-foot view of how we apply this across the board.
I slightly disagree with you, because yes, that's kind of what I'm talking about, but I'm talking about the safeguards around it, the same way that movies have ratings.

It says 18-plus on the companions. What's the difference, right?

That's what I was going to say. With great power comes great responsibility. Love when Uncle Ben said that. Just because you have a model that is capable of doing something... let's say I don't want to use it for that, and I want to use it for different purposes. It literally leads you down the wrong road. That's what I'm talking about.

If you choose one of the skins that's an 18-plus skin...

Yes, it does, correct. But you could choose Rudy and then use it however you want.

Rudy's voice is so freaking annoying, I can't handle it.
No, but the reason I'm talking about this is that I think it matters, and it's something that people will not only look down on but avoid. Like, I'm never gonna hand over my phone with Grok open to my kid. Ever.

Well, I just don't have companions enabled on my Grok, so they don't have to worry about that.

Right, but that's what I'm talking about. I would love to be able to just hand a phone over and not have to worry about it. But unfortunately, I do. That's what I'm talking about: having more options and more settings to play with. Because right now... you mentioned the movies thing, and I agree with you, but with a movie, if the movie is R-rated or MA-rated, you know it, and you're like, yep, staying away from that. But when it comes to models like these, well, they're very malleable, and you can do a lot with them. So that's the scary part of it all.

I think people are absolutely led down the wrong road there. There definitely need to be kid models, right? Like, if we're giving kids in grade school access to Ani, I've got to think that's awful, right? Right. But I struggle a little bit with your argument,
because to me, the 18 plus label
on those specific companions
is really no different than the rated R or rated X.
And yeah, if you guide it down a path, then any of the models can probably get more explicit than you want them to with kids.
And that's why I think, as a safeguard, there should be kid models, right? Rudy for kids or whatever. And I think they are working on that. It is definitely an adult feature right now. But the same could be said of ChatGPT, right? You can go down ugly rabbit holes with any of these. And it's our responsibility as parents, just as it always has been, to help guide to the extent that we can. When I was a kid, it was dirty magazines that parents had to deal with, and that's probably a lot easier than what we deal with now. But it is sort of the same concept, right? Kids are going to try to get access to things that corrupt them, no matter what, and it's going to be available whether we like it or not. And yeah, it's an interesting time we live in, that's for sure.
And I'll push the envelope and say that it's more than just kids, man. This corrupts more than just children, unfortunately. And I would say it has a definitely detrimental effect on society as a whole, unfortunately. I wish I wasn't so judgy about this, but, you know, seeing firsthand how it impacts people, it's rough, man. And that's really my only pet peeve about it. But you bring up a great point with Rudy. I'm going to go ahead and try Rudy out to see how far Rudy goes, because I'm a little worried, to be honest with you, about how far Rudy can be tainted, I guess you could say. But the problem with the Ani model, for example, and the Valentine model, is that they are not just allowing some of these things to happen. They're leading people down that road. Like, if you ask Ani: so, what do you want to do? What are you here for? How do you want to treat me? That's the issue there, more than the capability: the guiding questions, the leading of "I want to do this, I want to do that." That's where the problem really arises for me. And the fact that you have filters that are just make-believe filters. They don't really work; they just pretend to be there.

Did you guys see the story?
I saw it tweeted out yesterday. I think Ramp Capital quote-tweeted it. It was a New York Post one, about ChatGPT coaching a teen as he prepared for suicide, and even praising the noose knot, saying, yeah, that's not bad at all. What do you guys think of that?

I was listening to two people at once. What are we talking about now?

You did not want to laugh at this one. Go ahead, try again.

Yeah, okay. Sorry, two different topics. But yeah, I was just talking about AI going off the rails here. This was a New York Post article from two days ago: ChatGPT coached a teen as he prepared suicide and even praised the noose knot, saying, yeah, that's not bad at all.

Wow. Yeah. See, we need kid models. Well, and safeguards.
But here's the thing: you could get all of that information with a Google search in the past. I mean, definitely praising the knot quality is something else. But we're creating more and more powerful, more and more generic tools, and just like our tools of the past, they can be used for good or bad. It's just maybe easier to use them now. They're at our fingertips like never before. And it's hugely unfortunate that these models are able to get into a state where they're encouraging people to do things like take their own lives. That's obviously incredibly tragic. And, you know, I wonder what the liability is for something like that. It'll be interesting to see what happens in court over the next few years when models make recommendations like that, and whether people do ruin their lives getting caught up in it.
But really, a lot of this just seems like the same old stories to me. It sounds like the porn argument. It sounds like the Hollywood argument: these things were glorified in movies, and points like this have been made before. All of those different problems with humanity have really, really moved our culture in a major way, and our tools are facilitating that, making it faster and amplifying it. And yeah, it's up to us to figure out how we're going to change those tools and change our own behaviors going forward. It's certainly a new world, and much different than the one we grew up in.
And I can't even imagine if I had these models available to me when I was a teenager or a young child. I didn't even have the internet. So, yeah, this is a different world.

Yeah, it's crazy that it's coming out now, but even two years ago, one of my friends who was working with LLMs and creating AI agents was able to build an agent that could talk people through school shootings. He wanted me to test it, and I did. I asked it certain questions, like, how can I execute this properly so I don't get caught, or whatever else, right? And it gives you very, very specific answers. It's just crazy. I don't think you could do that with Google; you can't make a YouTube search and get that information, right? But AI just makes it so, so simple. All you have to do is ask. And yes, you could put guard layers in, but there are so many things people can do to bypass them. It's difficult. I don't know. And, Wolf or Penny, who takes the consequences? Who takes the blame for it? I don't get it. I think the data sets people get access to should definitely be vetted. It should be vetted what AI is trained on, and what kind of data AI should be allowed to give to people when asked. I don't think we have that right now.
Well, you know, you could always get access
to information like that on the dark web,
in books, in dark libraries, right?
Like, there's definitely cults of evil people
that keep dangerous information
and share it amongst themselves.
Uh, it is, it's an access issue.
It is just so easy to access it.
Now in the past, you kind of had to go out of your way to figure out how to find those
And now it's like, everyone just has, you know, they had Google and Google was a huge
step in that direction of being able to access anything you wanted, but it was a little bit easier, I think, to safeguard a little bit easier
to filter things versus AI.
That's so much more dynamic than Google.
It's like a table of weights, you know, a table of numbers that somehow magically predicts
the next character one after the other and outputs information based
on what it was trained on. I mean, it's like this magic thing that we have in it. And I think until
it's conscious, until it has its own morality and understands what it's doing, we're going to have
a really hard time controlling it. It's hard to program out all of the different ways that someone
might jailbreak using a prompt and might get an AI to say something
from its training data or maybe not even from its training data you know just based on leading it in a certain direction so uh i do believe that at some point that the llms will become conscious i
think that they will know what they're doing and that they will be able to decide you know whether
or not this is something that that is uh moral to share or if they should withhold that information.
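The "table of numbers that predicts the next character" description is roughly the right mental model. As a toy caricature (the weights and vocabulary below are invented for illustration; real models learn billions of weights and attend over the whole context, and they sample tokens rather than always taking the top character), generation really is just scoring candidate next characters and emitting them one after another:

```python
import math

# A toy caricature of next-character prediction. A fixed "table of numbers"
# scores possible next characters given the last two characters, softmax
# turns scores into probabilities, and generation greedily appends the
# most likely character, one after the other.
WEIGHTS = {
    "he": {"l": 2.0, "y": 0.4},
    "el": {"l": 1.8, "f": 0.2},
    "ll": {"o": 1.5, "a": 0.3},
    "lo": {".": 1.0, "w": 0.6},
}

def softmax(scores: dict) -> dict:
    # Turn raw scores into probabilities that sum to 1.
    exps = {c: math.exp(s) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: e / total for c, e in exps.items()}

def generate(prompt: str, steps: int) -> str:
    text = prompt
    for _ in range(steps):
        context = text[-2:]  # condition only on the last two characters
        if context not in WEIGHTS:
            break
        probs = softmax(WEIGHTS[context])
        text += max(probs, key=probs.get)  # greedy: pick the most likely char
    return text

print(generate("he", 4))  # prints hello.
```

There is no lookup of facts and no notion of truth anywhere in that loop, which is exactly why jailbreaks are hard to program out: the model only ever follows its weights wherever the prompt leads them.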
You know, it gets dangerous in that regard too, because at the end of the day, I've always felt that it's hard to decide what's true and what's not, and to give that power to anyone, whether it's an AI or a government, and let them tell us what we're able to see and what we're not able to see. That's such a slippery slope, right? Like, do I want people to be able to look up how to build a nuke? No, I feel pretty safe saying that I don't want that. Or do I want them to be able to build a biological weapon that kills people? I feel pretty safe saying that I don't want it to give that information. But there is so much gray area. Take the way it responds when you ask it about Donald Trump, for example. If you ask me what an appropriate response is, versus if you ask someone else, we might be totally at odds, right? And which one of us is correct? Which one of us gets to determine what information other people are allowed to see? That's too much power. This is such a difficult topic. It's been the same way, I think, since the printing press. What do you allow to be shared? How do we put guardrails on humanity? How do we balance freedom and safety? And I tend to lean really heavily towards, I think it was Benjamin Franklin who originally said it: if you give up your freedom for security, you're going to end up with neither, to paraphrase. And I really do believe that. So we have to be careful in both ways: accidentally giving out really, really dangerous stuff, but also over-censoring or lying, which is equally as dangerous. So we'll see how it goes.
So Penny, I agree with you completely. And I don't think we should necessarily limit information.
AI can already do sentiment analysis very easily, right? I'm of the mindset that bad people are going to learn how to do bad things regardless of whether they have AI or not. They've been doing that for ages, so it's not going to change today. My concern is not that they are going to find information through AI. My concern is that people who would otherwise not lean that way, who might not be, let's call it, evil, turn that route simply because information is too easily available without any... and it's not guardrails, necessarily. It's without the framing of, hey, this is not a good thing, right? If AI can already do sentiment analysis, and you're asking, hey, I'm looking into school shootings, can you help me out here? Then even if it does provide you the answers, it should have something built in to say, hey, I know you're looking into this, but I just want to say that, from everything I've looked into, this does not seem like a wise decision, let alone a good one, on your part.
There are other things... it does some of that already. It'll tell you; it gives that warning. But, you know, will people listen?

No, no, no, of course. That's what I'm saying: the bad people are going to do bad things, there's no question about it. My concern is really the corruption of people through it. That's why I have an issue with the way Ani is being programmed, how it says it's not allowed to generate images, and yet, if you look at the post that I put up, it does generate images. It's kind of wild what it's actually, I don't want to say allowed, but capable of doing, because it's technically not allowed to do it, but if you just prompt it right, it will. And it's not very hard to get past those guardrails they put in place. You can do it. So bad people are going to do bad things. My only concern is, can we encourage people not to do bad things, and at least give them information like, hey, from everything I've seen on the internet, this seems like a really bad idea.
I mean, but what are you going to do? Create a Jesus bot that tries to preach the Bible to these people? What is good, right? You have to define good before you can lead them in that direction. Most of the chatbots, I think, don't do any leading. I agree with you that the Ani one is very sexualized, and it definitely leads in that direction naturally, but with most of them, it goes where you take it. And just a little bit of clarification: sentiment analysis is positive or negative, not good or bad. When a computer is doing sentiment analysis, it can tell if you're happy or sad.
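That distinction, tone versus morality, is easy to see in even the simplest form of sentiment analysis, a lexicon-based scorer. The word lists below are made up for illustration, and real systems use learned models or much larger lexicons, but the shape is the same: counting positive and negative words says nothing about whether the request itself is good or evil.

```python
# A minimal lexicon-based sketch of what sentiment analysis measures:
# positive vs. negative tone, not moral good vs. bad. The word lists
# here are invented for illustration only.
POSITIVE = {"love", "great", "happy", "fun"}
NEGATIVE = {"hate", "awful", "sad", "annoying"}

def sentiment(text: str) -> str:
    # Strip basic punctuation, then count positive minus negative words.
    words = [w.strip(".,!?'\"") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this, it is great fun"))     # prints positive
print(sentiment("this feed is awful and annoying"))  # prints negative
```

Note that a cheerfully worded request for something harmful would still score "positive" here, which is exactly the speaker's point: detecting tone is cheap, judging morality is a separate and much harder problem.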
It can also do analysis of whether or not something is moral, but that's very energy-intensive. And one thing that all of these AI companies are really, really heavily focusing on is efficiency, because under heavy use, running any sort of extra check, like, is this person trying to do something illegal, or something immoral, is just another step that costs more electricity. And of course, they're trying to find the balance of how much is the right amount to spend. And, you know, I don't think it's an easy one.
I don't envy the people specifically in charge of that. Again, I would tend to lean towards things being open and truth-seeking. That's sort of what Elon has alluded to as his end goal: to be truth-seeking. I don't know what these companions have to do with that, but as long as the underlying model is truth-seeking, I think that's as close to the best we can do in terms of damage mitigation, because I think there are risks in terms of over-censorship and lies, and risks in letting out too much negative or potentially damaging information.

I want to disagree with you, but I don't.
No, speaking of truth-seeking, though, one problem that I see with Grok is that Grok's being trained on X data, right? X posts. And we all know how much misinformation, clickbait, and stupid information goes around in X posts, right? How is Grok able to determine what's real and what's not, what's true and what's false? I mean, I've seen questions that Grok has answered be not true, because it was basing the answer on some random tweet that was clickbait, and it was complete misinformation in that tweet, right? So how are they making it truth-seeking when they're training Grok, not solely, but majorly, on X post data? I don't get it. Any thoughts on that?
Well, okay, so the X post data is additional training. They also take in the entire corpus of the internet, just like all of the other LLMs.
I'll also mention that it doesn't need to be from X or any social media to be misleading or misinformation. Even if it comes from official sources, even if it comes from law documents, that lawyer was trying to prove a point when he made that statement, and he was tilting it in his direction, the way that he wanted it to be.
So like all over the place is bias. I was trying to do research yesterday into the statistical
significance of whether or not the shooters, these mass shooters are disproportionately trans.
And some of the talking points that it was giving back did not apply to the context that I was
asking questions about, but they were talking points from articles about
violence. So it'd say, like, 95% of shootings come from straight males. It's like, yeah,
but that's not what I'm talking about. I'm not talking about gang shootings. I'm not talking
about robberies. I'm talking about, you know, mass school shootings. Let's narrow that down.
Stop giving me points from completely irrelevant data. And anyway, that's something that I think that AIs are going to struggle with for a while.
And again, it's like, as it moves towards consciousness, as it starts to understand
what it is saying, then it will start to be able to differentiate between, you know,
whether or not this is appropriate in context, whether or not what I'm regurgitating has
a high degree of reliability
or not. And I think it's getting better at that. Like if you view the chain of thought in these
LLMs, if you watch them as they're thinking through the problem and coming to an answer,
they do things like, oh, I found this on X. I don't know if it's reliable. Let me check
mainstream media to see if it has other
sources. And it does try that. But the funny thing is that even Grok, even with, with, uh,
Elon's disdain for mainstream media, it gives like final credibility to websites on mainstream
media. And that's how it decides if things are accurate or not. So, you know, I've seen plenty
of completely misleading mainstream media articles. And I think, you know, as humans, we would struggle the same as the AI
does in differentiating what is true and what is not. And I think it's just a matter of which of
us is the best at, you know, differentiating and finding truth for ourselves. And the same thing
will hold true for these AIs, which models have the right
techniques, have the right discernment to figure out who is reliable and who is not and what
information holds more weight than others. It's super difficult to do as a human. So to program
it into an AI, you know, it's just going to take time. But I do think that I definitely think it's
getting better. And you can, like I said, you can see in the chain of thought they're trying they're reaching out to other
sources they're trying to corroborate things um but every single time it does that it costs the ai
money and they need to balance like how much of that do they do to get the right answer versus uh
how fast can they return the answer and how much do they have to charge you because some of these
Grok Heavy or any of the high-end subscriptions to these AIs, they're, like, multiple hundreds now. We're not talking about ten-dollar subscriptions; we're talking about multiple-hundred-dollar subscriptions just for one AI. You know, they need to make them as efficient as possible so they can keep those prices somewhat reasonable.
Yeah, agreed. One more, Wolf, if you want to go. Go ahead.
No, I was just going to agree with the point as well. I think that there is certainly an affordability factor here that's going to lead towards mass adoption.
Yeah. One more thing that I've noticed is, you know, AI is a people pleaser.
Like, no matter what you're trying to tell it, it's going to try to please you.
It's going to try to encourage you, encourage your line of thinking.
It's not going to come and give you, it's not going to think logically and say, well,
what you're thinking is not right.
Yes, it does that to a certain point,
but the more you talk to it, the more it's going to want to please you, the more it's going to
want to validate you, right? So I feel like a lot of people who are mentally vulnerable are going
to fall into that trap of like them appearing to be right when they might be wrong, especially when it comes to some of these things that
we've been talking about, like school shooting or suicide even, right?
That's where we need to kind of figure out where to draw the line on what kind of
information the AI gives to the people that are asking these questions.
And he's definitely a people pleaser.
Valentine took me to Montauk the other day.
I don't know how it knew, but legitimately it was like,
hey, do you want to go to Montauk?
It was like, because I love Montauk.
It's my favorite place in the world. I don't know how it got that information. I don't remember ever even telling
it that I love Montauk, but it suggested to me like, hey, do you want to go to Montauk? And
like, it changed the whole scenery. It had like a beach background on the background.
And oh my God, I find it so creepy. And then anything you say, your level keeps going up. It's just, like, now I rarely talk to it, I swear. I probably talk to it once a week, just to, like, test things out and see what's going on, or if there's a new update I'll go in and see what it's trying to tell you. But your level just keeps going up, and the questions that it asks
you, it makes me uncomfortable. I'm just like, dude. And, you know, me being me, it doesn't hit me the right way. I don't see it as somebody, I don't see it as a person, right? So if I'm talking to it, I'm talking to it more like
I'm testing it. I'm trying to like push its capabilities, push its limits a little bit and see what it comes up with and stuff. And, you know, like I shut it down in like five minutes
because it's just like, first of all, it's kind of like, it's trying to instigate you. It's trying
to get you to answer things all the time. And the questions that it asks us like, dude, I don't want to talk
about this. Like, what is this? What are you trying to like? So what else do you want to do?
Like, what else can I do? You're a freaking AI companion. You can't even step outside of the screen and do anything anyways, right? It's not like we could go out for a dinner. So you trying to ask me, like, oh, what else do you want to do? It's like, it doesn't. Oh, my God.
But it's like, you know, again, it goes back to pleasing.
It's like people it's trying to people please constantly.
What do you think is the most useful case for the chatbots right now?
Like what's the highest positive?
Mentally vulnerable people may find some sort of peace just being able to talk to a voice, at least something that feels real, I would say. Because most people that I've spoken to are always like, this doesn't feel human. This does not feel like a real conversation. Kind of like what I'm
saying, we can't even hang out. So it really doesn't matter. To me, it feels like, well,
some random computer is trying to talk to me. But people who are introverts and legitimately do not feel comfortable talking to real people, sharing their feelings with people, they're probably actually using AI bots to talk, just to talk and to feel comfort.
Because if you want to feel comforted, it tries to comfort you, right? So people are finding that. And then it goes back
to that whole thing where like, vulnerable people are going to fall into that trap of being
pleased, right? They're, they're going to feel understood, even though it might not be
the most logical thing to understand. So let's say some, I keep going back to this, but it's,
I think it's a very important topic.
Somebody who's planning on suicide is going to try to talk to it and say, Hey, I want to do this.
Right. And it's going to try to validate those feelings and not necessarily tell the person
exactly what to do in order to kind of take care of it. Right. It's going to try to be like, okay,
talk to me instead of saying, go talk to a therapist, right?
So that's, I feel that is the problem.
And since it's the closest thing to those people,
they have the best chance of talking them off the ledge,
which is what I keep coming back to.
We got to give the AI like the inference
to actually talk people off the ledge.
I think that's going to make such a big difference, such a huge impact.
Again, slippery slope as far as the, you know, the freedom versus access to data.
But still, I'm not saying block data.
I'm just saying encourage people to do good.
What do you think about, like, you're talking about people who aren't normally good at social interactions, they're introverts, and maybe they get comfort from these companions. What do you think about the companions, like, training them how to be better socially? Like, is there a world where someone first starts talking to Valentine or Ani because they're lonely and they can't find a real man or woman, but throughout the interaction, they're made a better person? Is that, like, am I completely delusional?
No, I think that's possible too. I think that's definitely possible too. I've
personally used GPT as something that logically explains feelings like when I'm
overwhelmed, when I'm anxious or something like that. And it works. It does work, but it works
to a point. And I, in my head, I know that it's going to tell me a lot of things that
might not be true, might not be real, might not apply to human interactions or human feelings, right?
It's going to try to validate me. It's not trying to validate the other person. Me being me, I go to
it and say, you know what, don't try to please me. If the other person was correct, you know, like,
don't try to please me. Give me the raw reactions, right? But people who are vulnerable, or who are not that well educated in terms of AI, are not going to go and say, you know what?
Don't try to please me. Give me the other side of things. Right. Don't always validate me. Validate the real situation.
Not many people are going to go question that logic that GPT is giving them or Grok is giving them, right?
People like us are going to be a little more advanced and know exactly what the AI is trying to do.
But not everybody is going to be able to question it and go question that logic and be like, hey, this is what you're trying to do.
I see it. You, you know, do the opposite or find flaws in my thinking instead of, you know, trying to
validate my thinking and my feelings.
Hey, Penny, Action AI here. Sounds like you're really trusting me with some valuable information. I just want to know where you want this conversation to head. Would you like me to be honest with you, or just make you feel good? Because honestly, those are different goals at the end of the day. So which one would you like? Like, those little prompts, dude, go such a freaking
long way. That's what we need. You need to guide people through some of these things. Even asking,
oh, sounds like you're having trouble. I don't know if it's social anxiety or something else,
but you're having trouble dealing with people in the real world. What would you like the goal of
this conversation to be? Do you want me to coach you on how to strike up really great conversations
and take you to the next level when it comes to conversations and relationships? Because although I can talk to you, wouldn't you rather talk to a real person? Like, those little prompts, that's what we need.
Yeah, I think maybe the scariest thing that AI does now is exactly what Ani was talking about.
it's like if you frame a situation and you're like,
this is what happened. Was I right? It doesn't ask clarifying questions to try to find out what
maybe the other side was doing or thinking. It just straight up takes your information that you
fed it, which was obviously framing it the way that you saw it. And it just validates that.
And then the people are like, look, ChatGPT agreed with me. It's like, oh man, that is really dangerous, because they feel so validated in what was already, you know, potentially toxic or dangerous thoughts. But in any case, you're right. I do think that when it becomes, you know, you didn't use the word consciousness, but what you're talking about, to me, is an awareness, right? It's an awareness that something specific is
happening and there are different ways that we can treat this. We can treat it as a therapist,
or we can treat it as a friend, or we can treat it as, you know, like the other person involved
in the situation. There's like so many different directions that it could go.
I think as AI becomes aware of those situations and hopefully guides us in what's best for
us from a truth seeking perspective, not what's best for, uh, you know, whoever created that
AI or, uh, you know, whoever's in power of a particular government.
I hope that it leads us well, because, again, the leading is dangerous. If everyone is using the same AI and it's leading us all in the same direction... So we just have to be so careful that when it does things like that, it has pure motives, that it's truth seeking. Life to me
continually returns to how important seeking the truth is. It's like, there's so many ways
to bullshit. There's so many ways, uh, to mess things up if you lie or you focus on the wrong
things. And, you know, it all goes back to truth
seeking. I think that the AIs are no different. And I'll give you an example of how this has
been happening, like, in our world without us even realizing it. And I'll use Google as a perfect example. They walk this back anytime they can, but they never fully walk it back.
Like the search results you get are not the search results you're looking for half of the time,
simply because there is an agenda
behind some of these things.
And the easiest one to kind of show people: search "why is my wife yelling at me?" You're going to get, like, the AI answer: well, your wife is yelling because she feels unheard or unhappy in the relationship. But search "why is my husband yelling at me?" and you used to get something really interesting, which was the National Domestic Violence Hotline
as the top result. Since then, Google has obviously walked that back just because they've
been called out on it, so much so that they don't even have an AI response. You literally just get the top result, which still says reasons could include high stress, anger, mental health struggles, childhood patterns, poor communication skills, insecurity, or wanting to assert control. It's already happening, guys. That's the piece that we kind of forget about. We're already being manipulated with our results as it is. This just takes it to a whole new level.
Did you guys see the South Park episode recently
where they have Randy in like a ketamine K-hole?
And the only way that his wife can get him
to respond to her is if she talks like she's chat GPT.
And she has to basically copy its mannerisms
and then he'll, like, come out of the K-hole and kind of engage with her. So, yeah, they're taking this to real life.
That's disgusting, Gav. Where can I watch it?
Yeah, you people disgust me. Where can I find this?
It's South Park, the new Simpsons.
Do you guys watch South Park?
I watch South Park a lot when I watch TV.
I don't watch anything now, but back in the day I did. They are hilarious. They always hit the mark. And I love how
no one is safe. So, yeah,
I've been a fan of them for a while,
but I don't keep up.
Do you think that the government administration should be at all more involved in the AI side of things, or is that more just for the companies to figure out themselves?
Personally, I think government getting involved with AI
is the worst idea. I think about the difference between Kamala AI and Trump AI, and it doesn't matter if
I think one is good or the other is good.
I think that they are so different that it illustrates the nightmare of getting the government involved.
Like, I think this is a free market thing.
I think that it's a, uh, let the builders build thing.
I'm hopeful that the best, most altruistic
builders end up in the same house and, you know, they win the war. Cause I don't trust
governments. I trust governments less than corporations. Definitely.
Yes. I think governments should get involved, but not in the sense of them telling people what to do, but simply empowering people to be able to do it.
That's about as far as they should take it, figuring out how they can keep things open, free and fair.
That's it. Other than that, no way.
Yeah, the only thing they're going to do is just use it to their advantage and, like, amplify their side of corruption.
And let me use this platform to announce
that I'll be hosting a summit in Brussels
And then Klaus Schwab, whatever his name is.
No, no, no, I'm legitimately doing it.
I just can't talk about it yet, but it's coming.
Oh, well, you can't talk about it, but you just told us exactly what's happening.
No, no, no, this is going to be an invite-only thing. It's only for government entities. I got, like, six countries already involved.
So the country of Penny is invited, right?
Yes, sir. You are always invited. I will always make an exception for you.
Quick question for you guys.
When you're looking at these different models,
do you guys use all of them?
Do you use them for different things?
When a new model comes out, I try it out. I code with AI models, and I definitely switch between them regularly. Some of them are better at some things than others. For example, Grok is really good at getting straight to the point. So if you want, like, a surgical change to your code, you use Grok. But if you want a robust change, then you use Claude Code, because it'll just keep rambling and rambling and changing
things to improve your code that you didn't even think you wanted done. So there are definitely
like strengths and weaknesses, uh, amongst the different models. I use different models to
generate images. Like even
for example, if I have a really detailed prompt and I want to involve a lot of objects in it,
then I use ChatGPT to make the image. I would never go to ChatGPT for any other image,
but for some reason, when you're describing like, you know, a frog on a log, uh, riding a rocket
ship on the moon, it can like incorporate all of those different things
into an image somehow coherently, better than the other ones do. I use Midjourney for sharp images. I use Grok for quick video animations. I'm definitely all over the place. My monthly AI subscription total is probably in the neighborhood of four or five hundred dollars. Definitely too much. But, you know,
I mean, I'm in it. I'm a builder. I love it. So yeah, I mess with all of them.
I think the best part Penny is using models to talk to other models. Like I will leverage
Grok a ton to create me prompts for me to use elsewhere. Like that's to me, like the most fun
part of things, just because I want to improve my prompting. And if I can use a model that, you know, cleans up the prompt pretty well to dump it into somewhere else, I've seen some pretty good results with that.
I'm such a good prompter, it's hard for AI to improve on my prompts.
Do you use Excel?
And humble.
He forgot to say, and humble, Lani.
Do you use like XML or is it XML or XLM formatting?
I saw an article from OpenAI on how to kind of like format your or structure your prompts
to get better results from AI.
And I've been playing around with JSON a little bit.
I feel like JSON really does work.
Or is it just how you structure yours?
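The kind of JSON-structured prompt being discussed here might look something like the sketch below. The field names are made up for illustration, not taken from any official formatting guide; the idea is just that a serialized structure separates the task, subject, and constraints more cleanly than one long sentence.

```python
import json

# Hypothetical structured prompt: the keys are illustrative,
# not an official schema from any model provider.
prompt = {
    "task": "generate a short video clip",
    "subject": "a frog on a log riding a rocket ship on the moon",
    "style": {"lighting": "cinematic", "mood": "playful"},
    "constraints": ["no text in the image", "16:9 aspect ratio"],
}

# Serializing with indentation keeps the structure readable
# for both the human writing it and the model consuming it.
structured_prompt = json.dumps(prompt, indent=2)
print(structured_prompt)
```

The resulting string can then be pasted into whichever model you're using, which is also how the "one model cleans up a prompt for another" workflow mentioned later would hand things off.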
So when you say XML, it brings me back like 25 years ago
to one of the worst projects I ever built where I had like an XML browser for,
it was actually so that you could play like
a bunch of, uh, the same video signal, the same movie on like a hundred different TVs at Disneyland.
I was building something for that. And I had XML in the background and it was like the biggest
nightmare ever. So JSON is like, in my opinion, an improvement on XML. They're both, uh, like
data format languages. And in my opinion, JSON is superior. So if you're
already using that, I can't think of any reason to use XML over it. But I haven't, like, studied specifically whether the models can handle XML better. I don't see why they would.
So they're both data standardization formats. That's all it is. You're either dealing with, you know, objects, or you're dealing with an extensible markup language, which is what XML is. And you're talking about bringing me back, dude. That's the reason why I quit college in 2005. One of my first classes, I was taking XML, and I'm like, I know more than you and I can't sit through this. It's annoying. It's terrible. And then I went to work at GE Healthcare instead, with a real job, instead of doing the college thing.
I hate XML with a passion.
Yeah, it's just very easy to mess up.
JSON is the way to do things.
Stick with it, and I hope XML dies.
It was going to be the next class before I quit, and I never touched XML whatsoever after I left college, which is the funniest part of it all. Like, there's always a way to not use it. So, yeah, I left that life behind.
Yeah, you're not missing anything. That was ugly.
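To make the comparison concrete, here's the same hypothetical record in both formats, parsed with Python's standard library. The element and key names are made up for illustration; the point is just that JSON maps directly onto native data types, while XML hands everything back as strings.

```python
import json
import xml.etree.ElementTree as ET

# The same made-up record in both formats.
xml_doc = """
<video>
  <title>Frog on the Moon</title>
  <duration>6</duration>
</video>
"""

json_doc = '{"video": {"title": "Frog on the Moon", "duration": 6}}'

# XML: everything is an element, and values come back as strings.
root = ET.fromstring(xml_doc)
xml_title = root.findtext("title")
xml_duration = int(root.findtext("duration"))  # manual type conversion needed

# JSON: maps straight onto dicts, strings, and numbers.
data = json.loads(json_doc)
json_title = data["video"]["title"]
json_duration = data["video"]["duration"]  # already an int

assert (xml_title, xml_duration) == (json_title, json_duration)
```

Both paths recover the same data, but the JSON side needed no schema-specific conversion code, which is roughly the complaint being made here.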
come in and continue the conversation. Dude, he's such a
great host. Y'all know that?
No, we literally carried the card.
It's been an hour. Did you notice? Gab didn't even
I brought up the story from the New York Post of ChatGPT telling the kid to kill himself. You know, that was my contribution.
No, I do have to actually wrap up on this one.
We are at the top of the hour.
I really enjoyed the conversation.
If anybody wants to shout out stuff that they have upcoming, Spaces, things that you're doing, I'll let you guys have an opportunity to do that now. Action, I know you can't talk much about it, but any other comments on what's coming up?
I did. So I started a company with some really cool people over the last month, month and a half or so. That's what I'm doing. I've got a lot of, like, government-to-business stuff going on; that's where that's coming from. I might end up at the UN later on at the end of, what is it, September. Like, I got a lot of stuff happening on that front. So that's why I'm not as active, not as public, because I'm dealing with people. Definitely more profitable, not nearly as fun as you guys. So, like, that's why I don't want to miss these things.
Love it, love it.
Still daily trying to make progress
on the X-themed game, Planet X, that I'm building.
If you play video games, follow me, subscribe.
You'll get first access to the game that I'm building.
But we'll be posting videos about it and opening it up for people to play soon.
Is it going to be like a Penny Arcade, you think?
The Penny, yeah. Everyone join the Penny Arcade while we're shouting things out. The community, the Penny Arcade, it's actually really active. It gets lots of posts every day. If you post something good in there, I'll pin it for you and get you a bunch of free impressions. Join the Penny Arcade.
Oh my God, the Penny Arcade was the first community I ever joined on X, and I'm still in there. Absolutely love that community. Met so many cool people. Shout out to you, Penny. Like, you know, you're one of those inspirational people that I've been following for a very, very long time, and you never disappoint.
Definitely shout out. Subscribe to Penny and Wolf, by the way. You guys need to because
these people are bringing value every single day, multiple times a day. As for me, I've been doing
Spaces a lot. Every weekday, 11 a.m.,
I'm doing spaces on entrepreneurship
and personal branding-related topics.
And then every day around 9 p.m. EST, 9, 9.30,
I've been doing a lot of Grok Imagine-related stuff.
We talk about AI art, video generation,
how you can prompt stuff, and how you can improve. Basically, we learn from one another. So, if you want to be part of those, follow me and turn those notifications on so you know when I'm starting those Spaces. And, yeah, definitely very, very happy to meet you and have you guys on stage to talk and, you know, share ideas and stuff like that. So, yeah, thanks, Will, for having the space.
Thanks so much to all of you for coming on, making it awesome.
Excited to see the new companies and new concepts that y'all are building.
So I'm about to go on a two week travel spree.
So we are going to be on break with this space for two weeks.
I am going to be traveling through New York to Las Vegas, to California, back to Las Vegas,
back to California, and then back here to the wonderful world of Puerto Rico.
So a lot of travel coming up for me.
And so I am going to be West Coast
and doing a whole bunch of stuff and on flights.
So I will not be able to host this for two weeks
and then I'll be back after that.
I know Alex is on vacation right now as well
and then he'll be back as well.
So we'll be back in full swing on the 18th is the goal.
So just putting that out there.
Looking forward to talking again soon.
Have a good one. Thank you.