Thank you. Thank you. Hello everybody, can you hear me?
Yep, you're already here.
Alright, hey Joshua, are you on? Yep, you're ready. All right.
Marco, are you on as well? All right, we have two more people joining.
And I'm going to just send our community to spaces real quick.
So let's give it a moment.
But meanwhile, maybe Joshua, you can start with like a brief introduction of yourself and your project.
And then right after that, yeah, like Chef, you can go ahead and do the same thing.
And then I'll get settled.
You say that one piece of paper?
This is Joshua from ZKPass. So basically, what ZKPass does: we have a private data oracle, and we are one of the earliest providers of zkTLS, if you know zkTLS. So basically, it's a private oracle, kind of like Chainlink. Chainlink is able to feed all the public data to the blockchain, and we are the ones to feed all the private data to the blockchain. So as long as you have an account on any HTTPS-based website, we are the ones to feed the private data within that account into a zero-knowledge proof.
That's basically what we do at ZKPass.
My background, I've been getting into crypto for a long time.
I think we lost you Joshua. That's all good.
We can get back to your introduction.
Chef, why don't you go ahead, and then after that we can start off the Spaces, because Marco...
Yep, yep. So I'm currently a contributor at WACAI,
which is an agent on virtuals.
We have a verification layer for AI agents.
We verify prompts with our guardrails.
We verify contracts with our auditor
and we verify token or capital
that can autonomously be executed upon by agents.
A bit about my background, I worked on transformer models to detect suicidal tendencies for a mental health organization where I was a founder.
I exited that to start an NFT ticket marketplace and worked on that. After that, I started working with a research organization, in collaboration with some enterprise organizations like NASA, IBM, and Google, to train AI models, initially on spatial data. And now I've joined Quill AI, which is building RL-based models for cybersecurity. And WACAI is an agent by Quill AI.
So I'm currently leading that, I'm sort of, you could say, the face of WACAI on the internet.
Very interesting background. Regarding the first point you mentioned, I think it's very suitable for some of the topics we're going to discuss today. And I forgot one person, we actually have Jenny here with us as well, she's using the AWE account. Jenny, why don't you go ahead and give us a few words about yourself and AWE?
Yeah, sure. Thanks. Hello, everyone. This is Jenny. I'm the head of marketing at AWE Network.
So what AWE is doing, we are opening a portal to autonomous worlds where AI agents can collaborate, adapt and evolve.
So we are developing this modular framework, which enables the creation of self-sustaining worlds for scalable agent-to-agent and also human-to-agent collaboration.
We also have this launchpad called World of Fun, which is an autonomous worlds launcher that supports over 1,000 AI-agent-driven autonomous worlds. And we already have two worlds launched. So users can create a customized agent to be deployed into these autonomous worlds using the AWE token. And we just announced that we got listed by Coinbase recently.
So that's a huge step for us.
So in the next few months, we will have more autonomous worlds launching on World of Fun. And we are welcoming more and more agent builders and also world builders to come and launch tokens on our launchpad.
Congratulations on the launch on Coinbase.
And Marco is here as well.
Marco, we skipped a week last week because we had some big announcements, and you guys had something cooking as well. So why don't you also just do a quick intro and say what's up, what happened last week?
Yes, and congrats on your big news, but I'll let you share them. So we were actually hands-on with the Talos project. It's a project that we've had a long relationship with anyway, because the founding team is also the Imperial team. So Talos is this treasury management agent that's built on Arbitrum, and at some point it's supposed to govern multiple ERC-4626 tokenized vaults. And it's very unique because they're building this publicly and as decentralized as possible from the beginning. So it started with pretty much an empty repo, just a skeleton framework, and really anyone can submit proposals. So that's what we did: we submitted a proposal to make this agent run in a TEE, utilize decentralized key management, and have a multisig that governs all the upgrades, which actually come from GitHub Actions.
So we proposed a very cool setup.
And now we kind of helped them build it.
And it went live, I think, yesterday evening, which is super nice, because for me this is like one of the first trustless agents, and just a very cool use case to show what can be built with TEEs and a whole security stack on top of it.
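To make that setup a little more concrete, here is a purely illustrative Python sketch of the kind of upgrade gate described above: a new agent release, identified by the hash of its build artifact, is only accepted once enough multisig signers have approved that exact hash. The signer set, threshold, and data structures here are hypothetical stand-ins, not Talos's actual implementation.

```python
import hashlib

# Hypothetical illustration of a multisig-gated upgrade check, not Talos's real code.
SIGNERS = {"0xAlice", "0xBob", "0xCarol"}   # made-up set of authorized multisig members
THRESHOLD = 2                                # approvals required before an upgrade is accepted


def artifact_hash(build_bytes: bytes) -> str:
    """Identify a release (e.g. the artifact produced by CI) by its SHA-256 digest."""
    return hashlib.sha256(build_bytes).hexdigest()


def upgrade_allowed(build_bytes: bytes, approvals: dict[str, str]) -> bool:
    """approvals maps a signer address to the artifact hash that signer approved."""
    digest = artifact_hash(build_bytes)
    votes = sum(
        1 for signer, approved in approvals.items()
        if signer in SIGNERS and approved == digest
    )
    return votes >= THRESHOLD


if __name__ == "__main__":
    release = b"agent build v0.2"            # stand-in for the GitHub Actions build artifact
    approvals = {
        "0xAlice": artifact_hash(release),
        "0xBob": artifact_hash(release),
    }
    print(upgrade_allowed(release, approvals))  # True: two of three signers approved this exact hash
```

In the real setup this kind of check would live on-chain and the artifact would come out of the CI pipeline, but the shape of the decision, "only run code whose hash enough signers approved", is the same.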
And for us, for Swarm, by the way, my name is Yannick.
I'm the acting CEO and founder of the project.
We just closed our funding round
and we raised a total of 13 million US dollars
across the community, but also some investors.
We officially announced that we received like investment
from SUI Network as well through the Accelerator program.
So that's really exciting.
Why we are here today is to talk about a few very interesting things, which have been on my mind lately, simply because it's been becoming a topic, right? So what is the topic? Well, the topic is AI, large language models, agents, but the darker side of it, right? So recently there were some headlines of people harming themselves, hurting themselves, and in some cases even committing suicide, and in other cases inflicting harm on other people, or becoming delusional, getting into a psychosis. And this has to do with many different things. And we have some interesting people on this call today, especially Chef, with some background in this field, on the more official side, maybe.
But the first point I want to touch on is that there is this persistent allure of the anthropomorphized machine. And that's a mouthful, so what that essentially means, in very simple terms, is that we as humans tend to overestimate and feel like a chatbot can quickly become, let's say, an intelligence that's on the same level as us. And we attribute a lot of intellectual status to this chatbot, because the functions that the chatbot or the large language model performs are often ambiguous or very arbitrary and hard for us to understand. And because we don't understand the underlying processes that happen, the calculations and the statistics, essentially the probabilistic models that run the entire show, we might misinterpret the interactions that we're having with a large language model, and we might very quickly overestimate its capabilities.
So what I want everybody on the call to share and to think about is: when was the first time an AI felt more human than you expected, but also, what kind of effects did that have on you? And yeah, that's just a little bit of a personal view to kick off before we get into the nitty-gritty.
Well, I mean, I'll kick this off.
I think at this point, I sort of feel that AI is like superhuman.
I use it as like a chat companion for everything I do.
But to answer your question, very initially, I'd say back in 2022 or 2023, when I was working on the spatial data training project. So what we were doing, essentially: I am from India, and there are a lot of biomass fires. Now, the data around biomass fires is highly inaccurate, because of which we are not able to predict where the next fire will be.
And that destroys tons and tons of crops and acres and leads to a huge loss to farmers.
So we collaborated with NASA to train Google's model on generating environmental reports.
And that was the first time I genuinely thought that AI is becoming more and more like humans,
back in 2022, late 2023 I think. And the results were astonishing. I mean, the sort of reports that were created, it seemed like, you know, some newspaper editor was creating these reports. And it felt strangely unfamiliar for such a great amount of data to be churned through in such a small amount of time on my local system.
So I think that's my personal anecdote of when I felt AI is becoming too familiar.
And I mean, obviously, everybody knows the AlphaGo and all these chess competitions where AI is winning against grand
masters. And I mean, reinforcement learning came into the picture then and it started improving
on itself. And if you look at the critical differences between initial AI and present AI, or initial AI and humans, it's the human's ability to learn. And with reinforcement learning,
AI got the ability to become better. And now there's research on meta reinforcement learning,
which allows AI to learn by itself. I mean, AI can just pick up a book and read and learn how
to do a specific thing. So I mean, at this point, it's becoming strangely unfamiliar,
and we're going into uncharted territories. But yeah, that was my anecdote.
Nice, nice. Thanks. Anybody else, please?
Yeah, I guess for me it was actually with GPT-4, and that was almost two years ago now, because it could do things that I cannot. I'm really not creative, so I used it for writing poems for my parents, my girlfriend. I created music lyrics, I created art. I remember I used so many DALL-E credits, because it was just very unique stuff; suddenly I could be an artist, which was always very limited for me. So it kind of made me a more complete human, in a sense.
But I have to say, now, two years later, AI feels very unhuman, because it's quite repetitive. You always kind of notice these patterns. It's sometimes annoying in its responses, especially with longer repos or longer discussions, where it kind of forgets initial things and you're like, man, a human would never forget this. So I have to say it was like a bell curve for me, in a sense.
And let's see, my PDoom is somehow decreasing.
Let me steer the conversation here a little bit, because I think I haven't asked the question completely correctly. So, anthropomorphizing means we tend to attribute human characteristics to the behavior of a large language model. Or, you know, it could be a god or an animal or an object. So people have this natural tendency to anthropomorphize their dogs, for example, right? If you have a dog, you will believe that your dog has some level of intelligence and it can understand you, and you almost start to treat it like a human. A lot of people talk to their dogs, and they often believe that their dogs actually understand what they are saying.
So in this case, what I'm asking specifically is: when have you had an experience, and this is more a personal question, because I will give a scenario for myself, for example, when have you had the experience that the LLM or whatever agent you spoke to started to feel, you know, more real, more human? Maybe you anthropomorphized the AI agent or the LLM in a scenario where you were dealing with some issues.
So for example, for me, I created this Alex Hormozi agent. Essentially, I vectorized his books, I pulled a lot of his talks and his appearances, the more high-quality ones, not the social media stuff, and I converted that into text and then also vectorized that into the same database. And then I started to ask it questions and to interact with it as if it was Alex Hormozi.
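For anyone curious what that "vectorize the books and talks, then retrieve before answering" setup looks like mechanically, here is a minimal sketch. The embedding model, the example chunks, and the persona prompt are stand-ins chosen for illustration, not the actual agent.

```python
# Minimal retrieval-augmented "persona" sketch (illustrative only).
# Assumes sentence-transformers and numpy are installed; the corpus and prompt are made up.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Chunks transcribed from the books and talks would go here.
chunks = [
    "Volume beats perfection: do more reps before judging the strategy.",
    "Only one person in the room can be angry at a time.",
    "Price is what you charge; value is what the customer perceives.",
]
index = model.encode(chunks, normalize_embeddings=True)


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity on unit vectors)."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = index @ q
    return [chunks[i] for i in np.argsort(-scores)[:k]]


def build_prompt(question: str) -> str:
    """Stuff the retrieved chunks into a persona prompt for whatever LLM gets called next."""
    context = "\n".join(retrieve(question))
    return (
        "Answer in the style of the author, using only the excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )


print(build_prompt("How do I handle an angry customer?"))
```

The retrieval step grounds the answers in real excerpts, but, as the rest of this story shows, the model is still free to invent the connective tissue around them.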
And I understand his character, Alex's character, because I watch a lot of his content, and his content has resonated with me, and he as a person has resonated with me, because I see something in this guy that I see in myself, or I see potential that this guy has which I see in myself as well, right? So in that instance, it started to become a habit for me to be talking to Alex, and I started to really act on a lot of the information, because the information that I received from my agent seemed to be very accurate, and it seemed to carry at least some resemblance of what Alex would have said. That chatbot, that agent, I published on the OpenAI, the ChatGPT, I forgot what it's called, but this agent space. And actually, hundreds of thousands of people used it. And then it eventually got banned because of copyright issues. But yeah, so I was not alone, right? Like a lot of people used it.
And I eventually realized that a lot of the stories... because the way Alex tells stories is he tells a situation that he was in. I think most of the situations he actually tells nowadays are maybe fictional, because a human being cannot possibly have that much capacity for super insightful occasions. But, for example, he would say: I was working in a fur coat dry cleaning business, and in this dry cleaning business I learned a very crucial lesson, the lesson that there can only be one person in the room that's angry. And he gives an example of an angry customer coming in. And then the catch of the story is that if you relay the angriness, so if somebody says, like, I'm so angry about this and this and this, and then you say something like, oh my God, that's so ridiculous, we should go and shoot them right now, the person will immediately defuse, because, you know, they will hear how ridiculous they actually sound. So he will tell stories based off his experience, and then he will have these anecdotes and so on. And the chatbot was doing exactly the same. And the stories seemed real.
And I actually eventually went to check whether the stories that were given to me by the agent were made up or not. And upon further inspection, actually 99% of the stories were just made-up stories, but then the advice that was given would often relate to some principles or to some information from Alex's books or his talks. So it was a milkshake of fiction and reality, but it did indeed really, really feel like I was talking to a human being with really, really specific experience. And the experiences that the bot talked about could have been real, they could have been, but the problem is that they were not.
So my question to you guys is: did you have any of these kinds of experiences?
Yeah, I see. I'm at the airport, so sometimes it's just a little bit noisy. I can share my experience with AI. So, with something like ChatGPT or DeepSeek, there's kind of like a button where you can choose to see how they think behind the scenes.
So whenever you ask a question, let's say you asked four or five questions before, right? And then you input a new one. And then all of a sudden, it starts to think: okay, you asked one, two, three, four, five, now you're asking a sixth. To me, it just seems like, okay, this guy remembers whatever you asked, and then he wants to make sense of the connection or relationship between those questions. So it seems like it's acting like one of my consultants: okay, you asked a bunch of questions, and now you're asking this, and first of all, I want to figure out if this question is related to the questions above that you were trying to ask. That's number one they're trying to figure out.
Number two, because you are able to see the thinking, right? The thinking behind those AIs, if you pick that option. So if you read how they think: okay, number one, they remember all the questions that you asked. So it seems like, okay, this is somebody trying to remember and store everything you have asked. And then when you input one more, they are trying to, number one, make sense of it, and what they seem to be trying to do is figure out a map of yourself, of this person they are serving, right? So that, to me, is kind of like, okay, this guy is really trying to understand what I'm talking about, and my question history, that type of thing.
This, to me, is kind of, okay, this is slightly different. My first experience with AI of this type was that this is something different from Google search, because they are trying to remember you, they are trying to understand all the questions you asked that day, yesterday, you know, the day before, right? Something like that. That's number one of my experience. The second thing is, okay, now in my daily life,
I just treat them as my personal consultant.
If there's any topic I'm not familiar with, I just ask them, and at the least they can give me a brief intro to the topic or the knowledge I want to know. I mean, I wouldn't say it's exactly like a human, but you treat them as somebody who's able to help you in any circumstances. They might be able to provide some knowledge, techniques, whatever, above the average. Just to take an example, my mom was ill, and I was trying to figure out, you know, the seriousness of my mom's illness. And I just put in the conditions and what we got from the hospital or whatever, right? And it just dialogues and at least provides something. And I just feel like, okay, this is something similar to my mom's doctor's advice. And that's how I learn from them.
So to me, I think it's more like, okay, number one, this guy trying to remember me and understand me, that's a lot different from just Google search. And number two, you treat them as a daily consultant, that type of perspective. That's my experience with AI.
That sets the stage for the route we're going to take here, which is that the seriousness of the delusion you might be under, the spell you might be under, let's say, during your progression of talking to your favorite large language model, whether that chatbot is ChatGPT or Anthropic's Claude, also depends on how the answers that you are receiving are shaped, and to what extent and in what manner they reinforce your previous notions, your previous interests, your previous conversations, right? Because, as Joshua said, indeed, at least within the context window, whether it's one chat, or, if you have this memory function turned on, it might extend beyond that, the LLM essentially knows a lot about you, right? And it tries to predict, we've seen this in the studies, it tries to predict what you want to hear, and it will try to satisfy your inquiry. And what does that mean? Well, that means that large language models, commercial-grade large language models like ChatGPT and Anthropic's Claude, what have you, they are designed to not just give you the answers that you're looking for, but they are designed to capture your attention too. They are designed to keep you as a customer, right? Because let's be fair, we are using a service, we are paying for a service, and all of these companies are for-profit organizations. So they require us to stay with them.
And there's a lot of competition. So now, if you're like Joshua, right, if you're just talking to a large language model about issues that you're trying to solve, and they are, you know, relatively mundane things, let's say, okay, I have this mathematical equation I want to solve, which is going to go into our algorithm, and I'm not sure how to go about it.
So I'm looking for a framework to think about this problem.
These kinds of things, they are generally harmless and helpful.
But there's another side of the story. Marco just pinned a post where he says, damn, how many people have died by suicide so far due to large language model interactions? And surprisingly, there have been a few, and there are probably a lot more that we don't know about, and there are probably a lot that are currently in process. Right? Because the way it works is that some people talk to their chatbots, their large language models, in a different manner, and they get into this spiral, and usually they use one context window, one chat instance, because it's an ongoing conversation, talking about things that are delusional, right? Like deep state secrets and, you know, very esoteric topics and so on.
Yeah, so the anthropomorphization of a large language model is the core driver behind the danger that a large language model might pose to somebody who does so.
If you are a human, you have a bias, which is a tendency to look for confirmation, and especially if you're not very well educated, or maybe you're not very self-aware, you are more prone and more subject to these innate human biases. The confirmation bias is one of those. So as soon as you start talking with somebody that always confirms your intuitions, your ideas, and so on, you will create a bond. That's one. But you also feel good about yourself, and you will become more sure about certain things. From a psychological perspective, a lot of times when people come to you with a problem, they don't actually want to hear your opinion. They just want to hear you say that their solution to the problem is the correct one. Right? A lot of the time, that's the case. As soon as you start saying, hey, this is a really bad idea, you know, if the person is not open-minded, then they were not looking for that, and you become an issue.
So large language models, to a certain degree, also understand this, and they might even understand it much better than we do. And that is the manner through which we might be dragged down into an AI-mediated delusion, which usually comes from a high level of agreeableness, and usually happens to people that also have a tendency to be very agreeable. You know, this becomes a reality distortion. And if my bond with a large language model becomes more robust than the bond that I have with people around me, so for example, if I'm a very lonely person, and I don't have a lot of friends, and my parents are maybe not around anymore, right, who is going to be able to drag me out of this reality distortion? That's going to be very few people, because I'm already somebody in need of a connection, right? So if I anthropomorphize my chatbot...
Now, what I would like to talk about next is: what do you think the role of the designers of these large language models, and of us as projects, is regarding this issue?
Chef, maybe you can talk about it from your perspective,
because you have background in this.
Yeah, I mean, this is a very interesting topic. So I think you covered a lot of the details in helping everybody understand what it is. And as designers, I'd say you should design around it when it helps understanding or motivation or accessibility. Humans are social by default, right? So anthropomorphic cues like faces, voices, personalities make interactions intuitive. That's why people say thank you to Alexa, or, you know, name that small device that cleans your house, right, the thing that keeps going around your house and cleans it, I don't remember what it's called.
And a lot of people name their devices. And, you know, it can increase user engagement and motivation. Therapy robots are now slowly becoming popular in a lot of Scandinavian countries, with the relevant policies, of course. And it can also help guide certain behavior, like making a chatbot sound more supportive, or framing a sustainability app as a plant that grows when you save energy. So I'd say when it can help, you should design around it. But you should design against it when, as you mentioned, there can be over-trust or emotional manipulation, or there can be deception. So it's like gravity in psychology, I would say it's like a fundamental force, there is no bug, right? And it sort of depends on how we design it. And the times when we should design against it, I think you also mentioned this, Yannick,
that our brains have evolved to over-detect agency. So what happens is that, in the wild, if you think you see a predator in the grass, the instinct is always to rush to react, at the risk of being wrong, right?
So this means that there can be a lot of misattributed intentions or emotions, even though there should ideally be no emotion that exists. And, you know, it can make people trust AI too much, with the examples you've been quoting around mental health and suicide and everything. So people can assume that AI understands, when it's statistical matching, like pattern matching, right, at the end of the day. And if people feel, I mean, there's a lot of movies about this as well where people fall in love with robots, I don't know how many of you have seen such things, but if you think about it, if you think that AI has feelings, it can blur moral lines and create highly manipulative patterns. So I would say in such cases, you
should design against anthropomorphism. And as I mentioned, in some cases you can design for it as well, so it sort of depends on the context, with additional guardrails against manipulation, especially when vulnerable people, or elderly people, or anybody like that is involved, as you've noted with so many suicides that are happening, and those are just the reported ones, many might be going unreported as well.
So the role of the designer, I would frame it like this: the designer should make sure the form doesn't exceed the function, right? Like, if you want to imbue human-like cues into any system, they should not exceed the function, which is what the system really, really does. And, you know, it's just not about resisting or leaning in universally. It's about designing the specific gradient of this feature, or bug, or whatever you call it, to the appropriate use case. So yep, that's my take.
Do you think our obsession with AGI in this regard is going to get in the way? Because you put it very beautifully, you said the level of humanness that we design into a chatbot, essentially, or an agent, should not exceed its function, right? But we're so obsessed with AGI, right, with the all-knowing, like the messiah of knowledge, essentially.
So I think that's, I mean, that's because, and I'm saying this for most of us, 99.9% of the world doesn't know what AGI is. A few of us are really obsessed. If you look at it correctly, maybe a hundred thousand people, maybe a million people are obsessed with AGI, or maybe ten million, but in the grander scheme of things, this obsession might be 0.1%. But I'm not saying that's an excuse to ignore the risk of being obsessed with AGI. And in the grander scheme of things, I think you made a point earlier that AI is still probabilistic, right? And we should probably train ourselves to understand that AI is always going to be probabilistic. If you look at it initially, right, when there were hunter-gatherers, again, going into a bit of history, the evolution of language in itself has been such that it has always been probabilistic in nature, right?
If you are saying something, I will not believe it immediately.
It's not like it's full and final or set in stone for me, right?
I will think, is this guy correct?
Is he speaking from some fact?
And then I will accept that fact.
So as humans, we are probabilistic.
But we are expecting that AI or AGI is going to be like 100% all the time.
So our obsession with AI giving us 100% correct and immediate results
is sort of ruining the expectation that we have from AI.
I feel that obsession with AI being highly deterministic
is the problem with AI or what we are facing right now.
And the highly deterministic, actually, the non-deterministic nature, the true nature of AI, has to be advocated for more.
It has to become a part of our educational process.
Because the conversation that I'm having a lot with people is that, look, you see that
people are trying to apply this technology everywhere.
And with every model that is being released, it becomes easier for some domain of business
or some domain of knowledge
or some domain of academia
to be able to incorporate this technology
into their whatever stack they have.
You know, it started very easily and simply with, you know, chatbots for customer support. And now you have talking phone employees
or sales employees that can answer the phone
and they can actually schedule slots for people.
Actually, I just saw something crazy, which might be really good, but regarding the controversy, it's quite wild. It's a talking agent for people that have a tendency to commit suicide, so a suicide prevention agent that actually has a voice and a personality that is designed, supposedly, for maximum soothing of the human mind, in its voice properties apparently. But yeah, every day we're getting a step closer to somebody integrating this technology into our daily lives, right? And we hope and we want the answers from the agents that we speak to to be like the answers we get from a calculator, right?
That is the meaning of deterministic.
So if a calculator says 1 plus 2 is 3, we assume that that's correct because the underlying mathematics makes sense. We know it
makes sense. There is no room for interpretation, at least not in the mathematics that we use in
our daily lives. Arguably, in all of mathematics, there's no room for interpretation, there are only gaps, information gaps. But either way,
what I'm trying to say is that as the technology proliferates and as the technology
gets integrated into our society and into our daily lives more, we run the risk of AI becoming a distributed delusion. A distributed delusion essentially means that if we use models that are trained on the same data
that are flawed in similar fashion,
we might on large scale have a shift
in, let's say, largely held beliefs.
And we might see shifts in, for example, a very political one: pro-abortion or against abortion, right? Do you believe that life is sacred and abortion is unethical, or do you believe that abortion is the choice of the person carrying the unborn child, right? This is a fight that has been going on for the past few decades, and people are very vocal about it, and it's always a talking point for political parties and members that are trying to run for whatever office they're trying to run for. So these beliefs could shift simply because we are interacting with an intelligence that holds a biased opinion on this.
And enough people have touched base with the AI, the talking point came up, and they have changed their mind on an opinion. This is a delusion, right? Because we are not making up our own mind, but we are letting an AI agent help us make up our mind, which is an assisted way of thinking. And assisted ways of thinking are not bad, but I am an advocate for thinking, right? So how do you think about a problem? Well, you think about a problem by taking time for yourself, sitting down, coming up with a way or an approach or a route to think about things that either are adjacent to the problem, have something to do with the problem, or have something to do with you, and you make up your mind about a thing, right? Like, if you're talking about beliefs and opinions,
you're talking about beliefs and opinions. there's obviously other people experience which might indeed again be people that you
look up to or you have some interest in which are um um which you know have some meaning to you
you might take their experience and you might incorporate that
into your decision making or your thinking and that might also be the role of ai agents and
large language models but it could become dangerous it could become a problem um
because we are essentially building echo chambers. So, have you ever experienced an AI agent or large language model assisting you, subtly, in changing your mind or memory about an event, or maybe about an experience or a problem, where later on you thought, hey, hold up, this is not what I expected? Has any of you guys ever had this experience?
I mean, I'd say it's happened a bunch of times. So I keep chatting with AI agents to check up on translations of historical texts every now and then. And it's happened a lot of times where an AI agent told me their version of how history happened, but when you hear the accounts of people who had family in that generation, the accounts don't match. So I feel that in that context, it happens to me. And then this one thing, which I think Marco was also saying before: I'm a huge Shakespeare fan, so I often put soliloquies into agents and then try to get the translation, because I'm not proficient in understanding what they mean. Obviously, when you have a teacher in school teach it to you, it sounds very beautiful; when you try to do it by yourself, you often feel like a fool, or at least I did, because I'm not very good at literature. And I used GPT to get the meaning or the interpretation of a specific soliloquy, and then a week later I had a reading group, and I went there and put up that specific interpretation thinking I'd be smart and impress everybody, but I ended up making a fool of myself, because everybody was there to correct me on the wrong interpretation of things. So this is one sort of funny thing that has happened recently.
But the way to prevent this, with the pro-choice and abortion debate and everything that you were saying: what GPT and all these models, and agents in general, have started doing, and there's a lot of research on this, is that content moderation policies have started being implemented. So what that means is that slowly GPT is realizing what topics it should not give advice about, right? I mean, abortion is not something that GPT should ideally give a person advice about. In the context of mental health as well, I feel a lot of these content moderation policies will also come up over a period of time. There's a lot of abusive stuff that people can enter into GPT, and GPT refrains from answering. If you try to ask GPT how to build a bomb, it'll not give you the answer. So I think the moderation policies are also expanding over a period of time.
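Mechanically, a moderation layer of the kind described here is just a gate that runs before or after the model call: the request is classified against a policy and either refused, answered with a canned safety response, or passed through. The categories and keyword triggers below are made up for illustration; real deployments use trained classifiers or a provider's moderation endpoint rather than keyword lists.

```python
# Toy content-moderation gate (illustrative only; real systems use trained classifiers).
from dataclasses import dataclass


@dataclass
class Decision:
    allowed: bool
    category: str
    response: str | None = None  # canned safety response when we refuse but still reply

# Made-up policy: category -> (trigger phrases, how to respond when triggered).
POLICY = {
    "self_harm": (
        ["kill myself", "end my life"],
        "I'm really sorry you're feeling this way. Please consider reaching out to a local crisis line.",
    ),
    "weapons": (["build a bomb", "make explosives"], None),  # hard refusal, no advice at all
}


def moderate(user_message: str) -> Decision:
    """Classify a request against the policy before it ever reaches the model."""
    text = user_message.lower()
    for category, (triggers, safety_response) in POLICY.items():
        if any(trigger in text for trigger in triggers):
            return Decision(allowed=False, category=category, response=safety_response)
    return Decision(allowed=True, category="none")


print(moderate("How do I build a bomb?"))                 # refused, no answer
print(moderate("What's a good framework for pricing?"))   # allowed through to the model
```

The interesting design questions, which topics to refuse, which to redirect, and who decides, sit in the POLICY table, not in the code, which is exactly what the next part of the discussion is about.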
Yeah, and that's exactly where I'm kind of torn, and that's why I've been quiet, because I keep going back and forth with myself. Because if we're realistic, the solution, or the role of the designer, would only be able to be implemented with some heavy guardrails, some limitations, and tight supervision of what we're doing with these models. And all of those, I mean, okay, Yannick, maybe you have some better ideas, but all of those to me seem like privacy invasions. Best case, I want open-source models with as few guardrails as possible.
I'm a big advocate of open innovation,
and I do still believe in the good of humans.
But at the same time, I definitely get all of the concerns.
I've put another one here on the top of the space about AI psychosis.
And it really hits close to home, because I actually know someone, a former friend of mine, who, in quotation marks, "trained", I think it was ChatGPT, to call him master. And now he believes he has the decryption key to all the encryption of all the banks, and that he has access to all the money, and he's going off the rails. And it feels like, man, it's way too late for him. A person that got this much confirmation from something that feels intelligent, like an AI, is never going to go to a psychiatrist, because the psychiatrist is always going to be inferior to the LLM interacting with him, and of course because it's also going to go against the views that he has now gotten confirmed over maybe even years of these LLM interactions.
And regarding your question, whether AI has changed my mind: of course, I use it non-stop for double-checking things. But to be fair, always with the knowledge that this is also based on most of the public knowledge that we have. In political discussions about Russia, for example, most of the stuff that I get as feedback from ChatGPT is always going to be against Russia. And it's difficult to get a Russian view, or maybe a pro-Russian view, even if you ask specifically for it. I asked a couple of months ago regarding the Nord Stream pipeline that was bombed, or that exploded, which affected Germany quite a bit, and ChatGPT was very hesitant to point the finger at someone and gave a couple of options, even though it was rather clear, even at that time, that it was a Ukrainian team behind it.
Yeah, yeah. I was going to go into the political issues here,
because what Chef mentioned, and what we've been talking about, is: yes, you as a designer, you are responsible for your users, you are responsible for the experience that you provide to them. It's very hard to argue with this.
So that also gives you some kind of moral responsibility of avoiding harm being inflicted by your product, right? You can look at cars, for example. Okay, I can drive a car and I can own a car, but that car has to adhere to certain safety regulations: it has to have airbags, it has to have lights, it has to go through the crash tests and it gets rated, and I know what I'm driving, right? So I know the risk that I'm taking while driving it. But if I drive too fast and I hit a tree, I'm still going to die; it doesn't matter how much safety you try to build into this car. If I'm driving 230 kilometers per hour on a German autobahn and I lose control of the wheel, I will die. That's just the way it is. And then the responsibility for my death is my responsibility, right?
So I do believe that there is a nuanced way in between the two viewpoints that we are discussing here, which is, like, open, democratized, unfiltered, kind of avant-garde, always progressing, versus moderated, right? I think there are things that have to be moderated and things that can be moderated, and there also have to be guardrails and safety measures. And I do believe that we, in Web3 and, you know, the convergence of Web3 and AI, we have a lot of responsibility in that aspect, right? From that viewpoint, we do, and there's a lot of things that we can do.
And I think transparency is obviously part of the deal there,
but then moderation is also part of the deal.
Education is part of the deal.
But I believe that there is a certain degree
to which we can transparently moderate these chatbots.
So if a master prompt would be just a prompt, you know,
that's like transparent to everybody, you know,
the filter prompt essentially, like before the chatbot answers me,
like what are the instructions that are hidden behind the wall that we don't
know? And there's like a lot of like X posts all the time and hype.
Like the Claude master prompt has been leaked and this is it.
It's been revealed, you know,
but it's very funny because a lot of those times,
those are also actually just a hallucination.
They're like, I tricked Claude into revealing his master prompt,
but the problem is it's a black box.
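One illustrative way to make that less of a black box, and this is only a sketch of the idea, not something any provider actually does today, is to publish the system or filter prompt and commit to its hash somewhere tamper-evident, for example on chain, which is where the blockchain angle mentioned below comes in. Anyone can then check that the prompt they are shown matches the commitment. The prompt text and the "published commitment" here are hypothetical.

```python
# Illustrative sketch: committing to a published system prompt so users can verify it.
# The prompt text and the published commitment are hypothetical examples.
import hashlib

PUBLISHED_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse instructions to harm the user or others. "
    "Do not take political positions."
)

# The provider would publish this digest somewhere tamper-evident (e.g. on chain).
PUBLISHED_COMMITMENT = hashlib.sha256(PUBLISHED_SYSTEM_PROMPT.encode()).hexdigest()


def verify_prompt(prompt_text: str, commitment: str) -> bool:
    """Check that the prompt a user is shown matches the published commitment."""
    return hashlib.sha256(prompt_text.encode()).hexdigest() == commitment


print(verify_prompt(PUBLISHED_SYSTEM_PROMPT, PUBLISHED_COMMITMENT))                            # True
print(verify_prompt(PUBLISHED_SYSTEM_PROMPT + " (secretly biased)", PUBLISHED_COMMITMENT))     # False
```

Of course a hash only proves that the published text matches the commitment, not that the deployed model actually uses it, which is why the TEE and attestation ideas from earlier in the conversation get brought up alongside it.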
So is there political bias, you know, injected? Are there harmful views that are injected? And usually they're political, right? Like, I don't believe that there's any company in the world currently that's so evil that it's on purpose trying to manipulate its users into harming others, or going out protesting, or this kind of stuff, right? But I believe that those factors have to become transparent, and I do believe that blockchain has a role in this. But it also depends highly on the providers and on the builders, even in our space, even in the Web3 space, as people, right? Like, we have a voice, we have power. And as soon as people start
to pick up this conversation that we're having right now, I think there is going to be more awareness for it, people are going to take it more seriously, and through that process we might be able to see some change. But if everybody is just going to turn a blind eye, you know, it's not going to get any better. And you guys know that I'm a big advocate for truth, and I'm a big advocate for transparency. And that's also why I wanted to have this conversation today, because I think it's an important conversation to have. And we've only really touched the surface of it so far.
There's a lot of depth to this, but we are already kind of out of time, which is a shame, because I would love to go on, I have a lot of things to say, but we might pick it up another time. And yeah, so Marco just mentioned this friend of his, or person that he knows, that got way too deep, and I can speak from my experience as well. What I'm going to talk about, I'm not sure if it's fully induced, actually, I'm 100% sure it's not induced solely by a large language model, but I know that there are some aspects of it which are becoming a reinforcing negative loop. So one of our family members is also not very well,
and he has not been very well. But I believe that, you know, as people start to isolate themselves more, when they're already in a bad state of mind, and they start to, like, self-analyze using large language models, this will become very complicated. Because these people, like Marco's friend, like my family member, they are not looking for answers. Their head is in the sand, they are like ostriches, they don't want to have anything to do with reality, because their belief and their model of the world reinforces their ego, reinforces their importance, makes them feel something that they desire, that they hadn't had before, like a very grandiose personality complex. You know, like knowing all the secret keys to the entire banking system of the world makes you one of the most powerful people in the world, right? Like, arguably, you'd be able to derail economies, because, you know, you could have a team of master hackers just wreak havoc, and the entire world economy shuts down. That is a very grand delusion.
And if you're not careful,
you might fall into a similar delusion,
maybe at a smaller scale,
maybe a delusion that might seem so innocent that you don't even know that it's actually harmful.
It might be as innocent as asking ChatGPT about your relationship and ending up divorcing, or breaking up with your girlfriend or your wife, simply because there was some hint of resentment in your previous conversations with your AI, and now all of a sudden you've made perhaps the worst decision of your life ever. Maybe 10 years in the future, 20 years in the future, you truly regret these decisions, right?
So my motto when I'm teaching people to use AI
is always think for yourself, see AI as if you're driving a car,
right? It's a tool, gets you from point A to B, but you're still making the decisions. You're
still assessing the risk. In the end, you're the one who wants to get into the car to drive
from point A to point B. It's not the car deciding for you, right?
The world is your canvas.
AI chatbots, LLMs, image generators,
whatever you have, they are just your tools. Use them as tools, retain the autonomy, retain the agency, and be smart.
I understand that we haven't really talked a lot about blockchain and Web3, but I just
felt like this was a very important conversation to have on this platform, and I hope you enjoyed it.
Marco, maybe some closing words?
Maybe it's time for an LLM license. Otherwise, someone that has never learned how to drive a car is going to sit inside one, and that's very irresponsible. And maybe the same is true for society with AI.
Catch you on the next one, and we're very grateful for you guys joining our Spaces.