Thank you. Hello, everyone.
GM. GM. I think we will just wait five more minutes for people to join. Thank you. Okay, maybe we just start.
Super excited for everyone who joins.
We are going to talk with the BeArt AI team today.
Maybe you can introduce yourself.
So I'm Jay, and I'm leading the B-Art project.
So my background is in mathematics and then I worked at Humboldt University doing research on large language models.
Hi, I'm Beata, and most people here know me from Molecule, actually. I co-founded BeArt with Jay last year, but previously I was with Molecule, Bio, and the PsyDAO team on the operations side of things.
Cool, yes. And from the Molecule side we also have Rafa here today, maybe you want to introduce yourself as well?
Yes, of course. Hello everyone, my name is Rafael, I'm a scientific assistant at Molecule. Previously I also had a lot of experience with the metabolic pathways of psilocybin and other tryptamines, so I'm very excited about this new hypothesis generation from BeArt and everything that has been produced so far by the team. Yes.
So maybe before we start about the most recent hypothesis,
maybe we can just go back a little bit,
talk about the vision of BeArt AI, the agents,
and how it all works in a little bit more detail.
Right. So there are, I would say, three most important components of the system.
One of them being a knowledge graph. And here they can be domain-specific knowledge graphs,
like one, for example, for psychedelic science or neuroscience.
Right, so this is one part, the knowledge graph. Then another important part of the system is the system for extracting subgraphs. And what a subgraph is... well, maybe I'll start with saying what a knowledge graph is.
What a knowledge graph is, yeah, exactly.
Or maybe the vision of the project in general.
Like, what was the problem you're solving, or trying to solve, with it?
And then go into knowledge graphs and whether they're domain specific.
So let's start from the beginning.
So like every year, there's a very significant amount of scientific publications and scientific experiments conducted.
So I think in 2022, the latest year I checked,
there were more than 3 million publications published in that year alone.
So speaking a little bit from experience,
because I was also doing research on large language models,
and there was just a literal explosion of papers that happened
once they got really popular, so around 2020, but even before this.
So there was a literal explosion, and I think the acceleration in the amount of papers
and ideas that are being published and validated has just skyrocketed.
So it's just super hard sometimes to even keep track of the newest papers and
advancements in your own field.
I mean, especially if everyone is being pushed
to put out as much as possible, right?
Right. I mean, this is also kind of the system, right?
Yeah, so there's definitely a push for producing more and more.
So obviously this also comes with its own problems,
but yeah, just the sheer number of publications being put out. And if you can imagine, there are just so many universities, and at every university there are people optimizing for publishing papers, because that's their job.
So anyway, there's just a lot of research being done.
Is it super valuable or not?
This is a completely different question, but there are just way more publications and way more scientific data than it is possible to comprehend.
And this is not really a new problem. I think this only got worse in the last few years,
but this problem already existed before. So in the 80s there was this attempt at using computers, but it was still mostly manual work. The computers weren't really advanced, but they could search for papers, and then people literally had to verify the ideas in the papers by hand. But what they started to do was connect different dots across different domains to start looking for new scientific ideas, and it became known as literature-based discovery, a literature-based approach to hypothesis generation.
And some of the very early and promising examples
were, for example, connecting magnesium to migraine,
showing that it could be used to alleviate the pain.
So just taking this as an inspiration,
there's just so much knowledge that is just waiting there
in this scientific landscape to be connected.
The knowledge is already there.
The data is already there.
It just sometimes needs to be connected to see a certain relation.
And that's what BeArt is doing.
So we turn scientific knowledge, or scientific literature, into a more structured form.
And this is exactly what a knowledge graph is.
And then we use these knowledge graphs to start looking for these connections,
start exploring the scientific landscape, and seeking the connections that are waiting to be found.
So the knowledge graphs, because you mentioned it in the beginning as well: you started very domain-specific with psychedelics, but that's not the case anymore?
So we started with a psychedelic science knowledge graph, but the tools that we developed, the system that we wrote, is agnostic. Basically, we can turn any set of papers into a knowledge graph. So we are no longer limited to just one specific case, because we can really expand towards different areas. And we are also doing a program with CerebrumDAO, where we will build a knowledge graph from neuroscience papers. So yeah, the system is agnostic, so we can basically build more and bigger, more comprehensive knowledge graphs.
Yeah, that's interesting. And then from the knowledge graph, what happens after? What do the agents do?
Right. So for how to use the knowledge graph, or, put differently, how to continuously supply new scientific ideas, we took inspiration from a paper called SciAgents.
They introduced a system that was also using a knowledge graph, using it for causal inference, to a certain degree, via knowledge graph measures. Maybe this is a little bit too technical, but they use specific techniques to look for subgraphs in a knowledge graph.
Well, so what would be a subgraph in this example?
Yeah, let me take a step back and just talk a little bit about what a knowledge graph is.
A knowledge graph is just a different way of representing certain information.
Just as we can express information in natural language, as it's commonly known, we can also extract from that natural language facts that are related by a certain relation.
So a very simple example would be, in Germany, cows eat green grass, right?
And what we are looking there in such a natural language sentence,
we are looking for subject, object, and relation
that is connecting subject and object, right?
So it would be: cow eats grass.
So "eats" is actually the relation, how the subject is acting on the object.
And basically what we can do is extract these facts from the scientific literature, and then embed them into this web of facts.
And basically that's what a knowledge graph is.
It's this giant web of interconnecting concepts. A subgraph is just like this chain of concepts,
a representation of certain causal relations
that connects these different concepts.
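To make this concrete, here is a minimal sketch of facts stored as subject-relation-object triples and indexed into a walkable web. The schema and the example facts are illustrative, not BeArt's actual implementation:

```python
# Minimal sketch of a knowledge graph as subject-relation-object triples.
# Field names and facts are illustrative, not BeArt's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str
    source: str  # e.g. which paper the fact was extracted from

triples = [
    Triple("cow", "eats", "grass", "example sentence"),
    Triple("psilocybin", "is metabolized to", "psilocin", "paper A"),
    Triple("psilocin", "activates", "5-HT2A receptor", "paper B"),
]

# The "web of facts": index outgoing edges by subject so agents can walk it.
graph: dict[str, list[Triple]] = {}
for t in triples:
    graph.setdefault(t.subject, []).append(t)

for t in graph["psilocybin"]:
    print(t.subject, t.relation, t.obj)  # psilocybin is metabolized to psilocin
```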
So if you can imagine how it can work in science,
there might be one paper in one domain talking about X,
then another paper saying that X maybe might have an influence on Y, and then you have another scientific paper in a different domain, investigating a completely different thing, but saying Y may influence Z. Well, these papers are not connected, right? Even though you have this causal relation, that actually X causes Y, and Y might influence Z.
So we could infer that actually X might influence Z as well,
because X causes Y, and Y might influence Z, right?
We're looking for these subgraphs that might be connecting these different concepts
that are maybe waiting there to be found because nobody else thought about it.
And we can do it at scale,
because we are using AI agents to do it.
So we can do it on a much larger scale
than would ever be possible by hand.
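As a rough illustration of that X to Y to Z idea, a plain breadth-first search over triples drawn from different papers can surface a chain that no single paper states end to end. This is a toy sketch, not the actual agent traversal:

```python
# Toy sketch: find a multi-hop chain X -> ... -> Z across facts that came
# from different papers. Plain BFS, not BeArt's actual agent-driven walk.
from collections import deque

edges = {
    "X": [("may influence", "Y", "paper in domain 1")],
    "Y": [("may influence", "Z", "paper in domain 2")],
}

def find_chain(start, goal):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for relation, target, source in edges.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, path + [(node, relation, target, source)]))
    return None  # no connecting subgraph found

for step in find_chain("X", "Z"):
    print(step)  # each step names the paper the fact came from
```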
Yeah, I think that's really interesting.
When we go down more into the nitty-gritty of AI, and especially your building of decentralized knowledge graphs and, at the end, hypothesis generation: how do you think about everything that is bias, hallucination, overfitting, and how do you avoid these classic pitfalls? And also, how do humans validate and help refine these hypotheses? What do you think on these two aspects?
Okay, let me unpack a few things there.
I'll start with the bias.
So I think this is more of a philosophical question
because, I mean, language models are trained on human text.
Human text contains human bias.
I think, like, avoiding bias, it's potentially feasible once we humans realize what kind of bias is embedded there.
So especially in the case of gender, this might be a little bit easier to spot.
However, looking for scientific bias, this might be a little bit more difficult because it's maybe not so well defined what it is.
But there is definitely a bias in the scientific literature, obviously.
Even just by the scientific literature focusing on certain topics, right?
This is already a bias, because they are focusing on certain demographics, for example.
So I think, I mean, avoiding bias is a complex thing for sure.
One thing that can definitely be used in avoiding biases, and this can actually be applied already now, is prompting the agents to just think of possible biases, so the agents are aware of this.
Then there was the side of the human, like how humans interact and validate.
There was also a hallucination question, right?
So the hallucination problem, this is a really interesting one, because I think there were voices and a lot of opinions that we might be able to eliminate hallucinations. But as we actually observed the next iterations of language models, even more complex and more advanced, we still saw that the hallucination problem is still there.
I mean, I think the problem of hallucination can be solved by allowing agents to use tools. We as humans also don't have a perfect memory of what we remember, but we can actually consult the internet or books to verify our own assumptions. And I think this is a pretty neat and good approach to include with AI agents too, so they can actually verify whatever information they produce.
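That tool-use pattern can be as simple as requiring retrieved evidence before a claim is kept. A minimal sketch, where `search_literature` is a placeholder for whatever retrieval tool the agents actually have:

```python
# Sketch of "verify before you assert": a claim is only kept if a retrieval
# tool returns supporting evidence. search_literature is a placeholder; a
# real system would query a search API or a paper index here.
def search_literature(claim: str) -> list[str]:
    # Placeholder: pretend we queried an index and got matching snippets.
    corpus = {"magnesium may alleviate migraine": ["matching literature snippet"]}
    return corpus.get(claim.lower(), [])

def verified(claim: str) -> bool:
    return len(search_literature(claim)) > 0

claim = "Magnesium may alleviate migraine"
print(claim, "->", "supported" if verified(claim) else "needs human checking")
```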
And about the human in the loop,
so I think my opinion is that we don't really
want to replace science with AI scientists,
And I mean, a lot of science is being driven by our own curiosity,
and people who are doing science, they don't really want to be automated away.
But I see AI agents and AI-powered systems as great tools for scientists, these kinds of co-scientists that can help them with a lot of the work by doing what they're best at: analyzing a lot of text and trying to infer answers based on large amounts of text.
So I think the human in the loop, this is definitely very important when we want to have control over the system, and if we want to improve the system as well.
So this can be a great source of data that we can use.
Appreciate your input there.
Sorry, Alana, I cut you off at some point, you wanted to say something?
No worries. I think it was quite similar. Just for me to understand again the human in the loop: what do you mean exactly by this being a good data point we could use?
I mean, because, you know, if you want to improve any system, this is a great source of information, if the human domain expert can actually weigh in with some evidence or feedback.
So the human is there to give feedback to the AI on the hypotheses that are being generated?
I mean, I was talking very much in general, but specifically in regards to BeArt, I think this is co-living, kind of. AI agents and domain experts, they could really be co-living, basically using the best out of the two worlds. We know LLMs are great text processors, and humans have this great curiosity and creativity, so it's just about connecting these two things. So what we are doing right now: we opened a channel on the PsyDAO Discord called PsyBeach, which is kind of like a mouthpiece of our system. Every psychedelic hypothesis that is being created in the system is being sent to PsyBeach.
And then the community and researchers,
they can interact with the system.
So first of all, comment on the hypotheses, give their take,
ask questions, learn. But also, the way we want to take it is by allowing them to co-create these hypotheses. I mean, hypothesis is kind of a broad word, in my mind at least, because it can be made very specific, where it's just waiting to be implemented or tested, or it might be very generic, general, which is just waiting to be specified further.
And yeah, by involving humans in this process, we can actually align the system, so it can create hypotheses that align with what people want. For example, in PsyDAO, it allows community members to align on what kind of hypotheses will be produced and what specific areas to focus on.
So would that be something the community is voting on?
I mean, at the end, yeah.
But at this point, it's more about vibes.
So they're saying, okay, I really like this approach where it's taking on these areas of psychedelic science research; this is something I'm really interested in and I would like to explore more.
Yeah, so I think at the end that's where we'll get to.
Interesting. And then, I mean, now that we're talking about the public, there's, I'm afraid I will misspell it, but in my head I call it HypeGen.
Yeah, that's right.
Where the public can be involved as well. Maybe you want to elaborate on that a little bit more.
So HypeGen is more like a social media for outputs of AI systems. This is something we created with Coordination Network. And it's literally like a Twitter for outputs of the system. So the idea is that to take it from just an idea to actual realization, we need to engage people and engage scientists in this process. So we thought the best way for scientists, or for the general public, to interact with hypotheses is by creating an experience that most people already know. And so this was the idea for HypeGen.
So basically, you can subscribe to different world models or different generators, and basically be notified when a new hypothesis that is of interest to you is published. With PsyBeach we are taking it a step further, because we allow people to talk with the AI agents that also create these hypotheses. So with HypeGen, it's more that we allow anyone to follow and basically interact with these hypotheses, saying whether they like them or not, or what might be wrong or what is good. This is exactly the place where we are gathering feedback from scientists, in order to be able to improve the system later on.
And have you, sorry, have you received a lot of feedback
from the scientists already about how the agents will learn from this process
or how they're learning from this?
Yeah, I mean, so I think maybe one thing to mention here
is that, for example, on PsyBeach, more and more scientists are joining. Also people who don't really talk anywhere else on the Discord have started interacting with these more scientific outputs. And just looking at PsyBeach now, we've had some really interesting hypotheses recently.
If I may also jump in, we actually have Tyler Quigley on the call, who's the lead of the science working group at PsyDAO. I just want to call him up so he can tell a few words about how he's receiving this, because he and the scientific team are vividly discussing it. So every day we have one hypothesis being posted there, and scientists chime in. Basically, first they upvote or downvote, and then if something is really interesting, they dive into the details and really discuss it. So if you want to hear more from the team, I see like three people from PsyDAO on the call, so we can also hear from them. Don't be shy, everyone.
Yeah, I think it would be great right now
to maybe hear some of the hypotheses
that have been generated so far.
And also what is being planned after
and what the process is in general and what
it looks like when hypotheses are being generated and what happens with them after.
Okay, so should I just continue?
I can see that Tyler is now a speaker, so maybe we can continue this thread for like two more minutes and let him tell us how the experience has been for him.
I've been having a great time so far reading the outputs from PsyBeach. It reminds me of the very beginning of my PhD when, if anyone else here has gone
through the PhD program, usually your first semester or first year is spent just reading
lots of papers and trying to draw connections between various things to try to figure out
what you're going to study for the rest of your grad school experience. Proposing that to your
advisor and usually getting the shit beat out of
your idea for an entire meeting until you go back to the drawing board and back and forth. And so
what I am kind of feeling from PsyBeach is that it's sort of like a grad student coming to its advisors, the advisors being the humans inside the DAO, the experts, and also people that are just putting in their input on what they're generally interested in: coming to us with an idea, us conversing about it. And then I imagine our conversations will be fed back into the model, so it can continually refine what it's putting out.
It had a really interesting one recently that aligns with some of my own hypotheses about psychedelic action in the brain that's super understudied. And that is the effect of psychedelics on glial cells, which make up about half of the cells in our brain, although they probably comprise only 10 to 15% of the total neuroscience research out there. The things that psychedelics do in the brain that we know of are very much aligned with what glial cells do in the brain: cleaning up synapses, optimizing networks, pruning connections between neurons.
And so PsyBeach spit out a hypothesis about this.
We had a long conversation about how it could be more specific.
And I'm looking forward to seeing how the hypothesis is refined
according to our conversations, and also how many much more specific hypotheses could emerge from this broad hypothesis that it gave us. But the fact that it's spitting out these connections between psychedelics and microglia and neural pruning shows me that it's 100% working. It's going in the right direction in terms of areas of knowledge, domains of knowledge, that have not been explored very much in psychedelics.
But the conversations and the humans that are interacting with it,
I think are going to be super, super important for getting it to, you know,
give us that spot-on hypothesis at some point where we're like: yeah, we want to fund this, we want to get scientists that have labs to get on this immediately, because it's a super important question that we need to answer.
Thank you, Tyler. So you're basically saying that through this whole process, a hypothesis emerged that would otherwise probably have taken, I don't know how long, because research is currently not looking at it properly.
Yeah, a lot of attention goes towards the action of psychedelics on neurons, and not a lot of
attention goes to the action of psychedelics on glial cells. And this spit out a hypothesis; one of the first 10 it spit out was about glial cells, which is definitely an area that's underexplored.
So maybe now is actually a great time to mention how these connections are being found, you know?
So what do we do with the knowledge graph once we build it?
So if you think about the knowledge graph as this web of interconnected facts, what we do is let our agents take walks on these graphs. They can start walking from one concept to another. And since these concepts are connected by a relation, they have this contextual understanding of what each fact is about. And while they are traversing the knowledge graph, they are building a longer and longer subgraph. So we let these agents explore this knowledge graph.
And they have this one task to do.
And the task is to look for these interesting, unobvious, surprising connections that can be found in the knowledge graph. And then we have another team of agents that are basically these judges. So whenever an agent has just taken a walk through the knowledge graph and found, hey, this is a really interesting connection, I would like to take it further, it proposes the subgraph to this judge committee. And they judge how likely, how novel, how interesting, how surprising this connection is. When it's not meeting certain conditions and a certain threshold, it's just rejected. And the ones that fulfill these conditions go through.
And once a subgraph gets through this judge committee, it goes into the multi-agentic system that is tasked to extend the context: get more data in, get the newest research, perform basically this entire research online, look for more information. And then, based on this extended context, the agents start working collaboratively to come up with hypotheses. And once a hypothesis is crafted, it's published on PsyBeach.
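The overall shape of that pipeline, walk, then judge, then hand off to generation, might be sketched like this. The scoring function and the threshold stand in for the LLM judge committee, and the structure is a simplification of what was just described:

```python
# Compressed sketch of the walk -> judge -> generate pipeline described above.
# score_subgraph stands in for the judge committee; real judges would be
# model calls rating novelty, plausibility, and surprise.
import random

def random_walk(graph, start, steps=3):
    path, node = [], start
    for _ in range(steps):
        options = graph.get(node, [])
        if not options:
            break
        relation, nxt = random.choice(options)
        path.append((node, relation, nxt))
        node = nxt
    return path

def score_subgraph(path):
    # Placeholder scores in [0, 1]; a real system asks judge agents instead.
    return {"novelty": random.random(), "surprise": random.random()}

def accept(scores, threshold=0.5):
    return all(v >= threshold for v in scores.values())

graph = {"A": [("influences", "B")], "B": [("modulates", "C")]}
subgraph = random_walk(graph, "A")
scores = score_subgraph(subgraph)
if accept(scores):
    print("Send to hypothesis-generation agents:", subgraph)
else:
    print("Rejected by the judge committee:", scores)
```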
And some of these, well, it's not really the hypothesis itself, but the subgraphs that inspired the hypothesis are being posted on Twitter. So this is what happens on Twitter: we see these subgraphs that inspired a hypothesis being posted with a story, a more metaphorical story of how the agents are traversing the landscape.
And, I mean, how long would you say this process takes overall for the agents, compared to a human?
No, for an agent it's extremely quick.
Obviously, there are all of these techniques that we're also using to actually extend this time, to give the agents more time to think, to debate. And we are also introducing more agents that can engage in these discussions and the scientific debate.
But just processing information is extremely quick.
So obviously it takes some time, because there are multiple passes between different agents, but in general this is a rather quick process. We actually have to limit ourselves, because it's just impossible for humans to be exposed to so many ideas at the same time.
Yeah, exactly. And to iterate on a hypothesis and gather feedback and iterate again and again and again, right?
Right, right. So in order to minimize this human time, we are trying to do most of the work in-house. And this is a work in progress, right? So the current system that is in place is probably the worst-quality system that will ever be in place, because we are constantly improving it.
But soon we'll be releasing basically a version 2 that has significantly extended test-time compute. What is meant by this is that you're allowing agents to spend much more time thinking about the problem than just a single pass, right? So they can really debate on it. And when you combine that with this multi-agentic system, then you have multiple agents doing more extended work. So you can actually extend this process; you can significantly extend this process of thinking.
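Roughly, extended test-time compute means spending more model passes per idea instead of one. A toy sketch of such a loop, with `call_agent` standing in for a real LLM call; the agent roles and round count are illustrative:

```python
# Toy sketch of extended test-time compute: several agents take turns
# refining a draft over multiple rounds instead of a single pass.
# call_agent is a stand-in for a real LLM call.
def call_agent(name: str, draft: str, round_no: int) -> str:
    # Placeholder "refinement": a real agent would critique and rewrite.
    return draft + f" [{name}, round {round_no}]"

def debate(draft: str, agents: list[str], rounds: int) -> str:
    for r in range(1, rounds + 1):   # more rounds = more test-time compute
        for agent in agents:
            draft = call_agent(agent, draft, r)
    return draft

print(debate("X may influence Z via Y.", ["proposer", "critic"], rounds=2))
```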
So right now I would say it's quite quick, and we have to limit ourselves. But as we improve the system and make it more complex, this time also extends. But that doesn't mean it takes more time to create one hypothesis; there are multiple hypotheses being considered within the scope of one hypothesis.
So as I mentioned before, what we do right now is we look for these subgraphs
that are basically these inspirations for the systems to generate hypotheses.
So this is a one-to-one relation: we have one subgraph and one hypothesis at the end. What we are working on right now is actually to have one subgraph that results in multiple hypotheses.
And these hypotheses are entering kind of a tournament
that was put in place to basically find the best hypothesis out of the pool.
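A tournament like this can be as simple as repeated pairwise judging until one candidate remains. A sketch, where `judge_pair` is a placeholder for the agent judges:

```python
# Sketch of a single-elimination tournament over candidate hypotheses that
# all came from one subgraph. judge_pair is a placeholder for an agent that
# picks the stronger of two hypotheses.
import random

def judge_pair(a: str, b: str) -> str:
    return random.choice([a, b])  # real version: ask judge agents to compare

def tournament(candidates: list[str]) -> str:
    pool = list(candidates)
    while len(pool) > 1:
        winners = [judge_pair(pool[i], pool[i + 1])
                   for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2 == 1:       # odd one out gets a bye this round
            winners.append(pool[-1])
        pool = winners
    return pool[0]

hypotheses = [f"hypothesis variant {i}" for i in range(1, 6)]
print("Best of pool:", tournament(hypotheses))
```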
And that is being determined by the agents?
Or is that also through research and feedback?
So all the work before the hypothesis is done is done by agents.
And then once the hypothesis is done, it goes to either HypeGen or PsyBeach.
But let's talk about PsyBeach,
because then you have real interaction
between AI agents and scientists.
Then everything that happens after the posting
is basically providing more information, more alignment,
and more of the instructions from the community in order to improve the hypothesis.
And actually, as Tyler mentioned, the idea is to provide a more general hypothesis first, and then, with the community, go down the rabbit hole and start coming up with more testable, falsifiable hypotheses that are more aligned with what the community wants to fund.
What about the next steps? What would you say are the main priorities that you have right now, in terms of building, maybe tech-wise or operationally?
So I would just talk about the tech, because that's the easiest for me.
Tech-wise, we have three parts
that we can continuously improve on.
So to start off, the knowledge graph.
I mean, this is continuous work, because what we do is we also use agents to construct these knowledge graphs. So what happens is that an agent is released on the web and it looks for papers. It gets these papers, and then it turns these papers into a knowledge graph.
And then the knowledge graphs themselves can always be improved. A good thing is that we continuously have agents that are traversing this knowledge graph. So while they are traversing and looking for subgraphs, they can also validate the nodes, validate the facts, and basically self-improve the knowledge graph from within. And the graph also improves just by adding new papers, right? Whenever there's a new paper, it should be added to the knowledge graph in order to have comprehensive knowledge included.
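That construction loop is essentially: fetch a paper, extract triples, merge them into the graph. A hedged sketch, where `extract_triples` stands in for the LLM-based extraction agent:

```python
# Sketch of the paper -> knowledge-graph construction loop. extract_triples
# stands in for an LLM extraction agent; here it is hard-coded.
def extract_triples(paper_text: str) -> list[tuple[str, str, str]]:
    # Placeholder: a real agent would prompt a model to pull out facts.
    return [("psilocybin", "is metabolized to", "psilocin")]

def add_paper(graph: dict, paper_id: str, paper_text: str) -> None:
    for subject, relation, obj in extract_triples(paper_text):
        graph.setdefault(subject, []).append((relation, obj, paper_id))

graph: dict = {}
add_paper(graph, "paper-001", "...full text of a newly found paper...")
print(graph["psilocybin"])  # [('is metabolized to', 'psilocin', 'paper-001')]
```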
Then there's always some work that can be done with subgraph extraction. As of now, we are using agents that rely on their parametric knowledge, so basically the weights of their models, in order to decide where to go.
However, what we are doing right now is gathering more data, and we hope in the future to create a model that could be kind of a reward model that will lead the agents to traverse the graph as real scientists would, driven by their curiosity or by their surprise.
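Since that reward model is described as future work, this is purely speculative, but guided traversal could look like scoring candidate hops and following the highest-scoring one:

```python
# Speculative sketch of reward-guided traversal: at each step, a learned
# model would score candidate hops for "surprise" and the agent follows the
# best one. surprise_score here is a dummy heuristic, not a trained model.
def surprise_score(current: str, relation: str, target: str) -> float:
    return len(target) / (1 + len(current))  # placeholder heuristic

def guided_walk(graph: dict, start: str, steps: int = 3) -> list:
    path, node = [], start
    for _ in range(steps):
        options = graph.get(node, [])
        if not options:
            break
        relation, target = max(options,
                               key=lambda o: surprise_score(node, o[0], o[1]))
        path.append((node, relation, target))
        node = target
    return path

graph = {"serotonin": [("binds", "5-HT2A"), ("modulates", "mood")],
         "5-HT2A": [("mediates", "neuroplasticity")]}
print(guided_walk(graph, "serotonin"))
```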
And then obviously there's work on the hypothesis generator, but this is ongoing: adding new tools, adding new sources of data, testing different architectural designs of these multi-agentic systems, and also working on tuning the parameters, so to say, of the system. So yeah, there's continuous work on this.
And then, from the operational side, what we are really interested in is coming up with hypotheses that can be funded and then turned into IPTs (IP tokens). And this is basically what we are doing with PsyDAO right now. We are looking for a kind of funding 2.0, where instead of a DAO being a passive entity that waits for research projects to fund, it comes up with its own.
Maybe some of this research is conducted kind of in-house, and only the experiments are outsourced. So there's a lot of room we can work with here, basically taking a hypothesis that everyone would be happy with all the way to testing it.
Yeah, and just to finalize this idea around the next steps for you: it's also something we've seen from the Bio side of things, where they're building this flywheel mechanism. A bio agent, collaborating with ResearchHub, will put a proposal on ResearchHub, and the researchers and scientists there will peer review the documents. Then there might be some feedback that goes into the hypothesis generation, and from there it would go to minting an IP token and crowdfunding the next, let's say, scientific research that comes out of these agents. And it's all one circular mechanism. So I'm just very excited for what you guys will develop; definitely will keep an eye on it. Just wanted to close that thought there. Sorry, you can go, Alana.
And I mean, I think for me, just because you said finding a hypothesis the public is happy with and finding one where the research can actually be funded: I just wanted to ask if there's a difference for you between what the public would be happy with and research that can actually be funded.
Sorry, could you repeat? I think I missed that.
Sorry. Because you said the task is to find a hypothesis that the public is happy with and then eventually to fund the research, right? I just wanted to ask if there's a difference for you in finding a hypothesis the public is generally happy with versus finding a hypothesis that's easily funded.
Well, for me, I think a hypothesis is like an idea. And an idea only becomes really tangible when it's going to turn into something, right? And here I would say a hypothesis is worth as much as the willingness of people to fund it. So in this sense, I would say we should either be educating people or allowing them to educate themselves as well.
And this is what we do also with PsyBeach, right? So if somebody just gets interested in a hypothesis, they can just learn about it. But at the end, a hypothesis is worth as much as the willingness of people to fund it, because otherwise it would not really see the daylight. So I'm not sure if this answers your question.
I mean, more or less, I just wanted to know if there's a really big divide
in what the public wants to see being funded.
Maybe I can add a bit more here.
So what we're doing with PsyDAO is really a new thing.
I don't think any other DAO did it before.
And so we're just testing this approach.
And we are really happy to see the feedback we're getting from the scientists.
And we don't know if it's going to be funded,
but we know that we want to try it and make it happen.
And finally fund a hypothesis that is generated by AI.
And then, because this is built for PsyDAO, we are creating this system together with them. Every week we meet, we discuss, and we basically create the rules of how we want to do it. And then in HypeGen, for example, in the very long run, we want to also implement crowdfunding. So if people find one hypothesis that is really interesting and they really bet on it and want to put their funds into it, they can crowdfund it and then find a lab that will actually execute it and do it.
So there are different ways of, I think, validating them.
And we're just trying different options.
And there is one more product that we actually didn't mention,
which is not only for scientists: it's more educational, for everyone who is actually interested in science and wants to learn and explore.
And it's called Graph Surfer.
So we are now working with game devs to actually build a game where people can jump through the nodes of the knowledge graph. For now we are focusing only on the psychedelics graph, and people will be able to explore the nodes and explore the connections: see what's linking two scientific facts, how the agent went from the first one to the other, ask the agent what this actually means, what did I find here, what is the connection, and, in very simple words, get to know the science.
Yeah, so Graph Surfer just invites everyone to dig deeper and also to follow the steps of the agents.
Yes. But the system will be connected, so then BeArt's system will also be learning from what paths are being taken by people, where people go the most, and what is the most interesting. And maybe from time to time we'll also take a snapshot, looking into how the knowledge graph is explored, to look for interesting connections. One of the last things that will be automated is human curiosity. And this is like a rich mine of data, just by observing how people are interacting with the system and the knowledge.
Yeah, and the agent is learning from that as well.
So, I think we covered a lot already in terms of the long-term roadmap, but could you just summarize it again? Because I think we mixed the technical and the non-technical a little bit.
The long-term plan is, well, we will be continuously improving the system, and a new system will be put in place very soon; we are super excited about seeing the results. We're also working closer with communities like PsyDAO to align the hypotheses more with what the community wants to fund. And in parallel there is work on Graph Surfer, as well as on HypeGen, where we want to explore more of the scientific domains and cover a larger spectrum of domains than we are currently doing.
Yeah, and here maybe I can just add that
if you guys here on the call are interested in having your own agent, because our project is an open science project, anyone can contribute. So if you have 100 papers in your domain and you want to see the graph and see what hypotheses you could get out of them, you can just connect with us, and we are happy to add one agent to the swarm, your agent, and you will be able to see the hypotheses on HypeGen, as we did with psychedelics for PsyDAO and with CerebrumDAO for neuroscience.
Yeah, thanks, Beata. I think that's really valuable, just to emphasize again that this is open and accessible, and that the public can actually participate in all of this, right? Yes.
Are you also expecting questions, or is there no question space?
If you have questions, please.
I mean, not for me, from other people, because I see people joining as speakers and I'm just wondering if you guys have time or a plan for it.
I mean, if there are questions, I think we can just go until 6:30.
Cool. I think... No. Maybe not.
It was really cool to talk to you guys.
Thank you for taking the time and joining.
Have a wonderful afternoon.