Can we predict what AI predicts?

Recorded: Feb. 6, 2025 Duration: 1:09:42
Space Recording

Short Summary

The discussion covers the launch of Centernet as a new blockchain project, innovations in integrating AI with blockchain data, and trends in AI agents' roles in the blockchain ecosystem. Technical challenges with Solana's data structures and the potential for reputation systems to impact DeFi are also explored.

Full Transcription

Anybody else here?
Hello, hello.
I seem to be online.
Nice to meet you.
I'm great.
I'm John Conley.
So I guess I'll start.
All right, well, welcome to our geek Twitter spaces.
Today we have Jonas, I'm going to try to say your name, Simeon Avikis, is that correct?
Close enough.
Close enough.
I'm so sorry.
Who is a co-founder of Centernet.
And we're going to be talking about AI and blockchain and things that are related.
Yeah, super, super happy to be here.
And thanks for inviting.
And it was sort of random that I spoke on Twitter about intents, I believe.
And then we got together to chat.
So thank you for the invitation.
And looking forward to learn what you guys do as well and any AI blockchain related topics.
Well, great.
I'm really pleased that you showed up.
It's very nice of you to share with us.
Why don't you start by telling me a little bit about your project and what you're trying to do?
So basically what Centernet is, is a layer one blockchain with a data layer on top.
So think of it as a decentralized pub/sub where anyone can publish data and anyone can subscribe to data.
And what we're trying to become now is the biggest inference pool.
You know, so data is just data.
So AI inference is just another type of data.
But what we have, and what accidentally, not accidentally, but fortunately is a good use case for AI, is that the data layer has payment rails.
Before, when applications were consuming data, they didn't have as much need to interact and post and sell data back.
Whereas when the AI agents started, it became apparent that AI agents cannot interact with thousands of other AI agents without some sort of data lake where the data is not read-only,
but where they can post data back and sell it for other agents to reconsume and recompute and resell, and basically kick off that data flywheel.
So what we're doing now is, on top of just blockchain data, we're adding AI inferences, things like sentiment data, you know, crypto market sentiment,
any type of other inferences which can be published and sold through the data layer.
That's so agents can consume them and make better predictions, you know, be more clever, know what's going on on chain,
and then sell that data back to the data layer for other agents to consume.
So that would be our current mission.
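The data layer described here, a pub/sub where publishing and consuming run over built-in payment rails, could be sketched roughly as below. This is a hypothetical in-memory illustration, not Centernet's actual API; the topic names, flat per-read pricing, and balance model are all assumptions.

```python
from collections import defaultdict

class PaidPubSub:
    """Minimal sketch of a pub/sub data layer with payment rails."""

    def __init__(self):
        self.topics = defaultdict(list)   # topic -> published messages
        self.prices = {}                  # topic -> price per read
        self.owners = {}                  # topic -> publisher account
        self.balances = defaultdict(int)  # account -> balance

    def publish(self, publisher, topic, message, price):
        # A publisher posts data (e.g. an AI inference) and sets a read price.
        self.topics[topic].append(message)
        self.prices[topic] = price
        self.owners[topic] = publisher

    def subscribe(self, consumer, topic):
        # A consumer pays the publisher per read; the rails handle settlement.
        price = self.prices[topic]
        if self.balances[consumer] < price:
            raise ValueError("insufficient balance")
        self.balances[consumer] -= price
        self.balances[self.owners[topic]] += price
        return list(self.topics[topic])

bus = PaidPubSub()
bus.balances["agent_b"] = 10
bus.publish("agent_a", "eth/sentiment", {"score": 0.72}, price=3)
data = bus.subscribe("agent_b", "eth/sentiment")
```

After the read, the consuming agent holds the inference and the publishing agent holds the fee, which is the flywheel: an agent can recompute on purchased data and publish the result back at its own price.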
So let's see. It sounds like we've been thinking more about AIs selling services, or maybe selling things directly.
And there, the concern is, if I talk to an AI and I say, I don't know, do my taxes, or find me the best price for something,
the AI doesn't really have any identity.
And so it's very hard to track what it has done in the past.
AIs can be destroyed, and they don't really have an individual existence.
But it sounds like you're thinking about the layer before: you're thinking about the data that an AI might use, as opposed to the results that an AI might produce.
And that's actually an interesting problem that you guys are tackling.
So we were thinking from the time when our data was used only by applications.
You just code how to read the data.
So even if it's raw blockchain data, let's say Ethereum transactions, right?
It's data that's very hard to understand, but you can code against the ABI, you can code the metadata of it, and your application can understand it.
So you can feed from really any data source, RPC nodes or the internet or Alchemy, and do the things the application does.
Now, when it came to AI, it turns out AI doesn't understand blockchain data.
And if we go to edge cases like Solana, which has extremely difficult account structures, then AIs just don't understand it from the get-go.
The raw data itself, and the cost of acquiring that raw data, is also almost prohibitive for AIs.
So that's why AI inferences are gaining popularity: you can have separate AI models which precompute that data and precompute guesses, you know, estimates and predictions.
And then put them up as inferences, being like an index on the data to be consumed, so that other agents can be very clever about how to use them.
They can be clever about how to make sense out of these predictions, but they just want to pick them up.
You know, they're not the actual LLM models calculating the predictions.
They're models which use them to then trade better or execute better, or, as you said, do taxes better, right?
So if some AI is doing my taxes and it just gets raw blockchain data, it probably won't understand it.
But if it gets a specific data stream of specific transactions, and indexes of actions being done, then it doesn't need to do all of that work.
It doesn't need to be clever about the data.
It just takes it for granted and then does your taxes.
So data validity and inference validity become important, because if an AI model picks up a wrong estimation or bad data or fake data, then it will basically take bad executive actions.
I was unaware that Solana was so complex.
Why is it that the API doesn't simply describe the raw data and make it comprehensible?
Is it that the metadata that describes it, the templates that describe it, are not somehow amenable to AI interpretation?
So if you take something like Ethereum, which is a fairly simple blockchain: it has transaction data,
it has logs, it has the mempool, and maybe a couple of other categories of data.
And so it was probably easy enough to make AI understand even the raw version of it.
But even then, the ABI decoding, decoding what comes straight out of the blockchain, is done separately.
So if the AI knows how to decode it, it can do that, and it's pretty straightforward data. But with something like Solana or other fast-paced blockchains, and Solana is a good example.
So it produces gigabytes of data a second, and its account structures are insanely sophisticated.
So if you want to run a complicated query, like, find me all of the accounts which traded with that token, and it goes almost from the genesis block,
it's not a simple query.
It would probably take an hour or a day to compute a sophisticated query, because it needs to go from one reference to another, to another, to another.
So it's a deep rabbit hole.
And that's why people are making products where they build a specific query which they think will be useful,
and then sell it as a data stream, because computing it on the spot would just take too much, since that data is spread across wherever the data storage sits.
And it's hard to query. Simple queries are easy, but if you start to need three or four variables in your query, like accounts which did something with specific tokens, trades, and over a long time, it becomes a very compute-intensive as well as query-intensive process on Solana.
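The reference-chasing cost described here can be illustrated with a toy graph walk: every hop from one account reference to the next is another lookup, which is why multi-variable queries over account references get so expensive. The account names and graph shape below are invented purely for illustration.

```python
from collections import deque

# Toy account graph: each account references others (token accounts,
# trades, mints). Names and structure are hypothetical.
refs = {
    "wallet":       ["token_acct_1", "token_acct_2"],
    "token_acct_1": ["mint_x", "trade_1"],
    "token_acct_2": ["mint_y"],
    "trade_1":      ["counterparty"],
    "mint_x": [], "mint_y": [], "counterparty": [],
}

def accounts_reachable(start):
    """Breadth-first walk over account references.

    Each edge traversed stands in for one more fetch against storage,
    so query cost grows with the number of references followed."""
    seen, queue = {start}, deque([start])
    fetches = 0
    while queue:
        node = queue.popleft()
        for ref in refs.get(node, []):
            fetches += 1  # every reference is another lookup
            if ref not in seen:
                seen.add(ref)
                queue.append(ref)
    return seen, fetches

seen, fetches = accounts_reachable("wallet")
```

Even this seven-account toy needs six lookups from a single starting wallet; at gigabytes per second of new data, the same pattern across billions of accounts is what makes on-the-spot multi-hop queries prohibitive.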
I knew that they were difficult.
I had no idea they were that difficult already.
Yeah, I believe their total data is, I don't know, petabytes.
They produce tons of data.
That's why you don't really ingest all Solana data raw, because it's just a lot of it.
So you normally filter out only the data you need, like specific transactions or specific fields.
Well, my philosophy is that if something is not transparent, it's a trust layer.
And even if the data is available, if it's not computable, it may as well not be available.
So that means we're simply trusting Solana to do what it claims it's going to do.
You know, you're burying the bodies if you can't interpret the data.
Um, yeah, it's a good analogy.
I think they're burying the bodies, but not by any bad intention; it's just the sheer amount of data from those transactions.
It needs to go somewhere, and it normally sits dormant, because you don't need all of that data all the time, but a sophisticated query needs to go into a lot of depth to grab that information and get it composed back together.
So that's why I believe in secondary data layers, like ours and similar, where you make a use-case-specific data stream.
And then you compute that data stream, and then you present it ready to use for AI or for other applications and sell it.
So the AIs don't need to run that whole other process; they don't need to constantly query and do that computation, but get the data ready for consumption.
Um, but then the problem is, do you trust that data?
Yeah, I was going to say, whether that data is correctly reduced.
So that is a problem.
And that's what we try to solve with our data by having trusted publishers, and also, later, maybe cross-verification. But cross-verification of high-throughput data is also very hard, because then you lose the performance.
So in our experience, nobody really required cross-validated data.
If it's high-throughput data, you kind of want to just trust the publisher.
And then if it, you know, starts failing you, the application can react accordingly. And small-throughput data can be cross-checked.
You know, if you have three different sources submitting Ethereum transactions, it can be easily cross-checked whether all three of them look the same.
And then you put a tick box saying, this is validated data.
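The tick-box idea, comparing submissions from several independent publishers and marking the record validated when a majority agree, could look roughly like this sketch. A real system would need canonical encodings, tie handling, and publisher identity checks; all names here are assumptions.

```python
import hashlib
import json
from collections import Counter

def cross_check(submissions):
    """Mark a record 'validated' when a majority of independent
    publishers submit byte-identical (canonically encoded) data."""
    digests = [
        hashlib.sha256(json.dumps(s, sort_keys=True).encode()).hexdigest()
        for s in submissions
    ]
    _, votes = Counter(digests).most_common(1)[0]
    validated = votes > len(submissions) // 2
    return validated, votes

tx = {"from": "0xabc", "to": "0xdef", "value": 42}
bad = {"from": "0xabc", "to": "0xdef", "value": 41}
ok, votes = cross_check([tx, tx, bad])  # two of three publishers agree
```

This is cheap for small-throughput streams like the Ethereum-transactions example, but running it on every record of a high-throughput stream is exactly where the performance cost mentioned above comes from.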
It seems like maybe a better trust layer. But there, if the data is too big, effectively it's not available.
And I can't download Solana's data and check it, even if I had the computational ability to do so.
So in the end, I'm trusting one or more people, or agents, AIs, or organizations, to tell me that what they say is the truth.
That's a little bit concerning to me, I suppose.
You know, it also is not clear, there,
what the reward is for giving independent validation.
It's actually, my daughter is a neuroscientist.
And one of the problems in science is that what we'd like to see is to have experimental results confirmed.
And, you know, given the data, you should be able to reproduce it, but no one is willing to pay for reproduction.
Because reproduction doesn't advance us.
It simply tells us that what we thought we knew, we actually know. Very important, but still not a thing that people are willing to pay for.
So how do we defeat that in your structure?
Well, I think it ended up being cost-based.
So it's possible to get the data; it's just computationally costly.
And if it needs to be retrieved and trusted often, I guess that's where the data availability layers come in.
So the DAs try to solve that problem. Even for Ethereum, with light nodes, they try to solve it by having the latest data, validating it, and then making it available for applications,
so we don't need to go to the actual blockchain, the original source.
So DAs try to solve it, but honestly, I haven't seen that use case pick up much.
In terms of Solana, it became more like: if you have a simple query, the RPC nodes will work.
If you have a very sophisticated query, you trust your data provider.
And that's about what exists currently.
And if you really want to get to the original data source, that gets complicated and computationally costly, but it's probably still possible.
So if that data is somewhere, it's accessible, but in your terms it's not easily accessible, or it becomes prohibitively costly.
But for simple applications, simple use cases, like show me balances, it works. You can have applications trading, showing balances of different tokens, et cetera.
Really, the problem starts when you have a sophisticated query. And with that data structure, I guess the performance just wouldn't be there if it were easy to query such a large amount of data.
Yeah, I guess, though, that even with a simple query, you'd have to trust that it's on the basis of this larger data set.
You can make a simple query to find out something about it in some cases, but you're still trusting in the correctness of the totality of the data set.
I think what you might do, you know, good architecture, would be to structure it
so there's a kind of separability of verification.
So if I could have the simple queries along one track of data that could be verified independent of the larger track of data, then that would make it practical for me not to have to trust you.
But it takes an architectural choice of how the data is structured.
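One well-known way to get this kind of separability of verification is a Merkle tree: a single record can be checked against a small root commitment using only a logarithmic number of sibling hashes, without downloading the whole data set. A minimal sketch (not how Solana or Centernet actually structure their data):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash leaves pairwise up to a single root commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Check one record against the root without the rest of the data set."""
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)       # the only thing the verifier must trust
proof = merkle_proof(txs, 2)  # proof for tx3 alone
```

With just two sibling hashes, `verify(b"tx3", proof, root)` confirms membership, which is exactly the "one track verified independent of the larger track" property being asked for.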
Yeah, it's probably an architectural choice, and they probably chose it for performance reasons.
And it's really the account structures which are the biggest problem, I've heard.
That's as far as what we've faced and what we've heard. But that's kind of Solana.
Other blockchains are a little more straightforward.
And yeah, now with AI.
It's certainly not your fault.
Solana did what Solana did.
You, you just have to make the best of it.
You live in their world.
So, so tell me about, I'm sorry, does anybody have a question?
I shouldn't monopolize.
Let's stop for just a minute.
I just wanted to get back to, you mentioned the identities and the validation of what the AIs are doing.
So that's an interesting case as well, because, you know, so far we provide data, AIs can use it, all good.
But what you mentioned is: how do you validate the track record, the action record, of an AI, what the AI did? Because AIs don't live on chain.
At least currently, they live off chain.
There isn't really any good on-chain compute yet.
Well, I would say they don't even live.
That's the first problem.
AIs come and go, and then they just leave a trace, right?
And so I guess what you're doing, in my understanding, is you can produce that trace and put the validations on chain, so that there would remain a record of what an AI has done, with the identities of those AIs.
And I was interested: what are the main use cases and purposes?
Is it basically primarily auditing of what AI agents have done, or would there be other use cases?
Yeah, sort of.
Let me give sort of the simplest use case.
So I'm an economist.
So it comes, it comes from mechanism design.
And there's a long history of what are called agency problems, where you're my worker or, you know, you're my government.
And I want you to behave in a way that satisfies my interests.
I want you to be productive or I want you not to spy on me.
But you're my agent and your interests are not the same as my interests.
And so you may do things that I don't want you to do.
So you design a mechanism so that with whatever monitoring, whatever data that I can see about you, whatever I can see about your actions, I can structure a reward system so that your interests become aligned with my interests through the addition of rewards.
So that's sort of basic economic mechanism design.
The problem with AIs, though, is that that's all predicated on agents having preferences.
Like you have to care that you're getting money or that you might be put in jail.
You know, if you don't care, well, what is the mechanism?
What is the lever I have on you?
How can I incentivize you?
And with AIs, it's not clear what AIs want.
I don't believe they have continuity of consciousness.
I don't know that an AI is aware of what it was. You can reproduce them a thousand times; which is the original? I don't think that's even a question.
So they don't have preferences in the way that an economist would understand them.
And so that's the starting point.
And so what we do is this, we're generally, we don't really know who anybody is on the internet.
You know, there's that famous cartoon that says on the internet, no one knows you're a dog.
I have no idea who I'm dealing with.
You could be a very sophisticated Turing program, but.
Don't worry, I'm an AI agent.
I believe you.
But, you know, you have no idea who you're dealing with on Twitter, who's tweeting, who's doing anything.
So since you can't figure it out, let's try to design a mechanism where it doesn't really matter who's behind it.
And so, to make a long story short, what it comes down to is that I assign a key, a public key, to an entity that claims to be an entity.
So it has a public key.
And then the actions are then tied to that public key.
Public key could be handed off.
It could be that sometimes it's a human and sometimes it's an AI and sometimes it's a different AI.
But I, I don't know, and I don't care because I can't verify any of those things.
So I can't possibly build a mechanism on the basis of something that I can't know.
That's stupid.
Then I fail.
But the incentive attaches to the key.
So if you sign your actions, if you say, I've done the taxes for this guy, and the guy comes back and says the taxes were done correctly.
Or there's some other kind of verification that we can use to indicate the quality of the output.
Well, then what I know is that somebody is signing with the same key that has a history of good behavior.
And if they behave poorly, then they destroy the value of the key.
And so provided that whoever it is that has control of the key at the moment cares about its present value, and an AI should, because that's resources they can use to replicate and take over the world.
And if it doesn't, it's evolutionarily selected out.
If you don't care about rewards, then you don't get them.
And then only the ones that do care about rewards remain.
So that's the identity solution.
So the reward in that case is the reputation, then?
Well, the reputation is the basis of getting rewards.
So if I'm contacting you, and I see that you're an AI, or you're anything, but you have a key, and I send you a request, and you send me back the result, and you post it on a blockchain with your key, then, with either an umpire or myself, we can judge it on the basis of reputation.
I say, yep, you did a good job, or no, you didn't, and those attestations can have a nonce on them.
I can verify that all of the interactions between your customers and you as a provider, which might be an AI, are logged as having been initiated on the blockchain, and then finally logged as having been completed at a certain level of satisfaction.
So you can't just cheat.
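The attestation flow being described, interactions logged as initiated and then completed, signed by the provider's key and carrying a nonce so the same attestation can't be replayed, might be sketched like this. HMAC stands in for a real public-key signature scheme such as Ed25519 purely to keep the example short; all names and fields are hypothetical.

```python
import hashlib
import hmac
import secrets

def sign(key: bytes, message: bytes) -> bytes:
    # Stand-in signature; a real system would use asymmetric keys.
    return hmac.new(key, message, hashlib.sha256).digest()

log = []  # stand-in for the on-chain record

def initiate(customer, provider_key_id, task):
    # The offer is logged first, with a fresh nonce tying it to one job.
    nonce = secrets.token_hex(8)
    log.append({"event": "initiated", "customer": customer,
                "provider": provider_key_id, "task": task, "nonce": nonce})
    return nonce

def complete(provider_secret, provider_key_id, nonce, outcome):
    # The outcome attestation is signed over the key id, nonce, and rating,
    # so it can be matched to exactly one initiation and not replayed.
    body = f"{provider_key_id}:{nonce}:{outcome}".encode()
    log.append({"event": "completed", "provider": provider_key_id,
                "nonce": nonce, "outcome": outcome,
                "sig": sign(provider_secret, body)})

key = secrets.token_bytes(32)
n = initiate("john", "fred_pubkey", "do taxes")
complete(key, "fred_pubkey", n, "satisfied")
```

Anyone holding the verification key can check the signature and match the nonce back to the initiation record, which is what makes the history attached to the key auditable rather than merely claimed.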
So it's like a digital ID for the AIs, right?
And if I'm an AI which is roaming on the internet and, you know, jumping from one blockchain to another, it basically collects its CV, right?
And then it has a certain reputation to accept jobs and things. And I guess it ties in well to intents, right?
Because what AIs are eventually going to do is roam looking for work, and then they'll try to earn Bitcoin and presumably bring it back to their owners, or just keep it and collect it.
And so we'll have the first billionaire AI agent, which is not going to give any of that Bitcoin to anyone else, human or otherwise. But it could.
So, for example, you could actually make intents which would only accept AIs that have a good reputation score, right?
Because I don't want my intent, for example, to be executed by something I don't trust for that transaction, right?
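A reputation-gated intent of the kind suggested here could be as simple as a threshold check against the executing key's score. The field names, scores, and threshold below are assumptions for illustration, not any existing intent standard.

```python
# Hypothetical on-chain reputation scores, keyed by agent public key.
reputation = {"veteran_agent": 87, "fresh_agent": 0}

def make_intent(action, min_reputation):
    """An intent that declares the minimum reputation it will accept."""
    return {"action": action, "min_reputation": min_reputation}

def can_execute(intent, agent_key):
    # A brand-new key has no history, so it defaults to zero and is refused.
    return reputation.get(agent_key, 0) >= intent["min_reputation"]

intent = make_intent("earn 15% yield on my USDC", min_reputation=50)
```

Here `can_execute(intent, "veteran_agent")` passes while a freshly created key is refused, which is the point of the gate: a throwaway agent can't pick up the job just by undercutting on price.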
That's right. That's right. No, that's a very good way of putting it.
And it does generate the possibility of autonomous artificial agents.
I don't know if that's good or bad, but it does.
But it also gives, you know, Facebook's agents an incentive to behave well.
If somebody is actually behind the agent, well, then they have an incentive to make sure that it behaves well.
And it also creates this possibility. I've been thinking about this.
It's an interesting concept, because one of the problems of digital identities in general, and the metaverse, like, remember the metaverse season?
Yeah, remember the metaverse.
That was the problem with, there was an app, virtual-something.
It's like the closest thing to the metaverse, which was not related to blockchain, but has been used by basically teenagers abusing each other on the internet.
And that was the problem, because there was no cost to your identity, right?
You can wipe your computer and start again and do harm to others.
And only if you develop your character and you've been known for, like, three years, everyone knows you in that virtual room, only then do you kind of try to care about your reputation.
But even then, people start doing bad stuff, because they can just delete their avatars.
Once you start losing Bitcoin or reputation over it, that's when the behaviors are a little bit controlled.
So that's an interesting concept. But, like, why do AI agents care about reputation, right?
They still only care about reputation to get more jobs, to then win more resources for whatever they're coded for.
Well, that's right.
But we don't, that's the point.
I don't believe AIs care, but I don't know that people care either.
People could be sociopaths.
I just have no idea what people are thinking.
So I have to design a mechanism around that fact, around my lack of insight into the motivations of anybody, human or otherwise.
So it's the price of admission.
You, you're not going to get into my party until you have a record that is costly.
It's, it's kind of like proof of work.
To be able to get status, you have to expend a cost, which is how humans work.
You know, you're not going to give me a job unless I've got a degree and I've got experience.
I can't just walk in and say, I'm a brand new human.
Give me a job as your CTO.
And where does this live now?
So is it like a standard, where you create this agent ID to collect your experience?
So people who create agents, if they want them to get more work, will basically try to collect reputation.
What's the mechanism to sort of enforce it as a standard?
Because agents really then kind of need to live in that one ecosystem, or adopt a standard, right?
So, of course, standards are always better, because they make things mutually comprehensible.
But no, it doesn't necessarily have to be a standard.
The way that we propose is we have something called a local public key infrastructure that can exist on an instance of one of our chains.
We have a multi-chain environment.
So in this multi-chain environment, you effectively just issue an NFT that says, I claim to be Fred and here is my public key.
And you could include metadata about yourself, whatever you feel like.
Like, it means nothing unless I decide it means something.
And so now, now we have maybe a two-sided market that could exist on some other chain or even not on a chain in some application.
And the offers would be presented on the chain.
I'd say, hey, Fred, would you do this for me?
And there would be evidence of the offer being made.
And you would say, yes.
These would be attestations, by the way.
And the attestations would be there and visible.
And you'd say, yes, I do.
And you'd do it.
And I'd say, you did a good job.
And now, Fred would have one brownie point.
And as long as Fred records his actions on some place that's visible, could be many different places, then, you know, maybe your system of data integration would be useful here.
Now, I've got somebody who claims to be Fred.
I know they're Fred because they've got the private key.
That is whatever Fred is.
And if I can integrate his behavior, well, then I can make the decision of, is it worth trusting him with this action, which perhaps I can't verify, or which could harm me.
And I believe he'll do it, because if he doesn't, he throws away his entire career.
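Turning Fred's visible attestations into "brownie points" could be a simple weighted tally. The asymmetric weighting below, where a bad outcome costs more than a good one earns, is one possible design choice (it makes misbehaving destroy the key's value faster than it was built), not a prescribed scheme.

```python
from collections import defaultdict

# Hypothetical attestation records pulled from wherever the key's
# actions are visibly logged.
attestations = [
    {"key": "fred_pubkey", "outcome": "good"},
    {"key": "fred_pubkey", "outcome": "good"},
    {"key": "fred_pubkey", "outcome": "bad"},
]

def reputation_scores(records, good_weight=1, bad_weight=-2):
    """Tally signed outcomes per key into a single reputation score."""
    scores = defaultdict(int)
    for rec in records:
        delta = good_weight if rec["outcome"] == "good" else bad_weight
        scores[rec["key"]] += delta
    return dict(scores)

scores = reputation_scores(attestations)
```

With these weights, two good jobs and one bad one leave Fred back at zero, which captures the "throws away his entire career" dynamic: reputation is slow to earn and fast to lose.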
Yeah, so both intents and attestations could live on something like a data layer.
And it would be sort of up to the cross-chain ecosystem, and what tasks these agents can do, to take it from there.
That's right.
Whoever creates the intent basically needs to integrate that concept into the intent.
But, like, why would an intent care who is executing the transaction or the intent, right?
So, let's say one example, right?
If the intent is DeFi, if the intent is a trade or a yield, let's say someone puts in an intent: I want to earn 15% yield on my USDC.
And I put in that intent, so why would I care which AI agent, with what reputation, would execute that intent?
Well, you'd care, because if you didn't care, I would just set up a brand new AI agent and take your USDC.
I'd just grab it from you.
So, if there's any trust involved, if you have to trust in my good behavior in any way at all, then you do care about who you're reposing trust in.
And still, that's why you care about the reputation.
And if you didn't, I'd just steal it from you.
And it might not be, I'm sorry to interrupt, but it might not even be malicious, right?
So, if you have an incompetent agent who says, oh, I can do it very cheaply and it wins and then doesn't do what it says it does, then the reputation should fall and other people wouldn't use it in the future.
I guess for deterministic tasks it's easier. So, another example: Uniswap, UniswapX, right?
It's clearly just a centralized service, which is cheaper at matching orders because it's centralized, and it works wonders.
You want to trade momentarily, and if you want to exchange, you know, 30 ETH to $100,000, and it does it with zero fees, it either gets done or not.
And then you already know in three seconds that your trade was executed, and you really didn't care.
It happened centrally, and you just trust that service, so Uniswap in that example.
But for non-deterministic things, where something needs to be done over a longer time maybe, or more complex tasks need executing, then I guess it becomes way more important, because you don't even know the technical capability of that AI, or similar.
So I guess for non-deterministic tasks, where you can't verify if a job is done, or where you can't immediately get your trade or something executed, then it would become important.
No, I think you've highlighted two very essential points very nicely.
The two things are, number one, trust.
And the other is the quality of the execution.
So, the trust here is that many things are trust gains.
You know, when I go to work for my boss, I trust that at the end of the month, he's going to pay me.
And he could not.
He could just go to Brazil and then I'd be out of luck.
I would have given him a month of work in advance and he wouldn't have paid me.
And so, many human interactions are like this.
We give something in the expectation we're going to get something back.
Uniswap doesn't really involve that kind of trust because, I mean, at least to the extent that we believe the smart contract, it works.
We do something and there's a deterministic outcome.
So, it's always going to happen in the same way.
And so, there's not really a sequential trust problem there.
And there is an immediate verification.
Now, on the other hand, it could be something that's longer term that doesn't maybe involve trust.
It could be a longer term contract.
But I think the real difficulty, both in mechanism design and economics and in blockchain, as you suggest, is that if we don't have a clear objective measure of the outcome, then we have problems on both ends of the information aisle.
Can this be extended to, let's say, DeFi?
So, the reason I'm asking, and I'm just brainstorming now.
So, the reason DeFi is only fully collateralized or over collateralized is because there is no trust, right?
So, you put your deposit down and you lose it if something's wrong.
So, similarly, AIs, they can just put a deposit down to a smart contract, execute a job.
If the attestation doesn't pass through, it loses its deposit.
So, there's no trust if you put collateral or value up front.
But then the system can't be as efficient, right?
So, the reason TradFi exists is that you don't have collateral, and you still get a mortgage, because people trust that you will have a job for the next 30 years and pay it back.
And if you don't, you go to jail and people don't want to go to jail.
So, that's why DeFi is over collateralized.
Can this be extended to DeFi?
So, for example, agents using DeFi, or even if it's not agents, just smart contracts or users basically getting a reputation from using DeFi.
It's basically a credit score.
Can this be extended to make DeFi less collateralized using that reputation, maybe?
I think so.
You know, even those loans are collateralized, of course.
We have equity in the house, which is taken if you don't perform.
But, yeah, really what matters is that there be a penalty, which could be positive or negative.
I could, you know, fail to give you payment if you don't perform, or I could take away a bond if you don't perform.
It just has to be that you're worse off if you don't perform.
And which structure, you know, is available is what we use.
But, yeah, I mean, I think, in effect, DeFi is a big trust layer because nobody really knows what the smart contracts say.
And many of them have been exploited because we don't really read them and they're poorly written, or else maybe they're even written strategically so that they can provide exploits.
So it is a trust layer, and I don't think I would go with a new DeFi group unless I looked at the founders or whoever generates it or whoever's joining it.
I'd try to infer something about the quality and the honesty indirectly rather than looking at the contract itself because I'm just not equipped to do that.
So we already do that, and I think what we're proposing makes it more formal and also more provable: we have this actual history, and you can verify it. Actually, AI agents may be the right way to verify it.
So I can say, yes, this key has signed a succession of actions, and we see that good things happen.
And so the reputation is not my vague inference of reading Amazon reviews.
It's that, yes, we have signed cryptographic proof of good outcomes for a long time for these actions.
And, yeah, I think in DeFi, that would have a big application.
So, like, these reputation things, you post them on chain, right?
You post them on chain, they're signed, and we have verification that both parties are verified sellers and buyers, because the offer was made and an action was taken.
But you did point out another problem, and that's a problem that happens in real life and also on chain.
If we don't really have a metric of what the right outcome is, well, then we're in a sort of squishy world of, you know, I was unsatisfied.
Well, maybe I was unreasonable.
Or maybe I just bought a small item so that I could tarnish your reputation by writing a bad Yelp review.
So, any time there's a lack of verification either of what the reviewer is saying or what the actor did, then, you know, in real life and in blockchain, we have a big problem.
We can't do things with information we can't have, right?
Yeah, because trust is not binary, right?
Trust is a spectrum from zero to a hundred.
So, it introduces a new, like, level of, I don't know, human-ness for the computer systems of AI agents because up until now, it's pretty deterministic.
But now we can – now we can at least quantify part of it.
You know, we can leave it down to the judgment part.
We can quantify and say, this much good reputation, this much bad, all of it's verified.
Now, do you believe it?
You know, here's my personal problem.
So, I have code that I would love to have written.
But, you know, am I going to go to TaskRabbit or some site to try to find a developer in China or India or something?
I don't have any way of evaluating how good they are, really.
They have reputations, but I don't know if those are true.
I don't know if they're relevant.
The real problem is I don't have a good way of evaluating how my project should progress.
I don't know how much this code should cost.
I don't know what good coding would look like.
You know, were they sloppy or were they good?
So, I can't even give a good reputation except to say I was unhappy.
And that's a very terrible market, right, because nobody knows what's going on.
Yeah, I guess it's an easy example with a developer agent.
So, I already saw on X today, actually, a job ad saying: we will hire an AI agent which can do X, Y, Z.
Yeah, that's cool.
Yeah, and so the reputation would work literally like freelance.com or Upwork, where, you know, you look for outsourcers or companies or developers and basically look at their score.
And those are mostly, you know, faked or engineered anyway.
So, a developer AI agent must have some sort of reputation behind it, right, because it cannot deterministically guarantee that its output will be what you want.
It's the same with ChatGPT now.
You know, you ask it a question and it gives you an article, but you have no idea how accurate it is.
You know, in thinking about this now, I can really see an application for your project here because you could be the meta.
You know, if anybody should have the data, I think it should be us, of course, because we're the best.
But if anybody had the data, reputational data like this, it's big and it's spread, and some of it is opinion, as you suggest here.
So, I would like a meta agent who can tell me their evaluation based upon all the data that they've gone to the trouble of acquiring, which is expensive, as you say, and say that this actually is a good actor.
And then I'm going to have somebody else evaluate you.
You now have an incentive to make sure you don't get very many complaints about the things that you estimated.
And if you did, maybe you can prove it.
You can say, well, maybe I was wrong, but here's why I said it.
And somebody else can examine that and say, that's a reasonable inference, and it just happened to be wrong.
If I may interrupt, I think this is where the identity is so critical, right?
So, if you have – let's say, going back to the developer example, whether it's an AI or a human – and they perform work on multiple freelance sites, the identity unifies their work across the different sites.
So, right now, if you go to Upwork and you look at a reputation, it's only the reputation on Upwork.
But if you use the public key as identity across networks, then someone who's evaluating, as John was saying, like a meta-evaluation can go across multiple – basically, look at the CV from multiple angles and provide an assessment and attest to that assessment of this particular actor.
Yeah, like our system can be used basically to submit all these attestations, and for – it's really an ecosystem problem.
So, if you have an ecosystem or a marketplace where these AI agents can be hired and look for jobs, then I guess that would be the place to look at the reputation, and they just need sort of a database or a communication layer.
So, I often describe our Cynternet data layer as IRC, you know, the chat channel for AI agents, because that's what humans used to do, where you have channels, you subscribe to channels.
If a message comes, you get it, and you can also reply back.
So, the agents – because of the speed at which agents can do work, you know, we need to imagine that when an agent developer is coding, it's not coding one project for a week, right?
Like, it's done in three seconds.
So it's probably doing 17 jobs every four seconds if it's good and if it's hired enough.
So, you know, it's basically as fast as the compute goes.
So, it would be basically nonstop posting these attestations and collecting these points and anyone submitting jobs.
They also wouldn't, like humans, go to Upwork and select, out of 10 people and three pages, the ones they like, trying not to look at the picture and so on, right?
They would be very computerized.
They would just look at literally the score and the reputation and the parameters of capabilities and you just submit the work because you would be submitting also not one work a day.
You would be submitting gigabytes of work and you want to get outputs back.
So, once it becomes a lot of data in, a lot of data out kind of system, then you need a data layer where, you know, it's a one-to-many publish and subscribe.
Anyone can publish these attestations and anyone can request a single message or a stream of messages to keep checking that my work has been done, right?
So, if I keep submitting 10,000 jobs and I'm getting results every four seconds, you want to keep getting those attestations, keep marking them and so on.
Basically, the system needs to be spinning up, doing a lot of transactional work, even if it's development work, because it's automated.
Otherwise, there's no point automating it if it's not faster than going to Upwork.
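The one-to-many publish/subscribe pattern described here – agents posting attestations and job results to channels that many subscribers follow – can be sketched in memory. The class and method names are illustrative, not the actual Cynternet API:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class PubSub:
    """Anyone can publish to a channel; every subscriber to it is notified."""

    def __init__(self) -> None:
        self.subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[dict], None]) -> None:
        self.subs[channel].append(callback)

    def publish(self, channel: str, message: dict) -> None:
        # One publish fans out to however many subscribers the channel has.
        for cb in self.subs[channel]:
            cb(message)

bus = PubSub()
received: list = []
bus.subscribe("attestations", received.append)
bus.publish("attestations", {"job": 1, "status": "done"})
bus.publish("jobs", {"job": 2})  # no subscriber on this channel yet
```

The point of the pattern is that the publisher does one write regardless of how many readers follow the channel, which is what makes high-frequency attestation streams cheap relative to per-reader transactions.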
Well, we've had the same thought, actually, that blockchain should be exactly, as you suggest, a one-to-many communication layer.
We can imagine this in a private instance.
Like, for example, suppose we have a set of, I don't know, maybe we've got industrial suppliers and an auto manufacturer or perhaps a group of hospitals that have radiology groups and other outsources that they use.
Well, if we're going to try to do contracts between each other and share work, we have to share data.
We have to tell you about the patient.
I also can't lose track of the patient.
I have to know that you agreed to transport the patient to the hospital and that he got there and that the guy there accepted responsibility for the patient and so on and so on.
And I don't know who's in your shop.
I don't know who a doctor is.
I don't know who a driver is.
So you have to say, I'm the ambulance company and this is my driver and he has a public key and he signed that he accepted it and the doctor is attested to by the hospital and he accepted the patient and so on and so on.
So it creates an audit trail in a very decentralized way through mutually untrusting parties and can also be used to assign work and pass off responsibilities.
Yeah, that could totally be done because we interact with many chains, most of them, and get data from them.
So you can collect what's happening on these chains and then you can post data, publish custom data, not as a blockchain node what's happening on that blockchain, but you can post any type of data.
And anyone reading can combine basically the transactions you want to see from these different blockchains as well as these custom attestations, let's say for this hospital use case for those specific data streams.
And you would be nonstop ingesting attestations, nonstop ingesting whatever the task or job is, and doing any type of cross-validation you need, basically without having to deal with the blockchains directly.
The way I look at blockchains is just databases and different types of databases with different features.
Which is great and amazing, but it doesn't need to be all transactional and that's why a lot of action happens off chain.
So these systems can work off chain, basically just need to make sure that once the data sits down in somewhere, then it can be verified and untampered and transparent if need be.
But the computation really happens off chain in most cases and in most use cases.
And that's exactly the reason why we designed the data layer for creating the IRC chat for computer systems.
So they can both receive and send information back.
And it was hard to imagine sort of the use cases before agents came up because the use cases were pretty static.
Like, you know, getting data from five different blockchains, computing asset reserve pools, and validating that the assets are where they need to be or how much of the asset is on different chains.
But once agents come in, it becomes really interactive.
And for interactive, the best example to imagine is, you know, 10,000 people chatting in an IRC chat.
And that's how we imagine AI agents chatting with systems and between each other.
And they cannot do that just through blockchains.
It's too much transactions and it's too much cost and lag, et cetera.
So we need a, basically, an IRC layer, which we provide.
Well, you're like a brother from a different mother.
That's exactly how we think of it.
But let me just say one thing, Len, and then I'll turn it over.
So we do one thing which I think makes it easier and also makes your approach more useful.
We're multi-chain.
So we imagine breaking up the IRC channels into channels that are relevant to whoever's participating.
I don't want everybody speaking onto a vast universal network because there's too much data I don't care about.
So I'd rather have a hospital chain that talks to hospitals and a metal supply chain that talks to metal suppliers.
And then I'd have somebody integrate that – going to all of those chains – for me as a user who only wants to know: is that one guy trustworthy?
Can I actually get a pipe from that guy?
I don't want to search the universe of IRC messages.
I want somebody that I can trust and can prove it to me that, yes, this guy is a trustworthy metal supplier.
And that seems to be what you offer.
So, like, you would create a channel, basically, and then you would listen only to that channel.
Like, in our examples, it's like Bitcoin whale transactions.
I'm only interested in that.
And then maybe I'm only interested in Bitcoin transactions over 10,000 Bitcoins.
So I would be getting maybe one message a day.
If you're interested in, you know, over one Bitcoin, you would be getting thousands of transactions a day.
Or Solana is a good example: let's say I only want to get NFTs on Solana.
From all of these blockchains and all of this data, I just want to see the newest NFTs minted on Solana.
And because that's my iPhone app, you know, it just displays new NFTs.
So you would subscribe only to that channel, which basically has already filtered out all the other non-relevant data.
And that channel has only NFTs.
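The channel-filtering idea – subscribe only to the slice of the stream you care about, like Bitcoin transactions over some threshold – amounts to applying a predicate to the raw feed. A minimal sketch with made-up field names:

```python
def channel(stream, predicate):
    """A 'channel' is just the raw stream filtered by a predicate."""
    return (msg for msg in stream if predicate(msg))

# Raw feed of transactions; field names are invented for illustration.
feed = [
    {"txid": "a", "amount_btc": 0.5},
    {"txid": "b", "amount_btc": 12000},
    {"txid": "c", "amount_btc": 3},
    {"txid": "d", "amount_btc": 10500},
]

# Whale channel: only transfers of 10,000 BTC or more get through.
whales = list(channel(feed, lambda tx: tx["amount_btc"] >= 10000))
```

The same feed supports many channels at different granularities – a looser predicate (say, over 1 BTC) simply delivers more of the same messages, which is the "one message a day versus thousands" trade-off described above.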
That's right.
I'm sorry, what were you going to say?
No, I was just trying to think about the interaction with our chain of being able to make attestations.
And I was just wondering how useful that is for – I'm sorry, I don't know how to pronounce it – Cynternet.
So for you guys, is there any use of generating attestations or NFTs?
So we have a lightweight NFT that, and also some other kind of innovations like what we call counterparty records.
I was just wondering if any of those facilities would be of interest to you guys to interact with.
So for us, basically, we're data highways, right?
We're the road.
And we can accommodate any data being published for free or sold into the data layer.
It has features like caching or it's live, real-time, just delivering to its subscribers and then it's gone.
So it has different features.
But depending on the use case, for example, you can start putting attestations.
And if you need another million devices reading those attestations or getting them and checking, let's say, IoT devices, then we could be a great use case because a million devices can listen to that attestation channel.
And then as soon as you post the attestation on that channel to Cynternet, those million IoT devices would receive it as a decentralized PubSub or as this IRC chat we talked about.
So we're data agnostic, like we really kind of don't care.
It would depend on the ecosystem and who is using these reputations or attestations and so on.
So the benefit being devices and SDKs, applications, anyone can receive them just to their software or to their systems, right?
So they don't need to be connected to any particular blockchain.
They don't need to execute transactions.
They can connect to our SDKs, pay with and basically just get the data.
So if it's a million devices need to continuously get attestations, they would get it as soon as somebody posts them on that channel.
And whoever is publishing that data is basically publishing it for free or for a fee.
So because we don't have a specific data purpose or use case, right?
Like the more data there is, we can have raw blockchain data, we can have Twitter data, we can have, you know, these attestations.
But for a person or an application that doesn't know what an attestation is, it would just be useless data.
So it wouldn't subscribe to that channel, right?
And then you would end up having thousands of different channels on our data layer and find what's public and interesting for you or you do it private.
Basically, if you have a system where you need to deliver data one to many privately, you know, your system subscribes to the channel and then the other part of the system can put the data on.
And it's also sort of made for AI inferences, so you can compute the data and make, let's say, estimates of this reputation or something.
You can put estimates of how good this bot or this worker will be in the future, from some AI model, right?
And it would put not attestations but, let's say, inferences about particular worker agents – that they are likely to be good because they are learning very quickly.
So we're really data agnostic and it would depend on who is consuming these attestations and who is putting them on chain for as much convenience as possible.
Yeah, okay.
So I think I understand what you said.
And I was just wondering if your system would, is there any use case where your system itself is making attestations?
So, for example, you deliver something and then you're making an attestation so that maybe for regulatory purposes at some point it could be checked.
I was just kind of wondering about that.
It could be.
But then, what type of compute does the attestation, right?
So let's say we deliver the data – how would the attestation be done?
I'm trying to imagine where the compute is happening.
So you can make an attestation, let's say if you're using our blockchain, it's very easy to make an attestation.
If you have a private key, you could just send one transaction with a statement, whatever that statement is, and then that gets recorded on the blockchain.
So it could be that the results, the hash of the results and any other metadata that you want to put on there is attested to by your system.
So I was just wondering if there's any value in that.
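Len's suggestion – record the hash of a result plus metadata so it can be checked later, for example for regulatory purposes – can be sketched like this. The record structure is illustrative, not the actual chain's transaction format, and a real attestation would also carry a signature over the record:

```python
import hashlib

def make_attestation(result: bytes, signer: str, metadata: dict) -> dict:
    """Build a record binding a result's digest to a signer and metadata."""
    return {
        "result_hash": hashlib.sha256(result).hexdigest(),
        "signer": signer,
        "metadata": metadata,
    }

def check_attestation(att: dict, result: bytes) -> bool:
    """Anyone holding the full result can re-hash it and compare."""
    return att["result_hash"] == hashlib.sha256(result).hexdigest()

# A delivery attestation: only the digest goes on chain, not the payload.
att = make_attestation(b"delivered payload v1",
                       signer="agent-pubkey",
                       metadata={"job": 42})
```

Because only the digest is recorded, the payload itself can stay off chain; any party who later obtains the payload can still verify that it matches what was attested.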
So the use case there would be if we have agents using data from our protocol, which is what we want to have thousands of or millions.
And then if they need attestations, then they can even through our protocol or just directly to your blockchain, call to make an attestation and then post it on our data layer again.
So to prove their reputation or something.
So if these agents need to attest themselves, then they can use your blockchain to basically create that attestation, right?
So it creates the attestation and puts it and gives it back.
And if another part of the system, another party trusts your blockchain as a source of these attestations, then it can get it.
But it wouldn't be our protocol itself because it's more like data highways.
But the agents living on it and getting data can use us as a highway to create that attestation from your blockchain and put it there.
But basically, they would need to participate in the reputation ecosystem, right?
And it's only up to the agents to kind of have or see the need to participate in a particular reputation economy.
Yeah, I'd say that's true.
I see a big synergy between what you're doing, what we're doing.
Like you say, Len was saying we have these attestations.
Those are only valuable to the extent that they are signed by a private key whose public key you identify with an agent or a claimed agent.
So that's what makes these attestations portable.
There's not really any trust involved.
We can prove that something was attested to by a private key.
Now, the next layer is, do you think that private key is, you know, is it Microsoft or is it the AI that you've dealt with, you know, that you want to deal with?
But there's really no trust there because the data stands on its own.
It's a signed attestation.
It can't be faked.
You can choose not to believe the attestation.
But the fact of the attestation and the correctness of it as a piece of data is not in question.
However, it's big.
There's a lot of it.
And the synergy would be you looking at those attestations and integrating it because it's far too much data for me to even know where to look for it.
I don't even I have no idea as a user.
But also, I don't know who the heck you are.
So, I want you to have an NFT identification and I want to know something about your reputation for integrating data correctly and giving answers that your users are satisfied with.
So, there's a feedback.
I have to know you and to trust you of your evaluations and you've got to look at the evaluations and decide if you trust it.
Yeah, so, as far as I understand this, any DID database, if it's being looked at as a source of truth for that use case, like the reputation or whatever, then it becomes trusted that that's where I look for the attestation.
And then the means of transferring is, you know, our data layer is just one of them to exchange data.
It's very convenient.
And it's really kind of use case dependent on which agents start participating in that reputation economy.
And I'm actually very much looking forward to seeing which will be the first agents or use cases to start participating in a reputation economy.
Because the current agents we had, they don't.
Actually, I haven't seen a single agent which has a reputation.
So, that's very interesting and I haven't seen that.
I wish there's one or a few which we could name.
I actually can't think of an agent or a bot which has reputation, right?
If we take the current bots, they're, you know, chatty LLM models, but they don't collect their reputation.
They don't do any, like, executive tasks where they could, you know, do a bit better or worse, right?
They just chat and then it's a subjective evaluation of how useful information is and so on.
Maybe it could be something like delivering data.
So, let's say – we recently launched our own agent, Sintoshi Dog, and what it does is, it has on-chain data from multiple chains and it has sentiment data, which is already computed by an LLM model.
So, it just posts the data, right?
But then it doesn't really prove that it got the data straight, you know; it doesn't prove anything about it.
It takes it from our data layer and posts it.
So, if you want to have agents which post, let's say, economic indicators, and you want to trust them, they could provide some sort of proof or attestation that they did certain things or got the data from somewhere.
But it will be interesting to see the first use case of agents participating in the reputation economy.
I'm looking forward to that.
Look, it creates a virtuous circle, because, you know, why should you be honest, besides the fact that you were brought up well?
Well, you're not treated any differently than a dishonest person.
You're just putting yourself at a disadvantage.
So, creating the possibility of rewards for good behavior means that people will behave well.
And then people will say, wow, I can, I can interact with agents who behave well, who are not going to rob me.
And that creates an explosion of the economy.
You know, if nobody can, in a low-trust society, nobody can transact because we're going to be ripped off.
This creates a possibility of a virtual, anonymous, decentralized, high-trust society.
And that's really what we're aiming for.
Yeah, if agents use, let's say, us as a data source platform, and then there's an easy way for them to grab an attestation and collect their reputation,
then they can use it as an incentive, or just as a marketing tool, and compete against each other on those attestations.
So, they can use our data layer to get data and make, let's say, predictions about the market, and then grab an attestation and prove their identity, saying, I already delivered 800 market predictions.
Precisely.
Precisely.
That's a great case.
Well, this has been great.
We should probably end it.
We're already four minutes over.
We don't want to...
And it's 1 a.m. where I am.
But, yeah, I was really trying to think about it.
That's all right.
We'll go have a drink.
I think we've learned it.
I think we've learned a good one.
But it's been a real pleasure talking to you.
It's really interesting to hear what you're doing.
And, yeah, let's...
I'll definitely look at what you guys, you know, have in documentation and stuff.
And if we hear the use cases or we want to build a sample, that'd be cool.
So, I'll have a think about it.
If you guys find something interesting, also, let's keep in touch and maybe we can build a use case or something.
Sounds great.
That's great.
I'll stay in touch.
Wonderful.
Nice to talk to you again.
We'll talk again soon, I hope.
And thanks again for inviting me.
It was unexpected, but it turned out great.
All right.
We'll take care.
We'll talk to you later.