DAO Talk: Measuring grant impact

Recorded: Jan. 16, 2024 Duration: 1:03:31


Hey hey — I'm not sure if it happened to you guys as well, but the tweet gets blocked because I didn't put my date of birth on Twitter.

Wow, that's surprising. I was going to give us another few minutes to make sure people get here, so let's hope everyone arrives fine. But that's weird — an interesting feature, an unexpected feature, or an anti-feature even. It's good to see Themes and Matt here, and we're waiting for Mahesh from Karma, and then Carl's joining us in a bit as well. But let's give people a bit more time to join us first.
I keep forgetting that the music shuts off. In that case, let's just begin, and people will show up as they do. I'm from DeXe, and this is DAO Talk, which we do on Tuesdays. Today's topic is measuring grant impact, so I'm going to be talking about grants and, not surprisingly, their impact. Grants are a very interesting and important part of the ecosystem — not just for DAOs, but I think for all of DeFi. And of course in DAOs, grants are a great way to get things done, moving, and started, and just a really good way to encourage some interesting building, experimenting, and creating. We have some people today who know a lot about it, and I'm really curious to learn more from them. We have Themes, and Matt from DAO Masons; a bit later we'll have Carl, and of course someone from the Karma crew. But first things first — Themes and Matt, since you guys are here, please introduce yourselves and tell us a bit about what you're up to.
Sure, I can go first. My name is Themes — I guess a self-described governance nerd, now focusing on grants impact evaluation and grantee experience in the post-funding area. I'm currently with a project by Plurality Labs called Thank ARB, on Arbitrum, in which we experiment with different funding programs and funding streams in order to onboard and develop contributor pathways, but also to assess what post-funding — post-funding review — could actually look like.
Love it. Matt? Hey, yeah, my name is Matt. I'm into different governance systems — a background with holacracy and different DAO governance — interested in how organizations can work and have governance that works for them, getting away from traditional hierarchy and centralized power structures. I'm with DAO Masons; we're a small development shop, and we recently received a grant from Plurality Labs to build a project called Grant Ships, a competitive grant-giving framework — we call it an evolutionary framework — where multiple grant-giving orgs kind of compete to do best at grant giving, if that's possible. So yeah, looking forward to talking a bit about how we can measure grant impact today, because that's a problem we need to solve. Awesome. I see
Karma's here — Mahesh, is that you? Say hi. Yeah, that's me. Hey everyone, this is Mahesh from Karma. I'm excited to be here and talk about grant impact. We've been building a new protocol called GAP — it stands for Grantee Accountability Protocol. It's for helping all the grantees build up reputation so they can showcase their work and hopefully receive more grant funding to continue building their projects, and, more importantly, for grant managers and the wider ecosystem to understand who the grantees are, what kind of work they're doing, and also to measure impact: we funded this grant — what kind of impact did it have on our ecosystem? So impact measurement and builder reputation are the two things we're focusing on with our protocol and the product we released a couple of months ago, and so far we've gotten good traction. We got a grant from Arbitrum through Plurality Labs, and they've been very active and have been using our protocol quite a bit, and a few other ecosystems like Optimism, Public Nouns, and Gitcoin are also using it. So we're learning a lot about measuring impact, we're doing a lot of work on that, and I'm sure we'll talk about it more on this call. Yes, I would love to talk about all of this, so let's do that.
Real quickly, before we dive into the exciting measuring of grants and making them more precise and impactful: anyone want to reflect on what got us here? For people coming in who don't really know much about grants, or at least don't have the inside view that you have — how has it been over the past few years? Has it been "everyone gets a grant," Oprah-style — you get a grant, you get a grant, you get a grant — or has it been hard to measure, or hard to understand where they're going? What's been leading up to this — the progress, or genesis, or whatever you want to call it, of grants within the DeFi ecosystem over the past couple of years? Just real quickly, before we dive into the cool measuring part. I think the first thing to note is that it's still very new, right?
And I think at this current stage we're doing a lot of experimentation to understand what types of frameworks, limitations, or assessments can actually be made. What I'm learning from this process is that grant programs can't really build in a silo. I'm also reframing the way I think: rather than "what is the impact from the program's perspective," it's "what is the impact from the grantee's perspective," and what sort of tools can we provide grantees to give them more sustainability, but also prepare them as they go through the grant cycle. Yeah, I agree. Go ahead, Matt.
I was just going to say I could speak from my own experience with grants a little bit. We're receiving larger grants now — you're seeing larger amounts of funds flow through with the success of L2s like Optimism and Arbitrum, so it's almost a new level of grant funding. I don't even know if "grant" — maybe it's not quite a grant anymore. My experience with grants was small teams that just had a big pile of money, like MetaCartel, where they joined a DAO and had all bought in with some ETH, and then ETH skyrockets and all of a sudden they're rich. So they have a grants program, and we would give one to ten thousand dollars away. DAOhaus is another one where we would get small amounts of funding. So there had been this more community-oriented ecosystem of grants that, to my mind, was the grants ecosystem — that's what grants were for the longest time, at least where I was tapped in. And now, with Arbitrum and Optimism, it's becoming more industrialized, or whatever the word is. Before, it had been, I guess, amateur grants — giving money away is fun, but the actual following up and seeing the results of the grants was a little harder to track. Or getting a grant to give away pizza at ETH Denver, or something like that. So, just to say the history of it over the past years has been a bit more informal, smaller-scale grants — that was my experience with grants up until recently. By the way, show of hands: who's coming to ETH Denver this year? I am.
No one? Oh well, it's cold. It is cold, yeah — unfortunately Denver's cold. I wish they held it in the summer when it's a little warmer, but not my choice, not my decision. Maybe we should measure the impact of locations for conferences — that would be very interesting, but another day, another topic. So we are where we are, right? And like you said, it's getting a bit more — "industrialized" isn't quite the word I would use, but certainly more organized, more systematized — with Optimism doing its thing, Arbitrum doing its thing, and Gitcoin of course having different sorts of categories of grants they're going with. So let's see where we are: what are the different ways that grants are getting measured now? Obviously each of you has seen different approaches, so let's talk about those a little, compare notes, and then go from there.
I can speak from my experience as a grant recipient. A lot of the grants we applied for were ones throughout the DAO where you would pitch, receive the grant, and then provide your reporting on it. However — I think because at that time everything was moving so fast, and it was the bull market — there was no period of actual retrospective from the DAO's perspective to determine what the priorities were. I think the evolution of grants within this ecosystem has been quite decentralized, but I also think that now, in the bear market, more and more programs are collaborating with one another and utilizing tools like Karma, Grant Ships, or Open Source Observer, which exist now to allow much more evaluation — or easier evaluation — processes.

Yeah, I agree with what Themes is saying. I also wanted to add: we ourselves have received a few grants, and one mental change it took me a while to make is that everything gets clubbed under "grants," but sometimes it's really service provision. You're providing a service to the DAO — for example, we have these delegate dashboards that a number of DAOs use, and we received a grant for that, but the DAOs use it like a SaaS product we're selling to them. We do get a grant, but you have to differentiate between a grant given to a service provider — in which case they're providing some service and we're giving them some money — versus "here is some experimental thing you're building," or "here is a protocol you're building; let's fund it with the goal of growing our ecosystem or the protocol — and how did it work out?" The latter is something I haven't seen anyone doing formally, in a very methodical way, which is what we're building, and Open Source Observer is doing a lot of these things — I can talk more about it later. But essentially, not many people are doing it in a very formal way. A lot of grant teams, when they give out these funds, do request metrics and internally evaluate a little, but I've also heard from a lot of people that they just don't have the bandwidth, they just don't have the time, to go back and look at all the projects they funded and how they're doing. So it hasn't happened — or it's slowly starting to happen now — but for a long time there was not much work done on that front.

Do you think that, to increase the bandwidth, opening the grant system up to DAO members — making it more democratized, decentralized — is going to help with evaluating the metrics, or might it slow things down further? Where do you fall between "we have a dedicated team, let them do the thing" and "let's open this up to the public," so to speak, and use crowdsourcing methods to get it evaluated? What do you think?

I'm in the camp of getting the community involved in evaluating this and crowdsourcing it. It might not be perfect, but I think we'll get a lot more out of it than having one centralized team, because the programs are allocating funds to so many projects that it's almost impossible for them to go look at all of those. Where we're betting — the thing we're working on, which was funded by Plurality Labs through Arbitrum — is decentralized reviews. The bet we're making is: let's involve the community members who want to contribute, so they can pick up any project, dig in, see what kind of impact these projects are having, and then rate them. Also, there are some amazing tools being built to help these reviewers review, like Open Source Observer — they track any project's on-chain activity, their GitHub activity — so the reviewers, the DAO contributors, can go look at all the data and then say what they think about it. All these tools definitely help, and in summary, I think decentralized reviewing and evaluating is the way to go. By the way, Carl, welcome — I see you here. Say hi to the community real quickly, and we'll get back to the discussion. Hey, everyone,
sorry I'm a little bit late — I was actually in another workshop just now about grants measurement, which means grants impact measurement is really going to be a theme this year. Very excited to see people showing up here, and it's great to see Themes and Mahesh again, because obviously this is a space we really want to grow. I guess I just heard Mahesh talking a little bit about Open Source Observer — that's the project I'm working on right now. The goal is to create a rich data layer that allows the community to track which projects are having the most impact, and that can then inform better funding decisions. Ultimately, if we want more funding for more high-impact work, we need to have the data to back that up. So we're trying to play a small part in surfacing the most relevant data, and then ultimately empowering funders and projects to highlight the data that is most relevant to their goals and the impact they want to see. Before this, I did a lot of web2 impact measurement, so I'm knowledgeable in how traditional grant measurement and impact measurement is done, and I'm very excited to take the best of what we can learn from the old world and try to develop some new funding mechanisms and impact-tracking mechanisms for the crypto-native world.

Actually, while you're talking about this, I'm thinking of bounty programs, because I was just dealing with one for the DeXe protocol — we just launched one, so it's on my mind — and I wonder, for grants, is that a thing that can be done? Is that a framework that can inspire — instead of bug bounties, to have grant impact bounties or something like that? Obviously not in a very literal sense, but to use that kind of framework to crowdsource to people who get, and care about, the governance, to look for all the ways to maximize the impact of grants. Or is it going to be too much and miss the purpose? What do you guys think? Sorry, could you ask the question as one question for each of us? Yeah, totally — I kind of did it as a brain dump, but basically I'm asking: is the bug bounty framework something that could help with measuring the impact of grants, in terms of bringing in the kind of crowdsourcing that bounties are known for — people hunting not for bugs in the code but for inefficient grants and ways to improve them, incentivized in some way, financially or otherwise? Oh, I can
actually tell you about a current campaign we're running on Thank ARB. I think it's twofold — there are a couple of ways we're testing this decentralized review. One is that I'm holding onboarding sessions on Thursday to give grantees and reviewers an opportunity to understand how they can better their profiles — Arbitrum grantees, that is — for the community review. Because I also understand that in order to review anything, there has to be somewhat of an incentive. So beyond incentives for grantees, what sort of incentives can we provide reviewers, or those completing their profiles, in order for this to actually work? With Thank ARB, we're testing out different activities and tying them to actual ARB as well — I will pin that tweet. So we've been playing around with what types of incentives we can put in for particular activities, but also with how we measure specific activity in terms of value, to increase incentives — just like bug bounties.

What have you found so far from this — any quick insights? It hasn't started yet, actually. We're currently on season 5, and right now I'll be posting the contribution opportunities, but I'll also be posting the sessions we'll be holding, so that by the end of next month I can give you insights and learnings from that. Yeah, totally fair — I mistakenly thought it had already started. It sounds very interesting; I definitely think something is going to come out of it that gives some interesting insights and ways to move forward, because in the DAO world we're always thinking of how to use everyone's talents in the best way and kind of massively decentralize everything. So it's interesting to see what will come out of it. Any insights from you — Carl, Mahesh, Matt? Yeah, sure. Oh, go ahead, Matt. I was going to say — with
Grant Ships, we had hit on the need for incentives and this idea of a vote modifier — a vote-modifier system — and I saw that the DeXe protocol actually has something similar, where members of a DAO can be, say, an expert delegate who gets a bonus or an amplifier on their voting power if they've met some kind of standard. With Grant Ships, we wanted to have incentives based on ARB token voting: if you hold a lot of ARB tokens, you have a lot of say on whether a particular grant is considered impactful or not. So it starts as a subjective measurement — whoever has tokens gets to decide. But over time, as rounds accumulate, those who participate could get an amplifier or bonus voting power — those who made votes that turned out to be accurate — so that if there were some kind of retrospective assessment that said, "hey, these folks voted in a way that, when we look back, was spot-on," we give them a bonus. You're accumulating a reputation as a good evaluator over time. Right now that's just a few paragraphs in our design docs, something we want to add in, so I was interested to see something similar in DeXe, and I'm really curious what the other folks on this call have to say — because we have to build the thing over the next month or two, and rather than just guess and check, I'd like to have some upfront intel on what would make a good system like that. So yeah — what
were you going to say, Carl? Oh, I was just going to say that impact measurement is hard, and there really aren't, anywhere you look, incredibly effective ways of doing it. So I like the idea of trying to confine the scope — maybe just focus on bug bounties, or on improving one form of governance contribution — and once you've learned from that, ideally you can transfer some of those learnings to bigger, more generalizable sets of problems. I think a lot of us want to go down the path of trying to find, say, one metric that explains the world, or that is the thing everybody should be working towards. We may find that, but I don't think it's the place to start. So ideally we find good use cases for specific impact areas and then expand from there. That's all.

A question for you: what do you think of an impact evaluation system that uses a pre-existing community who may not really be experts — like Arbitrum, for example — letting those who have lots of ARB tokens be the prime decision-makers on what was impactful or not? Do you think you'll get an accurate signal that way? Are there ways to help make that signal accurate, or do you just live with it if it's not? Yeah, I think that's really the challenge here. Ideally, you
want to find a community that cares about a certain form of impact, feels like they have a good subjective sense of what impact should be rewarded more, and is incentivized to actually care about the outcome. One of the challenges you often find in traditional DAO governance is that the people who have the most tokens might not necessarily be the ones who will experience the impact firsthand. To some degree that can be addressed: either the incentives can be aligned, so token holders are actually voting on things that are very relevant to the overall success of the network, or they're in a position to say, "we don't have the details, but we'll take our tokens and delegate to people who do" — so they express what they care about without having to be the ones looking at the product and reviewing it. I think both mechanisms are quite possible, and what's really cool about what Arbitrum is doing is that they're running a number of experiments to test out those different ways of sensing what matters and funding it. I see. Yeah,
please. I just wanted to add: in this particular community review, the majority of the projects haven't necessarily been completed at all, so I don't think we should jump straight to impact evaluation. I think there are stages of evaluating impact. First, a lot of DAOs are funding a lot of things — so what are some of the tools or initiatives we can apply to see what the alignments are in terms of funding, or what the perceived priorities are for those within the community? If you have tons of grants, all of them are protocols, and everybody in the community review says, "yes, continue to fund this grant," then we have a data source to say these types of projects actually have high confidence within the community. And if they're the ones saying, "no, don't fund this grant," that also gives us a data source. So when we're looking at impact evaluation, it's more of a journey, and we should prepare for it: from this first stage, what are the data sets we're receiving, and how can we learn from them to then move on to actually assessing impact?
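The staged review described here — turning many individual yes/no community reviews into a per-category confidence signal — could be sketched as a tiny aggregation. This is illustrative only (the data shape and names are assumptions, not any DAO's actual tooling):

```python
from collections import defaultdict

def category_confidence(reviews):
    """Aggregate (category, should_continue) review votes into a
    per-category share of 'continue funding' votes."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for category, should_continue in reviews:
        total[category] += 1
        if should_continue:
            yes[category] += 1
    # Confidence = fraction of reviewers who said "keep funding this"
    return {cat: yes[cat] / total[cat] for cat in total}

# Hypothetical review round across two grant categories
reviews = [
    ("protocol", True), ("protocol", True), ("protocol", False),
    ("events", False), ("events", False), ("events", True),
]
scores = category_confidence(reviews)
# "protocol" grants end up with higher community confidence than "events"
```

A real system would weight or filter reviewers, but even this flat tally gives the "data source" the speaker describes: a signal about which kinds of projects the community wants to keep funding.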
I'm with Themes on that — same opinion. The one thing I wanted to add: everyone talks about impact evaluation and how we can involve the community to engage and evaluate these projects, but I want to separate outcomes from impact. When we talk about impact evaluation, everyone mixes these up, and it's good to separate them. First: this project said they were going to do this — did they actually do it, and did what they said would happen actually happen? That's the outcome; that's the first layer. And then "impact" on its own is too broad — you should distinguish the short-term impact it had from the long-term impact the project will have. For long-term impact, I don't think anyone knows; there's not enough data at all. A lot of these protocols are still relatively new, and the projects that have been funded have only been around for a few months, so long-term impact is almost nonexistent — no one knows. I think the bounties you're talking about would be good for outcome evaluation — anyone can do that: did they do what they said they would, and did it have the immediate intended effect? Once it comes to impact, my thinking is, as Carl said, just because someone has a lot of tokens doesn't mean they're in a good position to evaluate it. So either you delegate your tokens and let someone who's good at it do it, or — another thing we're experimenting with — let anyone evaluate the impact and give their opinion, and whoever is looking at it can make up their own mind. If you want to see what some of the heavy token holders are thinking, you can filter for that, and you should be able to. It's like reputation, which is very subjective — what you think of someone is not necessarily what someone else thinks. So collect all this data and let everyone look at it however they want; we should build tools to filter these things and then let people make up their minds. But long-term impact will just take more time — none of us has that data.
You know, that's actually something I had in my notes preparing for today. What Mahesh said about long-term impact — I'd put the question to myself: how long a window is ideal for grants measurement? How long is possible, right? Can you do a grant for years? Is it better to do it for months, or maybe some sort of hybrid milestone approach? In everyone's experience here: how far out can you push a grant and its tracking, its measurement? I'll jump in and start this one. Obviously, in crypto we
only have, say, ten years or so since the clock started, so "long term" in crypto might just mean measuring things for more than a year or so. Obviously, in the real world there are experiments that go on for an entire political term, or an entire generation, and so on — so it's really going to depend. Hopefully we can start getting longer and longer durations over which we're able to measure things. I think the challenge here is that the more you try to predict the future, and the more you commit to a set of pre-specified goals, the less you're able to pivot and change course should there be new information, or should you need to change your strategy. So I think the sweet spot is, number one, doing what you say you're going to do, and having the ability to track the quality of your commitments or predictions — but at the same time being able to pivot and be flexible. Going back to what Mahesh was saying: in the long term it will be possible to see what kind of impact you've achieved, but you also want to see, as you set milestones, whether you're actually achieving them, whether you're getting there faster or slower, and ideally, based on new information, change milestones if you need to, or course-correct. Because ultimately, if I were to predict what I might be doing three or four years from now, I'd be very, very wrong — unless I make very basic, safe assumptions. You don't want people to do that; you want them to stretch, you want them to push themselves. But you also need to learn the difference between people who are continuously stretching but missing, and then trying again, versus ones who are sandbagging — setting a very achievable target and then easily exceeding it — without really having the right kind of long-term impact.

You actually raised an interesting point: do you look at the impact of the people behind the projects receiving the grant — does that create a parallel, a reputation for them as well? The more you do different things, the more grants you use to do work in a way that's generally been found effective and useful, the more trust you get, the more leeway you get, the bigger grants you get later? Or is it a different path? Themes, I see your hand is up.
Good question. I think something to note, which was also brought up in the backfund proposal, is that you can't measure a large protocol like Balancer — which has maybe had years of experience, much more funding, and much more money in its treasury for operational costs and development — against a small protocol that is just entering the ecosystem. If we want a plurality of different projects and protocols, to actually make this decentralized, I think we should also be measuring things according to their stage. I definitely see a lot more large grants coming about — fundings of a million, or two million, or three million — but I think we should create somewhat of a system of staggered grants. And I don't mean a thousand dollars; I mean maybe a hundred thousand — and then develop frameworks for what those medium-sized grants look like, what sort of impact evaluation questions we can put to them, and then, if they progress and receive funding in a second round, what the expectations are. Because everything happens so fast, we've been thinking really short-term. If we want sustainability, and to ensure projects succeed and stay in the ecosystem we're serving, I think we have to present a real grant pathway. Otherwise people are just going to apply for the same things in different ecosystems, and nothing will really be developed to meet the needs of the whole ecosystem — not just Arbitrum or Optimism or whatever.

Oh god, I can just picture it now — people farming grants, everywhere they can get them. Hopefully that's going to be minimized if we do this right.
Also, if anyone would like to fund this podcast, this space — I'm going to apply for a million-ARB grant, a million-OP grant, and a million-everything grant tomorrow. But jokes aside — Carl, you wanted to say something? I was going to say, I think this is an area where crypto can lead the way and pioneer models that are then expanded on beyond crypto. What's great is that we have these composable funding mechanisms. A project can start out, say, on Gitcoin Grants and build a reputation there; it can create a community; it can start getting small bits of funding that won't support a ten-person team, but that could give a person working on something nights and weekends, or as a hobbyist, a chance to get some real funding and some signal about whether they're building something people actually want. After that, ideally, you get to the next stage and you're able to get some kind of builder grant — something that allows you to quit your job and work on the thing full-time. And then eventually, like you were describing, there might be larger grants of a hundred thousand, even a million. I think that would be the goal: you can start getting real funding to support teams building things that are truly a public good and don't have a traditional revenue model. I don't think we're there yet — at least, there's much more funding for smaller projects and less for larger ones — but what's incredible is that we're seeing how powerful composability can be, and there's a lot of collaboration between funding ecosystems, and projects getting funding from different sources in a way that is not farming — it's allowing them to actually grow, build a community, and start doing this full-time.
I love, by the way, the individual side of grants that you touched on a bit — that it's not just projects; it's individuals getting money to do things they love doing, that they're good at, that they're useful at. I'm thinking of "Selma needs a job," for example, or there's a music DAO — Song A Day DAO — where the guy publishes a song every day and gets funded. Not as a grant, but it definitely could be a grant. So I really like that this could be both for projects and for individuals, to really pursue something that hits the whole ikigai, I think, right? — being useful, good, interesting, and something they're really amazing at. I want to see more of that, for sure. Just one final point on that, to be clear: I'm not saying that should be the only path — that you have to start out as a nights-and-weekends or hobbyist person on your project and then slowly bootstrap over time — but the fact that the path exists is something that's cool. Yeah, I mean, the whole idea is that there should be different paths, and then the accountability part is what keeps them in check and makes sure those paths are actually useful. Speaking of
accountability I was talking with Deakin with metacartel and they're one of
those that they started a Dao early in web 3 and then everybody had to buy in
with like point one ether one ether something like that and when he was $100
and then he goes up to $2,000 and all of a sudden they're rich right and so they
started a grants program they're like well let's just give this money away
and he was saying that you know you go to conventions and you see different
teams and projects and it was it was fun to find enthusiastic motivated teams
that were building cool things and ask them if a grant would help them and you
know like make that match and then give them the money and what wasn't so fun
was following up over the following months and kind of like seeing what
happened or holding them accountable at all you know or like knowing how to
measure that he said that that part started to feel like work you know the
first part was fun you know like the joy of giving and all all of that and
seeing the benefit that they could give to these to these teams but the
second party is like man I need paid for this and that dichotomy I think is
part of it like measuring impact is not as fun you know like going and asking
people how they did so I think it kind of just naturally doesn't happen as
often or happen as thoroughly as the other part so like with these grant
models it's like you almost need people being paid to do it and then like with
plurality I'm curious themes like you guys received a you know good chunk of
cash for yourselves and then also to give away to others and you know part of
your part of your job is to measure the impact and you know when we're talking
about all these grants programs if they don't have that wired in it's just kind
of you're not going to happen like does it happen you know just naturally or do
we got to pay people to do it basically I guess the clarification is that we didn't
get a bunch of funds to run it; we actually undervalued our service to run it. We did get funds from the DAO to distribute on programs and experiments that can assist in the key areas the DAO needs, and the area I am focusing on is just one small component. We are utilizing tools like Karma and OSO to better understand what decentralized post-funding review can look like. And, so I don't just show or say anything that's inaccurate, there is a plethora of programs that Plurality Labs has invested in, including OSO and Karma as well as DAO Masons and Grant Ships. So from a grantee's perspective, I think you're solving real problems, and we were allowed to provide funding for these things that we call experiments. What I'm actually focusing on is what grantee post-funding evaluation looks like. We're on stage one, and that's why I'm holding these sessions, also to get feedback. But then also, we funded a bunch of great tools, so how do we utilize the tools that we have within the grant flow, the workflow? How do we use the tools that an ecosystem has already funded to collaborate with one another, so that we can work toward one goal? And I think that maybe Carl and Mahesh could speak best on that, and on where these tools, and the ones that you're creating, fall into the evaluation journey.
Yeah, go for it, Mahesh. Thanks. So, to what Matt was just saying: allocating funds for these projects is kind of cool, but it's actual work to go and evaluate them, track them down, and hold them accountable. That is literally the problem we are trying to solve with these decentralized grant reviews. It's a combination: the funding is allocated by a small group of people, but then get the community involved, let them go measure the outcome and the potential impact. And with the kind of experiment we are doing, no one wants to do that work for free. Some will, but can we compensate them? We still have to see how compensation will work: will people just do an okay job to get some money, or are they actually going to do a good job? We still have to see. But I feel like every program should allocate some funds for evaluation, because otherwise what's the point of allocating all these funds if you don't have that feedback loop of what is working and what's not working, and then make adjustments to your allocation technique? So, Matt, I'm just curious: you said people were not too keen on going and evaluating, but did you guys ever fund a group or someone, like, here's a grant, go evaluate all the other grants? I'm just curious, for anyone else, why hasn't that been done or experimented with?
Yeah, that's a good idea. I was speaking of MetaCartel, which, you know, I've been involved with but not directly, so I wasn't giving those grants, but they were one that had kind of an informal grants program, just some people who could do whatever they wanted. So that's a good idea; I don't know why they didn't do that, or pay themselves. People can be kind of hesitant to get paid; paying yourselves to do things can get kind of touchy with money. With Grant Ships we're going to have to solve that problem directly, because it's set up where we're going to have three grant-giving organizations with an equal amount of funding, and their job is to create grants and give them to recipients. Then the system has on-chain attestations that keep track of all of the funding distributions, and they can post updates at any time, so they'll create these feeds of who applied, the communications between them, the funding amounts, and then the end results and things like that. It's all on-chain, so you have this formal record of everything that happened, to be able to compare each of the three grant ships that are funding. To my mind that would actually give you kind of an A/B test, to be like, well, they did this and they did this and they did this, and here's the impact. It's an equal playing ground, kind of a laboratory for measuring grants impact. And one of the pieces we have to figure out is, when all that data is there and you can see what was funded and you can see the results, having the Arbitrum community in this case come in, look at that, and make some kind of assessment that boils down into basically a number that lets us rank the grant ships, so we can say who did best. Then in the following rounds the ones who did better get more funding, so you're kind of competing to get fuel for your grant ship, in a way. And there are lots of different angles: when I think about this, you're kind of in this dance between the objective and the subjective. It's really just subjective, like, what do the Arbitrum token holders think, what are they saying? So it's purely subjective, but it's based on data, so there's an objective metric there. And then, will they do it at all if I don't give them incentives? You can have a very distorted signal in there, so it's kind of an unknown how that goes down: a little bit of social engineering to get people to come in and do a good job. Yeah, just a quick follow-on
from what you said previously, Matt: I think everybody wants to be in a position where they can find the best projects and reward them generously, and I think it's also fun to be in a position where you are just giving out money to begin with. The hard thing is actually having the difficult conversations, or deciding when to pull the plug on projects that aren't having impact. And here is where I think the decentralized nature of the impact measurement systems that we want to see and want to build can actually be pretty important, simply because they make that difficult conversation, the one happening between, say, the project and the community, facilitated by data, as opposed to it feeling like it is one person, who often has a perverse incentive to make every project they funded look good. I don't think we are yet at the point where we have good examples of this actually happening, but in the same way we've seen with, say, restaurants or other marketplaces, where you now have a review system and any person who experiences it is able to leave a review, it's going to create a much more powerful data layer to help identify which projects need to be cut, because they haven't been achieving the impact objectives they set out to, or the community is not being well served by them. So I just want to make sure we continue to look at impact measurement and helping make the job of reallocating money easier. So I just actually wanted
to add that, because a lot of focus ends up there, you know, you used that example of Arbitrum token holders or large token holders. I think that in general people are always looking for ways to contribute meaningfully to the DAO, and so when you're asking what the incentives are, the incentive doesn't necessarily have to be money. It's like, how do we provide pathways for people to have some sort of, I don't know, LinkedIn for web3 or LinkedIn for their DAO, in which they can show: hey, I participated in this review for this thing and this was my first step at doing it, or I participated in two reviews or three reviews or four reviews. Those are the things we're exploring, not only from the top level down but from the bottom up, right? And I think that if we start to see the importance of contributors at any point in their journey, then we can also see how to better incentivize them, and also what it is they actually really want. Do they want to build reputation? Do they want to provide a much more valuable contribution? What is it, exactly? So when you're looking at DAO Masons or Grant Ships (obviously we could work together, since we have funded you), you just look at the contributor pathways, right, and then ask: what would make more sense for these types of people to review, and what would be the benefit of them reviewing it within that particular community?
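The competitive reallocation Matt described earlier, where community assessments boil down to a number and better-scoring grant ships receive more of the next round's funding, could be sketched roughly like this. This is a toy illustration only; the budget figure, ship names, and simple proportional rule are invented for the example, not the actual Grant Ships mechanism.

```python
def reallocate(budget: float, scores: dict[str, float]) -> dict[str, float]:
    """Split the next round's budget across grant ships in proportion
    to their community assessment scores.

    Hypothetical rule: the real mechanism is not specified in the talk.
    """
    total = sum(scores.values())
    if total == 0:
        # No community signal yet: fall back to an even split.
        return {ship: budget / len(scores) for ship in scores}
    return {ship: budget * score / total for ship, score in scores.items()}

# Three ships start from equal funding; after round one the community's
# assessments shift the next 90,000 toward the top performer.
next_round = reallocate(90_000, {"ship_a": 8, "ship_b": 5, "ship_c": 2})
# ship_a: 48,000.0; ship_b: 30,000.0; ship_c: 12,000.0
```

The interesting design question the speakers raise sits outside this function: how the scores themselves are produced, and how to keep that signal honest when reviewers are incentivized.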
Okay, yeah, I hear what you're saying: contributor pathways. I guess for me, my personal hang-up is that anytime there's a big community I just see it as very unpredictable, and I don't necessarily have my finger on the pulse of what they want, so finding a way to get that is hard. And yeah, we should talk, because I have a feeling you have a better sense of that; you're a great resource for us. I'll check out that thing you've got on Thursdays. But, you know, being able to read the signal of a community is not easy, and sometimes it's overwhelming, or sometimes it can be underwhelming. It's hard: where do I look, what signal do I tune into, especially when it's remote? If I'm in a room of people I can kind of sense the vibe, but it can be hard when it's online to tell what these people want, unless you're in there reading forums and doing it all the time, and, you know, I'm over here building. So it's like, how do I sense the crowd and get that signal refined?
So I'm wondering, especially Matt, since you've been funded by Themes previously: what about bringing in previous grant recipients to review newer ones, to create a sort of tree structure of generations, so to speak, of grant recipients judging newer applicants, using their own experience and their own expertise? Would that be beneficial, because, you know, they went through it before? Or is there a possibility of a sort of jadedness, in the sense of, well, it was tough for us, why should it be easy for you, kind of thing? Have you guys thought through these kinds of systems, where previous grant recipients eventually become the reviewers for newer ones, kind of a peer-review type of thing?
No, I hadn't thought about that, really.
When we designed the thing, we said it would be the Arbitrum community curating the signal about who is having impact, and there are a lot of different ways you can go within that, because we could have something like that, where we have a peer review that becomes kind of data for the reviewers. You know, like when you vote, you get the little ballot thing that shows you the basics; there might be a quote on each one or something. And then, if that's the only thing you read, how is that going to impact your vote? So we have to think about how we're going to present these grants to people, because the process of voting or reviewing might be the only experience they have with it at all. So the presentation is very important, and then trying to not bias them, but give them the information they need to make a good decision. And then I think Mahesh said something about the grant giver having an incentive to make their grant look good, or maybe that was Carl. If it's being presented by the grant giver or the grant recipient, of course they're going to try to make themselves look good. How do you get at that? I guess it is the difficult conversation Carl mentioned: how do you get in there and do that kind of brutal analysis of, okay, here's what it really is, and then put that data up front? It's tricky; it just seems like such a subjective experience. It's something I go back and forth on a lot: what should that final presentation look like?
Yeah, that was one of the things we were trying to solve. What we did for the decentralized review system is, not only does the reviewer look at all the data and at the project and what they have done, we also have a place where the grantee can give additional context to the reviewer, to say: these are the things we did and these are the ways you can evaluate us. I'm positive everyone sets themselves up for success there, not being completely honest, but we do give them an opportunity to say how their grant should be evaluated, like its short-term impact. We have a number of questions there which we ask the grantees to fill out, with the hope that it will help the reviewers and improve the evaluation, make for a better evaluation. But again, hopefully in another month we'll have tons of data to look at and can come back and revisit this. And we spoke previously, Mahesh, about your grant evaluation system and how great that would be for Grant Ships. Yeah, we're definitely planning to have that be part of it; we would love to have that. And then one of the other questions was about asking grantees to evaluate other grantees. As a grantee, for me, I'd be like, man, that's a lot of work. If it's one or two grants I wouldn't mind evaluating, but actually doing a good job takes a lot of time. So in theory it sounds good, but I don't know how feasible it is.
This is interesting. I need to wrap this up soon, but as I'm listening to you I'm thinking we should do this again, but with a case study, or maybe a few case studies, of various grants, maybe various projects applying for grants or who have received grants, and just take a look at them. Nothing formal, but just for the audience and for our own sake, to go through it and see how everything was done and what we can learn from it. What do you guys think?
Yeah, I like that idea. Sounds great.
All right, there's so much I want to ask, but let's wrap it up here. Just one question that I always ask and am always curious about, and you've been here before, so you know, but just for everyone: any people we should be aware of, that we should be following, that are doing really cool things, that are building, that I should give a follow to right now on Twitter?
Well, I think I'm going to plug the project we're working on, Thank Arbitrum. That's where you get the things that are happening with Plurality Labs and all the programming. Right now we have an awesome JokeRace for our biggest mini grants yet, and for this week its focus is on empowering web3 communities, but we have eight weeks of that. So if you want any information on what's going on with us, with some of our experiments, or with what we've funded, please give us a follow at Thank Arbitrum.
Yeah, and I'll mention Grant Ships. You can see the link to the Grant Ships Twitter profile in my profile, and we have an application form on our website if you want to try your hand at grant allocation, or if you want to be an assessor or a game facilitator, anything like that. We have lots of different ways to participate, so you can actually play the grant-giving game. I'll
jump in quickly and say there are some great people here in the audience who are builders, who are active in the space. I won't list everyone here, because I'll probably make a mistake and forget someone, but definitely go through the profiles here; there are some great builders and familiar faces. Nice to see all of you, and give them a follow. And a real quick plug: if any of the people listening here are interested in reviewing grants, we have a session on Thursday, I think Themes just tweeted that out. If you're interested in reviewing grants, it's a good way to get into the Arbitrum ecosystem as well, so please sign up.
That's very cool. I'm going to take a look at the grants just to see how it works; it's fascinating. And I'm already following everything you mentioned, because, you know, it's you guys, and it's amazing, and you're doing great work. So let's wrap this up here, and I will plan another session at a later time with the case study review. I think we have a lot to learn from just dissecting specific grants and grant applications and how they're being reviewed. I think there's going to be a lot happening with that in the next year or two, because obviously there are various ecosystems developing their grant programs and, like you guys talked about today, new mechanisms, new tools for measuring the impact, trying to evaluate it and make it, not necessarily fun, although sometimes the gamification helps, but less of a stress and more effective. So I'm very much looking forward to that. Thank you, all of you, for coming; thank you for speaking today. I've learned so much; my head's still processing all of it, because there were really good insights into impact. And thank you everyone who came to listen. Of course, we'll be back next