Hey Kyle, can you hear me okay?
Thanks FileVerse team, thanks Wasabi team.
We'll start in 10 to 15 minutes to let everyone join, give it a few extra minutes, and then hop into the good stuff.
Hey Kyle, were you able to resolve the app permissions issue?
Hey Kyle, I think I can hear you, yep yep yep, they're in.
That's good, and Tom will be a host as well.
He should be joining in just a moment and then I'll add him as co-host and then some
of the other speakers join, we'll get them added as well.
Hey Tom, I'm gonna make you co-host, give me one moment.
Okay, cool, I'm gonna stick with desktop and mobile.
That sounds good, that sounds good. I'll play pre-show tunes, and then I think we'll start about five after, or once everyone's in. I'll send some reminders to the speakers.
Yeah, that'd be great, thank you.
Hey folks, we'll get started in about five minutes.
We have some of the speakers joining us already. Thanks Zora, and thanks Willy for being with us, Andre as well, and Tom. We'll start in about five or six minutes to give folks a chance to join, and then we'll get started.
In the meantime, please enjoy a few beats by a decent day, and if you have questions, feel free to go ahead and start adding them in the chat; we'll switch to those and try to get your questions answered during the Spaces as well.
Hey folks, if you just joined, we're going to start in a few minutes.
I think we have all the speakers on board here, and we'll get started from there.
Hey, Tom, do you want to do a quick mic check?
We still have another minute or two.
We'll just get started in about another minute.
Okay, folks, let's go ahead and get started.
I'm Dara, Decent DAO's head of marketing, and I'm super excited to introduce this group, or actually to have Tom Stewart, Fractal's CEO, introduce this group.
So Tom, why don't you go ahead and take it away.
Thank you so much to the panelists for joining.
Just to start, my name is Tom.
I'm a workstream leader at Decent DAO.
We're essentially a builder DAO, focused specifically on building privacy tooling and applications for the next generation of builders.
At the same time, I'm also CEO of Fractal, where we build DAO solutions focused on modular governance and modular privacy. But that's enough about me.
The rest of this time, I really want to focus on our amazing panelists today.
Thank you so much for joining.
Today, we have Auryn from Gnosis Guild.
We have Luis from Shutter Network.
We have Andre, who's from Safe, as well as Willy from ShapeShift.
I've had the pleasure of getting to know most of you by now, and I've been talking to you about DAO privacy as much as I can over the last six months or so.
We're hopefully going to have time to hear each of your very distinguished and unique perspectives, considering your careers so far and what you've seen work and what hasn't.
The last thing I'll say, before I ask for your own introductions, is that the structure is going to be very fluid; we're going to see where the conversation takes us with such brilliant panelists.
But at the same time, we're generally going to go from a high-level, big-picture theory of DAO privacy and transparency, why there's a bit of a dilemma there, and what nuance we need to explore, and then we'll dive more into specific applications like private voting and other areas.
So without further ado, I'd love to start by going around and asking each of you to talk through your role, but with a spin: rather than just your title, I'd love for you to talk about how your role relates to DAO privacy, or even DAO transparency, and then maybe just one hot take you have about DAO privacy.
Hopefully that will give us a lot of juice to run with for the rest of this call.
So maybe I'll ask, let's see, it sounds like Auryn's maybe having connectivity issues.
So Luis, why don't we start with you from Shutter?
I'll pass it over to you: your current role, how it relates to DAOs and privacy, even the products you're building, and then a hot take you have about the topic we're discussing.
I'm the product lead at Shutter Network, and how it relates to DAOs and privacy is that Shutter is a special encryption mechanism, originally and primarily designed for MEV, front-running, and censorship protection as an encrypted mempool solution.
But we also have this shielded voting integration together with Snapshot, and that is something DAOs can switch on to have tally privacy while the vote is going on.
It's been live for about a year now, I would say.
And my hot take around privacy in DAOs: everything should always be private by default, and then you can disclose things on top.
I think we should always be anonymous and everything should be private, and then the DAO can disclose things selectively.
So the default should be privacy and full anonymity, with disclosure layered on top.
That's a great start to the hot takes.
Andre, do you want to go next?
And then we can keep going around there.
I'm Andre, currently the head of governance at Safe.
I'm a lawyer by training, and that was actually my start in Web3: I'd been advising a lot of corporations, a lot of it in the regulation sphere.
With regard to what role privacy plays for Safe, or for SafeDAO, I think there are two sides.
One is Safe as a product, which generally tries to collect as little data as possible; there are some very exciting things happening in the ecosystem around privacy and smart accounts.
And when it comes to DAOs, I think that we by default tend to have everything transparent.
And the hot take here is that privacy is something that needs to be upheld and does not come by itself.
It didn't come naturally in the old world, when people just raised their hands.
It always needs a technical implementation to make things private, and then it needs to be upheld; it doesn't come by itself.
Your background is the one I'm least familiar with, Andre, but I love the lawyer's take on this.
I can't wait to have that discussion.
Actually, let's test Auryn first.
Auryn, have you sorted your connectivity issues?
Are you available to quickly intro yourself and add your background on DAOs and privacy?
So why don't we do Willy first?
Willy, why don't you jump in, if that's good?
Yeah, I'll stall for you, Auryn.
GM friends, thanks for hosting.
My name is Willy, and I'm head of decentralization at the Fox Foundation, which is the foundation that supports ShapeShift DAO in its mission, and I also contribute to a couple of other DAOs.
And one thing that I've learned from working with DAOs is that people are generally pretty nice and don't want to hurt each other's feelings, which is good sometimes, but it can also lead to some challenges, I think.
For example, one thing I've noticed in a lot of DAOs is that it's generally easier to start something than to end it, and especially it can be easier to hire someone than to fire someone.
At ShapeShift, we have workstream leaders, and workstream leaders have the ability to
hire and fire within their workstream, so it's not really problematic there, but workstream
leaders are hired by the community.
And when you don't have a CEO or a board to make the decisions and the decisions are made
by the community, I think a lack of privacy can actually lead to a lack of candidness
and even a lack of transparency, which can lead to a lack of efficiency.
So yeah, I believe community members should be able to vote without fear of backlash for
expressing their true opinion.
And we've been using Shutter Network for a while, which has been great for shielded voting, so that people can't see the results of the vote while a proposal's in progress.
And I think that's great for reducing the bias people can have when they see the ongoing results.
But for anyone that's used shielded voting, you know that the voting results still get revealed at the end.
So I'm grateful for projects like Fractal and Shutter that are working on the hard problems to solve the next step, which is enabling the voting results to truly be private. I think that's really necessary for DAOs to be able to operate at the same level of efficiency as centralized organizations.
Yeah, man, we've spoken a lot on this topic, and your depth of experience and your candidness in talking about the pros and cons you've seen of transparency have really been a marvel for me.
So can't wait to dig into that with everyone here.
And I just got a Telegram message saying that you're good to go, Auryn.
The question, if you didn't hear it the first time, was a quick intro of your role, how it relates to DAOs and privacy, or both, or neither, and what hot take you maybe have on the topic as you come into this call.
Sorry about the delay getting connections sorted, but I think I'm good to go now.
I'm Auryn from Gnosis Guild.
We put out a whole bunch of different tooling for the DAO ecosystem.
A lot of it is modules that extend, mostly, Safe, to essentially add functionality, particularly for DAO-like use cases, but extending across the full range of on-chain entities.
I personally have been, I guess, on a pretty long-term mission to try to use and to help
enable secret ballots, private voting in the Ethereum ecosystem.
I don't know if it's a hot take, but I think the lack of properly receipt-free voting is one of the biggest constraining factors for using DAO-like technology in larger and more critical use cases.
I think until we can properly solve for that, the technology will essentially remain limited.
Solving for that really makes it applicable to a much broader scope of potential use cases.
I guess I should be clear there when I say receipt-free, I mean, this is the quality
whereby you cast a vote and then are unable to prove how you voted to anyone.
In doing so, if your system has this quality, then I think it's much more resilient to collusion
and coercion in all of its many forms, a really necessary feature for robust governance.
I didn't know when I first reached out to you about the topic of privacy that you had such experience at clr.fund and MACI, which are to do with the topic of receipt-freeness.
So definitely going to ask you to dig into that.
Those are also some awesome hot takes, guys.
I've got so many comments, but I'm sure you each have comments on each other's work there.
Before we dive into what I think is the most common topic, voting privacy, which will be the first big dive we do, and where I imagine we'll spend at least a third of our time today given how topical and top-of-mind it is for all of us, I would just like to take the high-level view and maybe play devil's advocate to everyone's hot takes here.
When I was at ETHDenver in 2023, I heard Rolf from MetaCartel and Hydra talking about Wendell's work.
One of the quotes I really liked was that one of the best parts about DAOs is that they give transparent, continuous auditability of decision-making.
So, yeah, I mean, you've already hinted at it, but maybe we could dig into this.
Auryn, let's start with you.
Do you agree with that statement?
I'll repeat it again: transparent, continuous auditability of decision-making is one of the things that makes DAOs so powerful.
Why is privacy important, if that's the case?
I just want to spend a little bit of time on the theory and then we can dive into private
voting as an application.
I think this is a really common kind of knee-jerk reaction to the idea of secret ballots: if it's private, if we can't see the inputs, then how do we audit?
How do we ensure that the output corresponds to the input, that the output is actually a result of legitimate input into the system, and that it's not being maliciously altered in some way?
I think this is where the kinds of technologies that we have access to now are really
interesting because we can have both qualities in one system.
So MACI is a really great example of this: because the tally is calculated using a zero-knowledge proof, we can cryptographically verify that the output is the result of the inputs without actually knowing what the inputs are.
And because of the way the registration process works, we can guarantee that only users that
we've allowed to register that have kind of made their way through the sign-up gatekeeper
are allowed to create legitimate inputs to the system.
And I think, yeah, the concern when we're talking about auditability is that we want
to make sure that the system is behaving correctly, behaving as intended.
And so the way that we've traditionally done that in voting systems is to have, I should
say, the way that, say, a secret ballot is typically implemented at, say, nation-state
scale governance is to have very strict checks around when you're handed your ballot paper,
very strict checks in terms of who is putting a ballot paper into a ballot box, and then
very strict checks around the counting of those ballots.
By kind of divorcing all of those steps, we kind of create a receipt-free type scenario
in a very kind of analog way.
And so in that way, we can kind of have reasonably strong guarantees that the inputs are legitimate,
and we can count all of the ballots manually and kind of double-count and triple-count
and kind of recheck to ensure that the outputs reported correspond to the legitimate inputs
that we're given, the legitimate votes that we've cast.
With the use of zero-knowledge proofs, in the case of MACI, for example, we can have similarly strong guarantees without having to reveal the inputs to anyone but one trusted user, the coordinator.
And so, yeah, the point there again is that we can have this quality of auditability: we can have very strong guarantees around the outputs being the legitimate results of correctly cast inputs without having to actually know who cast which inputs.
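The MACI-style structure described above, a sign-up gatekeeper, ballots readable only by one coordinator, and a publicly checkable tally, can be sketched as a toy model. Everything here is an illustrative assumption, not the real protocol: MACI uses on-chain sign-ups, public-key encryption to the coordinator, and a zk-SNARK where this sketch uses a plain hash "proof" and a shared symmetric key.

```python
import hashlib
from collections import Counter

def xor_pad(data: bytes, key: bytes) -> bytes:
    # Toy cipher (NOT secure): XOR against a hash-derived pad.
    pad = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, pad))

def encrypt(vote: str, key: bytes) -> bytes:
    return xor_pad(vote.encode().ljust(32, b"\0"), key)

def decrypt(ct: bytes, key: bytes) -> str:
    return xor_pad(ct, key).rstrip(b"\0").decode()

class Coordinator:
    """The one trusted party that sees individual ballots."""
    def __init__(self) -> None:
        # Real MACI: voters encrypt to the coordinator's *public* key;
        # the toy collapses that into one shared secret.
        self.key = b"coordinator-secret"
        self.registered: set[str] = set()

    def register(self, voter_id: str) -> None:
        # The "sign-up gatekeeper": only registered voters can
        # create legitimate inputs to the system.
        self.registered.add(voter_id)

    def tally(self, ballots):
        votes = [decrypt(ct, self.key) for vid, ct in ballots
                 if vid in self.registered]
        result = Counter(votes)
        # Stand-in for the zero-knowledge proof: a hash binding the
        # published result to the exact ballot set processed, so
        # observers can check consistency without seeing any vote.
        transcript = repr(sorted(ballots)) + repr(sorted(result.items()))
        proof = hashlib.sha256(transcript.encode()).hexdigest()
        return dict(result), proof

coord = Coordinator()
for vid in ("alice", "bob", "carol"):
    coord.register(vid)

ballots = [
    ("alice",   encrypt("yes", coord.key)),
    ("bob",     encrypt("no",  coord.key)),
    ("carol",   encrypt("yes", coord.key)),
    ("mallory", encrypt("no",  coord.key)),  # never registered: ignored
]
result, proof = coord.tally(ballots)
print(result)  # {'yes': 2, 'no': 1}
```

The public bulletin holds only ciphertexts, the tally, and the proof; in real MACI the hash stand-in is replaced by a zk-SNARK, so even the coordinator cannot publish a tally inconsistent with the ballots.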
I'm going to jump in because some folks may have gotten a bit lost in just the last bit.
To clarify: the reality of it is that we can have both privacy and verifiability.
Yeah, I remember when I first heard about novel cryptographic methods like zero-knowledge proofs and homomorphic encryption, I literally thought it was magic; being able to prove something without seeing the underlying data is fantastic.
Willy, I saw you throw up a big 100% emoji.
For those people on the call who maybe aren't as aware of the different types of voting, could you quickly define, like you did for me, the difference between shielded voting, public voting, secret ballot voting, and the ones Auryn talked to?
And then just add a bit of color to what you alluded to already around some of the challenges that each of those maybe has in turn.
And I thought Auryn just gave such a great overview there of the importance of private voting, and of how people might not expect private voting to actually bring more transparency, when in fact it's hard to have transparency when votes are public.
So yeah, I think the original way most DAOs launched was with all the votes public, basically.
And that's because solving these problems is hard, right?
And we've come a really long way with zero-knowledge proofs in just the past few years.
So it's finally becoming feasible, although still difficult to build, to have the identity of the voter hidden while the community can still verify the vote, which is essential if you want to have trustless governance.
So lots of DAOs, probably the majority, just have completely public voting.
I think a lot of DAOs have learned from that that it can bias the results.
And it's the same reason that nation states have private voting.
And also, even just thinking about classrooms in school, right: when you're going to do a vote, they tell everyone to put their heads down so that you can't see everyone's answer.
But we haven't had that yet in DAO governance.
So Shutter Network crushed it with launching shielded voting, which enabled DAOs to basically keep the results of the vote hidden during the proposal voting period to mitigate the risk of
people being biased when they see the results. And it's pretty awesome to implement, like,
I know sometimes it can be tough if you're a proposer, or if you care a lot about a
proposal and you can't really see until the end of the voting period if that proposal is on track
to pass or not. So it takes a little bit of getting used to. But I think overall, we haven't
really had any complaints from the community about it. And I think overall, it's a net
positive for DAOs. And that's why I think we're seeing a lot more DAOs adopt Shielded Voting.
But still with Shielded Voting, once the proposal concludes, everyone can see the results of the
vote and what each address voted for. So again, when you have contentious proposals
or any kind of proposal where someone might not feel at liberty to express their true opinion
for fear of backlash or for fear of what people would think, which I think is, again,
particularly important when you have a DAO with workstream leaders, basically. Workstream leaders,
at least in ShapeShift, have quite a bit of power and authority. And you can imagine if you're a
member of a workstream, let's say you don't think your boss is the best fit to lead your workstream.
It's pretty scary to try and express that opinion, basically. And similarly, even contributors on
other workstreams and stuff might not feel at liberty to express that. And it's obviously a
problem. So just like Auryn was saying earlier, and I think others had the same vibe: that
you really can't have a DAO operating at its optimal efficiency if the voters, the governors,
don't feel at liberty to express their true opinion. So yeah, very much in favor of implementing
private voting once you guys solve those hard problems.
Yeah, definitely hard problems. I love all of this. And because we all believe this is the case, and I'm super on board with everything that's been said, at the same time I want to keep playing the devil's advocate role here, because I have heard at times that the inability to track quorum, for example, during a shielded vote can be annoying, and I've been doing my own research into this area. I just want to bring you in, Andre. I did notice that Safe did consider shielded voting; I was looking at your forum discussion, if I'm not mistaken. And I don't believe that you use it right now, but you did consider it. Could you maybe talk through how you have looked at
potentially implementing shielded voting in your Snapshot votes? Maybe also whether you've considered using any sort of technologies to move any on-chain voting into privacy territory, and could you maybe talk to both sides, the pros and the cons here? And then maybe, Luis,
because you were originally in MEV before. Yeah, absolutely. And I think there were already some
we talked now about like the pros of privacy and of shielded voting. And I kind of also wanted to
bring in a little bit like the other side saying that privacy, per se, can go against accountability.
And we had like examples here already, like nation states and or like classrooms where
you vote privately. But we also have the other way around, we do have on nation state levels,
your delegates that usually vote publicly, not during the ballot. I think that is almost like a
no-brainer, that you shouldn't have to be able to see different signaling during the vote. But even
now, we've been talking about keeping it private afterwards. And even like voted delegates and
nation state, they usually the votes are public and together with like something like a lobby
register, this information becomes very important. It becomes important to see that the delegate that
you voted in, how they voted on different matters, the same way as with in capital markets with large
corporations. Usually votes are also public. And it just shows that we don't have it will depend
from doubt to doubt. We don't have a common understanding. What a doubt is, is down more
like a nation state, meaning that token holders vote on their own behalf, usually with their own
voting power, with their own interests. Do delegates vote in the interest of token holders
or in their own interest? Is there a need for an accountability afterwards so that token holders can
are able to see how the delegates voted? We're also not clear yet if it's something like proxy
voters when it comes to corporations. So I think the answer to that is one, I think we said it
right at the beginning, is having the choice. So there should be the choice on down level if it's
more like, if it has more a delegate structure that delegates vote on other behalf or on their
own behalf, maybe it can also be dependent on the type of proposal that is for something I think
when you brought up shielded voting in the context of safe-doubt, that it can also be
can be implemented for different proposal types. For instance, when it was brought up,
I think there were not a lot of votes yet. So it wasn't more like, hey, let's try it out,
this shielded voting. There might be something, for instance, Howard also said here,
that comes to voting on working group council members, if it comes to maybe resource allocation,
that a shielded voting that goes beyond the voting process, but also keeps the result or keeps the
individual's choice shielded, can be of importance so that people don't feel like they either need to
abstain or not vote at all to have their true voice heard. But I think, yeah, just to wrap it up,
I think there is counter examples when it comes to delegates in nation states that are public
because of accountability. Same is for corporations, for publicly traded corporations, where proxy
voters can vote on or vote on behalf sometimes of their shareholders, their directed proxy voters.
And I would say it comes down to choice. And even maybe so granular at one point
that some delegates open it up and are able to prove how they voted and some don't.
Wow, really well put. I'm a big believer in modular privacy, for the opt-in and that level of specificity. So thank you for that; that was awesome. And Luis, I think you're the perfect person to follow up on that. I'd love to hear a bit about the journey that brought you to shielded voting from MEV and malicious attacks, and then also a little bit of what you've been hearing on the ground, to Andre's point: which sorts of DAOs is this really working for, and for which sorts of DAOs does this maybe come later in the process? And then we can talk a little bit about secret ballots as well.
Yeah, thanks. So I just
wanted to say, as for Andre's part, I think this is really interesting. I hadn't considered this accountability topic that much, and I just wanted to reiterate that point about choice and modularity. Because if you take accountability into consideration, then there's really this potential need for something that has tally privacy, but then gets decrypted, so people can be held accountable after the fact, right? But then a lot of other DAOs, I think, will be more interested in indefinite tally privacy, where it stays encrypted. And then maybe some others don't want it at all, or are more interested in anonymity from the start. So I think this is a really good point about choice and modularity. And so yeah, but the question was how did we get to shielded voting? So as I said, Shutter is originally this kind of encrypted mempool,
MEV solution. And that's what we're still pursuing as the primary use case, currently working with Gnosis Chain, in the OP Stack ecosystem, and with some other actors in that space. But then we were introduced to Snapshot with this idea, actually to Snapshot and Kleros at the same time. And the idea there, I think it was originally with the Gnosis team actually, and the idea from them and us was a little bit about having this DKG and this special encryption mechanism as a more generalized timelock encryption slash commit-and-reveal primitive. So this could be a gadget that can encrypt something for a predictable amount of time and then forcibly decrypt it. The thing about the keypers is that there's no free option not to decrypt, so it's pretty safe and pretty assured that it will be decrypted, and it doesn't need a second transaction, such as, let's say, the ENS commit-and-reveal when you register a new name. So it came up in this discussion that it could be this more generalized timelock encryption, commit-and-reveal gadget. And in this discussion, we were introduced to Kleros and Snapshot, who both had these issues. For Kleros, it's actually extremely obvious why they would need a commit-and-reveal scheme, because they have this incentive scheme on top of the juror voting, which incentivizes voting with the majority. So if the ongoing results are public, then the whole game theory is broken, because everyone would just follow whatever the leading vote is, because you're earning more money, essentially, when you vote with the majority. So it's super obvious to implement it there, and we're actually working with the Kleros team as well currently, but Snapshot was the easier and more immediate fit. That's how we originally got started with Snapshot, yeah.
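The "no free option not to decrypt" gadget described here can be sketched as a single-process toy. The names and mechanics are assumptions for illustration: real Shutter derives epoch keys via a threshold DKG among many keypers and publishes an epoch public key in advance, rather than the single hash-derived key used below.

```python
import hashlib

class ToyKeyper:
    """Stand-in for a keyper set: derives a per-epoch key and only
    releases it once the epoch's deadline has passed, so every
    ciphertext is forcibly decryptable after the deadline with no
    further action (no second transaction) from the sender."""

    def __init__(self, master_secret: bytes) -> None:
        self._master = master_secret

    def epoch_key(self, epoch_id: str) -> bytes:
        # Real protocol: an epoch *public* key is known in advance;
        # the toy collapses encryption and decryption into one key.
        return hashlib.sha256(self._master + epoch_id.encode()).digest()

    def release_key(self, epoch_id: str, deadline: float, now: float) -> bytes:
        if now < deadline:
            raise PermissionError("epoch still open: key withheld")
        return self.epoch_key(epoch_id)

def seal(plaintext: str, key: bytes) -> bytes:
    # Toy cipher (NOT secure): XOR against a hash-derived pad.
    pad = hashlib.sha256(key + b"pad").digest()
    return bytes(a ^ b for a, b in zip(plaintext.encode().ljust(32, b"\0"), pad))

def open_sealed(ct: bytes, key: bytes) -> str:
    pad = hashlib.sha256(key + b"pad").digest()
    return bytes(a ^ b for a, b in zip(ct, pad)).rstrip(b"\0").decode()

keyper = ToyKeyper(b"dkg-output")
deadline = 100.0

# Voter encrypts once; no reveal transaction is needed later.
ct = seal("yes", keyper.epoch_key("proposal-42"))

# During the voting window the key is withheld: the tally stays private.
try:
    keyper.release_key("proposal-42", deadline, now=50.0)
except PermissionError:
    print("vote still shielded")

# After the deadline, anyone can decrypt every ballot.
key = keyper.release_key("proposal-42", deadline, now=101.0)
print(open_sealed(ct, key))  # yes
```

Contrast this with an ENS-style commit-and-reveal, where the committer must send a second reveal transaction and retains the free option never to reveal.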
So yeah, and maybe to talk a little bit about why it fits: it sounds super unrelated, right, this MEV topic and this shielded voting topic, but I think it's not really. It's really both about information asymmetry, in that in both cases there are actors and there's information asymmetry. And the cool thing is, with encryption, we can level the playing field and have more information symmetry, both in front-running and in voting.
Love that, man. It's such a fascinating backstory.
Just before we move on from this idea of accountability that we touched on, I think it's a really important piece. And Auryn, I've got to tag you in here, so yeah, please go ahead, especially when it comes to on-chain voting, which is kind of your area of specialty as well.
Yeah, absolutely. And I think this is a super important point. We mentioned it at the start, talking about auditability. I mean, one of the reasons that you would want to be able to audit a vote is for accountability purposes; the other is obviously what we talked about earlier, for correct execution purposes. But then, yeah, as someone delegating to a delegate, someone who has voted in a representative, being able to audit that your representative has truthfully acted how they have represented themselves to act in the system is, I think, at surface level, a desirable property.
But I would challenge that, and since you asked for a spicy take at the start, I think this might be my spiciest take here: I think there are very few, borderline zero, cases where that property is actually beneficial. And the reason I would say that is that
generally, the people that are best able to capitalize on that auditability on that
accountability are folks that would be kind of maliciously influencing the system.
And what I mean by that is that if I can know how you voted, then I'm able to effectively
coerce your vote, I'm able to bribe you or use some other form of coercion to influence how you
vote. If I cannot know how you voted, if it's not possible for you to prove how you voted, then that coercion attempt becomes much more trust-based: I have to trust your word when you say you voted a specific way. And so, again, that auditability comes at a cost, essentially:
if you enable auditability at the individual level, at the delegate level, at the
representative level, then you open the system up to coercion and collusion. And I think those
coercive actors are always going to be the ones that are more legitimately able to use that
auditability in a way that would influence outcomes. And so I think my suggestion for people
designing, implementing, or using voting systems is: if you have the option to use something that is legitimately receipt-free, the accountability should be at an outcome level as opposed to an input level. So if you delegate to someone and the outcome of a vote ends up going
differently to what you expected, you cannot know the input, but you can know what the output of
the vote was. And so you can say, maybe my delegation was incorrect, maybe I should try
delegating to someone else or casting my vote directly, as opposed to expecting to be able to
hold your delegate directly accountable, essentially hold delegates accountable to
the outcomes that they produce, as opposed to the inputs that they put into the system.
Again, because if you can hold a delegate accountable to its input,
so too can someone who's trying to coercively manipulate the system.
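This argument can be reduced to a tiny payoff sketch. The payoffs and behavior are illustrative assumptions, not a model of any real system: when a vote is provable, a briber can condition payment on compliance; when it is receipt-free, the rational voter claims compliance, keeps the bribe, and votes their true preference.

```python
def rational_vote(true_vote: str, briber_wants: str, receipt_free: bool) -> str:
    """Best response of a self-interested voter offered a bribe."""
    if receipt_free:
        # The briber cannot verify compliance, so the voter claims
        # compliance, pockets the bribe, and votes honestly anyway.
        return true_vote
    # Provable votes let the briber condition payment on compliance,
    # so the coerced voter complies.
    return briber_wants

true_prefs = ["no", "no", "no", "yes", "yes"]  # honest majority says no

# Auditable individual votes: the briber flips the outcome.
bribed = [rational_vote(v, "yes", receipt_free=False) for v in true_prefs]
print(bribed.count("yes"))  # 5

# Receipt-free votes: the same bribe budget buys zero influence.
free = [rational_vote(v, "yes", receipt_free=True) for v in true_prefs]
print(free.count("yes"))  # 2
```

The point of the sketch: the bribe is still paid in both worlds, but under receipt-freeness the tally matches the voters' true preferences.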
Maybe just to jump in here, because I do have a different take on this. I think if it really comes to coercion, having it private doesn't completely get rid of it. For instance, if there are two parties who try to bribe one person who votes, then it's in the interest of the person who votes to take both bribes. Privacy will always be a way to shield the information from one party; you can still open it up to the other. I don't think we would be able to completely get rid of coercion: the one who got bribed will somehow prove it to the one he got the bribe from.
Right, but that's the whole point. It is possible to design systems where you cannot prove your input. This is the whole problem that MACI attempts to solve, and it does so with reasonably strong guarantees: I can create a proof to show you that seems valid but in reality is not. Any kind of bribery or collusion attempt there becomes a really trust-based relationship. In that case, in that system, yes, my optimal behavior is to accept your bribe but vote how I would have even if you had not bribed me, because you cannot prove how I voted. I can tell you I voted the way that you wanted me to, vote how I would have anyway, take your money, and you get no influence on the system.
I just want to add the aspect that the
off-chain component is very strong; there will always be ways to prove it. Also, I think that if we were to say there is no room for public voting, it would cut off a lot of what can develop around delegates. If one looks at a parliament, there are strong reasons that certain elected delegates' votes are kept public, to keep them accountable. And if we make both things private, meaning whether somebody voted and how they voted, I think we would cut off a lot of arrangements where, for certain topics, you delegate to someone, and afterwards, if you want to hold them accountable, you change your delegate. It would kind of be delegate-and-forget. And I think this would be limiting, if we put it so strongly.
I certainly agree that it's limiting, but again, I think it's limiting for a good reason. And the optimal behavior there, again, is to redelegate based on outcomes: hold your delegates, hold your representatives, accountable based on the outcomes they produce, as opposed to the inputs they put into the system. I'm loving this. I do want to
talk about other areas of privacy, and we are getting close to time, but Willie, just before we close up, do you have anything to share on this? Because it's such a meaty debate. Yeah, I love the points on both sides. I think, generally, I do lean towards optionality. I think there can definitely be a risk of coercion if the delegates are required to keep their votes public. But there can be use cases for it, right?
So I like what Lewis was saying earlier about privacy by default, but still having a pathway for people who do want to vote publicly to be able to do so. I think that's generally the best of both worlds in a DAO: unless there are very strong arguments to put people into a closed box, keep it open, give them the option to vote publicly or privately. I think that in general will lead to the best outcomes and give people the most freedom.
Love that, man. Yeah, super. This is all awesome. And it makes me think a lot about something I hinted at at the start: when it comes to privacy and DAOs, I think we all jump to voting as the go-to. And what we generally mean by that is voting anonymity between different DAO members, as we discussed today, whether that's shielded voting during the vote itself, some sort of permanent secret ballot that remains encrypted, or even this receipt-freeness that Oren has talked to today.
Now, what I'd love to push all of our thinking on, and actually, what I'm trying to push a lot of people in my network to think about, is: where else can privacy interact and intersect with DAOs? There are lots of other areas in the lifecycle of a proposal; there's the discussion of a proposal, and you mentioned the discussion and submission of the proposal when we spoke about this. There are also other areas like payroll, for example: how could that be private, especially if people are getting paid on chain? Do you need to show their entire address as well as what they're receiving? Treasury, and many, many more. And the other
aspect I'd like to think about is: so far we've discussed member-to-member privacy. What about privacy from the organization, the DAO itself, to the rest of the world? Should that always be transparent? Are there cases where it shouldn't be? I'm going to open this up; I'm not going to pick anyone. Whoever's interested in either of those topics, please jump in. We'd love to cover the use cases to be found on this topic. I just want to give another
example of where this would be really cool and needed: airdrop discussions, right? Having a DAO decide on a future airdrop in public is kind of pointless, because then everyone sees who is going to be allocated what, so it's not going to be such a cool surprise airdrop. This would be another nice example, and a cool use case. Ideally, there would be some system, an FHE system, where people can input their ideas about an allocation in encrypted form, the system would combine them, and the DAO could vote on them, maybe even in private, and thus come up with an allocation that is crowdsourced by the DAO, but that no one saw in public before it was actually announced. I just wanted to add this potential use case, or area, where privacy would be useful.
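The encrypted-allocation idea above can be illustrated without full FHE by using additive secret sharing as a simpler stand-in: each member splits their proposed allocation into random shares, so no single aggregator ever sees an individual suggestion, yet the combined result can still be revealed at the end. All names and numbers here are invented for illustration.

```python
# Toy sketch of privately combining airdrop-allocation suggestions.
# Uses additive secret sharing as a simple stand-in for FHE: each
# member's proposal stays hidden; only the aggregate is revealed.
import random

MOD = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_shares):
    """Split value into n_shares random shares summing to value mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three members each propose how many tokens one recipient should get.
proposals = [100, 250, 400]
n_aggregators = 3

# Each member sends one share to each aggregator; no single aggregator
# (or outside observer) learns any individual proposal.
aggregator_inboxes = [[] for _ in range(n_aggregators)]
for p in proposals:
    for inbox, s in zip(aggregator_inboxes, share(p, n_aggregators)):
        inbox.append(s)

# Each aggregator publishes only the sum of its shares; combining the
# partial sums reveals the total suggestion and nothing else.
partial_sums = [sum(inbox) % MOD for inbox in aggregator_inboxes]
total = sum(partial_sums) % MOD
print(total)                     # 750
print(total // len(proposals))   # 250 -- e.g. the average as the outcome
```

A real deployment would need authenticated channels and robustness against dropped or malicious shares; the point here is only that the aggregate can be computed without any suggestion ever appearing in public before the announcement.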
I think there's a fundamental limitation for DAOs, at least DAOs with very porous borders and very permissionless entry. There's a fundamental limitation in their ability to keep secrets, right? Just because of the broad scope of their membership: if you can relatively frictionlessly join a DAO and gain access to its secrets, then any external actor who wants to know what the DAO is thinking, or what's going to be part of its secret discussion, can just jump through whatever the minimal set of hoops is to gain access. And so I think that because of that structure of DAOs, because of their typically quite permissionless nature, and because of the often very public deliberation and proposal processes, they have a pretty fundamental limitation on how they can keep secrets. That will probably make some traditional business models and some traditional operational models just fundamentally incompatible with how DAOs operate. Business models that revolve around trade secrets, around maintaining secrecy over some IP until it's been put into some patent process or something like this, are, I think, likely to be pretty fundamentally incompatible with DAOs indefinitely. But there are a bunch of other places within DAOs where I think there's potential for privacy and secrecy to be valuable. In particular, I think about membership sets: there's the potential to create organizations where the full membership set is not known or cannot be known, or where the vote-weight distribution cannot be known. Things like this, I think, are super interesting and well worth exploring.
I agree with all the statements before; I just want to add a different perspective on it, which is more the regulatory angle. I think that as long as we don't have everything self-enforcing and self-implementing, there will always be people who are needed to do certain actions. And as you brought up the whole lifecycle of a proposal in a DAO: it starts with somebody who authors it, with everybody giving feedback and changes, up to, whichever voting platform you choose, somebody having to upload it, et cetera. And I think we are in a regulatory environment where regulators will at some point look at that whole chain. IOSCO, for instance, recently published DeFi recommendations, and Recommendation 2 addresses who could be a responsible person; there's a whole list of different actions that could make you responsible. That's a complete shift away from what securities regulators usually did, which was looking at who is owning, managing, or operating something, to something very fuzzy, like "operating" or "responsible," which comes more from AML laws. So I think that in order not to have things depend on somebody uploading it, for instance, we should actually look at the whole chain. And as long as it is technically not possible to have it all completely self-implementing, I think there's a need for privacy from that angle as well.
Well said, Andre, I was going to add a very similar point, so big plus one. I think, ideally, contributors to DAOs should have optionality for privacy at every point in the lifecycle. There shouldn't be any fear around voting, any fear to propose an idea or to give feedback on an idea, or really just to participate in the DAO in any capacity. And yeah, I think you hit the nail on the head: right now there is a lot of uncertainty around the regulatory environment and any liability that DAO participants can inherit just for contributing, or even just for voting. So privacy-enabling tools for DAOs are definitely critical to help mitigate that. One thing I'll say is that if you are going to
implement privacy tooling in your DAO for things like forum discussions or Discord chat, it's probably a good idea to have a code of conduct, as well as a means of enforcing it, because that can definitely open up the door for trolling or harassment and things like that. So you want to be careful about that and be proactive. And another thing that we have at
ShapeShift in the meantime, until we have this privacy-enabling tooling, is an anonymous feedback and question form, where people can submit feedback or questions and have them raised at a community call, or submitted directly to a workstream leader or to the group of workstream leaders. It actually doesn't get used much, which might be a sign that there isn't a strong need for it in the community right now. But it's not perfect, right? There's still, I think, a fear, even if you submit something through that form, that it could be traced back to you, or that people could guess who submitted it. So I do think that, ultimately, governance is what decides things in the DAO. Until we have at least the ability for private voting, there are going to be people who have fear at different points in the lifecycle. And that fear is not ideal for an organization to operate optimally.
Yeah, this is so interesting, because, like what you touched on at the start, sometimes people are too nice in DAOs about ending things, like ending a role, or even something like, "I'm glad we had a good go at this workstream, but we're going to close it down." That can be a really hard conversation in a DAO when it's done publicly. But then you weigh that up against what you just said: I don't know how your form is set up, but sometimes privacy in our space doesn't get the pickup that maybe all of us on this call would want it to. So I guess something we've been thinking about at Decent and Fractal is: what is the inflection moment, that big pivot point, that really pushes privacy to the front of discussions, not only in DAOs, but across crypto? So yeah, I'd love to maybe close up there. I haven't had a chance to look at the comment section, but while everyone's answering this, if anyone in the audience does have questions, please feel free to throw them in. But yeah, maybe we can talk to the point of: what is that pivot point, or maybe some predictions around what it could be, that could really propel privacy to the top of conversations going forward. Why don't we start with you, Andre,
and then go around. I think the pivot point will probably come if we don't introduce privacy and end up with kind of the worst of all worlds. I'm not sure what this pivot point looks like, but for instance: if we take away too much responsibility from tokenholders and delegates, pushing it up to working groups that decide certain things, then it will not feel like privacy matters so much. All of the things we talked about, for instance stopping certain workstreams, are decisions that are quite important for the whole DAO, and they are usually voted on at the whole-DAO level. If we push too much up to that working-group layer, privacy will probably not feel so needed. So yeah, I think it's always worth asking whether one has to introduce these things. And as I said right at the beginning, having a modular approach, experimenting more at a small scale for certain votes, and learning from other DAOs and seeing how it worked out, that's probably the most important thing. Awesome. Thanks, man. Let's go with you next, Lewis.
Sorry, that was a glitch. Yeah, no, I think everything has been said; I just want to reiterate some of the earlier points. I think this element of choice is the most important: that we'll have all the privacy options, for anonymity, for static privacy, for kind of ongoing, indefinite privacy of storage, that DAOs can then pick from and choose the level of privacy that they want to have. And then I'm just excited to keep working in the space and hopefully work on some of these things. Awesome, thanks, man. Oren, over to you.
Yeah, so can you refresh me on the question? I was distracted. No worries; I was asking about what propels privacy to the front of the conversation. And I really wanted your prediction on this, because you've probably been tackling this privacy question, especially around bringing MACI to more eyes, for longer than most of us, you know? So we'd love to get your take on maybe why it's challenging to get privacy the focus it needs, and what could be the game changer for that. I mean, I think the unfortunate reality for a lot of disruptive technologies, and
particularly a lot of technologies that challenge our assumptions about how systems work, is that the inflection point comes when something about the existing systems we use breaks and they stop behaving how we thought they would. And I think the thing that probably tips the balance towards privacy is some kind of large-scale exploit on the systems we're currently using, one that makes them behave in a way that is counter to how we would want them to behave, or how we expected them to behave. So, you know, a good example in the DAO
space is Curve's voting, right? Curve has this ve-tokenomics model, and a couple of years ago already it became really standard practice to essentially bribe veToken holders to vote in a particular way. There are a bunch of contracts deployed, and websites up and running, where you can legitimately just go and sell your votes on Curve, and the same thing for Moloch. And I think those kinds of things happening on a large scale, very obvious collusion and subversion of systems you're participating in, are the types of things that will ultimately end up spurring people to think more seriously about privacy. This is the kind of thing, I mean, I guess outside of Web3,
think about times when populations have been blacked out of their communication, you know, through a government shutting down communication networks to try to quell an uprising of sorts. I recall a few years ago an application, I think it was FireChat, a kind of mesh-networking chat application, becoming really popular very quickly, because there were a bunch of protests and riots, and the traditional communication channels got shut down by the authorities in that region. And so you don't realize that you need this kind of thing until the infrastructure
that you usually depended on stops functioning. And I think the same thing is probably true of
privacy. Once the systems we use for decision-making break down because of some large-scale influence subverting them, that's when people will realize, hey, it would be great if we had a private version of this that was not as susceptible to those kinds of influences. And yeah, again, it's an unfortunate reality that it usually takes us being burned to learn those lessons. And people like us building really hard infrastructure while it's still early days, I think, is also necessary. I completely agree; I put up the biggest 100% for that. There's an element of patience here, and it is unfortunate, but hopefully it will work out. Willie, do you have anything
burning on this topic? There is one last question from the audience that I just wanted to raise, but yeah, please go ahead if there's something you wanted to add.
Sure, I'll jump in, because I love this question, but I'll try to keep it quick. Yeah, I think, you know: how do you get people to embrace privacy and use the tools, when we all say we want privacy, and then new privacy tools come out and no one adopts them? I think privacy by default is the way. With shielded voting, for example, every voter now is using a tool that makes their vote private, simply because they don't really have the choice; or, I guess, the community decided at a high level to implement it, and now all those votes are private by default, at least during the voting period. But I also don't think necessarily that all DAOs,
I'm not going to say all DAOs should aim for privacy by default all the time. As we all know, there are so many different shapes and forms that DAOs can take, and DAOs will keep evolving. So for some DAOs, maybe it will make sense to force publicity; some DAOs might want everything to be private; and some DAOs may want to give the option. I think all of those can work, but ultimately the community should be able to decide. And currently, the community just does not have the ability to decide to implement truly private voting, because it doesn't exist. So I'm just glad that smart people are working hard to make it a reality. And I think that finally having this option available will be yet another unlock in DAOs' ongoing journey to become one of the top ways that communities of all sorts coordinate.
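The "private during the voting period" behavior described above can be sketched with a simple hash-based commit-reveal scheme. This is a simplification: production shielded voting typically uses threshold encryption instead, so voters don't need a second reveal step, but it shows the same property, namely that no running tally exists while the vote is open.

```python
# Minimal commit-reveal sketch of shielded voting: choices are hidden
# during the voting period and only revealed after it closes.
import hashlib
import secrets

def commit(choice, salt):
    """Hash the choice with a random salt so the commitment is opaque."""
    return hashlib.sha256(f"{choice}:{salt}".encode()).hexdigest()

# --- voting period: only commitments are published ---
ballots = {}       # voter -> public commitment
kept_private = {}  # each voter keeps (choice, salt) locally
for voter, choice in [("alice", "yes"), ("bob", "no"), ("carol", "yes")]:
    salt = secrets.token_hex(16)
    kept_private[voter] = (choice, salt)
    ballots[voter] = commit(choice, salt)

# During the period, observers see only opaque hashes -- no running
# tally to anchor on, no way to copy how the large holders voted.

# --- after the period closes: voters reveal, commitments are checked ---
tally = {}
for voter, (choice, salt) in kept_private.items():
    assert ballots[voter] == commit(choice, salt), "reveal mismatch"
    tally[choice] = tally.get(choice, 0) + 1

print(tally)  # {'yes': 2, 'no': 1}
```

The drawback of commit-reveal, and one reason threshold encryption is preferred in practice, is that voters who never reveal can stall or skew the tally; encryption schemes let a coordinator or keyper set decrypt all ballots at once after the deadline.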
Without saying too much, hopefully more to come in this space very, very soon.
But otherwise, thank you so much, everyone, for today. Wasteland, I saw your question. I also saw a question asking about the DAO privacy stack and what's out there. I'll post on both Twitter and Warpcast as soon as this is over with answers to both those questions. And if anyone else here has answers to the questions in there, please post them on your socials to get responses. I don't want to hold people up. And I do want
to highlight, one, how grateful I am to have everyone on this call. These are some really powerful names in the industry talking about really hard topics. This is amazing, so thank you. Secondly, Willie, Oren and I, at minimum, will be at ETHDenver running this panel again, IRL. We'd love you all to come down, and we'll share more information on socials for that soon. And Andre, I haven't spoken to you about this yet, but Lewis can't make it to Denver; if you're keen, we'd love to have you there as well, and we can continue this discussion in person, but we'll talk offline. The last piece I want to
highlight is that I see "a man with wings" listening; he's involved with DAOstar and Metagov. One thing I spoke to them about is potentially mapping the DAO lifecycle as an open-source research project that I'd like to work on, and also highlighting, within that DAO stack, where privacy is most important, where it should be modular, where it should be by default, and all these kinds of areas we've discussed. So if anyone's interested in that, we'd love to discuss more, as well as, obviously, as I teased before, what Fractal's building under the covers with Oren, you, Shutter, and many others. But otherwise, enough plugging from my side.
Thank you so much, everyone. Do we have any final words, or should we just say our thank-yous and goodbyes, since we're two minutes over? Thanks for running the space, guys. This was a lot of fun. Thanks, Brent. See you in Denver.