A Social Choice Analysis of Optimism’s Retroactive Project
Funding
Eyal Briman1, Nimrod Talmon1, Angela Kreitenweis2, and Muhammad Idrees3
1 Ben-Gurion University, GovXS, Israel
{briman@post.bgu.ac.il, nimrodtalmon77@gmail.com}
2 GovXS, Germany
angela@tokenengineering.net
3 GovXS, Pakistan
idrees535@gmail.com
Abstract. The Optimism Retroactive Project Funding (RetroPGF) is a key initiative within the blockchain ecosystem that retroactively rewards projects deemed valuable to the Ethereum and Optimism communities. Managed by the Optimism Collective, a decentralized autonomous organization (DAO), RetroPGF represents a large-scale experiment in decentralized governance. Funding rewards are distributed in OP tokens, the native digital currency of the ecosystem. As of this writing, four funding rounds have been completed, collectively allocating over $100M, with an additional $1.3B reserved for future rounds. However, we identify significant shortcomings in the current allocation system, underscoring the need for improved governance mechanisms given the scale of funds involved.
Leveraging computational social choice techniques and insights from multiagent systems, we propose improvements to the voting process by recommending the adoption of a utilitarian moving phantoms mechanism [16]. This mechanism, originally introduced by Freeman et al. (2019), is designed to enhance social welfare (under the ℓ1 norm) while satisfying strategyproofness, two key properties aligned with the application's governance requirements. Our analysis provides a formal framework for designing improved funding mechanisms for DAOs, contributing to the broader discourse on decentralized governance and public goods allocation.
Keywords: DAOs, Computational Social Choice, Voting Mechanisms, Simulations
1 Introduction
The Optimism Retroactive Public Goods Funding (RetroPGF) initiative, managed by the Optimism Collective, a decentralized autonomous organization (DAO), is a prominent funding mechanism within the blockchain ecosystem.4 It operates within a broader economic framework designed to incentivize innovation and retroactively reward projects that have demonstrably contributed to the Ethereum and Optimism ecosystems. The initiative has undergone four funding rounds [25,10,18,11], with this study conducted in the context of the ongoing fifth round [19]. Unlike forward-looking grant systems, RetroPGF distributes funds ex post, rewarding projects based on their verified impact rather than projected outcomes. This approach, increasingly adopted in decentralized ecosystems, addresses challenges in evaluating contributions in real time.
4 https://retrofunding.optimism.io/
arXiv:2508.16285v1 [cs.GT] 22 Aug 2025
2 Briman et al.
To date, RetroPGF has distributed over $100M, with $1.3B earmarked for future rounds.5 Given the scale of resources involved, the governance structure of the funding process plays a crucial role in ensuring fair and effective allocation. However, we identify several limitations in the election system currently used, motivating the need for improved voting mechanisms.

Research Approach. In this work, we employ computational social choice to analyze and enhance the RetroPGF voting system. Specifically, based on formalized governance requirements derived from public discussions and direct engagement with the Optimism Collective, we propose a theoretically grounded and empirically tested improvement. Our analysis conceptualizes RetroPGF as a majority-based, ground-truth-revealing, token-based allocation process, incorporating domain-specific constraints from blockchain governance.

To address observed inefficiencies, we propose adopting the utilitarian moving phantoms mechanism [16], which balances strategyproofness with social welfare maximization under the ℓ1 norm. Our study consists of two primary components:
– A theoretical analysis of both the voting rules used in past RetroPGF rounds and the proposed alternative voting rules, along with their formal properties.
– A simulation-based evaluation of previous rules and an alternative rule to assess their practical performance under realistic voter behavior models.
1.1 Our Contributions
This paper advances the study of decentralized funding mechanisms, with a focus on the RetroPGF process in Optimism. Our key contributions are as follows:
– Formalization of Governance Requirements: We provide the first formal mapping of Optimism's ideological and practical desiderata into social choice criteria, encompassing both qualitative and quantitative dimensions.
– Comparative Analysis of Voting Mechanisms: We conduct a rigorous theoretical and empirical evaluation of existing and proposed voting rules within the RetroPGF framework.
– Proposal for an Improved Voting Rule: We introduce and analyze the moving phantoms mechanism, demonstrating its advantages in strategyproofness, fairness, and social welfare maximization.
1.2 Paper Structure
The paper is structured as follows. We begin with a discussion of the informal requirements stated by Optimism in its forums and in personal communication (Section 2). We then discuss related work (Section 3) and describe a formal model of RetroPGF (Section 4). Next, we discuss the voting rules used for RetroPGF (Section 5). Concrete evaluation metrics are presented in Section 6. We then report on our theoretical analysis (Section 7) and our simulation-based analysis (Sections 8 and 9). We conclude with a discussion (Section 10).
5https://optimism.mirror.xyz/nz5II2tucf3k8tJ76O6HWwvidLB6TLQXszmMnlnhxWU
Optimism’s Retroactive Project Funding 3
2 Application Requirements
Based on discussions from Optimism's public forums and direct communication with key stakeholders, we identified the specific application requirements for the RetroPGF process. The primary goal is to compare and evaluate the behavior of different voting rules in the context of these requirements. Our analysis is framed around key dimensions outlined in Optimism's governance documentation6 and tailored to the unique characteristics of this decentralized funding mechanism.

We note that these requirements reflect a blend of common-value elements, where all voters aim to reward impactful projects, and private-value elements, stemming from heterogeneity in voters' domain expertise, interpretation of "impact," and strategic considerations. As a benchmark, we highlight that if all badgeholders had fully aligned beliefs and no individual bias, a symmetric preference profile would emerge, resulting in unanimous allocations. We return to this benchmark in later discussion.
– Badgeholder (Voter) Responsibility: A key feature of the RetroPGF process is the delegation of fund allocation to a small group of certified badgeholders (voters). These individuals possess a reputation within the ecosystem and are selected by the collective to distribute tokens to projects based on assessed needs. Badgeholders are expected to represent the broader interests of the community while maintaining impartiality through a strict code of conduct, including conflict-of-interest protocols. Each funding round is targeted at a specific context. For example, in Round 6, badgeholders evaluated governance-oriented projects such as Delegation Analytics and Agora, while in Round 4 the focus included infrastructure and public goods more broadly.
– Majoritarian Decision-Making: The RetroPGF process emphasizes a majority-based voting approach. The goal is to capture the "ground truth" of which projects have contributed the most value to the ecosystem. Badgeholders play a pivotal role in aligning their token distribution decisions with collective goals. For instance, projects that receive broad support across badgeholders are presumed to reflect shared community values.
– Iterative Decentralization: A core principle of the system is its ability to adapt and improve through iteration. As the RetroPGF process evolves, decentralization becomes more prominent. This iterative learning mechanism resembles adaptive systems in social choice theory, where feedback loops (such as post-round reports, retrospective debates, and proposal design changes) help refine collective decisions over time.
– Equity and Balance: Optimism seeks to balance governance power, ensuring that financial influence does not overshadow community input. In Retroactive Project Funding, one person (or, more precisely, a pseudonymous wallet holding a badgeholder token) equals one vote, with equal voting power for all voters. This design aligns with social choice concepts of fairness, aiming to avoid plutocratic control and to promote outcomes that serve the ecosystem as a whole. For example, the cap on the number of badgeholders per funding round reinforces equal voice in decision-making.
– Impact = Profit: The RetroPGF mechanism rewards projects based on their demonstrated contributions to the community. The guiding principle is that "impact should be rewarded," and therefore projects creating ecosystem value should be financially supported. This principle parallels utilitarian and proportional notions in social choice theory. For instance, in Round 4, large allocations were awarded to projects like L2Beat and Gitcoin, reflecting their visible impact on infrastructure and community support.
6https://gov.optimism.io/t/the-future-of-optimism-governance/6471
These application requirements form the basis for our analysis of voting rules. In particular, we focus on how well various mechanisms perform with respect to efficiency, strategic resistance, and fairness under realistic voter models. While a symmetric preference scenario may lead to agreement on allocations, we argue that the presence of individual biases, reputational considerations, and information asymmetries necessitates a more nuanced modeling of badgeholder behavior.
3 Related Work
Research at the intersection of public goods funding, computational social choice, and blockchain governance has developed significantly over the past decade. Early models such as participatory budgeting [2] and quadratic funding [26] have been adapted to decentralized contexts, raising new challenges in designing fair, efficient, and strategy-resistant allocation mechanisms. More recently, retroactive funding systems have emerged as a novel paradigm, in which reward is based on verified impact rather than anticipated outcomes. This shift demands mechanisms that can robustly aggregate diverse preferences and uncover ground-truth contributions, a setting where social choice theory, particularly in its algorithmic and strategic dimensions, becomes highly relevant [8].
Public Good Funding. Public good funding distributes resources to benefit communities, with decentralized models such as quadratic funding [6,17,14] amplifying smaller contributions to ensure fairness. Participatory budgeting [2,15] allows voters to decide on resource allocation prior to implementation. RetroPGF, as used in Optimism, instead rewards projects based on their proven impact, representing a shift from ex-ante to ex-post evaluation.
RetroPGF and Portioning. Optimism's RetroPGF evaluates completed projects to reward those with the most community value [34]. This approach parallels the setting of portioning [1], where resources are divided after evaluating outcomes. In both settings, the role of the voter is evaluative rather than predictive, and the aggregation mechanism must navigate heterogeneous signals of impact.
Optimism’s RetroPGF Operating within a DAO framework, Optimism’s RetroPGF integrates
blockchain transparency and social choice theory [31], leveraging majoritarian decision-making to
align token distribution with perceived community value. The design space intersects with work on
blockchaingovernance[21]andcomputationalsocialchoiceindigitalsettings[20],raisingquestions
about fairness, robustness, and decentralization in large-scale decision-making.
Uncovering the Ground Truth. A central challenge in RetroPGF is to accurately aggregate badgeholder votes to reveal the true impact of projects. Prior work on elicitation and aggregation under uncertainty, such as Bayesian Truth Serum [28,29] and noisy preference models [7], offers tools for designing mechanisms that align collective outcomes with objective contribution measures. These methods are particularly relevant for blockchain-based voting systems, where preference aggregation must also resist manipulation while remaining transparent.
4 Formal Model
We formalize the token-based Retroactive Project Funding process as a voting-based budget allocation mechanism. Let $N = \{1, \dots, n\}$ be the set of voters and $P = \{1, \dots, m\}$ the set of projects. The total available budget is $B$, which we normalize to 1, ensuring that all allocations are expressed as fractions of the total budget. Each voter is allocated $c$ tokens, with the constraint that the total number of tokens does not exceed the budget, i.e., $c \cdot n \le B$.

A feasible allocation is a vector $a = (a_1, \dots, a_m)$ where:
$$a_p \ge 0 \quad \forall p \in P, \qquad \text{and} \qquad \sum_{p \in P} a_p \le B.$$

Each voter $i$ submits a cumulative ballot $X_i = (x_{i,1}, \dots, x_{i,m})$, where $x_{i,p} \ge 0$ represents the fraction of their tokens allocated to project $p$, subject to:
$$\sum_{p \in P} x_{i,p} \le c.$$

Since voters benefit from fully utilizing their allocated tokens, we assume:
$$\sum_{p \in P} x_{i,p} = c \quad \forall i \in N.$$

The final allocation $a$ is determined by a voting rule, formally defined as a function
$$f : (X_1, \dots, X_n) \mapsto a = (a_1, \dots, a_m),$$
which satisfies the budget constraint:
$$\sum_{p \in P} a_p \le B.$$
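To make the model concrete, the feasibility conditions above can be checked with a short Python sketch (all function and variable names are ours, not the paper's):

```python
import numpy as np

# Illustrative sketch of the formal model (all names ours): n voters,
# m projects, budget B normalized to 1, and each voter holding c tokens
# with c * n <= B.

def is_feasible_allocation(a, B=1.0, tol=1e-9):
    """Feasible allocation a = (a_1, ..., a_m): non-negative, sums to at most B."""
    a = np.asarray(a, dtype=float)
    return bool(np.all(a >= -tol) and a.sum() <= B + tol)

def is_valid_ballot(x, c, tol=1e-9):
    """Cumulative ballot X_i: non-negative entries summing to exactly c
    (voters benefit from spending all their tokens)."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(x >= -tol) and abs(x.sum() - c) <= tol)

n, m, B = 4, 3, 1.0
c = B / n                         # per-voter endowment, so c * n <= B holds
ballots = np.full((n, m), c / m)  # every voter spreads tokens evenly

assert all(is_valid_ballot(ballots[i], c) for i in range(n))
assert is_feasible_allocation(ballots.sum(axis=0), B)
```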
Preference Structure. Although the RetroPGF process aims to reward projects according to their objective impact, we assume that badgeholders may exhibit heterogeneous preferences, modeled via different cumulative ballots. This heterogeneity reflects several practical considerations:
– Epistemic diversity: Voters possess varying knowledge about the ecosystem and specific projects, leading to different interpretations of what constitutes impact.
– Domain affinity: Some voters may prioritize projects aligned with their expertise or values (e.g., governance vs. infrastructure).
– Reputational strategy: Badgeholders might consider not only what is best for the ecosystem, but also how their votes will be perceived within the community.

As a theoretical benchmark, we consider the symmetric preference case, in which all badgeholders submit the same ballot, i.e., $X_1 = \cdots = X_n$. This case serves as a useful baseline for evaluating how different rules behave under full alignment. However, since empirical data from past rounds (e.g., Round 4) shows clear variation in allocations, we argue that modeling badgeholder preferences as heterogeneous provides a more realistic and informative foundation for mechanism evaluation.
5 Voting Rules
This section reviews the voting rules used in Optimism's RetroPGF rounds and introduces improved mechanisms.
Quadratic Voting (Round 1). Quadratic Voting (QV) [22] allows voters to allocate $x_{i,p}$ tokens to project $p$, with the effective vote weight given by:
$$a_p = \frac{\sum_{i \in N} \sqrt{x_{i,p}}}{\sum_{p' \in P} \sum_{i \in N} \sqrt{x_{i,p'}}}.$$
Mean Rule (Round 2). Allocations are proportional to the total tokens received:
$$a_p = \frac{\sum_{i \in N} x_{i,p}}{\sum_{i \in N} \sum_{p' \in P} x_{i,p'}}.$$
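The Round 1 and Round 2 rules are straightforward to implement. A sketch (function names ours), assuming ballots are given as an n × m token matrix:

```python
import numpy as np

# Sketch of the Round 1 and Round 2 rules defined above (function names
# ours). ballots[i, p] is the number of tokens voter i gives project p.

def quadratic_voting(ballots):
    # Effective weight of project p: sum_i sqrt(x_{i,p}), then normalize.
    weights = np.sqrt(ballots).sum(axis=0)
    return weights / weights.sum()

def mean_rule(ballots):
    # Allocation proportional to the total tokens each project received.
    totals = ballots.sum(axis=0)
    return totals / totals.sum()

ballots = np.array([[4.0, 1.0, 0.0],
                    [4.0, 0.0, 1.0]])
print(mean_rule(ballots))         # project 0 receives 8/10 of the budget
print(quadratic_voting(ballots))  # square roots dampen concentrated support
```

Note how the square root compresses concentrated support: under the mean rule project 0 gets 0.8 of the budget, but under QV only 4/6.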
Quorum Median Rule (Round 3). Allocations are based on the median vote, with quorum constraints $q_1$ (minimum tokens) and $q_2$ (minimum voters):
$$b_p = \frac{\operatorname{median}\{x_{i,p} \mid x_{i,p} > 0\}}{\sum_{p' \in P} \operatorname{median}\{x_{i,p'} \mid x_{i,p'} > 0\}}.$$
If $b_p \ge q_1$ and at least $q_2$ voters contribute, then $d_p = b_p$; otherwise, $d_p = 0$. The final allocation is:
$$a_p = \frac{d_p}{\sum_{p' \in P} d_{p'}}.$$
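A sketch of the quorum logic above (function and parameter names ours; here $q_1$ is read as a threshold on the normalized share $b_p$):

```python
import numpy as np

# Sketch of the Round 3 Quorum Median Rule above (names ours).
# q1: minimum normalized share, q2: minimum number of supporting voters.

def quorum_median_rule(ballots, q1, q2):
    n, m = ballots.shape
    supporters = (ballots > 0).sum(axis=0)
    # Median over the positive votes for each project (0 if no support).
    med = np.array([np.median(ballots[ballots[:, p] > 0, p])
                    if supporters[p] > 0 else 0.0 for p in range(m)])
    b = med / med.sum()
    # Zero out projects failing either quorum, then renormalize.
    d = np.where((b >= q1) & (supporters >= q2), b, 0.0)
    return d / d.sum() if d.sum() > 0 else d

ballots = np.array([[3.0, 1.0, 0.0],
                    [3.0, 1.0, 0.0],
                    [0.0, 2.0, 2.0]])
a = quorum_median_rule(ballots, q1=0.2, q2=2)
# Project 1 fails the share quorum and project 2 the voter quorum,
# so after renormalization the whole budget goes to project 0.
```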
Capped Median Rule (Round 4). A variant of the Quorum Median Rule, with an upper bound $K_1$ and redistribution of excess funds:
$$c_p = \frac{\operatorname{median}\{x_{i,p}\}}{\sum_{p' \in P} \operatorname{median}\{x_{i,p'}\}} \cdot B, \qquad d_p = \min(c_p, K_1) + \frac{\sum_j \max(0, c_j - K_1) \cdot c_p}{\sum_j c_j}.$$
Projects below $K_2$ are eliminated, and their allocations redistributed:
$$b_p = \begin{cases} 0, & d_p < K_2, \\[2pt] d_p + \dfrac{\sum_{p' : d_{p'} < K_2} d_{p'}}{\sum_{p' : d_{p'} \ge K_2} d_{p'}} \cdot d_p, & \text{otherwise.} \end{cases}$$
Final normalization ensures:
$$a_p = \frac{b_p}{\sum_{p'} b_{p'}}.$$
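The capping and redistribution pipeline above can be sketched as follows (names ours; guards for degenerate cases, e.g., all projects eliminated, are omitted):

```python
import numpy as np

# Sketch of the Round 4 Capped Median Rule above (names ours). K1 caps any
# single project's share; projects whose share falls below K2 are eliminated
# and their mass redistributed proportionally among the survivors.

def capped_median_rule(ballots, K1, K2, B=1.0):
    med = np.median(ballots, axis=0)
    c = med / med.sum() * B
    # Cap at K1 and hand the total excess back in proportion to c_p.
    d = np.minimum(c, K1) + np.maximum(0.0, c - K1).sum() * c / c.sum()
    low = d < K2
    # Surviving projects absorb the eliminated mass in proportion to d_p.
    b = np.where(low, 0.0, d + d[low].sum() * d / d[~low].sum())
    return b / b.sum()

ballots = np.tile([0.5, 0.3, 0.2], (3, 1))  # three identical ballots
a = capped_median_rule(ballots, K1=0.4, K2=0.1)
# With c = (0.5, 0.3, 0.2): excess 0.1 is redistributed proportionally,
# giving d = (0.45, 0.33, 0.22); no project falls below K2.
```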
Midpoint Rule. The allocation minimizes the $\ell_1$ distance from all voter allocations [24]:
$$x = X_{i^*}, \qquad i^* = \arg\min_{i \in N} \sum_{j=1}^{n} \lVert X_i - X_j \rVert_1.$$
Moving Phantoms. The Moving Phantoms mechanism [16] addresses budget inconsistencies in median-based voting by introducing phantom voters whose influence dynamically adjusts allocations while preserving strategyproofness. The final allocation is determined using a median-based adjustment:
$$AF(a_p) = \operatorname{med}(f_0(t^*), \dots, f_n(t^*), x_{1,p}, \dots, x_{n,p}),$$
where $t^*$ is chosen such that the per-project medians exactly exhaust the normalized budget:
$$\sum_{p=1}^{m} \operatorname{med}(f_0(t^*), \dots, f_n(t^*), x_{1,p}, \dots, x_{n,p}) = 1.$$
This constraint ensures that the total allocation remains within budget. The optimal value of $t^*$ can be computed efficiently using binary search.
Two variants of the Moving Phantoms mechanism are considered:

Independent Markets Algorithm. In this variant, phantom influence follows a linear distribution:
$$f_k(t) = \min\{t(n-k), 1\}.$$

Majoritarian Phantoms Algorithm. This variant prioritizes majority preferences by defining:
$$f_k(t) = \begin{cases} 0, & 0 \le t \le \frac{k}{n+1}, \\[2pt] t(n+1) - k, & \frac{k}{n+1} < t \le \frac{k+1}{n+1}, \\[2pt] 1, & \frac{k+1}{n+1} \le t \le 1. \end{cases}$$
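Assuming each ballot is normalized to sum to 1, the Majoritarian Phantoms variant, together with the binary search for $t^*$, can be sketched as follows (implementation details ours, not the authors' reference code):

```python
import numpy as np

# Sketch of the Majoritarian Phantoms mechanism [16] with binary search for
# t* (implementation details ours). Each ballot row is normalized to sum to 1.

def phantom(k, t, n):
    # Majoritarian phantom f_k(t): 0 up to k/(n+1), then a linear ramp, then 1.
    return float(np.clip(t * (n + 1) - k, 0.0, 1.0))

def majoritarian_phantoms(ballots, iters=60):
    n, m = ballots.shape

    def alloc(t):
        phantoms = [phantom(k, t, n) for k in range(n + 1)]
        return np.array([np.median(phantoms + list(ballots[:, p]))
                         for p in range(m)])

    lo, hi = 0.0, 1.0
    for _ in range(iters):  # total allocation is continuous and monotone in t
        mid = (lo + hi) / 2
        if alloc(mid).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return alloc((lo + hi) / 2)

ballots = np.array([[0.6, 0.4, 0.0],
                    [0.5, 0.0, 0.5],
                    [0.4, 0.3, 0.3]])
a = majoritarian_phantoms(ballots)
assert abs(a.sum() - 1.0) < 1e-6  # budget is exactly exhausted
```

Binary search is valid because at $t = 0$ all phantoms are 0 (total allocation 0) and at $t = 1$ all phantoms are 1 (total at least 1), with the sum of medians continuous and non-decreasing in $t$.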
6 Properties and Metrics
To evaluate voting rules in the Retroactive Project Funding framework, we consider key theoretical properties and performance metrics. These criteria incorporate classical social choice principles [4] and RetroPGF-specific requirements. To define these metrics and properties, we first establish the agent utility function based on the $\ell_1$ distance, as prior work commonly assumes that agents assess budget distributions by their $\ell_1$ distance from their ideal allocation [16,12].
Resistance to Manipulation. A voting rule should be robust against strategic behavior, including bribery and control. The cost of bribery is the minimum expenditure required to increase project $p$'s allocation by $X$ tokens:
$$b = \sum_{i=1}^{n} \sum_{q \in P} |x_{i,q} - x'_{i,q}|,$$
where $V = \{X_1, \dots, X_n\}$ and $V' = \{X'_1, \dots, X'_n\}$ are the original and modified vote profiles, and $x$ is the outcome. The cost of control measures the minimal number of voters that must be added or removed to increase $p$'s allocation by $r$. A rule is robust if the expected deviation in outcomes due to small vote perturbations remains bounded:
$$\mathbb{E}[d(x, x')],$$
where $d$ is a distance metric such as $\ell_1$ or $\ell_2$. The Voter Extractable Value (VEV) quantifies the maximum allocation shift a single voter can induce:
$$\mathrm{VEV} = \max_{i \in [n],\, k \in [m]} d(x, x^{(i,k)}),$$
where $x^{(i,k)}$ is the outcome after voter $i$ reallocates $r\%$ of their vote to project $k$.
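A brute-force sketch of the VEV metric (names ours; for illustration the deviating voter concentrates all of their tokens, i.e., $r = 100\%$, and the rule under test is the Mean Rule):

```python
import numpy as np

# Sketch of the VEV metric above: the maximum l1 shift a single voter can
# induce by moving all of their tokens onto one project. The rule and the
# full-concentration deviation are illustrative choices of ours.

def mean_rule(ballots):
    totals = ballots.sum(axis=0)
    return totals / totals.sum()

def voter_extractable_value(ballots, rule):
    n, m = ballots.shape
    base = rule(ballots)
    c = ballots.sum(axis=1)  # each voter's token endowment
    worst = 0.0
    for i in range(n):
        for k in range(m):
            deviated = ballots.copy()
            deviated[i, :] = 0.0
            deviated[i, k] = c[i]  # voter i moves everything to project k
            worst = max(worst, np.abs(rule(deviated) - base).sum())
    return worst

ballots = np.array([[1.0, 1.0],
                    [1.0, 1.0]])
print(voter_extractable_value(ballots, mean_rule))  # → 0.5
```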
Incentive Compatibility. A voting rule is strategyproof if truthful reporting is a weakly dominant strategy for all voters. Formally, for every $i \in N$, let $u_i(X_i, X_{-i})$ be their utility when reporting truthfully and $u_i(X'_i, X_{-i})$ their utility when submitting a strategic misreport $X'_i$. The rule is strategyproof if:
$$u_i(X_i, X_{-i}) \ge u_i(X'_i, X_{-i}) \quad \forall i \in N,\ \forall X'_i \ne X_i,\ \forall X_{-i}.$$
Outcome Quality. A voting rule satisfies Pareto efficiency if there exists no alternative allocation $x'$ that strictly improves the utility of at least one voter without making any other voter worse off. Formally, an allocation $x = (a_1, \dots, a_m)$ is Pareto efficient if:
$$\forall x' \in \mathbb{R}^m_{\ge 0}: \quad \big(\exists i \in N \text{ such that } U_i(x') > U_i(x)\big) \Rightarrow \big(\exists j \in N \text{ such that } U_j(x') < U_j(x)\big).$$
This ensures that no allocation $x'$ is strictly better for all voters simultaneously.

A voting rule satisfies monotonicity if increasing support for a project cannot decrease its allocation. That is, for any two voter profiles $V$ and $V'$, where $V'$ is obtained by increasing the support for project $p$ for all voters:
$$f(V', p) \ge f(V, p) \quad \text{for } V' \text{ where } x'_{i,p} \ge x_{i,p} \text{ for all } i.$$

The rule satisfies reinforcement if combining two disjoint voter groups that yield the same outcome separately does not change the result:
$$f(V_1 \cup V_2) = f(V_1) = f(V_2).$$
Fairness and Representation. A rule should balance majority rule with minority protection. Utilitarian social welfare measures how well the outcome reflects voter preferences:
$$W = \frac{1}{n} \sum_{i=1}^{n} \lVert x - X_i \rVert_1.$$

Proportionality ensures that any subset $S \subseteq N$ with $|S| \ge n/k$ that exclusively supports a single project $p$ guarantees $p$ at least $B/k$ of the total budget:
$$\forall S \subseteq N,\ |S| \ge \frac{n}{k},\ \forall i \in S:\ x_{i,p} = c,\ x_{i,p'} = 0 \text{ for } p' \ne p \ \Rightarrow\ a_p \ge \frac{B}{k}.$$

Allocation inequality is quantified using the Gini index, which measures the disparity in allocated resources:
$$G = \frac{\sum_{i=1}^{m} \sum_{j=1}^{m} |a_i - a_j|}{2m \sum_{i=1}^{m} a_i}.$$
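The Gini formula above in code (function name ours):

```python
import numpy as np

# Sketch of the Gini index over a final allocation (function name ours).

def gini(a):
    a = np.asarray(a, dtype=float)
    m = len(a)
    pairwise = np.abs(a[:, None] - a[None, :]).sum()  # sum_i sum_j |a_i - a_j|
    return pairwise / (2 * m * a.sum())

print(gini([0.25, 0.25, 0.25, 0.25]))  # → 0.0, perfect equality
print(gini([1.0, 0.0, 0.0, 0.0]))      # → 0.75, maximally concentrated
```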
Participation and Ground-Truth Alignment. A rule satisfies participation if no voter is worse off by voting, meaning that submitting a ballot cannot result in a lower utility than abstaining. Formally, for every voter $i$, let $x = (a_1, \dots, a_m)$ denote the allocation when $i$ does not participate, and let $x' = (a'_1, \dots, a'_m)$ be the allocation when $i$ submits a ballot $X_i$. The rule satisfies participation if:
$$u_i(x') \ge u_i(x) \quad \forall i \in N.$$
Optimism’s Retroactive Project Funding 9
VotingRule ReinforcementPareto-EfficiencyMonotonicityParticipationProportionalityMaxSocialWelfareStrategyproofness
R1Quadratic ✓ × ✓ × × × ×
R2MeanRule ✓ ✓ ✓ ✓ ✓ × ×
R3QuorumMedian × × × × × × ×
R4CappedMedianRule × × ✓ × × × ×
NormalizedMedianRule ✓ ✓ ✓ ✓ × × ×
MidpointRule ✓ ✓ ✓ ✓ × × ×
IndependentMarket[16] ✓ ×for(m≥n2) ✓ ✓ ✓ × ✓(form>2)
MajoritarianPhantom[16] ✓ ✓ ✓ ✓ × ✓ ✓(form>2)
Table 1. Properties of Voting Rules.
This ensures that participation is always beneficial or neutral.
Alignment with ground truth measures the deviation from an objective allocation $x^* = (a^*_1, \dots, a^*_m)$, which represents the theoretically correct funding distribution assuming perfect voter expertise and full knowledge of project impact. The alignment metric is given by:
$$d_{GT} = \sum_{i=1}^{m} |a^*_i - a_i|.$$
This ensures that the allocation reflects the true contribution of funded projects [7,9].
7 Theoretical Results
We provide Table 1, which summarizes the theoretical results. For space considerations, proofs are deferred to the appendix. Note that we have added the Normalized Median Rule, a generalization of the R3 and R4 voting rules (it does not use capping or quorums).
8 Experimental Design
Experiments were conducted using artificial voter data and a simplified version of Optimism's Round 4 dataset, the only fully available real-world Optimism voting dataset, in which 108 badgeholders collectively allocated 8 million tokens across 229 projects. The analysis compares the voting rules employed in Optimism Rounds 1-4 with the Majoritarian Phantoms mechanism, selected for its capacity to maximize ℓ1-based social welfare while ensuring strategyproofness, two fundamental properties aligned with the application's requirements.
Vote Generation. Cumulative ballot instances were generated using Mallows' model [3,30], ensuring structured yet diverse voter preferences. The process is as follows:
1. A base vote $X_{\text{base}}$ is sampled from a Dirichlet distribution with parameter vector $\alpha = \mathbf{1}_m$, ensuring it sums to 1:
$$\sum_{p \in P} X_{\text{base},p} = 1.$$
2. An independent vote $X_{\text{independent}}$ is sampled from the same Dirichlet distribution ($k = 1$).
3. Each voter's ballot is computed as a weighted combination:
$$X_i = 0.5 \cdot X_{\text{base}} + 0.5 \cdot X_{\text{independent}}.$$
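The three generation steps above can be sketched as follows (seed and variable names ours):

```python
import numpy as np

# Sketch of the three vote-generation steps above (seed and variable names
# ours). Each ballot is a half/half mix of a shared base vote and a
# per-voter independent Dirichlet draw.

rng = np.random.default_rng(0)
n, m = 5, 4

base = rng.dirichlet(np.ones(m))  # step 1: shared base vote, sums to 1
ballots = np.array([
    0.5 * base + 0.5 * rng.dirichlet(np.ones(m))  # steps 2-3, per voter
    for _ in range(n)
])

assert np.allclose(ballots.sum(axis=1), 1.0)  # each mixed ballot sums to 1
```

Since both components lie on the probability simplex, any convex combination of them does too, so every generated ballot is itself a valid normalized cumulative ballot.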
Experimental Setup. To evaluate different voting rules, we conduct simulations across multiple scenarios, each designed to assess specific properties of the mechanisms. The experimental conditions are as follows:
– Bribery, Control, Robustness, and Voter Extractable Value (VEV) Experiments: These experiments use a setup with 40 voters distributing 8 million OP tokens across 145 projects.
– Social Welfare, Gini Index, and Alignment Experiments: To analyze broader allocation trends, these experiments use an expanded setting with 145 voters, 600 projects, and 30 million OP tokens.
– Trial Averaging: Each experimental condition is repeated for 100 independent trials to ensure statistical reliability and minimize variance.
Resistance to Manipulation. To examine the vulnerability of each voting rule to strategic behavior, we conduct two experiments:
1. Control Experiment: This experiment assesses how resistant a voting rule is to the addition or removal of voters. We measure the minimum number of strategically placed voters required to increase the funding of a specific target project by a predetermined percentage. The experiment systematically varies the target funding increase from 1% to 30% and runs 10 independent trials per setting.
2. Bribery Experiment: This experiment quantifies the cost of influencing the outcome by reallocating tokens. Assuming a unit cost per token reallocated, we measure the minimum expenditure required to shift funding in favor of a given project. Similar to the control experiment, we vary the targeted funding increase from 1% to 30% and conduct 10 trials per condition.
Robustness and Voter Influence. These experiments assess the stability of allocations under small
perturbations and the extent of influence exerted by individual voters:
1. Robustness Experiment: We introduce controlled random variations to individual voter preferences and measure the impact on the final allocations. The robustness of a voting rule is quantified by computing the ℓ1 distance between the original allocation and the perturbed allocation. This process is repeated over 100 trials to evaluate stability across different scenarios.
2. Voter Extractable Value (VEV) Experiment: This experiment measures the maximum impact a single voter can exert on the allocation. A voter is allowed to concentrate between 90% and 99% of their total tokens on a single project, and we observe the resulting change in the project's final funding allocation. The goal is to quantify how susceptible each rule is to concentrated voting power.
Alignment with Ground Truth. To assess how well each voting rule reflects the true distribution of voter preferences, we compute the ℓ1 distance between the final allocation produced by a voting rule and a benchmark reference distribution. We set the benchmark to be the base vote used in the generation of all ballots.
9 Experimental Results
We present Figure 1, which summarizes the key findings from our simulations on bribery costs,
control, robustness, and VEV using Round 4 data (108 voters, 229 projects, 8M tokens).
Optimism’s Retroactive Project Funding 11
Fig. 1. Bribery costs, cost of control by deleting voters, robustness, and Max VEV for each voting rule using Round 4 data.
Majoritarian Phantom outperforms other rules across strategic robustness metrics, including
higher bribery and control costs, stronger stability under noise, and reduced voter extractable
value. However, as shown in the other figures in the appendix, the advantage is less pronounced
for metrics like Gini index and alignment with ground truth. In those cases, differences between
Majoritarian Phantom and the R3/R4 median-based rules are often small and not statistically
significant, indicating that the practical gains, while consistent, may be incremental rather than
large.
10 Discussion and Outlook
Our theoretical and simulation results indicate that the Majoritarian Phantom rule is the most effective mechanism for RetroPGF among those tested, based on the application requirements set out in Section 2. It maximizes ℓ1 social welfare, satisfies key axioms such as Pareto efficiency and participation, maintains a high Gini index (favoring impactful projects over equal distribution), and exhibits strong resistance to manipulation. It also aligns well with the ground truth, although it ranks third in
this metric after Quadratic and Mean rules. However, as shown in our simulations, the empirical
advantage of Majoritarian Phantom over median-based rules (R3 and R4) is often modest. In
fairness and alignment metrics, the differences are typically small and not statistically significant.
This suggests that while the rule offers strong theoretical benefits, its practical improvements may
be incremental.
Our analysis focuses on the voting stage of the allocation process, assuming fixed distributions of voting tokens. We do not model pre-voting dynamics, such as trading or influence between badgeholders. In settings where more biased voters accumulate voting power through such exchanges, the choice of voting rule may interact with deeper strategic incentives. While Majoritarian Phantom remains strategy-resistant in the voting stage, its resilience under pre-voting manipulation is an open question warranting further study.
In addition, our current framework does not account for dynamic reputation costs. In practice,
badgeholders face reputational consequences for misaligned or self-serving behavior, which could
deter manipulation even without formal enforcement. Modeling such costs explicitly could improve
the realism of strategic analyses and explain observed behavior in real-world rounds.
We formalized RetroPGF as a majority-based, token-weighted allocation process and analyzed
its voting mechanisms through the lens of computational social choice. Our results contribute a
structured framework for evaluating decentralized funding mechanisms, identifying tradeoffs be-
tween fairness, efficiency, and strategic robustness. We conclude with several directions for future
work.
Comparison with Other Retroactive Project Funding Models. Other ecosystems such as Filecoin's RetroPGF7 use similar mechanisms. A comparative analysis could reveal institutional design insights across DAOs.
Metric-Based, Indirect Voting. Optimism is experimenting with metric-based voting,8 where badgeholders vote on evaluation criteria rather than projects. This may reduce cognitive load and limit manipulation. Evaluating this model is a promising research direction.
Mechanism Design Refinements. Future refinements include reputation-weighted voting [23,13], mitigation of pseudonymity risks [27], moving phantoms [12], and adaptations of VCG-like or Continuous Thiele rules [32,33]. Bayesian truth serum methods [28,29] may further help validate whether a rule meaningfully uncovers ground truth.
References
1. Stéphane Airiau, Haris Aziz, Ioannis Caragiannis, Justin Kruger, Jérôme Lang, and Dominik Peters. Portioning using ordinal preferences: Fairness and efficiency. Artificial Intelligence, 314:103809, 2023.
2. Haris Aziz and Nisarg Shah. Participatory budgeting: Models and approaches. Pathways Between Social Science and Computational Social Science: Theories, Methods, and Interpretations, pages 215–236, 2021.
3. Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, Rolf Niedermeier, and Stanisław Szufa. Putting a compass on the map of elections. arXiv preprint arXiv:2105.07815, 2021.
7https://filecoin.io/blog/posts/unveiling-fil-retropgf-1-retroactively-funding-filecoin-public-goods/
8https://gov.optimism.io/t/experimentation-impact-metric-based-voting/7727
4. Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia. Handbook of Computational Social Choice. Cambridge University Press, Cambridge, UK, 2016.
5. Laurent Bulteau, Gal Shahaf, Ehud Shapiro, and Nimrod Talmon. Aggregation over metric spaces: Proposing and voting in elections, budgeting, and legislation. Journal of Artificial Intelligence Research, 70:1413–1439, 2021.
6. Vitalik Buterin, Zoë Hitzig, and E. Glen Weyl. A flexible design for funding public goods. Management Science, 65(11):5171–5187, 2019.
7. Ioannis Caragiannis, Ariel D. Procaccia, and Nisarg Shah. When do noisy votes reveal the truth? ACM Transactions on Economics and Computation (TEAC), 4(3):1–30, 2016.
8. Joshua Cohen. An epistemic conception of democracy. Ethics, 97(1):26–38, 1986.
9. Gal Cohensius and Reshef Meir. Proxy voting for revealing ground truth. In Proceedings of the 4th Workshop on Exploring Beyond the Worst Case in Computational Social Choice (EXPLORE), 2017.
10. Optimism Collective. RetroPGF Round 2, 2022. Accessed: 2024-10-07.
11. Optimism Collective. RetroPGF Round 4, 2023. Accessed: 2024-10-07.
12. Mark de Berg, Rupert Freeman, Ulrike Schmidt-Kraepelin, and Markus Utke. Truthful budget aggregation: Beyond moving-phantom mechanisms. arXiv preprint arXiv:2405.20303, 2024.
13. Marcela T. de Oliveira, Lúcio H. A. Reis, Dianne S. V. Medeiros, Ricardo C. Carrano, Sílvia D. Olabarriaga, and Diogo M. F. Mattos. Blockchain reputation-based consensus: A scalable and resilient mechanism for distributed mistrusting applications. Computer Networks, 179:107367, 2020.
14. Nicola Dimitri. Quadratic voting in blockchain governance. Information, 13(6):305, 2022.
15. Roy Fairstein, Gerdus Benadè, and Kobi Gal. Participatory budgeting designs for the real world. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5633–5640, Palo Alto, CA, USA, 2023. AAAI.
16. Rupert Freeman, David M. Pennock, Dominik Peters, and Jennifer Wortman Vaughan. Truthful aggregation of budget proposals. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 751–752, 2019.
17. Laura Georgescu, James Fox, Anna Gautier, and Michael Wooldridge. Fixed-budget and multiple-issue quadratic voting. arXiv preprint arXiv:2409.06614, 2024.
18. Optimism Collective Governance. RetroPGF 3 round design, 2023. Accessed: 2024-10-07.
19. Optimism Collective Governance. Retro Funding 5: OP Stack round details, 2024. Accessed: 2024-10-07.
20. Davide Grossi. Social choice around the block: On the computational social choice of blockchain. arXiv preprint arXiv:2203.07777, 2022.
21. Kristopher Jones. Blockchain in or as governance? Evolutions in experimentation, social impacts, and prefigurative practice in the blockchain and DAO space. Information Polity, 24(4):469–486, 2019.
22. Steven P. Lalley and E. Glen Weyl. Quadratic voting: How mechanism design can radicalize democracy. In AEA Papers and Proceedings, volume 108, pages 33–37, Nashville, TN, USA, 2018. American Economic Association.
23. Stefanos Leonardos, Daniël Reijsbergen, and Georgios Piliouras. Weighted voting on the blockchain: Improving consensus in proof of stake protocols. International Journal of Network Management, 30(5):e2093, 2020.
24. Klaus Nehring, Clemens Puppe, and T. Linder. Allocating public goods via the midpoint rule. Preprint, 2008.
25. Optimism. The RetroPGF experiment, 2021. Accessed: 2024-10-07.
26. Ricardo A. Pasquini. Quadratic funding and matching funds requirements. arXiv preprint arXiv:2010.01193, 2020.
27. Anthony J. Perez and Ebrima N. Ceesay. Improving end-to-end verifiable voting systems with blockchain technologies. In 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pages 1108–1115, Piscataway, NJ, USA, 2018. IEEE.
28. Drazen Prelec. A Bayesian truth serum for subjective data. Science, 306(5695):462–466, 2004.
14 Briman et al.
29. Dražen Prelec, H. Sebastian Seung, and John McCoy. A solution to the single-question crowd wisdom problem. Nature, 541(7638):532–535, 2017.
30. Stanisław Szufa, Piotr Faliszewski, Piotr Skowron, Arkadii Slinko, and Nimrod Talmon. Drawing a map of elections in the space of statistical cultures. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, pages 1341–1349, New York, NY, USA, 2020. ACM.
31. Nimrod Talmon. Social choice around decentralized autonomous organizations: On the computational social choice of digital communities. In Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, pages 1768–1773, New York, NY, USA, 2023. ACM.
32. Jonathan Wagner and Reshef Meir. Strategy-proof budgeting via a VCG-like mechanism. In International Symposium on Algorithmic Game Theory, pages 401–418, 2023. Springer.
33. Jonathan Wagner and Reshef Meir. Distribution aggregation via continuous Thiele's rules. arXiv preprint arXiv:2408.01054, 2024.
34. Mirko Zichichi, Michele Contu, Stefano Ferretti, and Gabriele D'Angelo. LikeStarter: A smart-contract based social DAO for crowdfunding. In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pages 313–318, Piscataway, NJ, USA, 2019. IEEE.
A Missing Proofs
A.1 Reinforcement
Theorem 1. The Mean, Normalized Median, and Midpoint Rules satisfy reinforcement.
Proof. For two profiles V_1 and V_2, cast by voter sets N_1 and N_2 respectively, let
\[
a_p^1 = \frac{\sum_{i \in N_1} x_{i,p}}{\sum_{i \in N_1} \sum_{p' \in P} x_{i,p'}}, \qquad
a_p^2 = \frac{\sum_{i \in N_2} x_{i,p}}{\sum_{i \in N_2} \sum_{p' \in P} x_{i,p'}}.
\]
Since allocation summation is linear (Mean Rule) or preserves order statistics (Median Rule), the combined profile maintains relative rankings:
\[
a_p^3 = \frac{\sum_{i \in N_1 \cup N_2} x_{i,p}}{\sum_{i \in N_1 \cup N_2} \sum_{p' \in P} x_{i,p'}}.
\]
For the Midpoint Rule, the voter minimizing ℓ_1 remains unchanged under profile union, ensuring reinforcement.
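The linearity argument for the Mean Rule can be checked numerically. The following sketch is ours, not the paper's implementation; the token ballots are hypothetical, chosen so that the two electorates individually produce the same allocation:

```python
from fractions import Fraction

def mean_rule(profile):
    """Share of each project: its total tokens over all tokens cast (exact arithmetic)."""
    grand_total = sum(sum(ballot) for ballot in profile)
    num_projects = len(profile[0])
    return [Fraction(sum(b[p] for b in profile), grand_total) for p in range(num_projects)]

# Two disjoint electorates whose separate outcomes coincide ...
V1 = [(3, 1), (2, 2)]
V2 = [(6, 2), (4, 4)]
assert mean_rule(V1) == mean_rule(V2) == [Fraction(5, 8), Fraction(3, 8)]
# ... must produce the same outcome when merged -- reinforcement.
assert mean_rule(V1 + V2) == mean_rule(V1)
```

Exact rational arithmetic avoids any floating-point ambiguity in the equality checks.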
Corollary 1. The Quadratic Rule satisfies reinforcement as it follows the Mean Rule after square-
root transformation.
Example 1. The R3 Quorum Median and R4 Capped Median Rules violate reinforcement due to quorum effects. Consider two projects p_1, p_2 with quorum 0.2:
- V_1: v_1 votes [0.9, 0.1] → Winner: [1, 0].
- V_2: v_2 votes [0.9, 0.1] → Winner: [1, 0].
- V_1 ∪ V_2: the quorum applies, shifting the result to [0.8, 0.2].
A.2 Pareto Efficiency
Theorem 2. The Mean Rule satisfies Pareto efficiency.
Proof. The Mean Rule distributes the budget proportionally among projects. Under the budget constraint, any reallocation that increases one project's share must reduce another's, making at least one voter worse off. Thus, no Pareto improvement is possible, proving Pareto efficiency.
Theorem 3. The Normalized Median Rule satisfies Pareto efficiency.
Proof. The median allocation minimizes absolute deviations from voter preferences. Increasing a
project’s allocation necessarily reduces another’s, harming at least one voter. Thus, no Pareto
improvement exists, proving Pareto efficiency.
Theorem 4. The Midpoint Rule satisfies Pareto efficiency.
Proof. The selected allocation corresponds to a voter’s input. Any alternative would increase some
voters’ distance from their preferred allocation, making them worse off, ensuring Pareto efficiency.
Example 2. The Quadratic Rule does not satisfy Pareto efficiency.
A single voter allocating [0.7, 0.3] receives the normalized square-root outcome [≈0.604, ≈0.396], which deviates from their reported preference; returning their exact ballot would be a Pareto improvement.
Example 3. The R3 Quorum Median Rule does not satisfy Pareto efficiency.
A voter allocating [0.7,0.3] with a quorum of 0.4 results in [1,0], which is strictly worse for the
voter.
Corollary 2. The R4 Capped Median Rule does not satisfy Pareto efficiency due to the quorum constraint K_2.
A.3 Monotonicity
Theorem 5. The Mean Rule satisfies monotonicity.
Proof. Increasing a voter's allocation to project p by δ > 0 increases the numerator in
\[
a_p = \frac{\sum_{i \in N} x_{i,p}}{\sum_{i \in N} \sum_{p' \in P} x_{i,p'}}
\]
by δ, while the denominator also grows by exactly δ; since a_p ≤ 1, the ratio does not decrease.
Theorem 6. The Normalized Median Rule satisfies monotonicity.
Proof. Increasing x_{i,p} either shifts the median upward or leaves it unchanged. Thus, a_p does not decrease.
Theorem 7. The Midpoint Rule satisfies monotonicity.
Proof. If a voter increases their allocation to p, it reduces their ℓ_1-distance to the Midpoint outcome, preventing a worse allocation.
Corollary 3. The Quadratic Rule satisfies monotonicity.
Theorem 8. The R3 Quorum Median Rule does not satisfy monotonicity.
Example 4. Voters v_1, v_2, v_3 cast [0.5, 0.5], [0, 1], [1, 0], yielding [0.5, 0.5]. If v_2 changes to [0.1, 0.4], the new allocation is [0.4, 0.6], violating monotonicity.
Theorem 9. The R4 Capped Median Rule satisfies monotonicity.
Proof. If a project's allocation is below the cap K_1, additional votes increase a_p. If at or above K_1, it remains unchanged. If below the quorum K_2, additional votes push a_p above quorum. If a project falls below quorum, fewer competing projects benefit from redistribution. Thus, monotonicity holds.
A.4 Participation
Theorem 10. The Mean Rule satisfies participation.
Proof. If voter i abstains, their influence on allocations is removed, generally resulting in a worse
outcome for them, as their preferred projects may receive less funding. Thus, abstaining never
increases utility.
Theorem 11. The Normalized Median Rule satisfies participation.
Proof. Abstaining shifts the median in a way that can only worsen or maintain the voter's preferred outcome. If x_{i,p} is below the median, abstaining does not improve it; if above, abstaining reduces the median allocation.
Theorem 12. The Midpoint Rule satisfies participation.
Proof. If the voter's ballot is selected as the midpoint, participation directly benefits them. Otherwise, it follows from the Mean Rule proof, as their participation reduces the total ℓ_1-distance.
Example 5. The Quadratic Rule does not satisfy participation.
A voter with [0.8,0.2] in a setting where the initial allocation is also [0.8,0.2] may, by partici-
pating, shift the outcome away due to the square root transformation.
Example 6. The R3 Quorum Median Rule does not satisfy participation.
A project p_2 is close to funding but does not meet quorum. If voter v adds a small vote [1 − ϵ, ϵ], p_2 suddenly receives significant funding, potentially harming v.
Corollary 4. The R4 Capped Median Rule does not satisfy participation, as it includes a quorum K_2.
A.5 Proportionality
Theorem 13. The Mean Rule satisfies proportionality.
Proof. The Mean Rule aggregates votes by summing across all voters, ensuring each project receives funding proportional to its total allocation. Since the final allocation mirrors the proportion of tokens assigned, the rule satisfies proportionality.
Example 7. The Normalized Median Rule does not satisfy proportionality.
Consider three voters with allocations [1,0], [1,0], and [0,1]. The median allocation is [1,0],
while proportionality requires [2/3,1/3], meaning the rule fails proportionality.
Example 8. The Midpoint Rule does not satisfy proportionality.
With the same three voters as above, the rule selects [1, 0], minimizing the ℓ_1 distance rather than providing [2/3, 1/3], violating proportionality.
Example 9. The Quadratic Rule does not satisfy proportionality.
For three voters voting [9,1], [5,5], and [4,6], the rule outputs [5.6,4.4] instead of the propor-
tional outcome [6,4], failing proportionality.
Example 10. The R3 Quorum Median Rule does not satisfy proportionality.
With voters [1,0], [1,0], and [0,1], and a quorum of 2 voters with 1.1 tokens, the rule yields
[1,0] instead of [2/3,1/3], violating proportionality.
Corollary 5. The R4 Capped Median Rule does not satisfy proportionality, as it includes a quorum K_2 and is median-based.
A.6 Strategyproofness
Theorem 14. The Mean Rule is not strategyproof.
Example 11. Consider two voters with votes [0.75, 0.25] and [0, 1]. The Mean Rule outputs [0.375, 0.625], giving voter 1 an ℓ_1 distance of 0.75. If voter 1 misreports [1, 0], the new allocation [0.5, 0.5] reduces their ℓ_1 distance to 0.5, demonstrating manipulability.
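The manipulation of the Mean Rule is easy to reproduce in a few lines; the helper names below are ours, not the paper's:

```python
def mean_rule(profile):
    # Coordinate-wise average of the (already normalized) ballots.
    n = len(profile)
    return [sum(b[p] for b in profile) / n for p in range(len(profile[0]))]

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

truthful = [0.75, 0.25]
honest_outcome = mean_rule([truthful, [0, 1]])     # [0.375, 0.625]
strategic_outcome = mean_rule([[1, 0], [0, 1]])    # [0.5, 0.5]
assert abs(l1(truthful, honest_outcome) - 0.75) < 1e-9
assert abs(l1(truthful, strategic_outcome) - 0.5) < 1e-9  # misreporting helps
```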
Theorem 15. The Normalized Median Rule is not strategyproof.
Example 12. Three voters vote [0.57, 0.24, 0.19], [0.39, 0.48, 0.13], and [0.44, 0.09, 0.48]. The Normalized Median Rule outputs [0.506, 0.276, 0.218], giving voter 1 an ℓ_1 distance of 0.1285. If they misreport [0.6, 0.2, 0.2], the new outcome [0.524, 0.238, 0.238] lowers their ℓ_1 distance to 0.0962.
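These numbers can be reproduced directly; a sketch, assuming the rule takes coordinate-wise medians and normalizes them to sum to one:

```python
from statistics import median

def normalized_median(profile):
    # Coordinate-wise median, rescaled into a budget allocation.
    med = [median(b[p] for b in profile) for p in range(len(profile[0]))]
    total = sum(med)
    return [v / total for v in med]

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

true_ballot = [0.57, 0.24, 0.19]
others = [[0.39, 0.48, 0.13], [0.44, 0.09, 0.48]]
honest = normalized_median([true_ballot] + others)         # ~[0.506, 0.276, 0.218]
strategic = normalized_median([[0.6, 0.2, 0.2]] + others)  # ~[0.524, 0.238, 0.238]
assert l1(true_ballot, strategic) < l1(true_ballot, honest)
```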
Theorem 16. The Midpoint Rule is not strategyproof.
Example 13. Three voters vote [0.9, 0.1], [0.4, 0.6], and [0.2, 0.8]. The Midpoint Rule outputs [0.4, 0.6], with voter 1's ℓ_1 distance at 1. By misreporting [0.5, 0.5], they reduce their distance to 0.8.
Theorem 17. The Independent Markets Rule is strategyproof.
Proof. Follows from [16, Theorem 4.8] for m > 2.
Theorem 18. The Majoritarian Phantom Rule is strategyproof.
Proof. Follows from [16, Theorem 4.8] for m > 2.
Theorem 19. The Quadratic Rule is not strategyproof.
Example 14. Three voters vote [0.7, 0.3], [0.4, 0.6], and [0.3, 0.7]. The Quadratic Rule outputs [0.483, 0.517], with voter 1's ℓ_1 distance at 0.434. Misreporting [0.8, 0.2] changes the outcome to [0.502, 0.498], lowering their distance to 0.396.
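A sketch of this manipulation, assuming the Quadratic Rule normalizes each voter's square-rooted ballot and then averages across voters (the helper names are ours):

```python
from math import sqrt

def quadratic_rule(profile):
    # Normalize each ballot's square roots, then take the coordinate-wise mean.
    normed = []
    for ballot in profile:
        roots = [sqrt(v) for v in ballot]
        s = sum(roots)
        normed.append([r / s for r in roots])
    n = len(profile)
    return [sum(b[p] for b in normed) / n for p in range(len(profile[0]))]

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

true_ballot = [0.7, 0.3]
others = [[0.4, 0.6], [0.3, 0.7]]
honest = quadratic_rule([true_ballot] + others)      # ~[0.483, 0.517]
strategic = quadratic_rule([[0.8, 0.2]] + others)
assert l1(true_ballot, strategic) < l1(true_ballot, honest)
```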
Theorem 20. The R3 Quorum Median Rule is not strategyproof.
Example 15. Three voters vote [0.2, 0.8], [0.1, 0.9], and [0.5, 0.5] with a quorum of 0.3 tokens and 2 voters per project. The R3 Quorum Median Rule outputs [0, 1], giving voter 1 an ℓ_1 distance of 0.4. If they misreport [0,1], the new allocation [0.25, 0.75] reduces their distance to 0.1.
Corollary 6. The R4 Capped Median Rule is not strategyproof, since it includes a quorum K_2 and is median-based.
A.7 Maximal Social Welfare (Minimal L1)
An allocation satisfies maximal social welfare (SWF) if no alternative allocation results in a smaller total ℓ_1 distance between the final allocation and voter ballots.
Theorem 21. The Mean Rule does not satisfy maximal SWF.
Example 16. Three voters vote (1, 0), (1, 0), and (0.2, 0.8). The Mean Rule outputs (0.733, 0.267) with a total ℓ_1 distance of 2.133. An alternative allocation (1, 0) reduces the total ℓ_1 distance to 1.6, proving that the Mean Rule is suboptimal.
Remark 1. The Mean Rule satisfies maximal SWF if measured by minimizing ℓ_2 distance [5].
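The welfare gap in Example 16 can be reproduced numerically (a sketch; the helper names are ours):

```python
def total_l1(alloc, profile):
    # Utilitarian social cost: summed l1 distance from the allocation to every ballot.
    return sum(sum(abs(a - b) for a, b in zip(alloc, ballot)) for ballot in profile)

profile = [(1, 0), (1, 0), (0.2, 0.8)]
mean_outcome = [sum(b[p] for b in profile) / 3 for p in range(2)]  # ~(0.733, 0.267)
assert abs(total_l1(mean_outcome, profile) - 2.133) < 1e-3
assert abs(total_l1((1, 0), profile) - 1.6) < 1e-9  # the alternative does strictly better
```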
Theorem 22. The Normalized Median Rule does not satisfy maximal SWF.
Example 17. Three voters vote (0.1, 0.9), (0.4, 0.2), and (0.6, 0.1). The Normalized Median Rule outputs (0.667, 0.333) with a total ℓ_1 distance of 1.834. The alternative allocation (0.5, 0.5) reduces the distance to 1.7, showing suboptimality.
Remark 2. The Median Rule satisfies maximal SWF if it directly minimizes ℓ_1 distance without normalization [5].
Theorem 23. The Midpoint Rule does not satisfy maximal SWF.
Example 18. Four voters vote over four projects: (0.8, 0.1, 0.05, 0.05), (0.1, 0.8, 0.05, 0.05), (0.05, 0.05, 0.8, 0.1), and (0.05, 0.05, 0.1, 0.8). The Midpoint Rule selects any voter's ballot, leading to a total ℓ_1 distance of 4.6. An alternative allocation (0.25, 0.25, 0.25, 0.25) reduces it to 4.4, proving suboptimality.
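A sketch of this comparison, assuming the Midpoint Rule selects the submitted ballot minimizing total ℓ_1 distance to all ballots (the helper names are ours):

```python
def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def midpoint_rule(profile):
    # Pick the submitted ballot whose total l1 distance to all ballots is smallest.
    return min(profile, key=lambda b: sum(l1(b, other) for other in profile))

profile = [(0.8, 0.1, 0.05, 0.05), (0.1, 0.8, 0.05, 0.05),
           (0.05, 0.05, 0.8, 0.1), (0.05, 0.05, 0.1, 0.8)]
chosen = midpoint_rule(profile)
cost = lambda a: sum(l1(a, b) for b in profile)
# The uniform split, which no voter submitted, has lower total cost: 4.4 < 4.6.
assert cost((0.25, 0.25, 0.25, 0.25)) < cost(chosen)
```

By symmetry every ballot here has the same total cost, so any of them may be selected without affecting the conclusion.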
Theorem 24. The Independent Markets algorithm does not satisfy maximal SWF, following [16, Theorem 6.1].
Theorem 25. The Majoritarian Phantoms algorithm satisfies maximal SWF, following [16, Theorem 6.1].
Theorem 26. The Quadratic Voting Rule does not satisfy maximal SWF.
Example 19. Three voters vote (0.01, 0.99), (0.01, 0.99), and (0.99, 0.01). Quadratic Voting outputs (0.364, 0.636) with a total ℓ_1 distance of 2.668. The alternative allocation (0.2, 0.8) reduces the distance to 2.34, proving suboptimality.
Corollary 7. The R3 Quorum Median Rule does not satisfy maximal SWF, as median-based rules require normalization.
Corollary 8. The R4 Capped Median Rule does not satisfy maximal SWF, as it relies on median-based normalization.
B Missing Figures
Fig.2. Cost of adding and deleting voters as a function of the desired funding increase.
Fig.3. Comparison of Gini Index, L1 Utilitarian Social Welfare, and L1 Egalitarian Social Welfare across
voting rules.
Fig.4. Robustness of voting rules under manipulation, represented by L1 distance between original and
manipulated allocations.
Fig.5. Bribery costs for different voting rules as a function of desired increase in funds for a project.
Fig.6. The maximal new project allocation caused by one voter skewing the outcome, divided by the total number of tokens to be funded, for each voting rule.
Fig.7. Alignment with ground truth for different voting rules, measured by distance to ground truth.
Fig.8. The maximal difference in token allocation caused by one voter skewing the outcome, divided by
the total number of tokens to be funded for each voting rule.
Fig.9. Gini Index, Utilitarian Social Welfare (ℓ_1 distance), and Egalitarian Social Welfare (maximum ℓ_1 distance) for each voting rule on round 4 data.