Strengths and Weaknesses of Conviction Voting and Other Mechanisms

My formula here is horribly wrong and overcomplicated.

@rex kindly pointed out to me that the SME signal boost coefficient is calculated for the voter, not for the project.

The following should be correct:

import numpy as np

def quadratic_fund(project: str) -> float:
    allocation = np.square(np.sum([np.sqrt(agent.coefficient * agent.votes.get(project)) for agent in system.agents]))
    return allocation
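
A self-contained toy check of how the per-voter coefficient plays out (the Agent class, names, and numbers below are hypothetical, just to illustrate that the SME boost multiplies each voter's own votes):

import numpy as np

# Hypothetical agents: same votes, different SME signal boost coefficients.
class Agent:
    def __init__(self, coefficient, votes):
        self.coefficient = coefficient  # per-voter SME boost
        self.votes = votes              # dict: project -> vote weight

agents = [
    Agent(coefficient=1.0, votes={'scoobysnacks': 4}),  # regular voter
    Agent(coefficient=2.0, votes={'scoobysnacks': 4}),  # SME-boosted voter
]

def quadratic_fund(project: str) -> float:
    return np.square(np.sum([np.sqrt(a.coefficient * a.votes.get(project, 0)) for a in agents]))

print(quadratic_fund('scoobysnacks'))  # (sqrt(4) + sqrt(8))**2 is about 23.3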
1 Like

The TEC is funding QF and other funding mechanism analysis here: GitHub - CommonsBuild/alloha: Processing for the TE grants round based on the Gitcoin Allo protocol

I've made a dataframe implementation based off of @octopus's code.

This software is super alpha, version 0.1.0. Expect the structure to change.

The purpose is to facilitate the TEC QF SME signal processing for their QF rounds, and also to serve as a research hub for funding mechanisms in token engineering and token sciences.

1 Like

Given a dataframe whose rows are donation events, with columns projectId and amountUSD, here is a vectorized version of the QF algorithm using dataframes:

# Compute Quadratic Funding:
qf = donations_df.groupby('projectId')['amountUSD'].apply(
    lambda x: np.square(np.sum(np.sqrt(x)))
)

# Scale to be proportional to matching pool
qf_allocations = (qf / qf.sum()) * matching_pool
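
For anyone following along, here is a toy run of the same one-liner (the donation events and matching pool below are made up):

import numpy as np
import pandas as pd

# Hypothetical donation events: three donations across two projects.
donations_df = pd.DataFrame({
    'projectId': ['A', 'A', 'B'],
    'amountUSD': [4.0, 16.0, 25.0],
})
matching_pool = 100

qf = donations_df.groupby('projectId')['amountUSD'].apply(
    lambda x: np.square(np.sum(np.sqrt(x)))
)
qf_allocations = (qf / qf.sum()) * matching_pool
print(qf_allocations)
# A: (2 + 4)**2 = 36 -> about 59 of the 100 pool
# B: 5**2 = 25       -> about 41 of the 100 pool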

That's phenomenal, @linuxiscool!

A note to any students or non-programmers: the last time we checked,

np speed >>>> list speed >> for-loop speed

I should do the experiments again so I don't risk spreading misinformation... computer things change quickly.

If we are using QF with pairwise-bonding coefficient penalties (which Gitcoin was using when we got there), it's only slightly harder to take that into account.
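
For reference, here is a rough sketch of one way that could be taken into account, assuming the pairwise-bounded formulation that has circulated in the QF/Gitcoin literature (match per project as a sum over donor pairs, attenuated by how strongly each pair co-donates across projects). The voter column and the bonding parameter M are assumptions here, not the exact Gitcoin implementation:

import numpy as np
import pandas as pd

# Sketch only: pairwise-bounded QF with a bonding penalty.
# donations_df is assumed to have columns: voter, projectId, amountUSD.
M = 25.0  # bonding coefficient: smaller M penalizes correlated donors more

# contributions[i, p] = total donated by voter i to project p
contributions = donations_df.pivot_table(
    index='voter', columns='projectId', values='amountUSD',
    aggfunc='sum', fill_value=0.0
)
sqrt_c = np.sqrt(contributions.to_numpy())  # shape: (n_voters, n_projects)

# k[i, j] = sum over projects of sqrt(c_ip * c_jp): how "bonded" a pair of voters is
k = sqrt_c @ sqrt_c.T
attenuation = M / (M + k)                   # heavily bonded pairs get discounted

matches = {}
for p, project in enumerate(contributions.columns):
    s = sqrt_c[:, p]
    pair_terms = np.outer(s, s) * attenuation  # sqrt(c_ip * c_jp) per pair, penalized
    # sum over unordered pairs i < j: drop the diagonal and halve the double count
    matches[project] = (pair_terms.sum() - np.trace(pair_terms)) / 2.0
print(matches)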

1 Like

Whoo, this is so fun. Thanks for replying, :octopus:. I really appreciate your comments.

I took the time to write some thoughts:

Vectorization :tada:

YEAH, that's a great point about speed! Numpy is vectorized! Which means you get contiguous data structures (arrays) in memory!!! Because plain Python lists are just arrays of pointers to objects scattered around the heap LMAO!!! So contiguous-memory data structures in Python are not really a thing by default.

Numpy grants us contiguous arrays and vectorized functions. A vectorized function applied to an array tends to be like... orders of magnitude faster than, say, iterating over a list (I believe... pretty bold claim I know... let's see the experimentation... Banking on anecdotal experience here... Embodiment practice... But also verify).
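
I still owe the actual numbers, but a minimal sketch of the experiment could look like this (toy data; timings will vary by machine):

import timeit
import numpy as np

# Sum of square roots over a million fake donations, three ways.
donations = np.random.rand(1_000_000) * 100
donations_list = donations.tolist()

def with_for_loop():
    total = 0.0
    for d in donations_list:
        total += d ** 0.5
    return total ** 2

def with_list_comp():
    return sum([d ** 0.5 for d in donations_list]) ** 2

def with_numpy():
    return np.square(np.sum(np.sqrt(donations)))

for fn in (with_for_loop, with_list_comp, with_numpy):
    print(fn.__name__, timeit.timeit(fn, number=10))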

Soo... Vectorization is really fun.

DataFrame Oriented Software Engineering

For the reasons above, I really enjoy the art of producing DataFrame one-liners. I enjoy seeing the loading and transforming of data as dataframes or arrays. I consider them to have the following qualities:

  1. expressiveness
  2. clarity
  3. aesthetics
  4. speed/performance
  5. interpretability
  6. standardization

I often think about a prospective future of software engineering practice that uses pandas dataframes as a primary in-memory data structure. Combining this with in-memory caches like Redis is very powerful. It's like a data-science-first software engineering practice. Hmmm... that gets me thinking, is anyone doing on-chain in-memory? Imagine like an IPFS Redis on chain.

I plan on returning with some experimentation results... :bat:

1 Like

To start towards a speed test, I made a minimum viable example of the object-oriented approach, based off of the code provided by @octopus at the top of the thread.

import numpy as np
from collections import defaultdict

class System:
    agents = []
    
    def __init__(self, agents):
        self.agents = agents


class DefaultDictWithGet(defaultdict):
    """
    This is a funny little class that is a defaultdict that works with the `.get` function that dicts have. Made with help of chatgpt.
    """
    def get(self, key, default=None):
        # If the key is not present, return the default factory value
        if key not in self:
            return self.default_factory()
        return super().get(key, default)


class Agent:
    votes = {}
    name = ''
    
    def __init__(self, name, votes):
        self.name = name
        self.votes = DefaultDictWithGet(int, votes)  # int() returns 0 by default

    def get(self, project: str) -> int:
        return self.votes[project]

agents = [
    Agent(name='Shawn', votes={'happysaucepublicgoods': 5, 'TECProWrestlingLeague': 4}),
    Agent(name='Kai', votes={'scoobysnacks': 10}),
    Agent(name='Octopus', votes={'TECProWrestlingLeague': 9}),
]

system = System(agents)

def quadratic_fund(project: str) -> float:
    allocation = np.square(np.sum([np.sqrt(agent.votes.get(project)) for agent in system.agents]))
    return allocation

[(project, quadratic_fund(project)) for project in ['happysaucepublicgoods', 'scoobysnacks', 'TECProWrestlingLeague']]

The above code yields:

[('happysaucepublicgoods', 5.000000000000001),
 ('scoobysnacks', 10.000000000000002),
 ('TECProWrestlingLeague', 25.0)]

Don't mind the floating-point precision errors.

I really love this idea of focusing on dataframes because it's a relatively easy pathway for getting non-programming users to run experiments on their own parameters, i.e. they edit a predefined spreadsheet, which can then be collected via a utility script, made into a DataFrame, run through whatever internal logic, and displayed as a predefined output.
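
A minimal sketch of that pathway (the file name, column names, and pool size are placeholders, not an agreed format):

import numpy as np
import pandas as pd

# Hypothetical workflow: a non-programmer edits votes.csv in a spreadsheet,
# a utility script collects it, and the QF logic runs on the resulting DataFrame.
votes_df = pd.read_csv('votes.csv')   # expected columns: projectId, amountUSD
matching_pool = 1000

qf = votes_df.groupby('projectId')['amountUSD'].apply(
    lambda x: np.square(np.sum(np.sqrt(x)))
)
allocations = (qf / qf.sum()) * matching_pool
allocations.to_csv('allocations.csv')  # a predefined output the community can read back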

Is "uploading a csv" more intuitive than clicking through a nice website GUI? No, and that's the point. It offers a low-pain but not painless entry point into how the data science structuring actually works. I think of a knowledge commons like a community garden: it is easier to just buy the fruits and flowers, but the sweat and dirt from growing it yourself is desirable...

Thank you so much for staying on this @ygg_anderson

1 Like

Loving this thread, and I've been meaning to reply with a few points for a while now!

I believe anything that has a charge/discharge dynamic has the possibility to exhibit low-pass filter effects. Often we use the capacitor/battery analogy in electrical systems, but that could also be mass gain/loss in biological systems, or water stored in a dam (or even a bathtub) in a physical system. LPF effects can appear any time there are stocks & flows, where stocks can build up and simulate flows even when there aren't any (e.g. the tap is off in your bathtub, yet outflow stays consistent for a time because your tub is full with a stock of water). Hope some of those analogies help!

I hope someone corrects me if I'm wrong, but I think QF (on its own) is more of a static optimization criterion than a low-pass filter, since it doesn't contain a temporal component. CV has LPF properties since individuals' 'conviction' or preference grows and decays over time according to the estimator function.
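
To make the temporal part concrete, here is a minimal sketch of that kind of estimator, written as a simple discounted-sum update (the decay constant and staking pattern are made up, not the TEC's actual CV parameters):

import numpy as np

# Conviction as a discounted integral, i.e. an exponential low-pass filter:
# conviction_t = alpha * conviction_{t-1} + staked_tokens_t, with 0 < alpha < 1.
alpha = 0.9
staked = np.array([10] * 20 + [0] * 20)  # tokens staked for 20 steps, then withdrawn

conviction = np.zeros(len(staked))
for t in range(1, len(staked)):
    conviction[t] = alpha * conviction[t - 1] + staked[t]

# Conviction charges toward 10 / (1 - alpha) = 100 while the stake is held,
# then decays geometrically after it is withdrawn -- the LPF behavior QF alone lacks.
print(conviction.round(1))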

However, as has been discussed in a few DMs & research calls, I'd be curious if you could combine QF and CV into Quadratic Conviction Funding (QCF?). Much more thought probably needs to go into the concept, but the initial discussion came about as an attempt to simplify the UX of TEC Gitcoin Grants: what if, rather than doing a discrete round of donations every month (which is a huge time & attention cost on the donor), each community member had a dashboard of their preferred WG projects & a sliding scale as to what % of their total monthly support goes to each project? The TEC could then use that as a continuous signal from the community for (quadratically) streaming funds from the TEC treasury to individual "investable workstreams" (i.e. subDAO bonding curves) on an ongoing basis. Perhaps the quadratic calculations would still have to happen at monthly intervals, but at least the time & attention cost on donors could be cut down by a system that "remembers" their preferences and simply prompts them each month if they would like to update their previous rounds' preferences.
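
Purely as an illustration of that idea (the members, workstreams, percentages, and monthly amounts below are invented), the monthly calculation could reuse the same dataframe one-liner from earlier in the thread:

import numpy as np
import pandas as pd

# Each member keeps a standing % split across workstreams; the system "remembers" it.
preferences = pd.DataFrame({
    'member':     ['ali', 'ali', 'bo', 'cam'],
    'workstream': ['comms', 'research', 'comms', 'research'],
    'pct':        [0.25, 0.75, 1.00, 1.00],
})
monthly_support_per_member = 50    # hypothetical support budget per member
monthly_matching_pool = 1000       # hypothetical treasury stream for the month

preferences['amount'] = preferences['pct'] * monthly_support_per_member
qf = preferences.groupby('workstream')['amount'].apply(
    lambda x: np.square(np.sum(np.sqrt(x)))
)
monthly_stream = (qf / qf.sum()) * monthly_matching_pool
print(monthly_stream)  # quadratically weighted share streamed to each workstream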

I couldn't agree more with the need for that assessment! I've had a few discussions with Luke Duncan about some of the data analysis potential of early CV experiments, and would love to see some of that empirical discussion & post-mortem analysis funded at some point. I've been collating some notes on a CV research & modeling effort that I would love to dive further into, if there is further interest in this direction.

A short synthesis of the problems with CV, as experienced in the TEC:

  • Requires competitive proposal environment
  • Requires consistent pool of funds for ongoing dispersal
  • Passed proposals greatly deflated the "abstain" pool relative to active proposals, leading to conviction growing faster than normal after large proposals passed

Some mechanisms that could address those problems (in no particular order):

  • Create ā€œexpiryā€ window for proposals in CV, to ensure proposals canā€™t just sit there forever gaining momentum
  • Utilize existing ā€œsocial consensus vetoā€ mechanisms like Celeste for proposals that are not in line with community needs
  • Add in ā€œnegativeā€ conviction to counterbalance ā€œup-onlyā€ conviction
  • Adjust the ā€œabstainā€ pool dynamics to prevent large swings in ā€œconviction inertiaā€ that allowed some proposals to pass faster than expected
  • Adjust estimator function of conviction growth to increase more slowly
  • Implement streaming proposals/Osmotic Funding (basically Conviction Voting + Superfluid) for individual contributors who are consistent in their efforts in the TEC, like 1Hive has been experimenting with
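
A toy sketch (not a spec) of how a couple of those ideas could slot into the same kind of discounted-sum update, with made-up parameters:

import numpy as np

# Toy model: conviction with a "negative" counterweight and an expiry window.
alpha = 0.9
expiry = 60                                    # proposal expires after 60 time steps
stake_for     = np.array([10] * 80)            # hypothetical staking patterns
stake_against = np.array([0] * 40 + [6] * 40)  # negative conviction arrives at t = 40

conviction = 0.0
history = []
for t in range(80):
    if t >= expiry:
        conviction = 0.0                       # expired proposals stop accruing momentum
    else:
        conviction = alpha * conviction + stake_for[t] - stake_against[t]
    history.append(conviction)
print(np.round(history, 1))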

Thanks for starting up this great thread @octopus, and for all the great contributions! I'm eager to see more analysis & discussion of CV (& all its latest evolutions)!

3 Likes

Hi @JeffEmmett, I am really glad that you are here for the discussion; I love hearing your ideas.

I'm going to try speaking engineering, even though it's not my first language. I'm mainly going to be asking questions; I really want to understand this.

I learned what I know about filter theory in the context of digital signal processing (specifically acoustics), so my mental definition of a filter is something like "a filter is a map from sequences to sequences"; the analogies are helpful.

Do you have a sense of when you are thinking "low-pass filter" vs. just "delay line"? ...(recognizing that a low-pass filter is often implemented as a weighted sum of delayed copies of a signal, but I think it's possible to have a distinction between the design approaches even when they lead to the same mechanism).

And then I struggle even more with comparing low-pass filters to capacitors, reading something like "A capacitor can be used as part of a high pass, low pass, or band pass filter, depending on how it's connected to other parts" (from Is a capacitor a high-pass filter or a band-pass filter? - Electrical Engineering Stack Exchange). Do you have a good reference on these building blocks that isn't overspecified to a particular branch of engineering?

The trouble I personally have with this analogy/concept for Conviction Voting in the TEC (especially as it relates to the retrospective) is that it seems the mechanism worked exactly as intended, and yet the overall system did not. I'm still not sure that I've been able to understand this dichotomy to my own satisfaction.

How can a filter have issues because it didn't have any "noise" to filter out? Is it something like, "We only wanted to let low signals through, and all we had was low signals"? It seems like this is a case of the "Make the System Follow Its Own Rules" strategy from Alinsky's Rules for Radicals.

Incidentally, I do see how QF could work as a kind of low-pass filter where @gideonro discusses it: think of the incoming donations signal as a histogram, where donation amounts are the "frequencies" and the height of the bars is the gain. Even though QF doesn't work as a linear operator, the dampening of single-impulse signals is still useful here as a measure of its impact. The key is to see the signals coming into QF as living in the frequency domain, rather than the time domain.
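
A quick worked example of that dampening, with toy numbers:

import numpy as np

# One large "impulse" vs. the same total spread across many small donors.
single_impulse = [100.0]        # one whale donating 100
many_small     = [1.0] * 100    # 100 donors giving 1 each

qf_match = lambda donations: np.square(np.sum(np.sqrt(donations)))
print(qf_match(single_impulse))  # 100.0   -- the lone impulse is barely amplified
print(qf_match(many_small))      # 10000.0 -- broad support is amplified 100x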

I really appreciate your "short synthesis" summary, and would like to work towards quantifying these effects mathematically.

My main hope here is to gain a deeper understanding not only of the mechanism, but also of the design lessons that can be carried forward to other mechanisms.

1 Like

Loving the post @JeffEmmett, thanks for posting. I'm only halfway through reading because I'm learning a lot as I go.

You mention modeling a gain system as a bathtub, the charge/discharge dynamic. This reminds me that @octopus and I collaborated on the haunted bathtub param model at LTF.

The notebook exposes a bathtub model with the following parameters:

import param as pm
class HauntedBathTub(pm.Parameterized):
    G = pm.Number(0.01, bounds=(0,1), step=0.001, doc="(GAIN) Constant rate of water flow.")
    L = pm.Number(0.02, bounds=(0,1), step=0.001, doc="(LOSS) Constant rate of water drain.")
    water_is_on = pm.Boolean(True, doc="Whether the water is ON or not.")
    drain_is_open = pm.Boolean(True, doc="Whether the drain is OPEN or not.")
    tub_water_level = pm.Number(0.5, bounds=(0,1), step=0.01, doc="The current water level in the tub.")
    increment = pm.Action(lambda self: self._increment())

Haunted Bathtub Notebook

The notebook also demonstrates the interactive model in panel and renders the bathtub animation using holoviews.
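
For readers who don't open the notebook: the increment action calls an _increment method defined there. Roughly, the charge/discharge step looks like this (a simplified sketch, not the notebook's exact code):

    def _increment(self):
        # Simplified sketch: charge from the tap, discharge through the drain.
        level = self.tub_water_level
        if self.water_is_on:
            level += self.G        # constant inflow while the water is on
        if self.drain_is_open:
            level -= self.L        # constant outflow while the drain is open
        self.tub_water_level = min(max(level, 0), 1)  # keep within the tub's bounds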

I still owe this thread an OO vs. DF speed-test performance experiment for the QF calc. Unless someone beats me to it.

1 Like

Thanks so much for bringing this up, @linuxiscool. That session was a lot of fun.

This was intended to be a hands-on visualizer for Chapter 2 of "Thinking in Systems" by Meadows, with different types of inflow and outflow into the subtle stock (the Bathtub). It was "Haunted" because the inflow and outflow were either "on" or "off" at a constant rate and were controlled by outside factors, rather than us.

1 Like

I've been thinking about Conviction Voting again now that Gitcoin is starting to play with it. What I'm now seeing is that the decision to use Effective Supply (the amount of tokens that are actively voting on proposals in Conviction Voting) as the denominator in determining a given project's conviction (the numerator is the number of tokens staked on a given proposal) is behind some of CV's unexpected behaviors. In particular, it helps explain the strange phenomenon we experienced where projects with relatively low conviction would suddenly and quite unexpectedly lurch past the conviction threshold.

To illustrate, let's assume two scenarios: one with high levels of voter engagement and one with relatively low levels of engagement. In both cases, we'll assume that a particular project has the same number of tokens staked on it (the purple tokens, or 9 in both cases). Yes, I'm simplifying a bit of how CV actually works in order to hone in on the disproportionate role that I believe the effective supply plays.


In this first case, there are lots of people voting on other projects (pink tokens); these are the allocated tokens, which constitute the effective supply (90 tokens). The "conviction" ratio in this case is 10% (9/90). If the CV ratio required to pass is 15%, the project will not pass.


In the second case, where there is much less voting, allocated tokens drop and the effective supply falls to 45, which makes the conviction ratio jump to 20% (9/45). Now, with that same required CV ratio of 15%, the project does pass.

So, how does this explain the phenomenon of projects suddenly passing? At some point, I think we must have realized that the low level of voting we were experiencing (i.e. a low effective supply) was making it easier than expected for projects to pass. So, we deployed a creative hack of introducing an "abstain" project that token holders could use to stake their tokens and thereby increase the size of the effective supply (more allocated tokens). This boosted the denominator, which had the effect of requiring more conviction (a bigger numerator).

The surprise we ran into was when voters temporarily took tokens off of the abstain project to vote on a project they cared about, but then forgot to put them back. The result was that the effective supply would drop dramatically, sharply decreasing the amount of required conviction. And, boom, a project with relatively low conviction that seemed unlikely to pass for a long time would suddenly zoom past the finish line and be funded.
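
The arithmetic behind that lurch, using the toy numbers from the scenarios above:

# Toy numbers from the two scenarios above.
staked_on_project = 9
threshold = 0.15  # required conviction ratio

for effective_supply in (90, 45):  # healthy "abstain" pool vs. tokens pulled off of it
    ratio = staked_on_project / effective_supply
    print(effective_supply, round(ratio, 2), 'passes' if ratio > threshold else 'does not pass')
# 90 -> 0.1 does not pass; 45 -> 0.2 passes.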

@JeffEmmett, I'm pinging you because I know you're interested in this and am wondering whether you have any insight around the decision to use effective supply, rather than, say, the total supply. I seem to remember Griff saying at one point that part of the design behind CV was to stimulate treasury expenditures, and I'm wondering if that was part of the rationale? Also, what do you think might be the impact of using total supply instead?

@octopus, I'm pinging you because you were the one who got us digging into this again.

3 Likes

@gideonro I really appreciate this level of analysis and visualization (would love to know the tool/process you used to design the picture!).

It sounds like the introduction of abstain created an inadvertent bypass to the low-pass filter mechanism, since the value underwent a large change immediately. Perhaps using a ramp/delay-line approach to address this (the abstain stake is not immediately dumped back in) would work?

@JeffEmmett Would love to explore specifics of this more with you. My feeling is that for "[X] Conviction Voting" to work as intended (where [X] is e.g. Quadratic or some other mechanism), we will need to understand the "Conviction" part more clearly -- and this is the entire point of this discussion. :slightly_smiling_face:

It was pointed out that the design space may be different when optimizing for dynamic ongoing processes, rather than for static ones. Probably so -- could fixing a time window and using the Fourier Transform help translate between the two? From my self-study on filter design, it seems like this is a reasonable approach.

@mzargham It appears I inadvertently misunderstood your role in the design process here, so my apologies for that. I don't know much about the protocol for such things, in terms of who is the "designer of record" for authorship purposes, or e.g. who gives final approval in the role of senior engineer, etc.

Looking forward to continuing this discussion in deep and enjoyable time, and continuing to develop both knowledge and meta-knowledge together in the TokenEngineering field.

@mZ I inadvertently assumed you were one of the authors of the mechanism.

2 Likes

Thanks, @octopus.

Just good, old Google Slides.

As for the abstain option, I still see that as a workaround hack. What I'm wondering is what might happen to the underlying dynamics were the denominator to be changed from effective supply to total supply.

In the above examples, the conviction would remain 5% in both cases (9/180).

Using the sensor metaphor, a shift from effective supply to total supply would make the sensor less sensitive, as it would no longer take voting activity into account, thus dampening the volatility introduced by fluctuating levels of voter engagement. In a smaller community like the TEC, that probably makes sense, whereas a large protocol might have enough proposal flow that the variability in voter engagement is easier to deal with.

CV is such a powerful mechanism that I think it really is worth understanding the causes of the unexpected behavior we experienced and figuring out solutions to it that go deeper than the "abstain hack" that the TEC deployed. I get the sense that using effective supply was a parameter decision made by the TEC rather than an intrinsic feature of CV. @mzargham and @JeffEmmett probably have better insight on that question.

1 Like

It's probably worth noting that design can be evolutionary ~ at this stage many people have contributed.

I am happy to claim the introduction of "low pass filtering" via a discounted integral technique. I am not sure which links are the first prototypes for voting, but it's something along these lines -- the commit date is 2019, but it's archived work from 2018.

That work was directly informed by a paper I had written in 2014: Discounted Integral Priority Routing in Data Networks.

but the application to voting bubbled out from lots of conversations with other folks in the web3 scene as described in A brief history of conviction voting.

I tend to cite conviction voting back to this specific proposal I wrote on Sensor Networks and Social Choice (Zargham, 2018), which presents some of the core principles; it does not yet use the term "conviction", but it provides clear backward references to the first principles the approach was built upon and suggests directions for future exploration.

Notably, Jeff brought a lot of attention to the mechanism with Conviction Voting: A Novel Continuous Decision Making Alternative to Governance, for which I provided some of my diagrams and simulation plots and answered clarifying questions; we also did a podcast or two talking about it, but Jeff and others have really championed Conviction Voting since.

Personally, the core principle of applying signal processing techniques and introducing time into mechanisms in market and institutional design is a major cause of mine. I've always been quite leery of copy-pasta "this is the best mechanism ever" narratives (see the early quadratic voting hype cycles) and didn't really want to see conviction voting have that kind of adoption. I am particularly happy to see people grabbing ideas from it, reworking it, contextualizing it, extending it, and broadly fitting it to their specific needs.

To my knowledge the most complete documentation of the Conviction Voting algorithm is in this repo that @JeffEmmett and I compiled years ago with other collaborators from BlockScience and 1Hive under a small grant from Aragon.

Most of the further experimentation and iteration on conviction voting implementations, including variations, has been done by others at Commons Stack, the Token Engineering Commons, and elsewhere. I am happy to see more people taking it up and am excited to see where that goes.

3 Likes