Outlining the Rewards System Process.. v2!

Rewards System Process

The purpose of this system is to fairly record, reward, and analyze the work done by contributors to the TEC, providing decentralized, real-time acknowledgement of the action happening in the Commons. The system is designed to minimize the mental overhead and time commitment of quantification, and to average out the inherent subjectivity of praise by using a decentralized oracle of quantifiers. Below is the current proposed process by which we will achieve these goals.

Step 1 - Collect Data

The most important step is to collect the data related to the work done by TEC Contributors. This is broken down into sub-categories and further into specific input streams. Data is collected continuously and submitted for Quantification on a bi-weekly schedule. The two sub-categories are Automatically Quantified and Manually Quantified. The corresponding inputs for each category are as follows:

  1. Auto Quantified
    a. Github
    b. Discourse (Forum)
    c. Discord Meeting Attendance (In Development)
    d. Twitter (In Development)
  2. Manually Quantified
    a. Discord Praise
    b. Telegram Praise

It’s important to note that, in general, auto quantification relates to objective contributions and manual quantification to subjective contributions. It’s possible that some actions in Github and Discourse (Forum), for example, fall under the Praise process if they have a subjective character (i.e. we praise the content of the contribution rather than the factual action of making a contribution).

Step 2 - Quantification

We aggregate all input streams and run each through their distinct quantification paths.

Automatic Quantification

For Automatic Quantification we will be using a SourceCred instance and bot(s) that will track and quantify specified actions. The Reward Board will have the ability to establish, and propose modifications to, the actions we value and how much they are relatively worth. These “weight” configurations will be shown in the output of Step 7 for every quant.

For each input we outline the actions that can be automatically tracked and quantified. The four main sources for automatic quantification are Github, Discourse (Forum), Discord Meeting Attendance, and Twitter. A comprehensive list of the parameters for each source can be found in the Appendix at the bottom of this document.

Manual Quantification

Praise will be recorded and saved to the Praise backend by TEC-tailored bots in both Telegram and Discord. The amassed praise will have to be quantified manually each cycle. Manual Quantification will be done by members of the community: we have a pool of Quantifiers who have opted in to the responsibility of quantifying praise, and we will draft a group of randomly selected Quantifiers from this pool to complete the task. Quantification Periods will normally run bi-weekly. Quantifiers will also be rewarded for the work they do in manual quantification, to incentivize participation in the system.

The UI and user flow for Quantifiers that will be built will adhere to a few important points:

  • Quantifiers will have five (5) days to asynchronously quantify their assigned praise

  • Praise will be grouped by Praise Receiver inside the UI for quantifiers. This is important to see any duplicate praise, or praise from multiple sources for the same contribution.

  • Quantifiers will be assigned overlapping subsets of praise to quantify, not the entire praise for the whole cycle. This means more than one quantifier will evaluate each praise; the average will be taken and passed on as the final score.

  • A Fibonacci sequence will be used as the incremental scale for quantifying. To the Quantifier this will show up as simply a slider that returns a numerical value: the higher the score, the more valuable the contribution. Values range from the least relevant (0) to the most impactful (144), according to this sequence:

    • 0 → 1 → 1 → 2 → 3 → 5 → 8 → 13 → 21 → 34 → 55 → 89 → 144
  • Quantifiers will reference the Rules of Praise and Quantification and Quantifier Onboarding posts which will help them navigate the quantification process.
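
The scale and averaging described in the bullets above can be sketched as follows. The function names and the plain-average rule are illustrative assumptions, not the actual Praise implementation:

```python
# Sketch of the quantification scale and score averaging described above.
# slider_to_score and final_score are hypothetical helper names.

FIB_SCALE = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

def slider_to_score(position: int) -> int:
    """Map a slider position (0..12) to its Fibonacci score."""
    if not 0 <= position < len(FIB_SCALE):
        raise ValueError("slider position out of range")
    return FIB_SCALE[position]

def final_score(quantifier_scores: list[int]) -> float:
    """Average the overlapping quantifiers' scores for one praise item."""
    return sum(quantifier_scores) / len(quantifier_scores)

# Three quantifiers rate the same praise item at slider positions 5, 6, 7:
scores = [slider_to_score(p) for p in (5, 6, 7)]   # -> [5, 8, 13]
print(final_score(scores))                          # average of 5, 8, 13
```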

Both streams will merge once they have been quantified in their respective manners and passed onto the next step.

Step 3 - Analysis

At the end of the 5-day quantification window, a call will be held for the Quantifiers who wish to review the entire praise data, to collaboratively analyze it and suggest future considerations for other quantifiers. We’ll also be using the RAD dashboard (GitHub - CommonsBuild/tec-rewards) to analyze and cross-reference data. These review sessions will show us whether there are any gaps in education around quantifying or dishing praise, and will help identify any problematic or errant quantifiers, bringing us cultural insights and surfacing problems and strengths in the early stages. You can find the agenda and notes from the review sessions here.

Step 4 - Calculate Rewards

Once we have the combined praise data from all streams, we’ll use the associated quantification values to calculate the actual token amounts that will be distributed to each eligible recipient. The initial distribution relationship between SourceCred and Praise is yet to be determined, as is the amount of funds distributed for every round. Keep a lookout for a forthcoming post from the Reward Board which will detail all relevant distribution parameters.
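
Since the distribution parameters are still to be determined, here is only a minimal sketch of this step, assuming a simple pro-rata split of a fixed token budget by combined quantification score; the function name and the split rule are illustrative, not the decided mechanism:

```python
# Hypothetical Step 4 sketch: split a fixed token budget among recipients
# in proportion to their combined Praise + SourceCred scores.

def calculate_rewards(scores: dict[str, float], token_budget: float) -> dict[str, float]:
    """Pro-rata split of token_budget by score; zero scores get nothing."""
    total = sum(scores.values())
    if total == 0:
        return {name: 0.0 for name in scores}
    return {name: token_budget * s / total for name, s in scores.items()}

# Three contributors sharing a 1,000-token round:
rewards = calculate_rewards({"alice": 60, "bob": 30, "carol": 10}, 1000)
print(rewards)   # alice gets 60%, bob 30%, carol 10% of the budget
```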

Step 5 - Final Approval

The final token distribution will then be voted on by the Reward Board, which also holds the Reward System funds. This DAO will be stewarded by a board of 3-7 trusted members (the Reward Board) who have been vested with the responsibility of inspecting the final distribution and pushing the button to release funds. They will need to check for any oversights or collusion between Quantifiers. The vote for releasing Rewards DAO funds must meet a quorum of 41% (at least 3 of 7 must vote) and have a minimum of 81% Support (2 members voting No can block the proposal).
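
The thresholds above can be sketched as a simple check; the helper name and the exact rounding behavior are assumptions for illustration:

```python
# Sketch of the release-vote thresholds: 41% quorum and 81% support
# on a 3-7 member Reward Board. vote_passes is a hypothetical helper.

def vote_passes(yes: int, no: int, board_size: int,
                quorum: float = 0.41, support: float = 0.81) -> bool:
    """A release vote needs quorum of the board and support among voters."""
    voted = yes + no
    if voted < quorum * board_size:   # with 7 members, at least 3 must vote
        return False
    return yes / voted >= support

print(vote_passes(yes=6, no=1, board_size=7))   # 6/7 ≈ 85.7% support: passes
print(vote_passes(yes=5, no=2, board_size=7))   # 5/7 ≈ 71.4%: two No votes block
```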

Choosing the Reward Board

We will have a formal nomination process for the initial Reward Board with up to a maximum of 7 seats. A Reward Board member who has not participated for 3 consecutive rounds will be asked to give up their seat.

Powers of the Reward Board

This DAO will have extraordinary powers over the Rewards System; however, the greater community will act as the arbiter in cases of metagovernance and distribution modifications. We will use Snapshot for these instances. An exhaustive list of this board’s powers is as follows:

  1. Change the weight configurations in SourceCred
  2. Change the allocation percentage for each round of rewards.
  3. Modify the distribution ratio between SourceCred and Praise
  4. Mint tokens for new board members, burn tokens for outgoing members
  5. Modify the distribution percentages of the three entities outlined in Step 6

…But Who Watches the Watchers?

The community will have several mechanisms to act as a backstop in instances of collusion or poor judgement by board members. We define checkpoints for each power numbered in the previous sub-section:

  • Action 1 will be output in Step 7 allowing the community to see the weights used and flag issues.
  • Actions 2-5 will require community votes to approve proposals made by the board.

Step 6 - Distribution

Reward Allocations are valued in DAI, but paid out in TEC. We want to inject our contributors directly into our token economy and we’ll use our own Augmented Bonding Curve to achieve that. The ABC will convert the received funds from the TEC treasury, swapping wxDAI to TEC. When the funds have been released by the Reward Board they will be paid out to three entities - the Reward Board, Quantifiers, and Contributors. The distribution will have already been set by the Reward Board and ratified by the greater community at the initialization of the Rewards System.

To aid in visualization we can use this imagined example:

The allocation percentage is 1% of the Common Pool for each rewards round

If the Common Pool has 600,000 wxDAI, we distribute 6,000 wxDAI. If the price of TEC is 2 wxDAI, this comes to 3,000 TEC to distribute for this round.

The Reward Board establishes an allocation distribution of 90% to Contributors, 7% to Quantifiers, and 3% to the Reward Board. The allocation is set to 75% Praise, 25% SourceCred.

From these amounts, assuming no modifications were made by the Rewards DAO in Step 5, the token distribution amounts would break down as follows:

  • Contributors receive 2700 TEC
    • 675 for SourceCred
    • 2025 for Praise
    • Averaged out across 50 unique contributors, that’s 54 TEC each
  • Quantifiers receive 210 TEC
    • If there are 10 Quantifiers, that’s 21 TEC each.
  • Reward Board receives 90 TEC
    • If there are 7 board members, that’s roughly 12.86 TEC each
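
The arithmetic of this example can be checked with a short script; all figures are the illustrative values above, not ratified parameters (integer math keeps the percentages exact):

```python
# Worked Step 6 example: 1% of a 600,000 wxDAI Common Pool, TEC at 2 wxDAI,
# split 90/7/3 between Contributors, Quantifiers, and the Reward Board.

pool_wxdai = 600_000
allocation_pct = 1            # 1% of the Common Pool per round
tec_price_wxdai = 2

budget_wxdai = pool_wxdai * allocation_pct // 100    # 6,000 wxDAI
budget_tec = budget_wxdai // tec_price_wxdai         # 3,000 TEC

contributors = budget_tec * 90 // 100    # 2,700 TEC
quantifiers = budget_tec * 7 // 100      # 210 TEC
reward_board = budget_tec * 3 // 100     # 90 TEC

praise_share = contributors * 75 // 100       # 2,025 TEC
sourcecred_share = contributors * 25 // 100   # 675 TEC

print(contributors / 50)    # per-contributor amount (50 contributors)
print(quantifiers / 10)     # per-quantifier amount (10 quantifiers)
print(reward_board / 7)     # per-board-member amount (7 members)
```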

Once the amounts are calculated and approved by the DAO vote they are distributed using the Transactions App on the Reward Board Aragon DAO.

Step 7 - Iterate and Improve

After the data analysis of each round, we integrate insights to inform the parameters of SourceCred and the reach of Praise.

A forum post will be generated for each quant showing who received how much in funds and the SourceCred weight configurations. This process invites analysis and discussion and empowers transparency.

Data points and more metrics will be added in Post MVP that will allow robust analytics of praise.

*A note on record-keeping: ultimately the blockchain is the final source of truth for observing distributions. However, a ledger of final praise distributions will be maintained and made publicly available; this will be the responsibility of the Reward Board.

Quantification Parameters Appendix

*This list is subject to change as we discover new technical possibilities and limitations

Data Source Inputs: Parameters and Actions to Weigh

Discourse (Forum)
  • Like received
  • Like given
  • Topic made
  • Post made
  • Replied to post
  • Post receives reply
  • Mention user
  • User is mentioned
  • Topic is referenced
  • Reference topic

Github
  • Create repository
  • Create Issue
  • Create PR
  • Review PR
  • Add Comment
  • Add Commit
  • Merge PR

Twitter
  • Retweeted TEC post
  • Post was retweeted by TEC
  • Mentioned by TEC
  • Mentions TEC
  • Uses hashtag #????

Discord Meeting Attendance
  • Member attended meeting {x}
  • Time spent in meeting?
  • Time spent talking in meeting?

Praise
  • /praise in the TEC Discord
  • !praise in TEC Telegram group

Rewards Distribution

Reward Board
  • Registers the initial rewards parameter config; proposes and executes changes to this config
  • Approves (or rejects) rewards distributions
  • Manages Quantification Periods and facilitates Review Sessions
  • Quantifies praise
  • Receives praise for work/contributions

(:point_up_2: Are some of these people Trusted seeds?:thinking:)

This could be great for continuity, and to have a pool of more people so as not to run into burnout.

I overheard @Griff say that there is an element of importance in the quantifiers being able to know who is doing what work, as it supports continuity. I couldn’t agree more about the continuity part. The best database you have in a decentralized group is each other. Teaching people to be aware of their surroundings is how information can be collected and shared. Common language is a piece too.

I appreciate the context I gain from reading Praise. I personally read every props and did-a-thing in my community’s server. That’s where the majority of the deliverables go. I learn what’s happening, and as time goes on I learn the people associated with the working groups too. After testing this process out in over a dozen servers, I can say that it’s possible to read those without names/faces attached and eventually be able to know who is being spoken of. The dots will get connected on the video calls when people are giving updates. (I don’t recognize faces much, which is why I know this works: I’m never tracking faces to contributions, rather behavior.)

I’m appreciating the anonymous aliases. Anonymity has been brought up on a few occasions by many people over the span of SourceCred, and even recently it’s resurfaced. I like the idea because it sounds like it would make more people engage with it.

Quantification Parameters Appendix


  • Does anything happen for reviews?
  • Does anything happen if someone links a fix / closes out an issue within their pull request?

From a developer’s point of view, are these things part of a first buildout or do they come later on down the road?

It’s so rad to be able to see an instance be built with more intention of the behavioral impacts.


My vision for this pool of Quantifiers is to have a very low barrier for entry. Really I only see 3 requirements to opt-in to be a Quantifier:

  1. Are Human.
  2. Know WTF the TEC is.
  3. Have read some sort of onboarding document, like the Rules of Praise that Livi is working on.

The bigger the pool and the greater the variety of Quantifiers we get, the less subjective I think praise will become; it also creates a new engagement tool for newcomers.


Those are some solid parameters. :joy: I was hoping the pooling was huge! It’s so good for people to be witnessing all the good happening around them. I dig that the bar to entry is low.


Great point! Do you think the benefit of knowing who did what is greater than the bias that can come with it? The reason we thought about hiding the names of who was praised during quantification is to try to get a more approximate value of the contribution per se, rather than having that value attached and influenced by the persona.
Some people will still know who did what intuitively but I believe it plays a role in the psychology of the quant to not see the name immediately there.


Formula for the “Total hours”(net amount of tokens distributed in a Quant period)

From my current understanding of the mint process, we might be allocating a fixed percentage of the commons pool to the funding for the rewards distributed by Rewards DAO.
I’m wondering if it would be of any value to somehow make the total “hours” or total funds distributed a function of: the number of members that were praised; an arbitrary “effort” metric that the quantifiers decide each quant (representing how productive they felt the community as a whole was for that period); changes to our economy; and the relative success of the project (or relative to the funds available in the Commons Pool, in DAI)?

My responses are from a vantage of earning through sourcecred, considering praise and taking into mind that the commons has different cultural practices for appreciating people than sourcecred’s community does. (our cultural practices around gratitude are timid, not radical)

No I don’t.

There’s a personal bias within me not minding the transparency; it gives myself and others an opportunity to practice gratitude for someone’s energy expelled. (I come from the military; we have practices for noticing, honoring, mentoring, and collaborating. Shadow does not manifest in the same ways there as it does out here, with resource scarcity on the horizon for so many.)

As I’m thinking this through, I would be foolish not to think about how competitive everyone is capable of being naturally. Maybe it’s because, when I look at !praise, !props and #did-a-things, I don’t get activated by seeing others succeed. By succeed I mean the labor it takes to start and follow through, even if it means asking for support.

My current stance is that if people were anonymous, folx wouldn’t have a choice but to rate the work, and the anonymity of it all would make people more reflexive in their emoting. Seeing a person introduces bias toward the labor. This point is driving home in my mind as I’m typing it out.

I’ve heard people say over and over !props is just a popularity contest. That sentence makes my heart shudder. Such hurts one must have in them to have sour feelings stir in a !props channel or a !praise channel. I understand a great deal of what festers for people in my community around the #did-a-thing channel and the mental Olympics folx go through having been abused for a lifetime by ableism and processes that support capitalism. I do dream of the architecture being more intentional than it has been and am appreciating the anonymity being presented / discussed here.


are !props and #did-a-thing reward mechanisms from sourcecred?


One point @Griff brought up in a call, which I think is important, is that anonymizing the praise to some extent kills the “onboarding” dimension of praise: it helps you get a feeling for where stuff is happening and who is involved. This can be especially useful for newer or less involved members to get an overview, and could even be a good way to incentivize joining the quant pool: “Hey, are you new and don’t know where to start? Join a quant, learn about everything that is happening in the TEC and GET PAID for it!”

On the other hand there exist specific onboarding processes, and it’s arguable if praise quants should be involved in that at all…


Yes they’re the cultural pieces I speak to when I long for some more intentional design. I had not heard anything about anyone changing practices with a new instance. I was under the impression sourcecred would be setup the same for this community. I’m curious to know what the differences will be.

I hear that; however, there was some lengthy discussion about how explicitly identifying praisers and praisees increases the subjectivity of praise, while not having enough context, as in being able to distinguish one praisee from another, makes it very difficult to quantify. This is especially true in the case of praisees receiving multiple praises for the same thing.

In this struggle between objectivity and context, we decided on this method as the happy medium.


I think even if praise becomes anonymous for the quantification it can still be useful for onboarding. People will have an idea of what is happening and what types of contributions are valued in the community. They will start to put the puzzle together once they become more active and have sweet aha! moments finding out who did what in everyday interactions :slight_smile:


Here is a rough overview of the Roadmap to First Quant from the Rewards System:

Roadmap To First Quant

  • Agree on the goals of the praise system.
  • Agree on system design within rewards system team and get technical validation
  • Present to community for feedback on system (no params/weights proposed)
  • Spec front-end and back-end requirements
  • Design UI for quantifying praise and reading data
  • Begin development starting with the back-end <---- WE ARE HERE
  • Choose who will be on the Reward Board
    • Nominations? Maximum 5 seats
  • Revisit reward committee proposal and propose changes.
  • Reward Board chooses the rewards system’s initial set of parameters
    • how many tokens to distribute per cycle? Fixed amount or variable? Do we have caps on individual contributors?
    • Distribution percentages for 3 buckets: Reward Board, Quantifiers, Contributors
    • Distribution Ratio between SourceCred and Praise
    • Weight configurations for SourceCred and praise
    • Use praise receiver pseudonyms during quantification: on/off
    • Max quantifiers per praise receiver: number
    • Max praise receivers per quantifier: number
  • Rewards System Team runs tests on configurations, revise and adjust w/ rewards committee, log simulation results
  • Forum post for advice process on the settings chosen by the Reward Board, adjust proposals based on community feedback.
  • Ratify the Reward System Initial Parameters via Snapshot Vote
  • Make initial funding request from the TEC Treasury/Common Pool with the value denominated in wxDAI, receive TEC or convert it to TEC after receipt in wxDAI.
  • First quant!

Does this Rewards System Design look good?

  • Looks good! BUIDL IT
  • No, I have unresolved concerns in my comments
  • Abstain


We’ll make this one of the configurable parameters managed by the Rewards DAO Committee.

Use praise receiver pseudonyms during quantification: on/off

Two other parameters have been identified so far:

Max quantifiers per praise receiver: number

Determines how many quantifiers should do “the same job” of quantifying praise for one person. More quantifiers per praise receiver means increased workload for quantifiers but less risk of personal bias.

Max praise receivers per quantifier: number

Determines the amount of work each quantifier has to do. If this parameter were set to 10, it reads as: one individual quantifier should not have to quantify praise for more than 10 praise receivers.

The two parameters are used in combination to determine the minimum size of the quantifier pool needed. When assigning/randomizing quantifiers for a quant period, the system will warn if the quantifier pool is too small. Option 1 is then to recruit more quantifiers and assign again. Option 2 is to modify the parameters, increasing quantifier workload or the risk of personal bias, but requiring a smaller pool size.
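
The pool-size check can be sketched like this, under the assumption that each praise receiver needs `per_receiver` quantifiers and no quantifier takes more than `per_quantifier` receivers; the helper names and formula are hypothetical, not the actual system logic:

```python
# Sketch: minimum quantifier pool implied by the two parameters above.
from math import ceil

def min_pool_size(receivers: int, per_receiver: int, per_quantifier: int) -> int:
    """Minimum quantifier pool needed to cover all assignments."""
    assignments = receivers * per_receiver
    # A quantifier can't quantify the same receiver twice, so the pool
    # must also contain at least `per_receiver` people.
    return max(per_receiver, ceil(assignments / per_quantifier))

def pool_is_large_enough(pool: int, receivers: int,
                         per_receiver: int, per_quantifier: int) -> bool:
    """True if the current pool can absorb the quant period's workload."""
    return pool >= min_pool_size(receivers, per_receiver, per_quantifier)

# 40 praise receivers, 3 quantifiers each, max 10 receivers per quantifier:
print(min_pool_size(40, 3, 10))             # 120 assignments need 12 quantifiers
print(pool_is_large_enough(8, 40, 3, 10))   # pool too small: warn and recruit
```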


I incorporated these parameters into the appropriate sub-section of the roadmap

Rules for praise and quantification


Will there be a system to make comparisons? How do we compare a Dev contribution to a Comms contribution, for example?