Reward System moving forward

Proposal Title: Reward System moving forward

  1. Who is going to be affected by this proposal?
    The community in general, but especially the Stewards who have been dedicating so much time to the Praise process
  2. Who are the experts of the proposed subject?
    @Griff @octopus @ygg_anderson @mateodaza @Vyvy-vi @akrtws

Feel free to tag others that might also have a lot to say about this proposal :pray: And a HUGE thank you to @octopus, who helped so much and offered such great solutions to improve Praise and implement frequent analysis, and to @ygg_anderson, who offered the Labs space to develop this implementation!

Description:

The Praise data analysis that happened a couple of months ago showed us many routes for improving Praise. The quantification and analysis processes were two main points that needed improvement, and interoperability with SourceCred is a desired addition that has been discussed for a while. This proposal is a first step to move forward, considering that other, more complex improvements can be proposed in the future once the results from the Reward System research group from the Governauts become available.

Problem

  • The quantification process is time-consuming

  • Only a few people were volunteering to do it

  • The data set was dirty, and the analysis had to look at a full year of data at once.

  • Praise and SourceCred are rewarding similar things

Proposed solution

  • Make the quantification process partially asynchronous
  • Have one short meeting to chat about questions and reflections from the async process
  • Have a brief report, analysis and distribution every 2 weeks
  • Distribute rewards via the Reward DAO (the one we've been using for SourceCred) in TEC tokens
  • Limit Praise to tasks that aren't rewarded by SourceCred
  • Implement Alexandra, the bot developed by Johann Saurez from LTF that records the time people spend in a Discord call

Proposal Details

Steps required for Praise:

  1. Take the csv file of all praises (which is already being generated)

  2. Load the csv file in Python (see the first sketch after this list)

  3. Write to new csv files or separate sheets in an Excel file

  4. Automatically push to GitHub or post on Google Docs

  5. Create a list that randomly assigns sheets to a set of judges (to minimize correlation between judges who think alike)

  • The same sheet would be handed to 2 or 3 people so that the final result is still an average, instead of one person deciding the value of a praise.
  6. Pull the finished sheets back into a single document

  7. Auto-generate the Gini coefficient, histograms, or whatever metrics we find useful (see the second sketch after this list).

  8. Have a call to discuss the analysis and the quantification process - this will bring us cultural insights and help us find problems and strengths at an early stage.

  9. Write a brief report to add transparency (it can be the published notes of this call).

  10. No deductions will be made in the final data, as we previously had for paid contributors.
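To make the flow concrete, here is a minimal Python sketch of steps 1-5. It is illustrative only: the file name `praise_export.csv`, the number of sheets and the quantifier names are placeholder assumptions, not decided details.

```python
# Load the praise export (steps 1-2), split it into sheets (step 3), and
# randomly assign each sheet to several quantifiers (step 5) so every praise
# keeps being scored by more than one person.
import random

import pandas as pd

NUM_SHEETS = 10        # assumption: how many sheets to split the period's praise into
JUDGES_PER_SHEET = 3   # each sheet is quantified by 2 or 3 people

praises = pd.read_csv("praise_export.csv")  # the csv that is already being generated
quantifiers = ["alice", "bob", "carol", "dave", "erin",
               "frank", "grace", "heidi", "ivan", "judy"]  # placeholder names

# Step 3: split the praises round-robin into sheets.
sheets = [praises.iloc[i::NUM_SHEETS] for i in range(NUM_SHEETS)]

# Step 5: randomly pick the quantifiers for each sheet, so similar judges
# are not always grouped together.
assignments = {sheet_id: random.sample(quantifiers, JUDGES_PER_SHEET)
               for sheet_id in range(NUM_SHEETS)}

# Step 4 would push these files to GitHub or Google Docs; here we just write them out.
for sheet_id, sheet in enumerate(sheets):
    for judge in assignments[sheet_id]:
        sheet.to_csv(f"sheet_{sheet_id}_{judge}.csv", index=False)
```

And a companion sketch of steps 6-7, again with assumed file naming and column names (`praise_id`, `score`), showing how the finished sheets could be merged and the Gini coefficient and histogram auto-generated:

```python
# Pull the finished sheets back together (step 6), average each praise's 2-3
# scores, and compute a Gini coefficient plus a histogram (step 7).
import glob

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Step 6: merge every scored sheet into a single frame.
finished = pd.concat(pd.read_csv(path) for path in glob.glob("sheet_*_scored.csv"))

# The final value of each praise stays an average, not one person's call.
averaged = finished.groupby("praise_id")["score"].mean()

def gini(values: np.ndarray) -> float:
    """Gini coefficient of a non-negative score distribution."""
    sorted_vals = np.sort(values)
    n = len(sorted_vals)
    cumulative = np.cumsum(sorted_vals)
    return (n + 1 - 2 * np.sum(cumulative) / cumulative[-1]) / n

print(f"Gini coefficient of this period's praise scores: {gini(averaged.to_numpy()):.3f}")

# Histogram of averaged scores for the bi-weekly report.
averaged.hist(bins=20)
plt.xlabel("Average praise score")
plt.ylabel("Number of praises")
plt.savefig("praise_score_histogram.png")
```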

This model will distribute the work among multiple people and minimize the time each person spends on quantification. It means we'll need 10 to 15 people to be quantifiers. This is a role that can be compensated by the Commons.

The compensation amount should be discussed, but it can be part of the Reward Proposal, which needs a template so it can be sent to the DAO on a frequent basis - maybe every 2 or 3 months.

This can be a role taken by people who want to get more involved in the community, since Praise is a good indicator of what happens and what is valued. We should aim for cultural diversity in the group of quantifiers, as well as experience diversity: some older members mixed with newer ones.

Interoperability with SourceCred and Alexandra

  • SourceCred is currently capturing Github and Discourse contributions.
  • Praise captures a multitude of subjective and objective contributions.
  • Alexandra hasn't been implemented yet, but it captures time spent in calls.

Integrating these 3 tools should mean that the scope of Praise is reduced. We'll need cultural guidelines and training to get there, but it will be incredibly valuable for us, further reducing the admin time spent on quantification and shaping Praise into an even better tool for subjective contributions.

Rewards Distribution

I propose we use the Reward DAO instance we've been using for SourceCred to send Praise and SourceCred rewards to all the contributors.

The DAO will only have these 2 functions plus sending compensation to the Quantifiers. It will be managed by the SourceCred Committee, which will become the Reward System Committee.

  • Every 2 or 3 months, a proposal is submitted to the TEC by a committee member requesting funds for the Reward DAO
  • It's in our interest to use TEC tokens to empower our economy, so a designated member of the committee would swap the wxDAI for TEC tokens
  • Every 2 weeks, after the quantification, rewards are sent in TEC tokens from the Reward DAO to the contributors (a rough numeric sketch follows this list)
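As a rough numeric sketch of this cadence (every number below is a placeholder, not an agreed amount), the periodic request simply gets swapped and split across the bi-weekly rounds:

```python
# Hypothetical funding cadence: a quarterly wxDAI request is swapped to TEC
# and divided over the bi-weekly reward rounds it has to cover.
REQUEST_WXDAI = 20_000      # placeholder amount requested every ~3 months
TEC_PER_WXDAI = 0.5         # placeholder swap rate at the time of the swap
ROUNDS_PER_REQUEST = 6      # ~3 months of distributions every 2 weeks

tec_budget = REQUEST_WXDAI * TEC_PER_WXDAI
tec_per_round = tec_budget / ROUNDS_PER_REQUEST
print(f"{tec_per_round:,.0f} TEC available per bi-weekly round")
```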

What value will this provide for the TE community, commons and ecosystem?

  1. A continuous stream of funding for all TEC collaborators

  2. Quantifiers could work on their own schedule, as long as they were finished by a predetermined time.

  3. There would be less social influence of Quantifiers on each other, and no "doctoring" of numbers would be possible.

  4. Analysis would be instant and frequent.

  5. Data driven cultural insights will be constantly available.

  6. Tool interoperability will be tested

  7. Rewards will bring movement to the TEC token economy and people contributing work will have a governance voice as well.

Expected duration or delivery date

6 weeks optimistically :slight_smile:

Team Information

SOFT GOV wg, LABS wg, LTF team support

17 Likes

@liviade very cool and in-depth proposal. Super cool. I have skimmed and will take some time to review deeper.

Can you remind me if there is an estimated start date to this research series? Or does that depend upon proposal outcomes? Thanks!

1 Like

I hope we can try to have some dry runs before the Commons Upgrade! In the end, we can't do it for real until the Commons Upgrade since the Rewards DAO has no funding right now! But we should have a few practice rounds before it goes live anyway.

A couple thoughts.

  1. We should explicitly add Twitter to the list. Similar to Alexandra and SourceCred, it is already quantifiable data, and then Praise will only cover qualitative data.

  2. Instead of passing out the same sheet to multiple people… I would suggest we randomly generate the praise sheets for each person, excluding the praise that was dished to them.

  3. Some thought needs to be put into how we take the scores given and allocate tokens based on them. We were doing a straight average % for each praise's score, then deciding on a total amount of IH given and distributing it across the praises proportionally to their score. There are a million ways to do this though… do we give relative amounts of TEC to each bucket? Do we rank people each week and then just distribute on a curve? Too many choices honestly. I'm happy to have this critical detail evolve over time and have the people in the review committee openly tweak it… but I'm curious to know where we will start from (see the sketch below).

  4. I think we should do a thought exercise and run through the full process. Let's pretend we just got 20,000 wxDai from the TEC to pass out via the reward system for the next 3 months. What steps would happen to go from there to people getting TEC in their wallets?

  5. @krisj and @rdfbbx are working on Praise for Commons Stack too, and I bet they would love to be more involved in this process, along with Dan Yinesi and Maria, the Trusted Seed Gardeners.
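To make point 3 concrete, here is a minimal sketch of the "straight average %" starting point. The round budget, names and scores are made up, just to show the shape of the calculation, not a decision on the actual mechanism:

```python
# Each praise gets a share of the round's budget proportional to its average
# quantifier score; rewards are then summed per receiver before sending from
# the Reward DAO.
import pandas as pd

ROUND_BUDGET_TEC = 3_000  # hypothetical TEC set aside for one 2-week round

scored = pd.DataFrame({
    "receiver":  ["ana", "ben", "ana", "cris"],
    "avg_score": [13.0, 8.0, 34.0, 5.0],   # average of the 2-3 quantifier scores
})

scored["tec_reward"] = ROUND_BUDGET_TEC * scored["avg_score"] / scored["avg_score"].sum()
payouts = scored.groupby("receiver")["tec_reward"].sum()
print(payouts)
```

Bucketing the scores or distributing on a curve would just swap out that one proportional line, so the committee could tweak it over time.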

3 Likes

Could we run a test with TestDai and/or one of the TESTTECH/TESTTEC tokens that were used to prepare for the hatch?

This will definitely get more people involved, including me. Sign me up. :slight_smile:

2 Likes

I think it can just be a thought exercise… a step-by-step list of what has to happen, but no need to demo it… just write it down.

1 Like

I dig it.

Decentralizing any work down to a collaborative effort is primo for cutting out the chances of a single point of failure (not that the person is a failure, but rather the lack of coverage when life happens would be considered as such).

I'm also not going to rebel against the use of SourceCred; in fact, I'm quite biased. I've been dogfooding it and I know it works to reward labor.

Putting on my Trusted Seed hat now…

With that said, I would not use an instance configured for the Commons in the same manner as the SourceCred instance I'm in is configured now.

I wouldn't necessarily model our weight configurations on it, and I'd want to see if new minds looking at it might come up with new ways (socially) to use and configure the weights of Discord channels.

Because I think this needs mindful support, I would gather the opinions of people who have seen it in action and have had to decolonize a bit under the hood of it. The names that come to mind would be considered stewards to the highest degree, regardless of whether they are actively developing the SourceCred product now. (These are all people whose integrity I value and whose opinion I trust.)

I'm not volunteering any of these people, but I do want to name them because they have been tremendously integral to the past and/or present of SourceCred's development… also, this is my 1st post so I can only mention 2 people.

Lineage (historians)

While these two, burrrata and beanow,
are inactive as far as I can tell, and I've never met them, I have scoured our forums and taken in hours upon hours of their opinions (which often challenge the capitalistic norms :wink:). They're gold. And from what I read it was not uncommon for them to be undervalued. :face_with_monocle: Can't speak to their desires, but I would value their input.

Modern day

  • backend wizards: thena, topocount, hz
  • METADREAMER :octopus:, wchargin
  • Community Weavers: ryeder, harold

I personally would want the tec-cred instance tailored to fit the Commons Culture. The values and the vision are a part of the architectural design and it will show when people are engaging with it.
The intentions you set will influence the behavior you will get.

I know it exists but has it been tailored to fit our needs?

I guess I'm basically saying I would recommend 60-90 minutes with some seasoned lab rats to brain spar on the configuration before I would be an awwwwwww heck yes!

You can take a bare-bones instance, and the limit to what you can do with it is your imagination. I'd be really curious what could come of passionate folx gathering to consider the architecture. For the sake of the Commons Stack, I think it would be legit to have an intentionally tailored, out-of-the-box altruistic tool. I believe it is very close to being just that.

When I heard about the Token Engineering Commons and started learning about it, I could tell that science happens here. That folx are eager to experiment… and ready to throw stuff out if it doesn't work. Iterating towards change is a personal passion of mine. I live it out daily.

There hasn't been an intentional modern-day iteration on SourceCred from the vantage of "okay, so now we see how we're behaving/relating… do we wanna change the product at all based on how it's affecting us?" That's a thing I'm quite curious about.

3 Likes

@liviade and I just ran a modelling session going through the proposal. We created this overview that can be used as a starting point for a more detailed discussion about the different parts, how they connect and interact.

Miro board here: Miro | Online Whiteboard for Visual Collaboration

We'll try to schedule a session with a larger group in the coming week.

5 Likes

Notes from the proposal debate we just had :slight_smile:

  • hope twitter continues to be rewarded
  • all the quantitative data will be recorded
  • analyzing the data and qualitative inputs
  • How do we decide who the quantifiers are? It's important that they have the ability to adjust and tweak things
  • paying the quantifiers is cool
  • praise could stand alone
  • SC is a great incentive alignment tool but it lacks qualitative precision
  • it could bring unnecessary conflict in the future
  • we need to set up and go
  • the most important is having mutual monitoring and transparency
  • Twitter could be under SourceCred. Yay!
  • There is a new feature in SourceCred that can read data from an API - it's called the "external plugin"
  • having everything in the same umbrella would be cool
  • concerns of qualitative vs quantitative aspects of the system
4 Likes

Thank you to all the commenters so far.

I have seen SourceCred used successfully at 1Hive. I support using SourceCred here at the TEC, supplemented with Praise.

My feelings are not strong on this matter so I generally will support the majority community view here.

We must not let the perfect be the enemy of the good. I echo the 'set up and go' idea. Praise got this community this far. Therefore, why not set a time-limited window for experimenting with adding SourceCred? In 30 days we will remove the SourceCred instance unless community consensus instructs us to keep using it and/or tweak the weightings.

The point is that it becomes a safe, finite space in which to experiment. If it goes totally sideways, we have already agreed on ditching it at the 30-day mark. Only action by the community can keep the SourceCred instance going after 30 days. It auto-expires.

Just my thoughts. Thanks.

2 Likes