Token Engineering Stakeholder Study

TL;DR

  • Conduct stakeholder interviews and surveys across the broad TE ecosystem to understand token engineering needs and challenges.
  • Develop a reproducible methodology for future studies.
  • Deliver an analysis report with the team's findings.
  • Open-source the data and taxonomy to enable future studies.

Proposal description

It's time to understand in depth the challenges and needs of token engineering stakeholders. The discipline has matured since its name was coined in 2018. The economies designed since then have had time to play out their failures and successes, the Token Engineering Academy has been providing consistent education and forming professionals, tools like cadCAD, Token Spice, Machinations, and others are being used by many projects, and the Token Engineering Commons is working to sustain the economic ground for TE (Token Engineering) public goods to be funded.

The next step is to create a solid, well-informed roadmap for advancing the TE discipline based on the industry's needs, desires, and ideas. That step should be preceded by understanding exactly who makes up token engineering as a field.

This proposal addresses that gap: it covers the development of a methodology, interviews with TE stakeholders (who will be grouped during the study), a survey, and a final analysis report on the results.

The interviews will be scheduled one-on-one, with a maximum duration of one hour. Fireflies AI will be used for transcription, and a designated note taker will join each interview to highlight important points. With the consent of the participants, the data will be open-sourced to the community and will remain available for engaging junior TEs and researchers in future studies.

The information collected in the interviews will be synthesized into a report and used to create stakeholder-specific surveys, allowing both qualitative and quantitative conclusions to be drawn.

Additional information to add

This work will generate the following artifacts by the end of the research arc:

  • A synthesis of the token engineering ecosystem, grouped by stakeholder profiles
  • A methodological report on how the synthesis was generated
  • All unstructured and structured data generated by this research, stored on persistent storage

How does this proposal benefit the community and/or the field of Token Engineering?

This is the first study of its kind in the token engineering space. The interviews will expose the needs, challenges, and desires of token engineers, and this study and its research processes will lay the groundwork for future studies to happen more easily. The information drawn from it will allow the TEC to build a roadmap to serve the community and will likely open room for collaboration between TE projects and professionals.

Amount requested

How much in wxDAI are you requesting?

34,515 xDAI

The amount requested generated some debate with the team:

  1. Danilo is a Token Engineer and researcher at BlockScience, experienced with estimating cost and project bandwidth. He suggested higher rates than the TEC can pay; however, they reflect the market and the quality of the work being proposed.

  2. The cost of the project was estimated at around 69k xDAI. Of that, about 50% will be requested from the TEC. The initially suggested rates are compatible with average Data Analyst / Social Researcher wages.

  3. In talking with the team, there was a suggestion to reduce the proposed rates and recognize these reductions as pro bono contributions. The initially proposed rates were therefore reduced by 50% or more to reflect these pro bono contributions and the financial reality of the TEC.

  4. This internal debate showed us how costs for research and token engineering work are also unclear in our ecosystem, and should be included in our interview questions so that this study can also reflect expected and actual rates for token engineering work and research.
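As a rough sanity check on the figures above, the requested amount follows from the estimated total and the 50% split. Note the exact total of 69,030 xDAI is an assumption inferred from the requested amount; the proposal text only states "~69k" and "about 50%":

```python
# Budget arithmetic for the amounts quoted in the proposal.
# ASSUMPTION: the estimated total is exactly twice the requested amount
# (the text only says "~69k" and "about 50%").
estimated_total_xdai = 69_030
tec_share = 0.5  # portion requested from the TEC

requested = estimated_total_xdai * tec_share   # 34,515 xDAI requested from TEC
pro_bono = estimated_total_xdai - requested    # remainder covered pro bono / elsewhere

print(f"Requested from TEC: {requested:,.0f} xDAI")
print(f"Pro bono / other:   {pro_bono:,.0f} xDAI")
```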

How will these funds be used?

The values in this spreadsheet are an estimate of cost and bandwidth. Everyone in the team will track their hours and commitments. The funds will be distributed according to these roles:

Science SME: The Science SME (Subject Matter Expert) is an expert in qualitative and quantitative research in the social sciences who will own the methodology of this project. He is expected to ensure the results meet good quality standards in terms of relevance, inclusiveness, and reproducibility.

Infra SME: The Infra SME is an expert in the tooling required for reproducible science. He has knowledge of persistent data storage as well as tools for communicating transient data.

TE SME: The TE SMEs are the owners of the synthesis narrative (the final analysis report). Active participation in all steps of this project is expected, and required skills include a good understanding of the TE technical fields, projects, and groups, along with communication, writing, and synthesis skills. Although we have one main TE SME, interviewers will also share this role.

Communications: Comms will be responsible for content distribution and for writing support on the synthesis and summaries. Part of this budget will also be allocated for design.

Social Researchers: They will be responsible for interviewing people with heterogeneous backgrounds in the TE field and producing summaries of the interviews. They will be supported by the Science and Token Engineering SMEs to streamline their work, and are expected to provide regular feedback on the project methodology and the artifacts being produced. The social researchers are able to understand their subjects' language by being familiar with token engineering.

Transcriber: All interviews will be recorded and transcribed by AI. The role of the human transcribers is to be present in all interviews as a support point for the interviewer and to help highlight important points and patterns within the interviews.

The roles are described roughly above to delineate the work to be done, even though one person may play multiple roles and some roles may require multiple people.

What does success look like?

  • The methodology is successfully followed
  • An informative analysis report is shared with the community and there is a sentiment of relevance among stakeholders
  • A reproducible research pattern is shared with the community to incentivize future studies

How will you share progress?

A weekly report will be submitted to the TEC forum until project completion.

Team Information

Ordered alphabetically

Bear: Comms, social researcher
Member of the TEC Coordination Team

Danilo Lessa: Science SME
Engineer at BlockScience

Gideon Rosenblatt: Financial support, social researcher
Member of the TEC Coordination Team

Lisa Wocken: Social researcher
TalentDAO and Bolster Leadership. PhD on Organizational Leadership
(qualitative researcher and adjunct faculty at the University of Minnesota)

Livia Deschermayer: PM, social researcher
Cultural Build at Commons Stack, TEC former steward

Malik Tag: Transcriber, comms
Electronic engineering background and TE Academy student

Mark Date: Infra SME, social researcher
Founder of Rendar.org, Alum (SC02) of the Seedclub.xyz accelerator,
study host at the Token Engineering Academy

Nathalia Scherer: Interviewer, social researcher
Researcher at the International Institute of Psychoanalysis and DAOstack. Formerly at the TE Academy,
with a background in industrial & systems engineering, focusing on distributed tech and governance.

Shawn Anderson: TE SME, social researcher
Founder & Data Scientist at Longtail Financial, former TEC Steward

Complementary information

Timeline as proposed in the meta methodology

The timeline is subject to change as the methodology evolves; this information is a guiding structure with estimated times.

Timeline

  • Week 1: Initial set-up
  • Week 2: Data keeping & provenance is defined
  • Week 3: Groups to be interviewed are defined
  • Week 4: Sampling strategy and Interview questions are defined
  • Week 5: Interviews are ready to be started!
  • Weeks 6 to 8: Interviews
  • Week 9: Interviews are synthesized and summarized
  • Week 10: Follow-up survey is defined and distributed
  • Week 11: Follow-up survey analysis is done
  • Week 12: Final report is written and distributed along with the research data

This is the first proposal in two years of TEC that I'm considering buying $TEC to support (others I volunteered for 100%).

Reasons:

  1. Already, the meta-methodology is a valuable contribution to the TE knowledge commons (which I personally consider to be the Commons of TE, not the wxDAI in the commons pool; just reiterating here for the new ones). I want to see proposals like this, that follow, add to, and create the TE Practice.
  2. I appreciate the TEs who pointed out the market rates and still agreed to volunteer 50%. All TEs I am in contact with say it's the projects they volunteer for that they get the most out of. And as this proposal shows, those are the ones that add to the TE knowledge Commons. Exactly this is my point when I say: let's distribute $TEC to contributors to the TE knowledge Commons.

80% of $TEC is held by the Commons Stack Trusted Seed, and most of them are familiar with incentive mechanism design (not wishful thinking): just holding on to a token won't make it gain value.

So, hereā€™s my appeal to all $TEC holders, and my personal pledge:

[edited to make it less burdensome to experiment for the proposers and $TEC holders]
If the proposal updates with an address to receive $TEC to be distributed to reward volunteering below market rates:

As a $TEC holder, [then] I will put [all] my $TEC to CV for this proposal. When the proposal passes CV, I will send [a %] of my $TEC that was parked in CV to the proposal's address, to be distributed according to the % of volunteering.

[after experimenting with such "meaningful because shared conviction": such a user story could be an automated feature of CV]

Let's not only park our $TEC in CV and have it, too, but actually reward fellow TEs who are minting time to volunteer for OUR knowledge COMMONS, the only source of fundamental $TEC value (as the initial experiment proved that exit/entry tributes, no matter how goldilocks, don't cut it).

Letā€™s keep experimenting!

:dove: :green_heart: :tec_logo: , yours @Solsista


Methodologies can be a good tool to provide strategic guidance, but having worked with many methodological assessments in my professional capacity, I would suggest that, to be useful, they should be accompanied by a clear implementation plan. Implementation guidance goes beyond a high-level final report and research data to provide technical specifications relevant to advancing token engineering.

Furthermore, there already exist well-established systems engineering methodologies that would be highly applicable to token engineering. To avoid reinventing them, I would suggest that the proposal be refined to align with well-established engineering methodology best practice.

In addition, approaching a token engineering methodological review through interview and survey data collection risks subjective and anecdotal evidence driving the underlying research, which may not justify the funds requested. I would suggest approaching a token engineering methodology through a more rigorous research process that deep-dives into the academic literature on token engineering and related areas, reviews existing token models in the broader ecosystem, and arrives at concrete recommendations, implementation guidance, and technical specifications for the TEC to move forward.

Hi Marc,
thanks for your comment and feedback.

For your first point, I want to clarify that the methodology we refer to in the proposal is for conducting the stakeholder interviews and analysis process, not for token engineering practices.

For your second point, this proposal is focused on understanding the challenges, needs, and ideas of those practicing token engineering, so it has a subjective nature indeed. That's why most of our team is composed of social researchers. Reviewing token models is an excellent research idea that would produce different and equally relevant results for the field.


This is clearly needed within the TEC. A thorough update to the "methodology" is important, especially bearing in mind that this will potentially be adopted across bioregional ReFi ABCs; we don't want to get it wrong and replicate flawed assumptions.


I support this and Iā€™m glad the proposal has passed.


Thanks to everyone who supported this proposal!
We just had our kickoff meeting and we are all super excited about the opportunity to do this work!


Hi everyone,
We've reached the seventh week of our study and have completed the first week of interviews. It has been a pleasure to learn from all of the participants so far. The weeks before the interviews started were used to cover the following points:

  • We have defined our sampling strategy, which involves using an emergent approach. Our group compiled a list of stakeholders to interview, and we are now asking this initial group for recommendations of others whose work they admire and who should also be interviewed.
  • We defined the overarching research question to be: What is token engineering? This is bringing depth to the definition.
  • To design the interview questions, we began with a brainstorming session that generated over 50 questions. We then worked collaboratively to refine and elaborate them, reducing them to 14 that capture comprehensive information on the practices, needs, and challenges of TE.
  • Working with a restricted budget for the size of this study has been quite a challenge. Although we had estimated the workload beforehand, we realized that certain costs were not budgeted for. These included the need for a project manager and a financial manager, the understanding that every interview needs to be watched by the three social researchers so we can produce a coherent analysis, and subscriptions to platforms such as Zoom, Otter.ai, Typeform, and persistent storage. Additionally, we'll have to allocate time for transcription cleaning before feeding the data to the AI for analysis. The number of interviews is also still uncertain, which makes it a variable to consider. Nevertheless, we are working to refine our roles and considering many options for using the funds in the fairest and wisest way possible.

We are relatively on time considering the proposed timeline :slight_smile:

We considered turning this into a TEC interest group but thought it could risk harming focus. The deliverables of this study could start a very fruitful group to dive even deeper into the data, as mentioned in this proposal.

Let us know if you have any comments or questions.


Updates from the study:

Apologies for the lack of updates; there is a lot happening at the Stakeholder Study!


This study is completed and available for download here: OSF :partying_face:

The three deliverables outlined in this proposal can be found in the link above.

  • A synthesis of the token engineering ecosystem
  • A methodological report on how the synthesis was generated
    The methodology we used is described in detail in the report so anyone can use it to conduct further studies.
  • All unstructured and structured data generated by this research, stored on persistent storage
    The anonymized transcripts from the interviews are available at https://zenodo.org/records/10888336 and are also linked in the publication under "public data".

Major thanks to @nathalia and Lisa for their tireless efforts in co-writing this paper with such dedication, commitment to quality, and generosity in our teamwork. To @danlessa for helping since day 1 to structure this proposal, advising on the methodology, being an incredible editor of the paper, taking care of financials, advising on the publication and persistent storage, and grounding this work through his token engineering expertise. Thanks to @ygg_anderson for guiding the cleaning and storing of the transcripts, cleaning many of them, analyzing the objective data and creating the graphs we are using, and for his editing insights as a token engineer. Thanks to Mark for organizing the Google Drive with all the data from the interviews, cleaning many transcripts, supporting as a note taker in some interviews, and for his research into persistent storage. Thanks to @bear100 for his general support at the beginning of this project and for creating and maintaining the page https://www.testakeholderstudy.com/. Thanks to @gideonro for his detailed editing notes and additions. Thanks to Malik for cleaning transcripts and supporting us as a note taker in a few interviews. Thanks to Ataberk and @roro for their amazing editing notes.

And finally, thanks to all our participants who dedicated their time to share their token engineering expertise with us!
