Universal Python Library for TE Models

We need to start maintaining a token engineering library of standard DeFi and governance components. It's inevitable that this will happen soon enough. If tecmns can be the ones to do it, we will earn infinite street cred and branding, as every token engineer in the world, for all time, will be installing our models into their Python environments!

Imagine:

pip install tecmns
from tecmns.commonsbuild import praise, abc, convictionvoting
from tecmns.onehive import continuousissuance, convictionvoting  # "1hive" spelled onehive: module names can't start with a digit
from tecmns.ltfte import multisigmoid, cashflow, alexandra
from tecmns.tea import skilltree


We are not far from this reality. These models are already being implemented [Sem et al., 1hive]. They just haven't been organized anywhere. All we have to do is start a repository, create a standard structure that we are happy with, include tests and standards for contributions, and then push to PyPI. This is straightforward. While we're at it, why don't we hash our models and have them trace a legacy? It's like CryptoKitties for TE models. It's as simple as hashing the files; each hash can then be associated with its GitHub author.
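The hashing itself needs nothing beyond the Python standard library. A minimal sketch (the helper name and the registry-entry shape are illustrative, not an existing tecmns API):

```python
import hashlib
from pathlib import Path

def model_hash(path: str) -> str:
    """Return the SHA-256 hex digest of a model file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# A registry entry could then pair the hash with its GitHub author, e.g.:
# {'model': 'convictionvoting.py', 'author': 'sem', 'sha256': model_hash('convictionvoting.py')}
```

Because the digest is deterministic, two contributors hashing the same file always get the same value, which is what makes the lineage idea work.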

Parameterized Classes

I propose a standard way of implementing token engineering component models: specifically, using Python's param library. Parameterized Python classes are a revolution in software engineering, and param is the perfect choice for making reusable token engineering component models.

From the param docs:

Are you a Python programmer? If so, you need Param!

Param is a library for handling all the user-modifiable parameters, arguments, and attributes that control your code. It provides automatic, robust error-checking while dramatically reducing boilerplate code, letting you focus on what you want your code to do rather than on checking for all the possible ways users could supply inappropriate values to a function or class.

Param lets you program declaratively in Python, stating facts about each of your parameters up front. Once you have done that, Param can handle the rest (type checking, range validation, documentation, serialization, and more!).

Param-based programs tend to contain much less code than other Python programs, instead just having easily readable and maintainable manifests of Parameters for each object or function. This way your remaining code can be much simpler and clearer, while users can also easily see how to use it properly. Plus, Param doesn’t require any code outside of the Python standard library, making it simple to add to any project.

Param is also useful as a way to keep your domain-specific code independent of any GUI or other user-interface code, letting you maintain a single codebase to support both GUI and non-GUI usage, with the GUI maintainable by UI experts and the domain-specific code maintained by domain experts.

Proposing a Modular Approach

It would be cool if this repository were more like a bootloader containing registered repositories. This would allow a fractal structure: a generalization of repositories containing registry files, such that registered models (repositories) could themselves contain registered models. We would then have a standard packaging and loading framework for models, forming a network topology of models. Like an internet of models.

Consider the following netlist contained in commonsbuild/tecmns-models:
netlist.py

longtailfinancial/tokenengineering
commonsbuild/hatch
commonsbuild/commonsupgrade
1hive/models

Another file, tecmns.py, provides the command-line entry point:

tecmns --init

The above command clones the repositories in the netlist and verifies their hashes. The hashes could potentially be broadcast on-chain. This effort could borrow from, and lend to, the skills-forest work on skill trees. Consider the badge of honour in being an early author of repositories registered in the TE netlist.
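A sketch of what `tecmns --init` could do under the hood, using only the standard library (the function names, clone layout, and hosting URL are hypothetical):

```python
import subprocess
from pathlib import Path

# The netlist from commonsbuild/tecmns-models, inlined here for illustration.
NETLIST = """\
longtailfinancial/tokenengineering
commonsbuild/hatch
commonsbuild/commonsupgrade
1hive/models
"""

def parse_netlist(text: str) -> list[str]:
    """Return org/repo entries, skipping blank lines and # comments."""
    return [line.strip() for line in text.splitlines()
            if line.strip() and not line.strip().startswith('#')]

def init(netlist: str, dest: Path) -> None:
    """Clone each registered repository and record its HEAD commit hash."""
    for repo in parse_netlist(netlist):
        target = dest / repo.replace('/', '__')
        subprocess.run(['git', 'clone',
                        f'https://github.com/{repo}.git', str(target)],
                       check=True)
        head = subprocess.run(['git', '-C', str(target), 'rev-parse', 'HEAD'],
                              capture_output=True, text=True,
                              check=True).stdout.strip()
        print(repo, head)  # this hash could later be broadcast on-chain
```

Because registered repositories can themselves carry a netlist, calling `init` recursively on each clone would give the fractal, bootloader-like structure described above.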

This is not another simulation framework.

The beautiful part about this approach is that it is perfectly complementary to simulation frameworks like cadCAD and TokenSpice. This will not be another simulation framework; it will be a modelling framework. The tecmns models can be passed into cadCAD simulations like so:

# Simulation Imports
import pandas as pd
from cadCAD.configuration.utils import config_sim
from cadCAD.configuration import Experiment
from cadCAD.engine import ExecutionMode, ExecutionContext
from cadCAD.engine import Executor
from cadCAD import configs

# Model Imports
from tecmns.commonsbuild import CommonsBuild, HatchUpgrade, AugmentedBondingCurve, CulturalBuild
from tecmns.ltfte import InterestRate, MultiSigmoid, Alexandra, Opex, LTFTE
from tecmns.onehive.models import ConvictionVoting  # "1hive" spelled onehive: module names can't start with a digit
from tecmns.ecosystem import Eco, Market  # hypothetical module for the ecosystem and market models

# Initial State
ecosystem = Eco(parameter='tea')

cb_system = CommonsBuild(
    phase1=CulturalBuild(),
    phase2=HatchUpgrade(
        fundraiser=AugmentedBondingCurve(reserve_ratio=0.65, starting_price=1, starting_supply=1e6),
        governance=ConvictionVoting(parameter='supagov'),
    ),
)

ltfte_system = LTFTE(
    fundraiser=MultiSigmoid(),
    governance=ConvictionVoting(parameter='karmaconf'),
)

# Metrics and emergent properties
initial_state = {
    'metrics': {
        **cb_system.metrics(),
        **ltfte_system.metrics(),
        **ecosystem.metrics(),
        'x_factor': 1,
        'market_scale': 1,
    },
}

# Controllable and uncontrollable models
system_params = {
    'controllables': [{
        'tecmns': cb_system,
        'ltfte': ltfte_system,
        'ecosystem': ecosystem,
    }],
    'uncontrollables': [
        {
            'market': Market(parameter='bull_market'),
            'interest_rates': InterestRate(0.314),
        },
        {
            'market': Market(parameter='bear_market'),
            'interest_rates': InterestRate(-0.0125),
        },
    ],
}

### Component Step Logic
def p_step(params, substep, state_history, previous_state):
    controllables = params['controllables']
    uncontrollables = params['uncontrollables']
    for model in uncontrollables.values():
        model.step()
    for model in controllables.values():
        model.step()
    return {}

### Interaction Logic
def p_interact(params, substep, state_history, previous_state):
    controllables = params['controllables']
    uncontrollables = params['uncontrollables']

    tec = controllables['tecmns']
    ltf = controllables['ltfte']
    ecosystem = controllables['ecosystem']
    market = uncontrollables['market']

    x_factor = ltf.treasury * tec.treasury
    market.x_factor = x_factor
    market.scale = ecosystem.scale(market)

    hatchers: int = tec.hatchers
    token_engineers: int = ecosystem.token_engineers
    token_engineering_opportunities = market.opportunities

    market.open(hatchers, token_engineers, token_engineering_opportunities)

    ltf.grow(token_engineering_opportunities)
    tec.grow(ltf.job_opportunities())

    nft_skill_forest.develop(ltf, tec, market)  # assumed defined elsewhere
    te_academy.teach()                          # assumed defined elsewhere

    return {'x_factor': x_factor, 'market_scale': market.scale}

def s_metrics(params, substep, state_history, previous_state, policy_input):
    controllables = params['controllables']
    uncontrollables = params['uncontrollables']
    tec = controllables['tecmns']
    ltf = controllables['ltfte']
    ecosystem = controllables['ecosystem']
    market = uncontrollables['market']

    # cadCAD state update functions return a (variable, value) pair
    return 'metrics', {
        **tec.metrics(),
        **ltf.metrics(),
        **ecosystem.metrics(),
        **market.metrics(),
        'x_factor': policy_input['x_factor'],
        'market_scale': policy_input['market_scale'],
    }

partial_state_update_blocks = [
    {
        'policies': {
            'p_step': p_step,
            'p_interact': p_interact,
        },
        'variables': {
            'metrics': s_metrics,
        },
    },
]


del configs[:]

sim_config = config_sim({
    'T': range(300),     # number of timesteps to simulate
    'N': 1,              # number of Monte Carlo runs
    'M': system_params,  # system parameters as defined above
})

experiment = Experiment()
experiment.append_configs(
    initial_state=initial_state,
    partial_state_update_blocks=partial_state_update_blocks,
    sim_configs=sim_config,
)

exec_context = ExecutionContext()
simulation = Executor(
    exec_context=exec_context,
    configs=configs,
)

#print("Started Simulation at:", datetime.now().strftime("%H:%M:%S"))

raw_result, tensor_field, sessions = simulation.execute()

df = pd.DataFrame(raw_result)

In the above example, we borrow a pattern that Dr. McConaghy implements in TokenSpice: the split into controllables, uncontrollables, and metrics.
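For readers unfamiliar with TokenSpice, the split can be illustrated in a few lines with hypothetical stand-in models (the `Model` class below is a toy, not part of TokenSpice or tecmns):

```python
class Model:
    """Minimal stand-in: anything with step() and metrics()."""
    def __init__(self, name, value=0.0):
        self.name, self.value = name, value

    def step(self):
        self.value += 1

    def metrics(self):
        return {f'{self.name}_value': self.value}

# Controllables are models whose parameters the system designer chooses;
# uncontrollables model the environment (markets, rates) the designer cannot set.
controllables = {'tecmns': Model('tec'), 'ltfte': Model('ltf')}
uncontrollables = {'market': Model('market')}

# One simulation step: advance every model, then collect metrics into state.
for model in (*controllables.values(), *uncontrollables.values()):
    model.step()

state = {k: v
         for m in (*controllables.values(), *uncontrollables.values())
         for k, v in m.metrics().items()}
print(state)  # {'tec_value': 1.0, 'ltf_value': 1.0, 'market_value': 1.0}
```

The simulation framework only ever sees the uniform step()/metrics() interface, which is what lets tecmns components plug into cadCAD or TokenSpice without either framework knowing the models' internals.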

Proposal Information

Proposal Description:
Implementation of the netlist modular TE library described above

Proposal Details:
See proposal content above.

Expected duration or delivery date (if applicable):
Two weeks

How does this help Token Engineers and benefit the Token Engineering community?
See proposal content above.

Team Information (For Funding Proposals)

Names, usernames, and/or relevant social links for team members (Twitter, Github, TEC Forum, etc.):
@ygg_anderson
@sem
@0xNuggan
@navier_stoked
@Heater

Skills and previous experience in related or similar work:
We have been busy building. This proposal is to organize work done and make it highly available to the world.

Funding Information (For Funding Proposals)

Amount of tokens requested:
25K wxDAI

Ethereum address where funds shall be transferred:
0x8C6e8021de64150BF374640Eaf7732542D93aEb8

More detailed description of how funds will be handled and used:
The address above is the Longtail Financial vault. Contributors can register with LTF to be compensated based on a combination of hours logged, commits pushed, and models authored.


That’s rad, thank you for posting this!

I think the most interesting aspect of these modules is their interoperability. If we somehow map the smart contracts' inputs and outputs to the Python classes' inputs and outputs, we can model any tokenomic system and visualize how those small pieces behave when they are interconnected.


Very cool, thanks for writing this up! Regarding the hashing idea, I see some similarities with the work stream we've been hacking away at with the TE Academy to better understand the distribution of educational digital content (i.e. blog posts, eTextbooks, videos, etc.) through a network with the intention of flowing value back down those contribution channels to original creators. In this common repository use case, however, the digital content is a Token Engineering model component (not a textbook), and the creators are developers (not teachers/researchers), but the same value reward mechanism could come into play!

@akrtws FYI since this idea came up as a use case in our last meeting too!