I think it’s valuable for player motivation to give players the ability to track their own performance between games.
Well, I agree, but… I think it’s also valuable to invite players to reckon with ambiguity. A reliable scoring system would be good to have most of the time, but I don’t get the sense that’s feasible, given the necessity of having a lot of asymmetry (when means and ends are symmetric, bargaining is trivialized, and it invites the chimeric simplifications of population ethics), the importance of randomization for exploration, and the benefits of not having to balance strength.
I think that being able to violate a contract (at some cost, perhaps reputational) when the motivation is sufficiently high adds some spice to the dynamics of negotiation. Also, it seems like players should be able to mutually consent to dissolving a contract. So I think future versions should put some thought into non-omnipotent contract enforcement mechanisms. Not letting any single player become so powerful that they can afford to ignore all contractual obligations is a pretty important lesson.
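To sketch what I mean (purely illustrative; the class names, costs, and reputation numbers below are mine, not from any existing version of the game): a contract where unilateral breach is always possible but carries a fixed reputational price, while mutual consent dissolves it for free.

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    """A breakable agreement between players (illustrative sketch)."""
    parties: tuple
    breach_cost: int                    # fixed reputational price of a unilateral breach
    active: bool = True
    dissolution_votes: set = field(default_factory=set)

    def breach(self, player: str, reputation: dict) -> None:
        """Unilateral violation: allowed, but the breaker pays the agreed cost."""
        assert player in self.parties and self.active
        reputation[player] -= self.breach_cost
        self.active = False

    def consent_to_dissolve(self, player: str) -> None:
        """Dissolution is free if every party consents to it."""
        assert player in self.parties
        self.dissolution_votes.add(player)
        if self.dissolution_votes == set(self.parties):
            self.active = False

# usage sketch
reputation = {"alice": 10, "bob": 10}
c = Contract(parties=("alice", "bob"), breach_cost=3)
c.breach("alice", reputation)   # alice violates; her reputation drops from 10 to 7
```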
I think I could get on board with that. I’m currently researching smart contract compute (mainly for self-sovereign identity with key rotation, portable name systems, and censorship-resistant databases), and it seems like the most hopeful approaches (e.g., Holochain, TPMs) can only provide partial or probabilistic guarantees: breach of contract never becomes totally impossible. That may just be the way of things in all worlds and all futures. Intellectual labor is inherently difficult to check, the other is inherently difficult to trust, and partial trust may always be more efficient than absolute. And often these breaches, however rare or expensive, can have cascading effects that make them important to study despite that rarity.
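To make “partial or probabilistic guarantees” concrete, here’s a toy model of my own (not how Holochain or TPMs actually enforce anything): work is only spot-checked with some probability, so breaching is never impossible, only unprofitable in expectation once the penalty is scaled to the audit rate.

```python
import random

def expected_breach_payoff(gain: float, penalty: float, audit_rate: float) -> float:
    """Expected value of breaching when only a fraction of work is ever checked."""
    return gain - audit_rate * penalty

def simulate_breaches(trials: int, gain: float, penalty: float, audit_rate: float) -> float:
    """Monte Carlo version of the same: breaches slip through (1 - audit_rate) of the time."""
    total = 0.0
    for _ in range(trials):
        caught = random.random() < audit_rate
        total += gain - (penalty if caught else 0.0)
    return total / trials

# Deterrence only requires penalty > gain / audit_rate; it never makes breach impossible.
print(expected_breach_payoff(gain=5.0, penalty=100.0, audit_rate=0.1))      # -5.0: deterred
print(round(simulate_breaches(100_000, 5.0, 100.0, 0.1), 2))
```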
One approach I forgot to mention here (I might edit it in) was making contract violations punishable with a limited quantity of subtractors that you conditionally point at yourself. Violations would then have a fixed, agreed impact, rather than an infinite one. And because subtractors are scarce, it would make it very clear that the ability to constrain your future behavior is valuable.
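A rough sketch of what I have in mind (the names and numbers are placeholders, nothing official): each player holds a small, fixed pool of subtractors, signing commits some of them against yourself, and a violation costs exactly the staked amount and nothing more.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    score: int = 0
    subtractors: int = 3              # scarce, fixed pool per player (illustrative number)

@dataclass
class BondedContract:
    """A commitment backed by subtractors the signer conditionally points at themselves."""
    signer: Player
    stake: int
    fulfilled: bool = False

    def __post_init__(self):
        assert self.signer.subtractors >= self.stake, "not enough subtractors left to commit"
        self.signer.subtractors -= self.stake        # committing consumes scarce capacity

    def resolve(self) -> None:
        if self.fulfilled:
            self.signer.subtractors += self.stake    # honoring the deal returns the stake
        else:
            self.signer.score -= self.stake          # violation costs exactly the staked amount

# usage sketch
alice = Player("alice", score=10)
c = BondedContract(signer=alice, stake=2)
c.resolve()                                          # unfulfilled: alice's score drops to 8
```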