This is really interesting work, and I hope it will be of value in educating people about negotiation. I agree with many of the things you say here, and with many of your design decisions, but there are a few minor points where I see things differently.
I think it’s valuable to player motivation to give them the ability to track their own performance between games. This could look like having a set of pre-designed maps, with a difficulty score assigned to each role; players could then track how well they did against that difficulty score. Having deliberately unbalanced maps, with some easy roles and some hard roles, could be good for uneven matches (experienced player vs. new player).
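To make the idea concrete, here’s a minimal sketch of difficulty-adjusted score tracking; the class names, the multiplicative formula, and all numbers are invented for illustration:

```python
# Hypothetical sketch: track performance across games, normalized by role
# difficulty. The scoring formula and difficulty values are illustrative.

def adjusted_score(raw_score: float, difficulty: float) -> float:
    """Scale a raw score by role difficulty, so a modest result in a
    hard role can outrank a strong result in an easy role."""
    return raw_score * difficulty

class PlayerRecord:
    def __init__(self, name: str):
        self.name = name
        self.history = []  # (map, role, adjusted score) per game

    def record_game(self, map_name: str, role: str,
                    raw_score: float, difficulty: float) -> None:
        self.history.append(
            (map_name, role, adjusted_score(raw_score, difficulty)))

    def average(self) -> float:
        scores = [s for _, _, s in self.history]
        return sum(scores) / len(scores) if scores else 0.0

p = PlayerRecord("alice")
p.record_game("river_delta", "hard_role", raw_score=40, difficulty=1.5)  # -> 60.0
p.record_game("river_delta", "easy_role", raw_score=80, difficulty=0.5)  # -> 40.0
```

With this kind of normalization, a player who takes the hard role in an uneven match can still “beat” their own record even if they lose the game outright.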
I think that being able to violate a contract (at some cost, perhaps reputational) when the motivation is sufficiently high gives some spice to the dynamics of negotiation. Also, it seems like players should be able to mutually consent to dissolving a contract. So I think future versions should have some thought put into non-omnipotent contract enforcement mechanisms. Not letting any single player become so powerful that they can afford to ignore all contractual obligations is a pretty important lesson.
I’m not sure that having hidden agendas makes a negotiation game not work. What if you have a hidden agenda, but you are allowed to proclaim (but not prove) it? This adds a layer of complexity which might be undesirable, especially for less advanced players, since you have to track whether another player’s claimed goals match up with their behavior. But I think it’s pretty key if you want to have an ‘evil’ player, whom all the other players can only ‘win’ against by eliminating. As in the social game Werewolf, the ‘evil’ player would be motivated to claim not to be evil. To avoid being ganged up on, they’d have to try to manufacture situations in which it seemed justified for them to eliminate another player, and the others would have to detect them by noticing the unnecessary and unprofitable conflict. This would teach an important real-world negotiation lesson: distinguishing true values conflicts from lying troublemakers with hidden agendas who should be punished. Then you’d have to negotiate things like, ‘if I pay a cost to eliminate an agent of player X, who we suspect of being evil, then I want concession A as a reward.’
Or a partial information style of game, like Stratego. Maybe if you can prove someone violated a contract, you can call them out and they must pay the forfeit, but if you only suspect it then you must just be wary of them.
It could also be interesting to have rounds of the game with and without omnipotent contract enforcement, and with/without hidden agendas, and with/without the game being a partial-information game, so people could learn how each of those elements changes things. I think it would quickly become clear that a world with trustworthy contract enforcement is a less violent, more positive-sum world to live in.
I think it’s worth thinking about what makes cooperative games fun, and what makes creative games like playing minecraft or legos with friends fun. What if you were playing minecraft with other people in a very space-limited minecraft world, and had to negotiate over resources? Being able to build what you wanted to build would require negotiation. I was trying to think through a design of such a game with the intention of using it as a testing ground for AI alignment. In my hazy, undeveloped thoughts were mechanics like: blocks get harder to destroy every time they are placed (so the more times a specific block is moved, the harder it gets to move in the future), and players can tag a limited number of blocks as ‘theirs’, making it much harder for other people to destroy them. Imagine if each player had a different goal they were trying to achieve in a limited time, such as making the coolest waterfall, castle, treehouse, or cave dwelling. When the time in a round ran out, the blocks would all get frozen and players could then take their ‘final result’ screenshot from whatever point of view they thought best showed off their work. Then some other group of players, playing their own version of the game, would vote on the anonymous submissions. Or maybe an ML model could be made to judge the competition. In any case, it seems like something that a minecraft mod could encompass.
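A minimal sketch of the two mechanics above (hardening on placement, limited ownership tags); the class names, tag budget, and multiplier are all invented for illustration:

```python
# Hypothetical sketch of two negotiation-forcing mechanics:
# (1) a block gets harder to destroy each time it is placed at its position,
# (2) players can tag a scarce number of blocks as theirs, multiplying the
#     cost for anyone else to break them.
# All constants are illustrative.

TAG_MULTIPLIER = 5
TAG_BUDGET = 10

class Block:
    def __init__(self):
        self.place_count = 0
        self.owner = None

    def place(self):
        self.place_count += 1

    def break_cost(self) -> int:
        cost = self.place_count          # hardness grows with each placement
        if self.owner is not None:
            cost *= TAG_MULTIPLIER       # owned blocks are much harder to move
        return cost

class Player:
    def __init__(self, name: str):
        self.name = name
        self.tags_left = TAG_BUDGET      # scarce: you can't protect everything

    def tag(self, block: Block) -> bool:
        if self.tags_left == 0 or block.owner is not None:
            return False
        block.owner = self.name
        self.tags_left -= 1
        return True

b = Block()
b.place(); b.place()        # placed twice -> base hardness 2
alice = Player("alice")
alice.tag(b)
print(b.break_cost())       # 2 * 5 = 10
```

The scarcity of tags is what pushes players toward negotiation: you can’t protect everything, so you have to agree on boundaries for the rest.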
An interesting thing about this is that there wouldn’t be an omniscient overseeing judge. So players could try to get away with sneakily stealing resources they’d agreed not to take, and then face consequences if they were caught. So you could more easily trust agreements made about resources that were easier to monitor.
What if you catch a player violating a contract, and then they also decide to refuse to pay the agreed upon penalty? This sort of thing is a common cause of conflict escalation between nation-states. Would the other players come to the assistance of the wronged party, or choose to gang up against them or simply refuse to intervene since they’d have to pay a cost to intervene...
I think there’s a lot of room for thought and game design here.
What if you have a hidden agenda, but you are allowed to proclaim (but not prove) it?
One issue is that the game before the reveal and the game after the reveal are completely different games. It’s too destabilizing, or too multifarious as a design problem; it’s probably impossible to design a good game when people are switching from A to B independently, whenever they decide to. Coupling hidden-goal semicoop games to peacewagers is probably not a good way to carve gamespace into digestible chunks.
But discovering people’s real motives by looking at their actions is interesting, in the context of the value learning problem. Ultimately, preferences only meaningfully exist in terms of the effects they have on our actions, realized or counterfactual. So looking at actions and inferring preferences is an interesting exercise.
But another issue is that I’m not actually sure hidden motives can elude the sight of a competent peacebroker. If I ask you why you want a deal, and you try to give me an answer where the math doesn’t work out, I notice that immediately. I know you’re lying about something, and I say, “try again.” You don’t get a deal until it makes sense to me.
I suspect that in natural situations, or at least in any situation in our own future, transparency is a dominant trend: it ends up winning out, and that’s good. So in a scenario that begs the question “what if transparency doesn’t win out?”, I don’t see what’s interesting.
This adds a layer of complexity which might be undesirable, especially for less advanced players
Right, and I don’t think it’s a good starting point for studying negotiation. How do you learn to go from opaque preferences to exposed preferences if you don’t know what the end point is like, if you’ve never experienced it? And how would you find the motivation to learn it, if you’ve never experienced the benefits of having exposed preferences?
The fun in shared creativity
Mm, I kind of want the lego of legal experimentation. I want to get to a world where laws can be changed by those who live under them as easily as a gridbeam construction. To do that irl, we’re going to first have to reckon with the fact that most of us are not very competent at it yet: we have to become better at setting our laws than the lawyers before we can justify deposing the lawyers (and it is possible for us to get better at it, because the lawyers are few, overspecialized, distant, and old).
But in games we can just try legal experimentation and see what happens. So there could be a lot of fun in a game focused on the unexpected consequences of attempts to legislate, hmm.
minecraft mod
Oh, some modded minecraft servers are already peacewagers, I forgot to mention. It sounds very much like that kind of thing. A nice story about one such server is: https://www.alicemaz.com/writing/minecraft.html
But yeah, I’m super interested in games designed to test civic principles and economic mechanism designs, because they can’t always be tested irl.
My friend Murat Ayfer is working on both open multiplayer games and governance systems for online communities, and I hope something comes out of that.
It could also be interesting to have rounds of the game with and without omnipotent contract enforcement, and with/without hidden agendas, and with/without the game being a partial-information game
For sure. There’s going to need to be a combination of random permutation and levels of play to explore the whole gamespace.
another issue is that I’m not actually sure hidden motives can elude the sight of a competent peacebroker. If I ask you why you want a deal, and you try to give me an answer where the math doesn’t work out, I notice that immediately. I know you’re lying about something, and I say, “try again.” You don’t get a deal until it makes sense to me.
This actually sounds to me like an argument in favor of the hidden-agenda mode, because I bet most players don’t currently have this level of competence, and this might be a useful exercise for training it.
I agree that it’s worth giving them that experience a few times, but I sense there might not be a deep game there. Another way of putting it: I think encouraging players to take delight in negotiation under incomplete information is almost the same as training them to delay the approach towards complete information, which I think is a very unhealthy tendency to train. Like, I don’t want to teach people to avoid looking gift horses in the mouth; that is a real and common vice.
I think I was also intuiting that exposing a person’s hidden motives is basically the same process as brokering a shared plan. Hints of the existence of hidden motives start to arise when the opponent starts telling you nonsense about how their claimed values are not satisfied by the proposed plan even when the plan is very good for their claimed values, or when they start deviating from the plan. You can only tell whether a plan is good for their claimed values (and so whether they’re behaving disingenuously) by first having the negotiation skills to know what the fair compromise is and to take a stand when the opponent diverges from it.
I think that being able to violate a contract (at some cost, perhaps reputational) when the motivation is sufficiently high gives some spice to the dynamics of negotiation.
Technically, if you let players make arbitrary contracts, then they can add that themselves, by making contracts along the lines of “I will either freeze that lake within the next 5 turns OR pay you 10 gold”.
It might be helpful to think of the game as defining the maximum penalty for breaching a contract, rather than “the” penalty. If the game says that breaching a contract is death, you can say “I’ll either do X or skip a turn” to create a lesser penalty. But if the game says that breaching a contract only makes you lose a turn, then you can’t write a contract that will kill a player if they renege, because they’ll always have the option of losing a turn instead.
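That cap can be stated in one line; here is a hedged sketch, with the function name and the “penalty unit” framing invented for illustration:

```python
# Sketch: the effective breach penalty is the forfeit declared in the
# contract, capped by the game's maximum enforceable penalty, since a
# breaching player rationally takes whichever penalty is cheaper.
# All names and values are illustrative.

GAME_MAX_PENALTY = 1   # e.g. "lose a turn", expressed in penalty units

def effective_penalty(declared_forfeit: int,
                      game_max: int = GAME_MAX_PENALTY) -> int:
    """The cap binds: under a mild game-level penalty, you cannot
    contract your way up to a harsher one."""
    return min(declared_forfeit, game_max)

print(effective_penalty(10))        # capped at 1: can't contract upward
print(effective_penalty(10, 100))   # harsh ceiling: the declared 10 binds
```

So the game’s built-in penalty sets a ceiling, and disjunctive contracts (“do X or pay Y”) let players negotiate anywhere below it.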
I think it’s valuable to player motivation to give them the ability to track their own performance between games
Well, I agree, but… I think it’s also valuable to invite players to reckon with ambiguity. It would be good if there were a reliable scoring system most of the time, but I don’t get the sense that’s feasible, due to the necessity of having a lot of asymmetry (when means and ends are symmetric, bargaining is trivialized, and it invites the chimeric simplifications of population ethics), the importance of randomization for exploration, and the boons of not having to balance strength.
I think that being able to violate a contract (at some cost, perhaps reputational) when the motivation is sufficiently high gives some spice to the dynamics of negotiation. Also, it seems like players should be able to mutually consent to dissolving a contract. So I think future versions should have some thought put into non-omnipotent contract enforcement mechanisms. Not letting any single player become so powerful that they can afford to ignore all contractual obligations is a pretty important lesson.
I think I could get on board with that. I’m currently researching smart contract compute (mainly for self-sovereign identity with key rotation, portable name systems, and censorship-resistant databases), and it seems like the most hopeful approaches (e.g., Holochain, TPMs) can only provide partial or probabilistic guarantees; breach of contract never becomes totally impossible. That may just be the way of things in all worlds and all futures: intellectual labor is inherently difficult to check, the other is inherently difficult to trust, and partial trust may always be more efficient than absolute. And these breaches, however rare or expensive, can have cascading effects that make them important to study nonetheless.
One approach I forgot to mention here (might edit) was making contracts punishable with a limited quantity of subtractors that you can conditionally point at yourself. So violations have a fixed, agreed-upon impact instead of an unbounded one. And they’re scarce, which would make it super clear that the ability to constrain your future behavior is valuable.
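A minimal sketch of the subtractor idea, with all names and numbers invented for illustration: each player holds a scarce pool of penalty tokens and pledges some when entering a contract; breach costs exactly the pledged amount, and a fulfilled contract returns the pledge.

```python
# Hypothetical sketch: players back contracts with a scarce pool of
# "subtractor" tokens pledged against their own future conduct. Breach
# fires the pledge for a fixed, pre-agreed amount; fulfillment returns it.
# All names and numbers are illustrative.

class Player:
    def __init__(self, subtractors: int = 5):
        self.subtractors = subtractors   # scarce: can't back every promise
        self.score = 0

    def pledge(self, amount: int) -> bool:
        if amount > self.subtractors:
            return False                 # not enough tokens to back this contract
        self.subtractors -= amount       # reserved for the contract's duration
        return True

    def resolve(self, amount: int, fulfilled: bool) -> None:
        if fulfilled:
            self.subtractors += amount   # pledge returned
        else:
            self.score -= amount         # fixed, agreed-upon impact; never more

p = Player()
p.pledge(3)
p.resolve(3, fulfilled=False)
print(p.subtractors, p.score)   # 2 -3
```

Because the pool is reserved while a contract is open, players literally feel the value of their capacity to make binding commitments: over-promising leaves you unable to back the next deal.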