I precommit to acting as if I had made any precommitment I find myself wishing I had made. If I make this clear before an iterated prisoners’ dilemma, a rational partner would not try the “finger slipped” excuse against me, because I would wish that I had precommitted to punishing defection due to finger slipping.
I would still allow the grieving student to turn in the paper late, because in that situation, I do not wish I had precommitted to rejecting that excuse.
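To make the iterated case concrete, here is a minimal sketch of that scenario. The tit-for-tat-style punishment rule, the payoff-free framing, and the “finger slipped” flag are my own illustrative assumptions, not anything specified above.

```python
# Minimal sketch (illustrative assumptions throughout): an agent whose
# meta-precommitment means it punishes every defection, excuse or not.

COOPERATE, DEFECT = "C", "D"

def meta_precommitted_move(history):
    """Play tit-for-tat with zero tolerance: punish the previous round's
    defection even when it comes with a 'finger slipped' excuse."""
    if history and history[-1][0] == DEFECT:
        return DEFECT   # I wish I had precommitted to punishing this, so I do
    return COOPERATE

def run_match(opponent_script):
    """opponent_script: list of (move, excuse) pairs played in order."""
    history, my_moves = [], []
    for move, excuse in opponent_script:
        my_moves.append(meta_precommitted_move(history))
        history.append((move, excuse))
    return my_moves

# A partner who defects once and pleads "finger slipped" is punished anyway:
script = [(COOPERATE, None), (DEFECT, "finger slipped"), (COOPERATE, None)]
print(run_match(script))  # ['C', 'C', 'D'] -- the excuse buys nothing
```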
Hopefully, of course, everybody involved will intuitively understand what sorts of things you are likely to “wish [you] had precommitted to”.
Sounds like a weak precommitment. Schelling includes the theory of excuses in his work, and excuses are a key part of bargaining, since precommitments that can be averted without appearing to weaken the bargainer’s position will be averted.
IOW, once a breach has been made it will be in both parties’ interests not to have the threat carried out, and any “wiggle room” in the precommitment will be exploited. Because of this, bargainers are well-advised to make the circumstances that will trigger their threat as unambiguous and externally verifiable as possible.
I don’t see any way to do this with your model precommitment, unless the agent(s) you’re bargaining with and any third parties observing have access to your source code.
If typing an abbreviation saves you less than 10 keystrokes, but increases the time taken to parse your post by at least 30 seconds for at least one reader, it almost certainly isn’t socially optimal to use it (although I did get the pleasure of an ‘aha’ moment when I finally figured out what ‘IOW’ was supposed to mean).
IOW is such a common abbreviation online that it actually INCREASED my speed of parsing the post.
And I suspect, if you encounter it in future, you may eventually find it to save you time also. IOW, “IOW” may actually be socially optimal in many contexts, even if some people don’t understand it.
Much like using the abbreviation FAI.
You both managed to have this discussion without actually saying that IOW should be parsed as "In other words." This was suboptimal, as it forced me to google it myself. Hopefully this post will provide utility to future readers.
Random future reader (ten years in the future in fact) confirming that this post was indeed of utility to me.
My greatest legacy
I live on the Isle of Wight. I was very confused!
It can be weak on its own, if I am not predictable. But it strengthens any more specific precommitments I combine it with. Suppose another agent plans to exploit a one-time excuse, where part of convincing me not to carry through the threat is that I can precommit to never allowing that excuse again. That agent can predict that I will wish I had precommitted from the beginning to never allow the excuse at all, that I will therefore act as if I had made that precommitment, and that I will carry through the punishment anyway. So they should not provoke the punishment planning to offer that excuse.
This greatly strengthens any specific precommitment I make, by preventing the exploitation of one-time excuses. If an agent wants to offer me an excuse, they will need to convince me that I should always allow that excuse.
In the grieving student example, I am willing to allow the excuse for the same reasons that I am willing to explicitly amend the precommitment to allow an exception in those circumstances.
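A hedged sketch of that rule, with the predicate name and example excuses my own invention: an excuse is honored only if I would endorse amending the precommitment to always allow it, never as a one-time exception.

```python
# Illustrative sketch: excuses are accepted only as standing amendments,
# never as one-time exceptions. The policy and examples are assumptions.

def should_accept_excuse(excuse, would_always_allow):
    """Accept `excuse` iff I endorse a precommitment that allows it in
    *every* such case from the beginning."""
    return would_always_allow(excuse)

# Example policy: bereavement is a class of excuse I would amend the rule
# to always allow; "my finger slipped" is not.
policy = lambda excuse: excuse in {"death in the family"}

print(should_accept_excuse("death in the family", policy))  # True
print(should_accept_excuse("finger slipped", policy))       # False
```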
I see—as an anti-single-exception rule, that makes sense to me, as long as it can be communicated clearly. The term “wishing” seemed insufficiently constrained and precise to me at first.
Isn’t this the default position for TDT and UDT?
For UDT, yes; for TDT, only partially. Though the main purpose of precommitment is in credibly signaling that you have precommitted, which is harder for meta-precommitments like this.
I think TDT and UDT are more sophisticated than my precommitment strategy.
Two agents facing each other in a one-shot true prisoner’s dilemma would mutually cooperate if they were both using TDT or UDT, but not if both were using CDT plus my precommitment strategy.
You wouldn’t wish that you precommitted to cooperating iff you predicted that the other agent would cooperate iff he predicts that you will cooperate?
The problem is not cooperating conditionally on making that prediction. The problem is being able to make that prediction. TDT and UDT solve this by noting that it is the same algorithm making the decision for both agents.
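A toy illustration of that point, entirely my own construction: when both players literally run the same decision procedure, “predicting the opponent” can reduce to inspecting your own algorithm. Crude source-identity stands in here for the much more general logical reasoning TDT/UDT actually do.

```python
# Toy sketch: "same algorithm" cooperation via crude source comparison.
# Real TDT/UDT reasoning is far more general than this literal check.

import inspect

def same_algorithm_player(opponent):
    """Cooperate iff the opponent is running this very decision procedure;
    otherwise there is no basis for predicting its cooperation."""
    if inspect.getsource(opponent) == inspect.getsource(same_algorithm_player):
        return "C"  # I can predict it perfectly, because it is me
    return "D"

# Two copies of the same algorithm facing each other cooperate:
print(same_algorithm_player(same_algorithm_player))  # 'C'
```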
Making the prediction is absolutely the hard part, but I still think that two agents using CDT with your precommitment strategy, if they were able to predict each other accurately, would cooperate.
TDT/UDT do seem a bit more sophisticated, but I’m not solid enough on this decision theory thing to see where they would advocate different decisions. I just don’t think this is one of them.
That’s fine, as long as you lay out the relative importance of different aspects so people can predict what will and won’t be important to you.