This is a case where a modern (or even science fictional) problem can be solved with a piece of technology that was known to the builders of the pyramids.
The technology in question is the promise. If the overall deal is worthwhile, then the solution is for me to agree to it upfront. After that I don’t have to do any more utility calculations; I simply follow through on my agreement.
The game theorists don’t believe in promises if there are no consequences for breaking them. That’s what all the “Omega subsequently leaves for a distant galaxy” business is about.
If you’re using game theory as a normative guide to making decisions, then promises become problematic.
Personally, I think keeping promises is excellent, and I think I could and would keep them even in the absence of consequences. However, everyone would agree that I am only boundedly rational, and the game theorists have a very good explanation for why I would loudly support keeping promises (pro-social signaling), so my claim might not mean that much.
Recall, however, that the objective is not to be someone who would do well in fictional game theory scenarios, but someone who does well in real life.
So one answer is that real-life people don’t suddenly emigrate to a distant galaxy after one transaction.
But the deeper answer is that it’s not just the negative consequences of breaking one promise, but of being someone who has a policy of breaking promises whenever it superficially appears useful.
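The policy-versus-single-transaction point can be sketched numerically. Assuming purely hypothetical payoffs (a gain of 1 per kept-promise transaction, a one-time gain of 3 for breaking a promise, after which the counterparty refuses all further deals), a promise-breaking policy wins the one-shot game but loses badly over repeated interactions:

```python
def total_payoff(keeps_promises: bool, rounds: int) -> int:
    """Cumulative payoff over repeated transactions under a fixed policy.

    Payoff numbers are illustrative assumptions, not from the original text.
    """
    total = 0
    for _ in range(rounds):
        if keeps_promises:
            total += 1   # modest cooperative gain each round
        else:
            total += 3   # one-time gain from breaking the promise
            break        # reputation lost: no further transactions
    return total

# One-shot: breaking wins (3 vs 1). Over ten rounds: keeping wins (10 vs 3).
print(total_payoff(False, 1), total_payoff(True, 1))
print(total_payoff(False, 10), total_payoff(True, 10))
```

The “distant galaxy” scenarios correspond to forcing `rounds = 1`, which is exactly the regime real life rarely puts us in.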
We are trying to figure out a formal decision theory of how you should act.