> The society’s stance towards crime - preventing it via the threat of punishment - is not what would work on smarter people
This is one of two claims here that I’m not convinced by. Informal disproof: if you are a smart individual in today’s society, you shouldn’t ignore threats of punishment, because it is in the state’s interest to follow through anyway, pour encourager les autres (“to encourage the others”). If crime prevention is in people’s interest, intelligence monotonicity (the assumption that whatever a less intelligent population can manage, a more intelligent one can manage at least as well) implies that a smart population should be able to make punishment work at least this well. Now I don’t trust intelligence monotonicity, but I don’t trust its negation either.
The second one is:
> You can already foresee the part where you’re going to be asked to play this game for longer, until fewer offers get rejected, as people learn to converge on a shared idea of what is fair.
Should you update your idea of fairness if you get rejected often? It’s not clear to me that this wouldn’t just make you exploitable again. And I think this is very important to your claim about not burning utility: in the ultimatum game, Eliezer’s strategy burns very little utility over a reasonable-seeming range of fairness ideals, but in the complex, high-dimensional action spaces of the real world, it could easily be almost as bad as never giving in, if there’s no updating.
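To put a number on that, here’s a toy calculation (my own sketch, with assumed parameters) under the probabilistic-acceptance rule as I understand the referenced strategy: accept any offer at or above your fairness ideal f, and accept a lower offer x with probability (1 − f)/(1 − x), the indifference point at which the proposer gains nothing from unfairness. (The strategy as usually described shades this slightly lower so unfairness strictly loses; the indifference point keeps the arithmetic simple.)

```python
# Toy model of utility burned in a one-shot ultimatum game over a pie of size 1.
# Responder rule: accept offers at or above their fairness ideal; accept a lower
# offer x with probability (1 - f) / (1 - x), so the proposer's expected take
# never exceeds what a fair offer would give them.

def accept_probability(offer: float, ideal: float) -> float:
    """Probability the responder accepts, given their fairness ideal."""
    if offer >= ideal:
        return 1.0
    return (1.0 - ideal) / (1.0 - offer)

def expected_burn(proposer_ideal: float, responder_ideal: float) -> float:
    """Expected fraction of the pie destroyed when the proposer offers
    exactly what *they* consider the responder's fair share."""
    offer = proposer_ideal  # proposer offers their own idea of a fair share
    return 1.0 - accept_probability(offer, responder_ideal)

# Small disagreements burn little; the loss grows with the disagreement.
for p_ideal, r_ideal in [(0.48, 0.52), (0.45, 0.55), (0.30, 0.50)]:
    print(f"ideals {p_ideal:.2f} vs {r_ideal:.2f}: "
          f"expected burn = {expected_burn(p_ideal, r_ideal):.3f}")
```

With ideals of 0.48 versus 0.52, less than 8% of the pie burns in expectation; at 0.30 versus 0.50 it is closer to 29%. A single scalar disagreement stays cheap when ideals are close, which is exactly what gets harder to guarantee once fairness has to be judged in a high-dimensional action space.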
> If you are a smart individual in today’s society, you shouldn’t ignore threats of punishment
If today’s society consisted mostly of smart individuals, they would overthrow a government that does something unfair instead of giving in to its threats.
> Should you update your idea of fairness if you get rejected often?
Only if you’re a kid playing with other human kids (the scenario described in the quoted text), and converging on fairness there plausibly includes getting some idea of how much effort various things take for different people.
If you’re an actual grown-up (not that we have those) and you’re playing with aliens, you probably don’t update, and you certainly don’t update in the direction of anything asymmetric.
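To illustrate why the asymmetric update is the dangerous one, here’s a toy simulation (entirely my own construction, with made-up numbers): a proposer who always offers slightly below the responder’s current ideal, against a responder who either holds their ideal fixed or naively lowers it after each rejection.

```python
import random

# Toy model (my construction): a greedy proposer repeatedly offers 0.10 below
# the responder's current fairness ideal. The responder uses the
# indifference-point acceptance rule; an "updating" responder also lowers
# their ideal by 0.02 after every rejection.

def run(rounds: int, update: bool, seed: int = 0) -> tuple[float, float]:
    rng = random.Random(seed)
    ideal, total = 0.5, 0.0
    for _ in range(rounds):
        offer = max(0.05, ideal - 0.10)           # proposer lowballs the current ideal
        p_accept = (1.0 - ideal) / (1.0 - offer)  # indifference-point acceptance rule
        if rng.random() < p_accept:
            total += offer
        elif update:
            ideal = max(0.05, ideal - 0.02)       # "maybe I'm asking for too much"
    return total, ideal

for update in (False, True):
    total, ideal = run(1000, update)
    print(f"updating={update}: total payoff {total:6.1f}, final ideal {ideal:.2f}")
```

In this toy run the fixed responder collects roughly a third of the pie per round, while the updating responder’s ideal gets ratcheted down to the floor and they end up accepting almost nothing: the asymmetric-update failure mode in miniature. A symmetric update, one the proposer can’t steer in a preferred direction, doesn’t have this problem.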