The post is mostly trying to imply things about AI systems and agents in a larger universe, like “aliens and AIs usually coordinate with other aliens and AIs, and ~no commitment races happen”.
For humans, it’s applicable to bargaining and threat-shaped situations. I think bargaining situations are common; threat-shaped situations are clearly rarer.
I think that while taxes in our world are somewhat threat-shaped, it’s not clear they’re “unfair”: we want everyone to pay them so that good governments can work and provide value. But if you think taxes are unfair, you can leave the country and pay different taxes somewhere else instead of going to jail.
Society’s stance towards crime, preventing it via the threat of punishment, is not what would work on smarter people: it makes sense to stop people from committing more crimes by putting them in jail or refusing to trade with them, but a threat of punishment that exists only to deter an agent from doing something won’t work on smarter agents.
But if you think taxes are unfair, you can leave the country and pay different taxes somewhere else instead of going to jail.
It’s quite difficult to do that in the US, at least. You pay taxes if you’re a citizen, even if you’re not a resident, and you’re required to keep paying taxes for the ten years following renunciation of your citizenship.
As far as I know, there’s no way for US citizens to leave the US tax regime within a decade.
Society’s stance towards crime, preventing it via the threat of punishment, is not what would work on smarter people
This is one of two claims here that I’m not convinced by. Informal disproof: if you are a smart individual in today’s society, you shouldn’t ignore threats of punishment, because it is in the state’s interest to follow through anyway, pour encourager les autres (to encourage the others). If crime prevention is in people’s interest, intelligence monotonicity implies that a smart population should be able to make punishment work at least this well. Now, I don’t trust intelligence monotonicity, but I don’t trust its negation either.
The second one is:
You can already foresee the part where you’re going to be asked to play this game for longer, until fewer offers get rejected, as people learn to converge on a shared idea of what is fair.
Should you update your idea of fairness if you get rejected often? It’s not clear to me that that doesn’t make you exploitable again. And I think this is very important to your claim about not burning utility: in the case of the ultimatum game, Eliezer’s strategy burns very little over a reasonable-seeming range of fairness ideals, but in the complex, high-dimensional action spaces of the real world, it could easily be almost as bad as never giving in, if there’s no updating.
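For concreteness, here’s a minimal Python sketch (my own illustration, not code from the post) of the probabilistic-rejection strategy under discussion: the responder accepts an unfair offer just often enough that lowballing is never profitable for the proposer. The pie size, the epsilon margin, and all names are assumptions made for the example; it shows how the burned utility grows as the two parties’ fairness ideals drift apart.

```python
# Illustrative sketch of a probabilistic-rejection strategy in the
# ultimatum game. All constants and names are assumptions for this example.

PIE = 10.0      # total amount being split
EPSILON = 0.01  # margin making unfair offers strictly unprofitable

def acceptance_probability(offer: float, fair_share: float) -> float:
    """Probability the responder accepts `offer`, given their fairness ideal.

    Fair-or-better offers are always accepted. Unfair offers are accepted
    just rarely enough that the proposer's expected take is slightly worse
    than what an honest fair offer would have given them.
    """
    if offer >= fair_share:
        return 1.0
    return (1.0 - EPSILON) * (PIE - fair_share) / (PIE - offer)

def expected_burn(offer: float, responder_ideal: float) -> float:
    """Expected utility destroyed when this offer meets this fairness ideal."""
    p = acceptance_probability(offer, responder_ideal)
    return (1.0 - p) * PIE  # a rejected game burns the whole pie

for offer in (5.0, 4.5, 4.0, 3.0, 1.0):
    print(f"offer={offer:.1f}  burned={expected_burn(offer, 5.0):.2f}")
# With the responder's ideal at 5.0: a small disagreement (offer 4.5)
# burns ~1.0 of 10, while a lowball offer of 1.0 burns ~4.5 - consistent
# with the worry that larger disagreements burn much more.
```

The epsilon margin is what makes the fair offer a strict best response for the proposer; without it, a proposer would be indifferent between fair and unfair offers in expectation.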
If you are a smart individual in today’s society, you shouldn’t ignore threats of punishment
If today’s society consisted mostly of smart individuals, they would overthrow a government that does something unfair instead of giving in to its threats.
Should you update your idea of fairness if you get rejected often?
Only if you’re a kid who’s playing with other human kids (which is the scenario described in the quoted text); and converging on fairness possibly includes getting some idea of how much effort various things take for different people.
If you’re an actual grown-up (not that we have those) and you’re playing with aliens, you probably don’t update, and you certainly don’t update in the direction of anything asymmetric.
Thanks!