Fair enough. The only small note I’d like to add is that the phrase “if all people acted according to a [sufficiently] better decision theory” does not seem to quite convey how distant from reality—or just realism—such a proposition is. It’s less in the ballpark of “if everyone had IQ 230” than in that of “if everyone uploaded and then took the time to thoroughly grok and rewrite their own code”.
I don’t think that’s true, as people can be as simple (in given situations) as they wish to be, thus allowing others to model them, if that’s desirable. If you are precommitted to choosing option A no matter what, it doesn’t matter that you have a brain with a hundred billion neurons; you can be modeled as easily as a constant answer.
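A minimal sketch of that point (hypothetical names, nothing from the thread): however complicated the agent is internally, a predictor only needs the constant it has committed to.

```python
# Minimal sketch: a precommitted agent, however complex internally,
# can be modeled by a predictor as a constant answer.

def precommitted_agent(situation: str) -> str:
    """Stands in for a brain with ~10^11 neurons: arbitrary deliberation inside."""
    _deliberation = hash(situation)  # placeholder for any amount of internal computation
    return "A"  # the precommitment makes the output constant regardless

def predictor(agent) -> str:
    """The predictor need not simulate the agent's internals; modeling it
    as the constant it committed to is enough."""
    return agent("any situation whatsoever")

assert predictor(precommitted_agent) == "A"
```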
You cannot precommit “no matter what” in real life. If you are an agent at all—if your variable appears in the problem—that means you can renege on your precommitment, even if it means a terrible punishment. (But usually the punishment stays on the same order of magnitude as the importance of the choice, allowing the choice to be non-obvious—possibly the rulemaker’s tribute to human scope insensitivity. Not that this condition is even that necessary since people also fail to realise the most predictable and immediate consequences of their actions on a regular basis. “X sounded like a good idea at the time”, even if X is carjacking a bulldozer.)
This is not a problem of IQ.