A lot of this probably comes down to:
Don’t assume that you have a rich enough picture of yourself or of the rest of reality, or that your ability to mentally trace through the consequences of actions comes anywhere near reality’s own richness in doing so.
Enough for what? Or better/worse as opposed to what?
Rich enough that, if you’re going to make these sorts of calculations, you’ll get reasonable results (rather than misleading or wildly misleading ones).
The catch, of course, is that your reply is itself a statement of the form you declared useless (misleading/wildly misleading: how do you know that?).
I think there’s some misunderstanding here. I said don’t assume. If you have some reason to think what you’re doing is reasonable or ok, then you’re not assuming.
You could get that reason from feedback on the results of prior actions. Like: http://www.aleph.se/Trans/Individual/Self/zahn.txt
True. However, this caveat applies to any formalism for decision-making; my claim is that expected utility maximization is hurt especially badly by these limitations.
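A toy sketch of that last claim (all action names, probabilities, and payoffs here are made up for illustration, not taken from the discussion): an agent maximizing expected utility under a slightly misspecified model of outcome probabilities can be steered toward an action whose true expected utility is far worse, because large payoffs amplify small probability errors.

```python
def expected_utility(probs, utils):
    """Expected utility: sum of probability-weighted payoffs."""
    return sum(p * u for p, u in zip(probs, utils))

# Action A: modest, robust payoff. Action B: long shot with a huge payoff.
utils = {"A": [10, 0], "B": [1000, -5]}

# The agent's belief about B's success probability is off by under 5 points.
true_probs     = {"A": [0.9, 0.1], "B": [0.001, 0.999]}
believed_probs = {"A": [0.9, 0.1], "B": [0.05,  0.95]}

best_believed = max(utils, key=lambda a: expected_utility(believed_probs[a], utils[a]))
best_true     = max(utils, key=lambda a: expected_utility(true_probs[a], utils[a]))

print(best_believed)  # the EU maximizer picks the long shot, "B"
print(best_true)      # under the true probabilities, "A" was better
```

The same small modeling error would barely matter to a rule like "prefer robust, well-tested options", which is one way to read the claim that expected utility maximization is hit especially hard by an impoverished picture of reality.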