Thanks for the reasoned reply. I guess I wasn’t clear, because I actually agree with a lot of what you just said! To reply to your points as best I can:
I dislike the suggestion that natural selection fine-tuned (or filtered) our decision theory to the optimal degree of irrationality needed to do well in lost-in-desert situations involving omniscient beings.
I would prefer to assume that natural selection endowed us with a rational or near-rational decision theory and then invested its fine-tuning into adjusting our utility functions.
Natural selection filtered us for at least one omniscience/desert situation: the decision to care for offspring (in one particular domain of attraction). Like Omega, it prevents us (though with only near-perfect rather than perfect probability) from being around in the nth generation if we don’t care about the (n+1)th generation.
Also, why do you say that giving weight to SAMELs doesn’t count as rational?
I would also prefer to assume that natural selection endowed us with sub-conscious body language and other cues which make us very bad at lying.
I would prefer to assume that natural selection endowed us with a natural aversion to not keeping promises.
The difficulty of lying actually counts as another example of Parfitian filtering: from the present perspective, you would prefer to be able to lie (just as you would prefer having slightly more money). However, because you have previously sabotaged your ability to lie, people now treat you better. “Regarding it as suboptimal to lie” is one form this “sabotage” takes, and it is part of the reason you received previous benefits.
Ditto for keeping promises.
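To put rough numbers on that filtering logic, here is a minimal sketch; the payoffs and the detection rate are made-up assumptions, chosen only to show the shape of the argument:

```python
# Toy comparison of two agent types under Parfitian filtering of lying.
# All numbers are illustrative assumptions, not measurements.

gain_from_lying = 1.0   # present-moment benefit of keeping the ability to lie
trust_benefit = 5.0     # cumulative benefit of being treated as trustworthy
detection_rate = 0.75   # how reliably others' cue-reading catches a would-be liar

# The type that kept the ability to lie collects the small gain, but is usually
# read as untrustworthy and loses most of the trust benefit.
payoff_liar = gain_from_lying + (1 - detection_rate) * trust_benefit

# The type whose ability to lie was "sabotaged" forgoes the small gain
# but reliably collects the trust benefit.
payoff_honest = detection_rate * trust_benefit

print(payoff_liar)    # 2.25
print(payoff_honest)  # 3.75: the previously sabotaged type comes out ahead
```

The sabotage looks like a pure loss from the present perspective, yet the sabotaged type is the one that ends up better off.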
Therefore, my analysis of hitchhiker scenarios would involve 3 steps. (1) The hitchhiker rationally promises to pay. (2) The (non-omniscient) driver looks at the body language and estimates a low probability that the promise is a lie; it is therefore rational for the driver to take the hitchhiker into town. (3) The hitchhiker rationally pays because the disutility of paying is outweighed by the disutility of breaking a promise.
But I didn’t make it that easy for you—in my version of PH, there is no direct communication; Omega only goes by your conditional behavior. If you find this unrealistic, again, it’s no different than what natural selection is capable of.
That is, instead of giving us an irrational decision theory, natural selection tuned the body language, the body-language-reading capability, and the “honor” module (disutility for breaking promises) so that the average human does well in interactions with other average humans in the kinds of realistic situations that humans face.
And it all works with standard game/decision theory from Econ 401. All of morality is there in the utility function as can be measured by standard revealed-preference experiments.
But my point was that the revealed preference does not reveal a unique utility function. If someone pays Omega, you can say this reveals that they like Omega, or that they don’t like Omega but view paying it as a way to benefit themselves. But once you start positing that each happens-to-win decision is made in order to satisfy yet another terminal value, your description of the situation becomes increasingly ad hoc, to the point where you have to claim that someone terminally values “keeping a promise that was never received”.
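To illustrate the non-uniqueness with invented numbers (every utility value below is an assumption made up for the example):

```python
# Two different utility functions that both "explain" the observed choice of
# paying Omega after the rescue. All numbers are invented for illustration.

def u_likes_omega(action):
    # Story 1: the agent terminally values Omega's welfare.
    return {"pay": -100 + 150,   # loses $100, gains 150 "units" from benefiting Omega
            "refuse": 0}[action]

def u_promise_keeper(action):
    # Story 2: the agent terminally disvalues breaking the (never-received) promise.
    return {"pay": -100,             # just the monetary loss
            "refuse": -150}[action]  # keeps the money, but promise-breaking costs 150

# Both utility functions rank "pay" above "refuse", so observing the payment
# alone cannot tell the two stories apart.
for u in (u_likes_omega, u_promise_keeper):
    print(max(["pay", "refuse"], key=u))   # prints "pay" both times
```

Both stories fit the single observed payment equally well; only by piling on further terminal values does one of them keep fitting every new case.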
But I didn’t make it that easy for you—in my version of PH, there is no direct communication; Omega only goes by your conditional behavior. If you find this unrealistic, again, it’s no different than what natural selection is capable of.
I find it totally unrealistic. And therefore I will totally ignore it. The only realistic scenario, and the one that natural selection tries out enough times so that it matters, is the one with an explicit spoken promise. That is how the non-omniscient driver gets the information he needs in order to make his rational decision.
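To make that concrete, here is a toy version of the three-step analysis; the probability estimate and all utilities are illustrative assumptions, not claims about actual values:

```python
# Toy expected-utility version of the explicit-promise scenario.
# Every number below is an assumption for illustration.

# Step 1: the hitchhiker promises to pay.
# Step 2: the driver hears the promise, reads the body language, and estimates
# a low probability that it is a lie.
p_lie = 0.05
promised_payment = 100   # what the driver expects to receive in town
cost_of_ride = 20        # the driver's cost of the detour

ev_take = (1 - p_lie) * promised_payment - cost_of_ride  # about 75: positive
ev_refuse = 0.0
print("driver gives the ride:", ev_take > ev_refuse)     # True

# Step 3: in town, the hitchhiker pays because breaking the promise
# carries more disutility than the payment does.
disutility_of_paying = 100
disutility_of_breaking_promise = 150
print("hitchhiker pays:", disutility_of_paying < disutility_of_breaking_promise)  # True
```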
But my point was that the revealed preference does not reveal a unique utility function.
Sure it does … By comparing cases in which an explicit promise to pay the driver has been made with cases in which it has not, you can easily distinguish how much the driver gets because of the promise from how much the driver gets because you like him.
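As a hypothetical worked comparison (the amounts are made up):

```python
# Separate the two components by comparing otherwise-identical cases with and
# without an explicit promise. The amounts are made-up assumptions.

paid_after_promise = 100  # what the driver receives when an explicit promise was made
paid_no_promise = 30      # what the driver receives purely out of goodwill

due_to_liking = paid_no_promise                        # 30
due_to_promise = paid_after_promise - paid_no_promise  # 70
print(due_to_liking, due_to_promise)                   # 30 70
```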