Mr Eliezer, I think you’ve missed a few points here. However, I’ve probably missed more. I apologise for errors in advance.
To start with, I speculate that any system of decision making consistently gives the wrong result on some specific problem. The whole point of decision theory is to find principles that usually end up with a better result. As such, you can always formulate a situation in which a given theory gives the wrong answer: perhaps one of the facts you thought you knew was incorrect and led you astray. (At the very least, Omega may decide to reward only those who have never heard of a particular brand of decision theory.)
It’s like file compression. Bitmaps frequently contain large areas of similar colour, and knowing this, we can design a format that stores those areas in less space. However, if we then try to compress a random bitmap, it ends up taking more space than it did before compression. The same goes for human minds: they work simply and relatively efficiently, but there’s a whole field dedicated to finding flaws in their methods. If you use causal decision theory, you sacrifice your ability at games against superhuman creatures that can predict the future, in return for better decision making when that isn’t the case. That seems like a reasonably fair trade-off to me. Any theory which gets this one right opens itself either to getting another one wrong, or to being more complex and thus harder for a human to use correctly.
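The compression point can be made concrete with a toy run-length encoder (a sketch for illustration, not any real bitmap format): data with long uniform runs shrinks dramatically, while data with no runs at all roughly doubles in size, since every byte gets a count attached.

```python
def rle_encode(data: bytes) -> bytes:
    """Encode each run of identical bytes as a (count, byte) pair, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

uniform = b"\x00" * 1000                      # one long run: compresses to 8 bytes
alternating = bytes(i % 2 for i in range(1000))  # no runs: expands to 2000 bytes

print(len(rle_encode(uniform)))      # far smaller than the input
print(len(rle_encode(alternating)))  # twice the size of the input
```

No tweak to the scheme escapes this: by a counting argument, any encoder that shortens some inputs must lengthen others, just as any decision theory tuned for ordinary cases can be made to fail on an adversarially chosen one.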
The scientific method and what I know of rationality make the initial assumption that your belief does not affect how the world works. “If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself.” etc. However, this isn’t something which we can actually know.
Some Christians believe that if you pray over someone with faith, they will be immediately healed. If that is true, rationalists are at a disadvantage, because they aren’t as good at self-delusion or doublethink as the untrained. They might never end up finding out that truth. I know that religion is the mind-killer too; I’m just using the most common example of the supremely effective standard method being unable to deal with an idea. It’s necessarily incomplete.
I don’t agree with you that “reason” means “choosing what ends up with the most reward”. You’re mixing up means and ends. Arguing against a method of decision making because it comes up with the wrong answer in a specific case is like complaining that MP3 compression does a lousy job of compressing silence. I don’t think that reason can be the only tool used, just one of them.
Incidentally, I would totally only take the $1000 box, and claim that Omega told me I had won immortality, to confuse all decision theorists involved.
See chapters 1-9 of this document for a more detailed treatment of the argument.
This link is 404ing. Anyone have a copy of this?
The current version is here. (It’s Eliezer Yudkowsky (2010). Timeless Decision Theory.)