...there may or may not be a good response to those arguments against my religion or philosophy or whatever, but in the meantime I need to go with my intuition that the arguments are just wrong, even if I don’t know exactly why. …
I would say (mostly based on intuition, again) that we should assign a probability to our intuitions being correct versus the complicated, subtle arguments being correct. We should take into account that our intuitions are often correct (depending on the issue in question) even if we can’t always explain why. We should also take into account that, on similar types of issues involving complicated, subtle arguments, we might not be smart enough to work out how to resolve all the arguments (again, depending on the issue in question).
In some cases there will be so much evidence that our intuitions should bow, and in other cases there will be only weak arguments against powerful intuitions, in which case (assuming it’s a case where our intuitions have some weight) we’d probably say the intuitions should win.
What about a grey case where we don’t know which one wins? In epistemic questions (i.e., which map better fits the territory), we can just leave it unresolved, assigning probabilities close to 50% to both sides. In some decision questions, however, we must pick one side or the other, and in those cases my impression (I don’t know much decision theory) is that we essentially pick one side at random.
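To make the grey-case policy above concrete, here is a minimal Python sketch (the names, tolerance, and numbers are purely illustrative, not from any standard decision theory): in epistemic questions we simply keep the credences near 50%, while in decision questions we compare the options and tie-break at random when neither clearly wins.

```python
# Minimal sketch of the "grey case" policy described above (illustrative only):
# when two options' expected utilities are indistinguishable, pick one at random.
import random

def decide(eu_a: float, eu_b: float, tolerance: float = 1e-6) -> str:
    """Return "A" or "B"; tie-break at random in the grey case."""
    if abs(eu_a - eu_b) <= tolerance:
        return random.choice(["A", "B"])   # grey case: pick a side at random
    return "A" if eu_a > eu_b else "B"     # clear case: the better option wins

print(decide(10.0, 10.0))  # grey case -> random
print(decide(12.0, 10.0))  # clear case -> "A"
```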
In the case of a pet religion / philosophy, we know that our intuitions are subject to a number of powerful biases which we need to take into account, and in many cases the arguments against our religion / philosophy might be very powerful. In the case of Pascal’s Mugging decision problems, however, to the best of my knowledge there are only weak biases involved and very iffy arguments, so I’d think we should go with our intuitions.
One more point: most of the arguments involving extreme cases of decision theory that I’ve seen (including Pascal’s Mugging) start from the assumption that “this is obviously wrong, but why?”. So we’re appealing to our intuition that it’s wrong anyway. Then we go on to say “maybe it’s wrong because of X”, extrapolate X back up the probability scale to less extreme cases, and conclude that “well, if X is correct then we should counter-intuitively not worry about this case either”. So we end up using an extrapolation of a tentative response to one intuition in order to argue against another intuition. I (intuitively?) think there’s something wrong with that.
I agree with basically all of this.
It irritates me when people talk as though Pascal’s Mugging or Wager is an obvious fallacy, but then give responses which are themselves fallacious, like saying that the probability of the opposite is equal (it is not), or that there are alternative scenarios which are just as likely and then offering much less probable scenarios (e.g. that there is a god who rewards people for being atheists), or saying that when you are dealing with infinities it does not matter which one is more probable (it does). You are quite correct that people are just going with their intuition and trying to justify it, and it would be much more honest if people admitted that.
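As a hedged illustration of that last parenthetical “(it does)”: if you replace the “infinite” payoff with an arbitrarily large finite stand-in N and let N grow without bound, the more probable scenario always has the higher expected value, so the relative probabilities never stop mattering. The probabilities below are made up for the sake of the example.

```python
# Illustrative only: two rival huge-payoff scenarios with made-up probabilities.
p_high, p_low = 1e-6, 1e-9   # the "more probable" and "less probable" scenario

for N in (1e12, 1e18, 1e24):                 # stand-ins for "arbitrarily large" stakes
    ev_high, ev_low = p_high * N, p_low * N
    print(f"N={N:.0e}: EV(more probable)={ev_high:.3e}, EV(less probable)={ev_low:.3e}")
    assert ev_high > ev_low                  # holds for every finite N, however large
```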
It seems to me that the likely true answer is that there is a limit to how much a human being can care about something, so when you assign an incredibly high utility, it does not correspond to anything you care about in reality, and so you don’t follow decision theory when this comes up. For example, suppose you knew with 100% certainty that XiXiDu’s last scenario was true: whenever you say “abracadabra”, it causes a nearly immeasurable amount of utility in the simulations, but of course this never touches your experience. Would you do nothing for the rest of your life except say “abracadabra”? Of course not, no more than you are going to donate all of your money to save children in Africa. The assigned utility simply does not correctly represent what you care about. This is also the answer to Eliezer’s Lifespan Dilemma: Eliezer does NOT care about an infinite lifespan as much as he says he does, or he would indeed take the deal. Likewise, if he really cared infinitely about eternal life, he would become a Christian (or a member of some other religion promising eternal life) immediately, no matter how low the probability of success. But neither he nor any human being cares infinitely about anything.
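Here is a minimal sketch of the bounded-caring point, under my own assumption (not anything stated in the comment) that “how much you can actually care” behaves like a utility function that saturates at some finite cap: with raw, unbounded utilities a tiny probability of an astronomical payoff dominates everything, while with the capped utility it no longer does. All numbers are invented for illustration.

```python
import math

CAP = 100.0  # hypothetical limit on how much a human being can actually care

def bounded_utility(raw_value: float) -> float:
    """Saturating utility: grows with raw_value but never exceeds CAP."""
    return CAP * (1.0 - math.exp(-raw_value / CAP))

tiny_probability = 1e-12
astronomical_payoff = 5e15   # stand-in for the mugger's "immeasurable" stakes
mundane_payoff = 50.0        # something that actually touches your experience

# Unbounded expected values: the mugging dominates (about 5000 vs 50).
print(tiny_probability * astronomical_payoff, 1.0 * mundane_payoff)

# Bounded expected values: the mugging contributes at most tiny_probability * CAP
# (about 1e-10), so the mundane option wins by an enormous margin.
print(tiny_probability * bounded_utility(astronomical_payoff),
      1.0 * bounded_utility(mundane_payoff))
```

Whether humans really have something like a bounded utility function is, of course, exactly the contested point; the sketch only shows that if they do, refusing the mugging is what expected-utility reasoning itself recommends.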
I agree that people frequently give fallacious responses, and that the opposite is not equal in probability (it may be much higher). I disagree with roughly everything else in the parent. In particular, “god” is not a natural category. By this I mean that if we assume a way to get eternal life exists, the (conditional) probability of any religion that makes this promise still seems vanishingly small—much smaller than the chance of us screwing up the hypothetical opportunity by deliberately adopting an irrational belief.
This does not necessarily mean that we can introduce infinite utility and it will add up to normality. But e.g. we observe the real Eliezer taking unusual actions which could put him in a good position to live forever if the possibility exists.