I haven’t yet seen an answer to Pascal’s Wager on LW that wasn’t just wishful thinking. In order to validly answer the Wager, you would also have to answer Eliezer’s Lifespan Dilemma, and no one has done that.
I’m pretty sure Peer meant the original version of Pascal’s Wager, the argument for Christianity, which has the obvious answer, “What if the Muslims are right?” or “What if God punishes us for believing?”
That’s not an answer, because the probabilities of those things are not equal.
“God punishes us for believing” has a much lower probability, because no one believes it, while many people believe in Christianity.
Why does the probability have anything to do with the number of people who believe it?
“Muslims are right” could easily be more probable, but then there is a new Wager for becoming Muslim.
There’s then the problem that the expected value involves adding multiples of positive infinity (if you choose the right religion) to multiples of negative infinity (if you choose the wrong one), which gives you an undefined result.
The probabilities simply do not balance perfectly. That is basically impossible.
The probability of any kind of God existing is extremely low, and it’s not clear we have any information on what kind of God would exist conditioned on some God existing.
There’s also the problem that if you know the probability that God exists is very small, you can’t believe, you can only believe in belief, which may not be enough for the wager.
The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small). But for positions that no one believes, this probability is lacking.
That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another. And it is wishful thinking to say that it is just as good to choose the less probable way as the more probable way. For example, there are two doors. One has a 99% chance of giving negative infinite utility, and a 1% chance of giving positive infinite utility. The second door has a 1% chance of negative infinite utility, and a 99% chance of positive infinite utility. Defined or not, it is perfectly obvious that you should choose the second door.
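(A minimal sketch of the breakdown, using Python’s IEEE floating-point infinities and the door numbers above; naive expected-value arithmetic returns nan, “not a number”, for both doors, so it cannot rank them:)

```python
# Naive expected utility with infinite payoffs: both doors come out
# as nan (undefined), so this model cannot rank them at all.
pos_inf = float("inf")
neg_inf = float("-inf")

door1 = 0.99 * neg_inf + 0.01 * pos_inf  # -inf + inf -> nan
door2 = 0.01 * neg_inf + 0.99 * pos_inf  # -inf + inf -> nan

print(door1, door2)  # nan nan
```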
We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist. Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.
It may be true that some people couldn’t make themselves believe in God, but only believe in belief, but that would be a problem with them, not with the argument.
The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small). But for positions that no one believes, this probability is lacking.
This can’t be right. The number of people who follow any one religion is affected by how people were raised, by cultural and historical trends, by birth rates, and by the geographic and social isolation of the people involved. None of these things have anything to do with truth. Currently Christianity has twice as many people as any other religion because of historical and political facts; you think this makes it more likely than Islam to be true?
Suppose that in 50 years, because of predicted demographic trends, there are twice as many Muslims as Christians. You then seem to be in the strange position of thinking (a) Christianity is more likely to be true now, but (b) because of changing demographics, you will be likely to think Islam is more likely to be true in 50 years.
We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist.
How do people’s claims give you that information? Religions are human cultural inventions. At most one could be true, which means the others have to be made up anyway. If a God did exist, why is it more likely that one of them is true than that they were all made up and humanity never came close to guessing the nature of the God that did exist?
Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.
My intuition tells me that if a God of some sort does exist, the probabilities end up favoring a God that rewards looking at the evidence and believing only what you have reason to be true, but that may just be my bias showing.
Intuition about what religion is true is likely to reflect your upbringing and your culture more than the actual truth. Given that there’s currently no evidence of any kind of God or afterlife, I can’t see how there is any evidence that God X is more likely to exist than God Y.
It may be true that some people couldn’t make themselves believe in God, but only believe in belief, but that would be a problem with them, not with the argument.
It’s also worth noticing that Pascal’s Wager uses a spherical cow version of religion. Some religious traditions might require actual belief for infinite utility, others just belief in belief, others just certain behavior or words independent of belief.
I’ll answer this later. For now I’ll just point out that you aren’t addressing my position at all, but other things which I never said. For example, I said that if people believe something, this increases its probability. You respond by asking things like “Currently Christianity has twice as many people… you think this makes it more likely than Islam to be true?” I definitely did not say that the probability of a religion is proportional to the number of people who believe it, just that religions that some people believe are more likely than ones that no one believes.
That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another.
Right; or if you don’t decide exactly, at least you have to do (believe or not believe) one or the other.
I would say that the model breaks down. Mathematics (or at least the particular mathematical model being used) is not capable of describing this situation, but that doesn’t make the situation itself meaningless. (That would be a version of the map/territory fallacy.)
Defined or not, it is perfectly obvious that you should choose the second door.
Here I disagree with you. I would say that you have not given enough information. It is as if you gave the same problem statement but with the word ‘infinite’ removed (so that we only know whether the utilities are positive or negative). It may seem as if you have given all of the information: the probabilities and the utilities. But the mathematics which we use to calculate everything else out of those values breaks down, so in fact you have not given all of the information.
One important missing piece of information is the ratio of the first positive utility to the second. That and two other independent ratios would be enough information, if they’re all finite. (If not, then we might need more information.)
And don’t tell me that these ratios are undefined; the mathematical model that calculates the ratios from the information given breaks down, that’s all. In fact, there is an alternative mathematical model of decision-making which deals only in ratios between utilities; if you’d followed that model from the beginning, then you would never have tried to state the actual utilities themselves at all. (For mathematicians: instead of trying to plot these 4 utilities in a 4-dimensional affine space, plot them in a 3-dimensional projective space.)
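(A minimal sketch of the ratios-only point, with invented finite utilities: rescaling all four utilities by a common positive factor never changes which door wins, so only the three independent ratios, the projective point, carry decision-relevant information:)

```python
# Hypothetical finite utilities for the four outcomes (my numbers,
# chosen only to illustrate the invariance).
def door2_beats_door1(u1_neg, u1_pos, u2_neg, u2_pos):
    ev1 = 0.99 * u1_neg + 0.01 * u1_pos  # door 1: 99% bad, 1% good
    ev2 = 0.01 * u2_neg + 0.99 * u2_pos  # door 2: 1% bad, 99% good
    return ev2 > ev1

utilities = (-5.0, 3.0, -4.0, 2.0)
base = door2_beats_door1(*utilities)
for scale in (0.001, 1.0, 1e9):
    scaled = door2_beats_door1(*(scale * u for u in utilities))
    assert scaled == base  # the decision depends only on utility ratios
```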
It may be true that some people couldn’t make themselves believe in God, but only believe in belief, but that would be a problem with them, not with the argument.
Right; the proper conclusion of the argument is not to believe, but to try to believe. And if you buy the argument, then you should try very hard!
I agree with everything you’ve said here, including that in the two door situation the decision could go the other way if you had more information about the ratio of the utilities. Still, it seems to me that what I said holds in this case: if you are given no other information except as stated, you should choose the second door, because your best estimate of the ratios in question will be 1:1. But if you have some other evidence regarding the ratios, or if they are otherwise specified in the problem, your argument is correct.
Can you please remind me what the question is that you’re looking for an answer to?
And can you please provide a link to an explanation of what Eliezer’s Lifespan Dilemma is?
http://lesswrong.com/lw/17h/the_lifespan_dilemma/
If you read the article and the comments, you will see that no one really gave an answer.
As far as I can see, it absolutely requires either a bounded utility function (which Eliezer would consider scope insensitivity), or it requires accepting an indefinitely small probability of something extremely good (e.g. Pascal’s Wager).
If you believe that there is something with arbitrarily high utility, then by definition, you will accept an indefinitely small probability of it.
Assume my life has a utility of 10 right now. My preferences are such that there is absolutely nothing I would take a 99% chance of dying for. Then, by definition, there’s nothing with a utility of 1000 or more (treating death as utility 0, a 1% chance at a prize only beats my current 10 if the prize’s utility exceeds 1000). The problem comes from assuming that there is such a thing when there isn’t. I don’t see how this is scope insensitivity; it’s just how my preferences are.
Someone who really had an unbounded utility function would really take as many steps down the Lifespan Dilemma path as Omega allowed. That’s really what they’d prefer. Most of us just don’t have a utility function like that.
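(A toy sketch of the dilemma’s dynamic, with invented parameters rather than Eliezer’s exact ones: each offer multiplies the payoff’s utility by 1000 while shaving 1% off the probability of surviving to collect it, so an agent maximizing an unbounded expected utility accepts every step even though the survival probability tends to zero:)

```python
# Toy Lifespan Dilemma: parameters are illustrative, not Eliezer's.
p, u = 1.0, 10.0  # survival probability, utility of the current deal
for step in range(1, 11):
    p *= 0.99     # each offer slightly lowers the chance of surviving
    u *= 1000.0   # ...while enormously raising the payoff's utility
    # Expected utility p*u grows without bound, so an unbounded-utility
    # agent takes every offer, even as p -> 0 in the limit.
    print(f"step {step:2d}: p = {p:.4f}, expected utility = {p * u:.3g}")
```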
So you wouldn’t die to save the world? Or do you mean hypothetically if you had those preferences?
I agree with the basic argument; it is the same thing I said. But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.
So you wouldn’t die to save the world? Or do you mean hypothetically if you had those preferences?
If the world is doomed immediately unless I die for it, I have a 100% chance of dying immediately, so I might as well die to save the world. But if it’s a choice between living another 50 years and then the world ending, or dying right now and saving the world, and no one would know, I wouldn’t die to save the world. I’m too selfish for that.
But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.
Then he should keep taking Omega’s offers, and any discomfort he has with that is faulty intuition, like the discomfort from choosing TORTURE over SPECKS.
I would die right now to prevent the world from ending 50 years from now. It’s actually hard for me to imagine that you’re as selfish as you say. If the situation actually came up you might find out differently. But I guess it’s possible.
You might be right that Eliezer should simply accept the Lifespan Dilemma as the necessary consequence of his utility function (at least as he defines it).
It’s actually hard for me to imagine that you’re as selfish as you say.
Really? Why? I can’t imagine myself dying to save the world; it’s completely implausible to me and I have a hard time understanding what it would feel like to be willing to do so. But people often die for much less.
It’s simple. The ‘selfish’ terminology is just obscuring matters. Just take your feelings about one thing (your life) and substitute something else (someone else’s life).
Unknowns’ utility function is of a type that assigns infinitely high utility to saving the world. Not saving the world is simply not an option. That’s what Unknowns wants.
Edit: Forget what I said about Unknowns previously.
Blueberry was the one who introduced the “selfish” terminology. He said, “I wouldn’t die to save the world. I’m too selfish for that.”
I’m really sorry. I confused you with someone else I talked to yesterday. My mistake; I edited my comment and will take more care in the future.
Thank you.
The standard answer is “But what if the Muslims are right?” You can’t be both a Christian and a Muslim, and you lose by guessing wrong. We have no more reason to believe we’ll be rewarded for believing in God X than we have to believe we’ll be punished for believing in God X, as we would be if God Y were the correct one.
All this does is show that the dilemma must have a flaw somewhere, but it doesn’t explicitly show that flaw. The same problem occurs with finding the flaws in proposed perpetual motion machines: you know there must be a flaw somewhere, but it’s often tricky to find it.
I think the flaw in Pascal’s wager is allowing “Heaven” to have infinite utility. Unbounded utilities, fine; infinite utilities, no.
See The Pascal’s Wager Fallacy Fallacy.
Betting on infinity.
That’s a great video.
Eliezer in that article:
“The original problem with Pascal’s Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal’s original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).”
This is just wishful thinking, as I said in another reply. The probabilities do not balance.
What about “living forever”? According to Eliezer, this has infinite utility. I agree that if you assign it a finite utility, then the Lifespan Dilemma fails (at some point), and similarly, if you assign “heaven” a finite utility, then Pascal’s Wager will fail, if you make the utility of heaven low enough.
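(To spell out “low enough” with hypothetical numbers: writing H for a finite utility of heaven, p for the probability of the relevant God, and c for the finite expected cost of believing, the wager recommends belief only when pH > c, so a small enough p, or a modest enough H, defeats it:)

```python
# Bounded-utility Pascal's Wager (all numbers hypothetical).
p = 1e-9      # probability that this particular God exists
H = 1e6       # finite utility assigned to heaven
cost = 1.0    # finite expected cost of believing
print("believe" if p * H > cost else "don't believe")  # -> don't believe
```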