I could not tell from your post if you understood that Pascal’s Wager is a flawed argument for believing in ANY belief system. You do understand this don’t you (That Pascal’s Wager is horribly flawed as an argument for believing in anything)?
Also, as Cousin_it seems to be implying (and as I would suspect as well), you seem to be exhibiting signs of the True Believer complex.
This is what I alluded to when I discussed friends of mine who would swing back and forth between being Born-Again Christians and Satanists. Don’t make the same mistake with a belief in the Singularity. One needn’t have “Faith” in the Singularity as one would in God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.
Forming beliefs is about evidence, not about blindly following something because of the good feeling one gets from a belief.
In chapter five of Jaynes, “Queer Uses for Probability Theory,” he explains that although a claimed telepath tested 25.8 standard deviations away from chance guessing, the corresponding tail probability is not what we should assign to the hypothesis that she’s actually a telepath, because there are many simpler hypotheses that fit the data just as well (various forms of cheating, for instance).
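Here is a minimal numerical sketch of the kind of calculation Jaynes is pointing at; the priors and likelihoods below are illustrative assumptions, not figures from the book:

```python
# Sketch of Jaynes's point (illustrative numbers only): an enormous likelihood
# ratio against "chance guessing" does not all flow to "telepathy", because
# "cheating" also explains the data and starts with a far larger prior.

priors = {
    "chance":    1 - 1e-2 - 1e-20,   # honest subject guessing at random (assumed)
    "cheating":  1e-2,               # some form of sensory leakage or fraud (assumed)
    "telepathy": 1e-20,              # genuinely reads minds (assumed)
}

# Likelihood of the observed hit rate under each hypothesis. A 25.8-sigma
# result is astronomically improbable under pure chance, but cheating and
# telepathy could each produce it easily.
likelihoods = {
    "chance":    1e-140,   # roughly the order of a ~25-sigma tail
    "cheating":  0.5,
    "telepathy": 0.5,
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(f"{h:9s} posterior ~ {p:.3g}")

# Chance is ruled out, but nearly all of the posterior lands on cheating,
# because it started out ~1e18 times more probable than telepathy.
```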
This example is instructive when using Pascal’s Wager to maximize expected utility. Pascal’s Wager is a losing bet for a Christian, because even though infinite positive utility at infinitesimal probability seems like a good bet, there are many likelier ways of getting infinite negative utility from that choice. Doing what you can to promote a friendly singularity can still be called “Pascal’s Wager” because it’s betting on a very good outcome with a low probability, but that probability is so many orders of magnitude higher than Christianity’s that it’s actually a rather good bet.
Obviously, you don’t want to let wishful thinking guide your epistemology, but I don’t think that’s what PI’s talking about.
Pascal’s wager is not such a horribly flawed argument. In fact, I wager we can’t even agree on why it’s flawed.
Later edit: I assume I am getting voted down for trolling (that is, disrupting the flow of conversation), and I agree with that. An argument about Pascal’s wager is not really relevant in this thread. However, especially in the context of being a ‘true believer’, it is interesting to me that statements are often made that something is ‘obvious’, when there are many difficult steps in the argument, or ‘horribly flawed’, when it’s actually just a little bit flawed or even controversially flawed. If anyone wants to comment in a thread dedicated to Pascal’s wager, we can move this to the open thread, which I hope ultimately makes this comment less trollish of me.
Partially seconded. (I think most people agree that the primary flaw is the symmetry argument, but I don’t think that argument does what they think it does, and I do see people holding up other, minority flaws. I do think the classic wager is horribly flawed for other, related but less commonly mentioned, reasons.)
Thanks for the link to the Overcoming Bias post. I read that and it clarified some things for me. If I had known about that post, above I would have just linked to it when I wrote that the fallacy behind Pascal’s wager is probably actually unclear, minor or controversial.
There aren’t many difficult steps in refuting Pascal’s wager, and I don’t think there’d be much disagreement on it here.
The refutation of PW, in short, is this: it infers high utility based on a very complex (and thus highly-penalized) hypothesis, when you can find equally complex (and equally well-supported) hypotheses that imply the opposite (or worse) utility.
Again, is it the argument that is wrong, or Pascal’s application of it?
It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior). Pascal’s application is a special case of this error, and his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.
(Can you confirm whether you down-voted me because it’s off-topic and inflammatory, or just because I’m wrong?)
I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.
It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior).
Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?
his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.
If he didn’t consider all the cases, his particular application of the argument was bad, not the argument itself, right?
I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.
I have read the material, but I disagreed with it, and it’s often not clear—especially when the posts are old—how I can jump in and chime in that I don’t agree. Often it’s just the subtext I disagree with, so I wait for someone to make it more explicit (or at least more immediate) and then I bring it up.
Thanks for your explanation about the down-voting.
Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?
No (assuming you mean the expected utility of the action given the hypothesis), just that you have to accurately weight its probability.
If he didn’t consider all the cases, his particular application of the argument was bad, not the argument itself, right?
But his argument wouldn’t somehow be improved by considering all the cases (not that it would even be practical to consider all the hypotheses up to the length of the one that implies high utility from faith in God!). Considering those cases would find hypotheses that assign the opposite utility to faith, and worse, some would be more probable.
To salvage the argument, one would have to not just consider more cases, but provide a lot more epistemic labor—that is, make arguments that aren’t part of PW to begin with.
All of your objections to PW seem to be about Pascal’s application of the argument (the probabilities he put in, the number of cases he considered), in which case we can agree that his conclusion wouldn’t be correct.
When I read that Pascal’s Wager is flawed as an argument, I interpret this as ‘the argument does not have good form’. Did people just mean, all along, that they disagreed with the conclusion of the argument because they didn’t agree with the numbers he used?
I think what they mean is, “If an argument allows you to claim an unreasonably huge amount of utility from actions not seemingly capable of that, then you have a complex enough hypothesis that you can find others with the same complexity and opposite conclusion”.
PW-type arguments, then, refer to the class of arguments in which someone tries to justify a course of action through (following the action suggested by) an improbable hypothesis by claiming high enough expected utility. That class of arguments has the flaw that when you allow yourself that much complexity, you necessarily permit hypotheses that advise just as strongly against the action.
That is not something that you can salvage by using different numbers here and there, and so the argument and similar ones have bad (and unsalvageable) form.
“If an argument allows you to claim an unreasonably huge amount of utility from actions not seemingly capable of that, then you have a complex enough hypothesis that you can find others with the same complexity and opposite conclusion”.
That is still fine, because we know how to handle the hypotheses with negative utility. You just optimize over the net utilities of each belief weighted by their probabilities. The fact that there are positive and negative terms together doesn’t invalidate the whole argument. You just do the calculation, if you can, and see what you get.
That is not something that you can salvage by using different numbers here and there, and so the argument and similar ones have bad (and unsalvageable) form.
If you have the right numbers, and a simple enough case to do the computation, would you find PW an acceptable argument?
I’m still having trouble understanding your objection.
When you decide to have faith based on PW, you’re using some epistemology that allows you to pick the “faith causes infinite utility” hypothesis out of the universe-generating functionspace, and deem it to have some finite probability. The problem is that that epistemology—whatever it is—also allows you to pick out numerous other hypotheses, some of which assert the opposite utility from faith (and their existence is provable by inversion of the faith = utility hypothesis elements).
In order to show net positive utility from believing, you would have to find some way of counting all hypotheses this complex, and finding out which comes out ahead. However, the canonical PW argument relies on such anti-faith hypotheses not existing. You would be treading new ground in finding some efficient way to count up all such hypotheses and find which action comes out ahead—keeping in mind, of course, that at this level of complexity, there is a HUGE number of hypotheses to consider.
So you would be making a new argument, only loosely related to canonical PW. If you think you can pull this off, then go ahead and write the article, though I think you’ll soon find it’s not as easy as you expect.
And I would submit that any hypothesis that allows you to claim something has infinite utility (or necessarily more utility than the result of any other action) must itself be infinitely complex, thus infinitely improbable, canceling out the infinity claimed to come from faith.
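Here is a toy sketch of the counting argument above; the complexity band, the 2^-length prior convention, and the payoff sizes are all assumptions chosen purely for illustration:

```python
# Toy illustration (not a real induction procedure): treat each hypothesis as a
# (description_length, utility_of_faith) pair, with prior weight 2**-length.
# If the hypothesis space is closed under flipping the sign of the payoff --
# which costs no extra description length in this toy encoding -- then the
# expected utility of "have faith" sums to zero.

def prior(length):
    return 2.0 ** -length

hypotheses = []
for length in range(40, 60):            # arbitrary complexity band (assumption)
    payoff = 10.0 ** (length - 30)      # longer hypotheses promise bigger payoffs
    hypotheses.append((length, +payoff))    # "faith is rewarded"
    hypotheses.append((length, -payoff))    # mirror image: "faith is punished"

expected_utility_of_faith = sum(prior(length) * u for length, u in hypotheses)
print(expected_utility_of_faith)        # 0.0: the mirrored terms cancel exactly
```

Whether real hypothesis spaces are this neatly symmetric is exactly what gets disputed below; the sketch only shows why the canonical wager can’t simply ignore the mirrored hypotheses.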
As you know, I think the essence of Pascal’s wager is this:
If believing in X has positive utility, then you should believe in X.
I think there is enough to debate about in that statement alone.
But suppose that X = God exists. It seems to me that you are consistently writing that Pascal’s Wager fails because in this case the utility of X is impossible to compute due to the complexity of X. I don’t believe this makes the argument fail for two reasons:
Pascal’s Wager says, “If belief in X has positive utility, you should believe in X’. This argument doesn’t fail (in form) if the utility is negative or impossible to compute.
I disagree that the utility is impossible to compute, despite all your arguments about the complexity of X. My reason is straightforward: atheists do calculate (or at least estimate) the utility of believing in God. Usually, they come up with a value that is negative. So it’s not impossible to estimate the average utility of a complex belief.
And I would submit that any hypothesis that allows you to claim something has infinite utility (or necessarily more utility than the result of any other action) must itself be infinitely complex, thus infinitely improbable, canceling out the infinity claimed to come from faith.
That’s not quite valid— there is some finite program that unfolds Permutation City-style into a universe that allows for infinite computational power, and thus (by some utility functions) infinite utility as the consequence of some actions. It would be wrong for a scientist living in such a universe to reject that hypothesis.
The reason I believe Pascal’s wager is flawed is that it is a false dichotomy. It looks at only one high utility impact, low probability scenario, while excluding others that cancel out its effect on expected utility.
Is there anyone who disagrees with this reason, but still believes it is flawed for a different reason?
This is an argument for why the argument doesn’t work for theism; it doesn’t mean the argument itself is flawed. If you would be willing to multiply the utility of each belief by the probability of each belief and proceed in choosing your belief in this way, then that is an acceptance of the general form of the argument.
If you assume that changing your belief is an available action (which is also questionable), then the idealized form is just expected utility maximization. The criticism is that Pascal incorrectly calculated the expected utility.
Right, one flaw in the idealized form is that it’s not clear that you can simply choose the belief that maximizes utility. But in some cases a person can, and does.
I think that an incorrect calculation, made because one person considered 2 cases instead of N cases, is very different from the argument itself being flawed.
PeerInfinity was writing about applying Pascal’s wager to atheism—so he must have been referring to the general form of the argument, not a particular application. Matthew B wrote that “Pascal’s Wager is a flawed argument for believing in ANY belief system”. Well, what about a belief system in which there are exactly two beliefs to choose from and the relative probabilities are (.4, .6) and the relative utilities of having the beliefs if they are true are (1000, 100) ? I would say the conclusion of the idealized form of Pascal’s wager is that you should pick the belief that maximizes utility, even though it is lower probability.
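To make the arithmetic in that toy belief system explicit (and assuming, as the comment implicitly does, that holding a belief pays nothing if it turns out to be false), a minimal sketch:

```python
# The two-belief toy case from the comment above: pick the belief with the
# higher probability-weighted utility, even though it is the less likely one.

beliefs = {
    "X": {"probability": 0.4, "utility_if_true": 1000},
    "Y": {"probability": 0.6, "utility_if_true": 100},
}

def expected_utility(name):
    b = beliefs[name]
    return b["probability"] * b["utility_if_true"]

for name in beliefs:
    print(name, expected_utility(name))     # X ~ 400, Y ~ 60

print("idealized wager picks:", max(beliefs, key=expected_utility))   # X, despite P = 0.4 < 0.6
```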
I would distinguish between the general form and the idealized general form. One way to generalize Pascal’s wager for belief B is to compare the expected utilities of believing B and believing one contradictory belief D, in the conditions that B is true and that D is true. This is wrong no matter what belief B you apply it to.
The utility of having a belief is what is being considered in Pascal’s wager, and is quite different from the utility of the belief itself.
The utility of a belief itself wouldn’t sway you to choose one belief over another. Suppose again you have the two beliefs X and Y, and they each have a certain utility if they are true. If X is true, then you “get” that utility, independently of whether you believed it or not, by virtue of it being true. For example, if there is utility to God existing, then there is that benefit of him existing whether you believe in him or not.
In contrast, there is also utility for having a belief.
To complicate things, there is a component of the utility that is independent of whether the belief is true or not, and there is a component of the utility that depends on the belief being true. In the case of theism, there is a utility to being a theist (positive or negative, depending on who you ask) regardless of whether God exists, and there would also be an extra utility for believing in him if he does exist (possibly zero, if he doesn’t care whether you believe in him or not).
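One way to write down that decomposition, with purely hypothetical placeholder numbers for the two components:

```python
# Decomposition sketched above: the expected value of holding belief B splits
# into a part paid regardless of truth and a part realized only if B is true.
# All numbers below are placeholders for illustration.

def expected_value_of_holding(p_true, u_truth_independent, u_extra_if_true):
    return u_truth_independent + p_true * u_extra_if_true

# e.g. a belief with a small everyday cost but a large payoff conditional on its truth
print(expected_value_of_holding(p_true=0.01, u_truth_independent=-5, u_extra_if_true=1000))  # ~ 5.0
```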
You mean the case of the argument applied to theism? I would be willing to forfeit the applicability of the argument for this case, since I’m just interested in discussing the validity of the general argument.
I don’t like discussing general cases when I don’t have some concrete examples. The only ones I can think of are boring cases of coercion involving unethical mindreaders.
Yes, I agree: the utility of having a belief only makes sense when for some reason you are rewarded for actually having the belief instead of acting as though you have the belief.
OK, since theism is unique in this aspect, in order to generalize away from the theistic, let’s use the utility for acting-as-though-you-believe instead of the utility for actually believing, because in most cases, these should be the same.
… but then, as soon as you do this, the argument becomes just about choosing actions based on average expected utility, and there’s nothing controversial about it. So I guess PW might just suffer from lack of application: there are few cases where you are actually differentially rewarded for having a belief (instead of just acting as though you do), and these cases (generalizing from theism) involve hypotheses that are too complex to parametrize (Silas’ argument).
Back to the immediate object level: PeerInfinity wrote about applying Pascal’s Wager to atheism. However, atheism doesn’t make a utility distinction between having a belief and acting as though you do. Or does it? Having beliefs motivates actions and makes them easier to compute.
When PeerInfinity said he chose to believe atheism because it seemed to maximize utility, he might have been summarizing two steps: acting as though atheism were true was deemed utility-maximal, and believing in atheism then followed as utility-maximal.
I also think Pascal’s Wager is not horribly flawed in the ways it’s most commonly claimed to be, and am aggrieved that this interesting and important discussion is taking place under a downvoted-to-invisibility comment on an unrelated post. I think I’ll write a top-level post about it today or tomorrow, but right now, I’d like to humbly ask that the above comment be upvoted until not invisible.
Suppose there is a dichotomy of beliefs, X and Y, their probabilities are Px and Py, and the utilities of having each belief are Ux and Uy. Then, the average utility of having belief X is PxUx and the average utility of having belief Y is PyUy. You “should” choose the belief (or set of beliefs) that maximizes average utility, because having a belief is an action, and you should choose actions that maximize utility.
What is the flaw in this argument?
For me, the flaw to identify is that you should choose beliefs that are most likely to be true, rather than those which maximize average utility. But this is a normative argument, rather than a logical flaw in the argument.
Normally, you should keep many competing beliefs with associated levels of belief in them. The mindset of choosing the action with estimated best expected utility doesn’t apply, as actions are mutually exclusive, while mutually contradictory beliefs can be maintained concurrently. Even when you consider which action to carry out, all promising candidates should be kept in mind until moment of execution.
It is also complicated in the case of religious beliefs where other human beings will judge you by your beliefs, which is one reason why abandoning religions is so hard. But that is off-topic, particularly as you can just lie.
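A small sketch of the distinction being drawn here (hypothesis names, credences, and payoffs are all invented for illustration): credences over mutually contradictory hypotheses are all kept around, while only one action is finally chosen, by expected utility over those credences:

```python
# Keep a full credence distribution over contradictory hypotheses...
credences = {"H1": 0.7, "H2": 0.2, "H3": 0.1}          # illustrative numbers

# ...but actions are mutually exclusive, so exactly one gets chosen, by
# expected utility. payoff[action][hypothesis] = value of that action
# if that hypothesis turns out to be true.
payoff = {
    "act_a": {"H1": 10, "H2": -5, "H3": 0},
    "act_b": {"H1": 2,  "H2": 8,  "H3": 2},
}

def expected_utility(action):
    return sum(credences[h] * payoff[action][h] for h in credences)

print({a: expected_utility(a) for a in payoff})         # roughly {'act_a': 6.0, 'act_b': 3.2}
print("chosen action:", max(payoff, key=expected_utility))  # act_a; no hypothesis is "chosen"
```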
While we’re being off topic, I’m of the opinion that if you are someone who accepts you should one-box then you should also accept Pascal’s wager. I think both are wrong but most people here seem to accept one-boxing is correct but not accept Pascal’s wager. I don’t care enough about either to work the argument out in detail though.
Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well” has already made a decision based on expectation of your decision. There are numerous real-world examples of this. Newcomb’s problem only differs in that it takes the limit of the “how well they know you” variable as it approaches “perfect”. There needn’t be an actual Omega, just a decision theory that is robust for all values of the variable up to and including perfect.
Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well” has already made a decision based on expectation of your decision.
Which sounds a lot like Pascal’s wager to me, when your decision is whether to believe in god and god is the person who “knows you very well” and is deciding whether to let you into heaven based on whether you believe in him or not.
There are situations which I guess are what you would describe as ‘Newcomb-like’ where I would do the equivalent of one-boxing. If Omega shows up this evening though I will be taking both his boxes, because there is too big an epistemic gap for me to cross to reach the point of thinking that one-boxing is sensible in this universe.
But the plausibility of a hypothetical is unrelated to the correct resolution of the hypothetical. One could equally say that two-boxing implies that you should push the man off the bridge in the trolley problem—the latter is just as unphysical as Newcomb. The proper objection to unreasonable hypotheticals is to claim that they do not resemble the real-world situations one might compare them to in the relevant aspects.
I actually think that implausible hypotheticals are unhelpful and probably actively harmful which is why I usually don’t involve myself in discussions about Omega. I wish I’d stuck with that policy now.
Why do you think implausible hypotheticals are unhelpful and probably harmful? It seems to me that they’re a lot of work for no obvious reward, but I don’t have a more complex theory.
Anyone have an example of the examination of an implausible hypothetical paying off?
I think implausible hypotheticals are often intuition pumps. If they are used as part of an attempt to convince the audience of a certain point of view I automatically get suspicious. If the point of view is correct, why can’t it be illustrated with a plausible hypothetical or a real world example? They often seem to be constructed in a way that tries to move attention away from certain aspects of the situation described and thus allow for dubious assumptions to be hidden in plain sight.
Basically, I always feel like someone is trying to pull a philosophical sleight of hand when they pull out an implausible hypothetical to make their case and they often seem to be used in arguments that are wrong in subtle or hard to detect ways. I feel like I encounter them far more in arguments for positions that I ultimately conclude are incorrect than as support for positions I ultimately conclude to be correct.
That’s interesting, and might apply to the trolley problem which implies that people can have much more knowledge of the alternatives than they are ever likely to have.
Ethical principles and empathy (as a sort of unconscious ethical principle) are needed when you don’t have detailed knowledge, but I haven’t seen the trolley problem extended to the usual case of not knowing very many of the effects.
Taking a look at ethical intuitions with specifics, as in Sex, Drugs, and AIDS: the desire to help only when it will make a big difference and the desire not to help unworthy people add up to worse effects than having a less dramatic view of the world. Having AIDS drugs doesn’t mean it makes sense to slack off on prevention as much as has happened.
Yes, the trolley problems are another example of harmful implausible hypotheticals in my opinion. The different reaction many people have to the same underlying ethical question framed as a trolley problem vs. an organ donor problem is I think illustrative of the pernicious influence of implausible hypotheticals on clear thought.
Well, the fact that they’re implausible pretty much means the cash rewards are going to have to wait until they are plausible. But don’t we think clear thinking is its own reward?
I’ve found that such things are incredibly crucial for getting people to think clearly about personal identity. In fact I don’t know if I have any way of explaining or defending my views on personal identity to the philosophically untrained without implausible hypotheticals. Same goes for understanding skepticism, causality, maybe induction, problems with causal decision theory (obviously), anthropics, simulation...
I’m all about being aware that using implausible hypotheticals can generate error but I am bewildered by the sudden resistance to them on this thread: we use them all the time here!
Ok, let me try and nail down my true objection here. Is Pascal’s wager a good reason to believe in God? No. Hypothetically, if you had good reason to believe that the hypothesis of the christian god existing were massively more likely than other hypotheses of similar complexity, would it be a good reason to believe in god? Well, not really—it doesn’t add much in that case.
Similarly, if Omega showed up at my apartment this evening would I one-box? No. Hypothetically, if I had good reason to believe that an Omega-like entity existed and did this kind of thing (which is the set up for Newcomb’s problem) would I one-box? Well, probably yes but you’ve glossed over the rather radical change to my epistemic state required to make me believe such an implausible thing.
I guess I have a general problem with a certain kind of philosophical thought experiment that tries to sneak in a truly colossal amount of implausibility in its premises and ask you not to notice and then whenever you keep pointing to the implausibility telling you to ignore it and focus on the real question. Well I’m sorry, but the staggering implausibility over there in the corner is more significant than the question you want me to focus on in my opinion… (Forgive the casual use of ‘you’ here—I’m not intending to refer to you specifically).
I don’t understand. A hypothetical can be dangerous if it keeps us from attending to aspects of the problem we’re trying to analyze, like the Chinese Room, which fails to convey properly the powers it would have to have for us to declare it conscious. The fact that a hypothetical is implausible might make it harder for us to notice that we’re not attending to certain issues, I guess. That hardly seems grounds for rejecting them outright (indeed, Dennett uses plenty of intuition pumps). And the implausibility itself really is irrelevant. No one is claiming that the hypothetical will occur, so why should the probability of its occurrence be an issue?
Using Newcomb’s problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega-like entity and as a result confuses more than it illuminates. Re-reading some of Eliezer’s posts on it I get the impression that he is hinting that his resolution of the issue is connected to that problem. It seems to me that it causes a lot of unnecessary confusion because humans are susceptible to stories that require suspension of disbelief in highly implausible occurrences that they would not actually suspend their disbelief for if encountered in real life. This might be an example of Robin Hanson’s near/far distinction.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
Using Newcomb’s problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega-like entity and as a result confuses more than it illuminates.
It certainly does gloss over that… I mean it has to, you’d require a lot of evidence. But the reason it does so is because the question isn’t whether Omega could exist or how we can tell when Omega shows up… the details are buried because they aren’t relevant. How does Newcomb’s problem confuse more than illuminate? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical. I suppose it confuses in the sense that one becomes aware of a problem one wasn’t previously aware of, but that’s the kind of confusion we want.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
It’s a great video and I’m grateful you linked me to it but I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
How does Newcomb’s problem confuse more than illuminate? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical.
The danger is that you can use a hypothetical to illustrate a paradox that isn’t really a paradox, because its preconditions are impossible. A famous example: Suppose you’re driving a car at the speed of light, and you turn on the headlights. What do you see?
How does Newcomb’s problem confuse more than illuminate? It illustrates a problem/paradox.
It confuses because it doesn’t really show a problem/paradox. That is not obvious because of the peculiar construction of the hypothetical. If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box). They quite reasonably aren’t able to imagine themselves into such a scenario because it is wildly implausible. The paradox is just an artifact of difficulties we have mentally dealing with highly implausible scenarios.
I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
Specifically what I had in mind was the fact that people seem to have a natural willingness to suspend disbelief and accept contradictory or wildly implausible premises when ‘story mode’ is activated. We are used to listening to stories and we become less critical of logical inconsistencies and unlikely scenarios because they are a staple of stories. Presenting a thought experiment in the form of a story containing a highly implausible scenario takes advantage of a weakness in our mental defenses which exists for story-shaped language and leads to confusion and misjudgement which we would not exhibit if confronted with a real situation rather than a story.
If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box).
No. The choice is paradoxical because no matter how much evidence you have of Omega’s omniscience, the choice you make can’t change the amount of money in the box. As such, traditional decision theory tells you to two-box, because the decision you make can’t affect the amount of money in the boxes. No matter how much money is in the boxes, you get more by two-boxing. Most educated people are causal decision makers by default. So a thought experiment where causal decision makers lose is paradox-inducing. If one-boxing was the obvious choice, people would feel the need to posit new decision theories as a result.
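A minimal sketch of the two calculations being contrasted here, using the standard $1,000 / $1,000,000 payoffs and an assumed 99% predictor accuracy standing in for “however much evidence you have”:

```python
# Newcomb payoffs: box A is transparent and holds $1,000; box B holds
# $1,000,000 iff the predictor expected you to take only box B.

BOX_A, BOX_B = 1_000, 1_000_000

def payout(choice, predicted_one_box):
    contents_of_b = BOX_B if predicted_one_box else 0
    return contents_of_b if choice == "one-box" else contents_of_b + BOX_A

# Causal/dominance reasoning: the prediction is already fixed, so whatever it
# was, two-boxing pays exactly $1,000 more.
for predicted in (True, False):
    print(payout("two-box", predicted) - payout("one-box", predicted))   # 1000, 1000

# Expected value conditioned on the predictor being right 99% of the time
# (an assumed figure): predicted behaviour tracks actual behaviour.
accuracy = 0.99
ev_one_box = accuracy * payout("one-box", True) + (1 - accuracy) * payout("one-box", False)
ev_two_box = accuracy * payout("two-box", False) + (1 - accuracy) * payout("two-box", True)
print(ev_one_box, ev_two_box)   # ~990,000 vs ~11,000: the one-boxers walk away richer
```

The dominance calculation is what makes two-boxing feel forced, and the conditional calculation is what makes the two-boxers predictably poorer; the tension between the two is the alleged paradox.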
I disagree, and I think this is what Eliezer is hinting towards now that I’ve gone back and re-read Newcomb’s Problem and Regret of Rationality. If you really have had sufficient evidence to believe that Omega is either an omniscient mind reader or some kind of acausal agent such that it makes sense to one-box, then it makes sense to one-box. It only looks like a paradox because you’re failing to imagine having that much evidence. Which incidentally is not really a problem—an inability to imagine highly implausible scenarios in detail is not generally an actual handicap in real world decision making.
I’m still going to two-box if Omega appears tomorrow though because there are very many more likely explanations for the series of events depicted in the story than the one you are supposed to take as given.
Curiously, what is the average utility you would estimate for belief in God? Or do you feel that trying to estimate this forces suspended disbelief in implausible scenarios?
Which god? The God Of Abraham, Isaac, And Jacob? The Christian, Muslim or Jewish flavour? It would seem this is quite important in the context of Pascal’s wager. Some gods are notoriously specific about the form my belief should take in order to win infinite utility. I don’t see any compelling evidence to prefer any of the more popular god hypotheses over any other, nor to prefer them over the infinitude of other possible gods that I could imagine.
Some of the Norse gods were pretty badass though, they might be fun to believe in.
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility. The peculiarity of Pascal’s wager and religious belief in general is that you are postulating a universe in which you are rewarded for holding certain beliefs independently of your actions. In a universe with no god (which I claim is a universe much like our own) belief in god is merely false belief and generally false beliefs are likely to cause bad decisions and thus lead to sub-optimal outcomes.
If the belief in god is completely free-floating and has no implications for actions then it may not have any direct negative effect on expected utility. Presumably given the finite computational capacity of the human brain holding non-consequential false beliefs is a waste of resources and so has slight negative utility. It strikes me that this is not the kind of belief in god that people are usually trying to defend when invoking Pascal’s wager however.
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility.
I’m not sure that beliefs don’t generally have utility. It seems to me that beliefs (or something like beliefs) do a lot to organize action. There’s a difference between doing something because of short-term reward and punishment and doing the same thing because one thinks it’s generally a good idea.
Hmm. I think beliefs do have a utility, whether or not you can act on that utility by choosing a belief, and whether or not you can accurately estimate it. If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do. It seems very strange to think of someone acting as though they believe something, without them actually believing it. There are exceptions, but for the most part, if someone bets on a belief, this is because they believe it.
If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do.
I don’t in general agree with this. Outcomes have utility, actions have expected utility, beliefs are generally just what you use to try and determine the expected utility of actions. As a rule, true beliefs will allow you to make better estimates of the expected utility of actions.
This is true for ordinary beliefs: I believe it is raining so I expect the action of taking my umbrella to have higher utility than if I did not believe it was raining. It is possible to imagine certain kinds of beliefs that have utility in themselves but these are unusual kinds of beliefs and most beliefs are not of this type. If there is a god who will reward or punish you in the afterlife partly on the basis of whether you believed in him or not then ‘believing in god’ would result in an outcome with positive utility but deciding if you live in such a universe would be a different belief that you would need to come to from other kinds of evidence than Pascal’s wager.
It is possible to imagine other beliefs that could in theory have utility in themselves for humans. For example, it is possible that believing oneself a bit more attractive and more competent than is accurate might benefit one’s happiness more than enough to compensate for lost utility due to less accurate beliefs leading to actions with sub-optimal expected utility. If this is true, however, it is a quirk of human psychology and not a property of the belief in the way that Pascal’s wager works.
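Returning to the umbrella example above, a tiny illustration (all numbers invented) of how a belief’s value here is routed entirely through the action it selects, so that a false belief costs you via a worse action:

```python
# The belief about rain only matters through the action it picks out.
# All numbers are illustrative assumptions.

P_RAIN_ACTUAL = 0.6                       # the real chance of rain today

payoffs = {                               # payoffs[carry_umbrella][it_rains]
    True:  {True: 0,   False: -1},        # carried it: stay dry, minor hassle if no rain
    False: {True: -10, False: 0},         # left it home: soaked if it rains
}

def actual_expected_payoff(carry):
    return P_RAIN_ACTUAL * payoffs[carry][True] + (1 - P_RAIN_ACTUAL) * payoffs[carry][False]

def chosen_action(believed_p_rain):
    # the agent picks whichever action looks best under its own belief
    def believed_value(carry):
        return believed_p_rain * payoffs[carry][True] + (1 - believed_p_rain) * payoffs[carry][False]
    return max((True, False), key=believed_value)

for belief in (0.6, 0.05):                # well-calibrated belief vs. badly miscalibrated one
    action = chosen_action(belief)
    print(belief, "-> carry umbrella:", action, "| actual EV ~", actual_expected_payoff(action))
# 0.6  -> carries the umbrella, actual EV ~ -0.4
# 0.05 -> leaves it home,       actual EV ~ -6.0  (the false belief costs you, via the action)
```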
It seems very strange to think of someone acting as though they believe something, without them actually believing it.
I don’t find it at all strange to think of someone acting as if they believe in god even though they don’t. This has been common throughout history.
it seems related to the idea of the intuition pump.
Yeah, I think I was always averse to this sort of philosophical sophistry but reading Consciousness Explained probably crystallized my objection to it at a relatively early age.
They both have an element of privileging the hypothesis. If I had some reason to think I lived in a universe with an Omega/God then I might agree I should one-box/believe in god but since I don’t have any reason to think I live in such a universe why am I wasting my time even considering this particular implausible scenario?
I see what you mean, but one of two problems arises with the symmetry.
First, the most annoying form of Pascal’s Wager is the epistemological version: “Believing that God exists has positive expected utility, so you should do so”. This argument fails logically, for reasons SilasBarta listed, and it is usually this form being refuted when people say, “Pascal’s Wager fails”.
Second, the form of Pascal’s Wager concerning worship, “Believing in God, who is known to exist, has positive utility”, has moral complexities which are absent from Newcomb’s dilemma. Objections in this case usually arise from the normative argument that you should not believe things which are false.
First, the most annoying form of Pascal’s Wager is the epistemological version: “Believing that God exists has positive expected utility, so you should do so”. This argument fails logically, for reasons SilasBarta listed, and it is usually this form being refuted when people say, “Pascal’s Wager fails”.
I disagree that it fails logically. The argument, written modus ponens, is:
“If believing in God has positive expected utility, then you should do so”.
If you don’t believe that believing in God has positive expected utility, then this is not a disagreement in the logic of Pascal’s Wager. Pascal’s Wager would equally say,
“If believing in God has negative expected utility, then you should not do so”.
I disagree that it fails logically. The argument, written modus ponens, is:
“If believing in God has positive expected utility, then you should do so”.
Okay, now I think I’m starting to see the miscommunication: PW does not simply say what you’ve quoted there. It’s typically associated with an argument about how the possibility of infinite utility from believing (and perhaps infinite disutility from not believing) outweighs the small probability of it being true, and the utility of other courses of action, on account of its infinite size.
You’re taking “Pascal’s Wager” to refer only to certain premises the argument uses, not the full argument itself.
It occurred to me that you might not agree that my distillation of PW contained all the salient features. (For example, there are no infinitesimals and no infinities written in). However, I think it must have been my more general argument that PeerInfinity was referring to, because he was applying it to atheism.
Good point, I edited my form of the argument to include ‘sets of beliefs’. If having a set of beliefs maximizes your utility, then having the set is what you “should” do, I think, in the spirit of the argument.
Accepting God as a probable hypothesis has a lot of epistemic implications. This is not just one isolated thing: everything is connected, and one thing being true implies other things being true and still others being false. You won’t be seeing the world as you currently believe it to be after accepting such a change; you will be seeing a strange magical version of it, a version you are certain doesn’t correspond to reality. Mutilating your mind like this has enormous destructive consequences for your ability to understand the real world, and hence for your ability to make the right choices, even if you forget about the hideousness of doing this to yourself. This is the part that is usually overlooked in Pascal’s wager.
(Belief in belief keeps the human believers out of most of the trouble, but that’s not what Pascal’s wager advocates! Not understanding this distinction may lead to underestimating the horror of the suggestion.)
A couple of points:
I could not tell from your post if you understood that Pascal’s Wager is a flawed argument for believing in ANY belief system. You do understand this don’t you (That Pascal’s Wager is horribly flawed as an argument for believing in anything)?
Also, as Counsin it seems to be implying (And I would suspect as well), you seem to be exhibiting signs of the True Believer complex.
This is what I alluded to when I discussed friends of mine who would swing back and forth between Born-Again Christian and Satanists. Don’t make the same mistake with a belief in the Singularity. One needn’t have “Faith” in the Singularity as one would God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.
Forming beliefs is about evidence, not about blindly following something due to a feel good that one gets from a belief.
In chapter five of Jaynes, “Queer Uses for Probability Theory,” he explains that although a claimed telepath tested 25.8 standard deviations away from chance guessing, that isn’t the probability we should assign to the hypothesis that she’s actually a telepath, because there are many simpler hypotheses that fit the data (for instance, various forms of cheating).
This example is instructive when using Pascal’s Wager to minimax expected utility. Pascal’s Wager is a losing bet for a Christian, because even though expecting positive infinity utility with infitesimal probability seems like a good bet, there are many likelier ways of getting negative infinity utility from that choice. Doing what you can to promote a friendly singularity can still be called “Pascal’s Wager” because it’s betting on a very good outcome with a low probability, but the low probability is so many orders of magnitude better than Christianity’s that it’s actually a rather good bet.
Obviously, you don’t want to let wishful thinking guide your epistemology, but I don’t think that’s what PI’s talking about.
Pascal’s wager is not such a horribly flawed argument. In fact, I wager we can’t even agree on why its flawed.
Later edit: I assume I am getting voted down for trolling (that is, disrupting the flow of conversation), and I agree with that. An argument about Pascal’s wager is not really relevant in this thread. However, especially in the context of being a ‘true believer’, it is interesting to me that statements are often made that something is ‘obvious’, when there are many difficult steps in the argument, or ‘horrible flawed’, when it’s actually just a little bit flawed or even controversially flawed. If anyone wants to comment in a thread dedicated to Pascal’s wager, we can move this to the open thread, which I hope ultimately makes this comment less trollish of me.
Partially seconded. (I think most people agree that the primary flaw is the symmetry argument, but I don’t think that argument does what they think it does, and I do see people holding up other, minority flaws. I do think the classic wager is horribly flawed for other, related but less commonly mentioned, reasons.)
I’ll write a top-level post about this today or tomorrow. (In the meantime, see Where Does Pascal’s Wager Fail? and Carl Shulman’s comments on The Pascal’s Wager Fallacy Fallacy.)
Thanks for the link to the Overcoming Bias post. I read that and it clarified some things for me. If I had known about that post, above I would have just linked to it when I wrote that the fallacy behind Pascal’s wager is probably actually unclear, minor or controversial.
There aren’t many difficult steps in refuting Pascal’s wager, and I dont’ think there’s be much disagreement on it here.
The refutation of PW, in short, is this: it infers high utility based on a very complex (and thus highly-penalized) hypothesis, when you can find equally complex (and equally well-supported) hypotheses that imply the opposite (or worse) utility.
(Btw, I was one of those who voted you down.)
Again, is it the argument that is wrong, or Pascal’s application of it?
(Can you confirm whether you down-voted me because it’s off-topic and inflammatory, or because I’m wrong?)
It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penality (and your prior, but Pascal attempts to show what you should do irrespective of prior). Pascal’s application is a special case of this error, and his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.
I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.
Are you saying, here, that it is wrong to factor in the utility of the hypothesis when giving weight to the hypothesis?
If he didn’t consider all the cases, his particular application of the argument was bad, not the argument itself, right?
I have read the material, but I disagreed with it, and it’s often not clear—especially when the posts are old—how I can jump in and chime in that I don’t agree. Often it’s just the subtext I disagree with, so I wait for someone to make it more explicit (or at least more immediate) and then I bring it up.
Thanks for your explanation about the down-voting.
No (assuming you mean the expected utility of the action given the hypothesis), just that you have to accurately weight its probability.
But his argument wouldn’t somehow be improved by considering all the cases (not that it would be practical to even consider all the hypotheses of lengths up to that which implies high utility from faith in God!). Considering those cases would find hypotheses that assign the opposite utility to faith, and worse, some would be more probable.
To salvage the argument, one would have to not just consider more cases, but provide a lot more epistemic labor—that is, make arguments that aren’t part of PW to begin with.
All of your objections to PW seem to be about Pascal’s application of the argument (the probabilities he inputted, the number of cases cases he considered) in which case we can agree that his conclusion wouldn’t be correct.
When I read that Pascal’s Wager is flawed as an argument, I interpret this as ‘the argument does not have good form’. Did people just mean, all along, that they disagreed with the conclusion of the argument because they didn’t agree with the numbers he used?
I think what they mean is, “If an argument allows you to claim an unreasonably huge amount of utility from actions not seemingly capable of that, then you have a complex enough hypothesis that you can find others with the same complexity and opposite conclusion”.
PW-type arguments, then, refer to the class of arguments in which someone tries to justify a course of action through (following the action suggested by) an improbable hypothesis by claiming high enough expected utility. That class of arguments has the flaw that when you allow yourself that much complexity, you necessarily permit hypotheses that advise just as strongly against the action.
That is not something that you can salvage by using different numbers here and there, and so the argument and similar ones have bad (and unsalvageable) form.
That is still fine, because we know how to handle the hypotheses with negative utility. You just optimize over the net utilities of each belief weighted by their probabilities.The fact that there are positive and negative terms together doesn’t invalidate the whole argument. You just do the calculation, if you can, and see what you get.
If you have the right numbers, and a simple enough case to do the computation, would you find PW an acceptable argument?
I’m still having trouble understanding your objection.
When you decide to have faith based on PW, you’re using some epistemology that allows you to pick out the “faith causes infinite utility” hypothesis out of the universe-generating functionspace, and deem it to have some finite probability. The problem is that that epistemology—whatever it is—also allows you to pick out numerous other hypotheses, in which some assert the opposite utility from faith (and their existence is provable by inversion of the faith = utility hypothesis elements).
In order to show net positive utility from believing, you would have to find some way of counting all hypotheses this complex, and finding out which comes ahead. However, the canonical PW argument relies on such anti-faith hypotheses not existing. You would be treading new ground in finding some efficient way to count up all such hypotheses and find which action comes out ahead—keeping in mind, of course, that at this level of complexity, there is a HUGE number of hypotheses to consider.
So you would be making a new argument, only loosely related to canonical PW. If you think you can pull this off, then go ahead and write the article, though I think you’ll soon find it’s not as easy as you expect.
And I would submit that any hypothesis that allows you to claim something has infinite utility (or necessarily more utility than the result of any other action) must itself be infinitely complex, thus infinitely improbable, canceling out the infinity claimed to come from faith.
As you know, I think the essence of Pascal’s wager is this:
I think there is enough to debate about in that statement alone.
But suppose that X = God exists. It seems to me that you are consistently writing that Pascal’s Wager fails because in this case the utility of X is impossible to compute due to the complexity of X. I don’t believe this makes the argument fail for two reasons:
Pascal’s Wager says, “If belief in X has positive utility, you should believe in X’. This argument doesn’t fail (in form) if the utility is negative or impossible to compute.
I disagree that the utility is impossible to compute, despite all your arguments about the complexity of X. My reason is straight-forward: atheists do calculate (or at least estimate) the utility of believing in God. Usually, they come up with a value that is negative. So it’s not impossible to estimate the average utility of a complex belief.
That’s not quite valid— there is some finite program that unfolds Permutation City-style into a universe that allows for infinite computational power, and thus (by some utility functions) infinite utility as the consequence of some actions. It would be wrong for a scientist living in such a universe to reject that hypothesis.
The reason I believe Pascal’s wager is flawed is that it is a false dichotomy. It looks at only one high utility impact, low probability scenario, while excluding others that cancel out its effect on expected utility.
Is there anyone who disagrees with this reason, but still believes it is flawed for a different reason?
This is an argument for why the argument doesn’t work for theism, it doesn’t mean the argument itself is flawed. If you would be willing to multiply the utility of each belief times the probability of each belief and proceed in choosing your belief in this way, then that is an acceptance of the general form of the argument.
If you assume that changing your belief is an available action (which is also questionable), then the idealized form is just expected utility maximization. The criticism is that Pascal incorrectly calculated the expected utility.
Right, one flaw in the idealized form is that it’s not clear that you can simply choose the belief that maximizes utility. But in some cases a person can, and does.
I think that an incorrect calculation, because one person considered 2 cases instead of N cases, is very different from being flawed as an argument.
PeerInfinity was writing about applying Pascal’s wager to atheism—so he must have been referring to the general form of the argument, not a particular application. Matthew B wrote that “Pascal’s Wager is a flawed argument for believing in ANY belief system”. Well, what about a belief system in which there are exactly two beliefs to choose from and the relative probabilities are (.4, .6) and the relative utilities of having the beliefs if they are true are (1000, 100) ? I would say the conclusion of the idealized form of Pascal’s wager is that you should pick the belief that maximizes utility, even though it is lower probability.
I would distinguish between the general form and the idealized general form. One way to generalize Pascal’s wager for belief B, is to compare the expected utilities of believing B and believing one contradictory Belief D in the conditions that B is true and that D is true. This is wrong no matter what belief B you apply it to.
Why would having the beliefs have utility? Isn’t utility a function of actions, as a rule?
There’s no contradiction in thinking “A is unlikely” and yet acting as if A is true—otherwise no-one would wear seat belts.
The utility of having a belief is what is being considered in Pascal’s wager, and is quite different from the utility of the belief itself.
The utility of a belief itself wouldn’t sway you to choose one belief over another. Suppose againyou have the two beliefs X and Y, and they each have a certain utility if they are true. If X is true, then you “get” that utility, independently of whether you believed it or not, by virtue of it being true. For example, if there is utility to God existing, then there is that benefit of him existing whether you believe in him or not.
In contrast, there is also utility for having a belief.
To complicate things, there is a component of the utility that is independent of whether the belief is true or not, and there is a component of the utility that depends on the belief being true. In the case of theism, there is a utility to being a theist (positive or negative, depending on who you ask) regardless of whether God exists, and there would also be an extra utility for believing in him if he does exist (possibly zero, if he doesn’t care whether you believe in him or not).
SilasBarta has pointed out a relevant argument regarding that case.
You mean the case of the argument applied to theism? I would be willing to forfeit the applicability of the argument for this case, since I’m just interested in discussing the validity of the general argument.
I don’t like discussing general cases when I don’t have some concrete examples. The only ones I can think of are boring cases of coercion involving unethical mindreaders.
Yes, I agree: the utility of having a belief only makes sense when for some reason you are rewarded for actually having the belief instead of acting as though you have the belief.
OK, since theism is unique in this aspect, in order to generalize away from the theistic, let’s use the utility for acting-as-though-you-believe instead of the utility for actually believing, because in most cases, these should be the same.
… but then, as soon as you do this, the argument become just about choosing actions based on average expected utility and there’s nothing controversial about it. So I guess PW might just suffer from lack of application: there are few cases where you are actually differentially rewarded for having a belief (instead of just acting as though you do), and these cases (generalizing from theism) involve hypotheses that are too complex to parametrize (Silas’ argument).
Back to the immediate object level: PeerInfinity wrote about applying Pascal’s Wager to atheism. However, atheism doesn’t make a utility distinction between having a belief and acting as though you do. Or does it? Having beliefs motivate actions and make them easier to compute.
When PeerInfinity said he chose to believe atheism because it seemed to maximize utility, he might have been summarizing together that acting as though atheism was true was deemed utility maximal, and believing in atheism then followed as utility maximal.
I also think Pascal’s Wager is not horribly flawed in the ways it’s most commonly claimed to be, and am aggrieved that this interesting and important discussion is taking place under a downvoted-to-invisibility comment on an unrelated post. I think I’ll write a top-level post about it today or tomorrow, but right now, I’d like to humbly ask that the above comment be upvoted until not invisible.
Taboo “Pascal’s wager”, please.
Sure.
Here’s an argument:
Suppose there is a dichotomy of beliefs, X and Y, their probabilities are Px and Py, and the utilities of having each belief is Ux and Uy. Then, the average utility of having belief X is PxUx and the utility of having belief Y is Py\Uy. You “should” choose having the belief (or set of beliefs) that maximizes average utility, because having beliefs are actions and you should choose actions that maximize utility.
What is the flaw in this argument?
For me, the flaw you should identify is that you should choose the beliefs that are most likely to be true, rather than those which maximize average utility. But this is a normative objection rather than a logical flaw in the argument.
Normally, you should keep many competing beliefs, with associated levels of belief in each. The mindset of choosing the action with the best estimated expected utility doesn’t apply: actions are mutually exclusive, while mutually contradictory beliefs can be maintained concurrently. Even when you consider which action to carry out, all promising candidates should be kept in mind until the moment of execution.
This is complicated in the case of religious beliefs where the deity will judge you by your beliefs and not just your actions.
It is also complicated in the case of religious beliefs where other human beings will judge you by your beliefs, which is one reason why abandoning religions is so hard. But that is off-topic, particularly as you can just lie.
While we’re being off topic, I’m of the opinion that if you are someone who accepts you should one-box then you should also accept Pascal’s wager. I think both are wrong but most people here seem to accept one-boxing is correct but not accept Pascal’s wager. I don’t care enough about either to work the argument out in detail though.
Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well”, has already made a decision based on an expectation of your decision. There are numerous real-world examples of this. Newcomb’s problem only differs in that it takes the limit of the “how well they know you” variable as it approaches “perfect”. There needn’t be an actual Omega, just a decision theory that is robust for all values of the variable up to and including perfect.
Which sounds a lot like Pascal’s wager to me, when your decision is whether to believe in god and god is the person who “knows you very well” and is deciding whether to let you into heaven based on whether you believe in him or not.
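(To make the “how well they know you” variable concrete, a hedged sketch: assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one, here is how the conditional expected payoff of each choice shifts as the predictor’s accuracy varies. The specific crossover point is a consequence of these assumed payoffs, not of the problem in general.)

```python
# Hedged sketch: conditional expected payoff of one-boxing vs two-boxing
# as the predictor's accuracy varies, using the standard Newcomb payoffs.
# This is only an illustration of the "how well they know you" variable,
# not a claim about which decision theory is correct.

BIG, SMALL = 1_000_000, 1_000  # opaque box if filled, transparent box

def ev_one_box(accuracy: float) -> float:
    # Conditional on one-boxing, the predictor foresaw it with
    # probability `accuracy` and filled the opaque box.
    return accuracy * BIG

def ev_two_box(accuracy: float) -> float:
    # Conditional on two-boxing, the predictor foresaw it with
    # probability `accuracy` and left the opaque box empty.
    return accuracy * SMALL + (1 - accuracy) * (BIG + SMALL)

for acc in (0.5, 0.51, 0.9, 0.99, 1.0):
    print(f"accuracy={acc}: one-box={ev_one_box(acc):,.0f}, "
          f"two-box={ev_two_box(acc):,.0f}")
# With these payoffs, the conditional expectation favours one-boxing once
# accuracy passes roughly 0.5005; nothing special happens at "perfect".
```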
There are situations which I guess are what you would describe as ‘Newcomb-like’ where I would do the equivalent of one-boxing. If Omega shows up this evening though I will be taking both his boxes, because there is too big an epistemic gap for me to cross to reach the point of thinking that one-boxing is sensible in this universe.
But the plausibility of a hypothetical is unrelated to the correct resolution of the hypothetical. One could equally say that two-boxing implies that you should push the man off the bridge in the trolley problem—the latter is just as unphysical as Newcomb. The proper objection to unreasonable hypotheticals is to claim that they do not resemble the real-world situations one might compare them to in the relevant aspects.
I actually think that implausible hypotheticals are unhelpful and probably actively harmful which is why I usually don’t involve myself in discussions about Omega. I wish I’d stuck with that policy now.
Why do you think implausible hypotheticals are unhelpful and probably harmful? It seems to me that they’re a lot of work for no obvious reward, but I don’t have a more complex theory.
Anyone have an example of the examination of an implausible hypothetical paying off?
I think implausible hypotheticals are often intuition pumps. If they are used as part of an attempt to convince the audience of a certain point of view I automatically get suspicious. If the point of view is correct, why can’t it be illustrated with a plausible hypothetical or a real world example? They often seem to be constructed in a way that tries to move attention away from certain aspects of the situation described and thus allow for dubious assumptions to be hidden in plain sight.
Basically, I always feel like someone is trying to pull a philosophical sleight of hand when they pull out an implausible hypothetical to make their case and they often seem to be used in arguments that are wrong in subtle or hard to detect ways. I feel like I encounter them far more in arguments for positions that I ultimately conclude are incorrect than as support for positions I ultimately conclude to be correct.
That’s interesting, and might apply to the trolley problem, which assumes people can have much more knowledge of the alternatives than they are ever likely to have.
Ethical principles and empathy (as a sort of unconscious ethical principle) are needed when you don’t have detailed knowledge, but I haven’t seen the trolley problem extended to the usual case of not knowing very many of the effects.
It might be worth crossing the trolley problem with Protected from Myself.
Taking a look at ethical intuitions with specifics (Sex, Drugs, and AIDS): the desire to only help when it will make a big difference and the desire to not help unworthy people add up to worse effects than having a less dramatic view of the world. Having AIDS drugs doesn’t mean it makes sense to slack off on prevention as much as has happened.
Yes, the trolley problems are another example of harmful implausible hypotheticals in my opinion. The different reaction many people have to the same underlying ethical question framed as a trolley problem vs. an organ donor problem is I think illustrative of the pernicious influence of implausible hypotheticals on clear thought.
Well, the fact that they’re implausible pretty much means the cash rewards are going to have to wait until they are plausible. But don’t we think clear thinking is its own reward?
I’ve found that such things are incredibly crucial for getting people to think clearly about personal identity. In fact I don’t know if I have any way of explaining or defending my views on personal identity to the philosophically untrained without implausible hypotheticals. Same goes for understanding skepticism, causality, maybe induction, problems with causal decision theory (obviously), anthropics, simulation...
I’m all about being aware that using implausible hypotheticals can generate error but I am bewildered by the sudden resistance to them on this thread: we use them all the time here!
I would be dead chuffed to talk about the wisdom of considering implausible hypotheticals instead, if that’s what you’d prefer to do. (:
Edit: I would be equally happy to drop the thread entirely, if that’s what you prefer.
Ok, let me try and nail down my true objection here. Is Pascal’s wager a good reason to believe in God? No. Hypothetically, if you had good reason to believe that the hypothesis of the christian god existing were massively more likely than other hypotheses of similar complexity, would it be a good reason to believe in god? Well, not really—it doesn’t add much in that case.
Similarly, if Omega showed up at my apartment this evening would I one-box? No. Hypothetically, if I had good reason to believe that an Omega-like entity existed and did this kind of thing (which is the set up for Newcomb’s problem) would I one-box? Well, probably yes but you’ve glossed over the rather radical change to my epistemic state required to make me believe such an implausible thing.
I guess I have a general problem with a certain kind of philosophical thought experiment that sneaks a truly colossal amount of implausibility into its premises, asks you not to notice, and then, whenever you keep pointing to the implausibility, tells you to ignore it and focus on the real question. Well, I’m sorry, but the staggering implausibility over there in the corner is more significant than the question you want me to focus on, in my opinion… (Forgive the casual use of ‘you’ here; I’m not intending to refer to you specifically.)
I don’t understand. A hypothetical can be dangerous if it keeps us from attending to aspects of the problem we’re trying to analyze, like the Chinese Room, which fails to convey properly the powers it would have to have for us to declare it conscious. The fact that a hypothetical is implausible might make it harder for us to notice that we’re not attending to certain issues, I guess. That hardly seems like grounds for rejecting them outright (indeed, Dennett uses plenty of intuition pumps). And the implausibility itself really is irrelevant: no one is claiming that the hypothetical will occur, so why should the probability of its occurrence be an issue?
Using Newcomb’s problem as an example, it seems to gloss over important details of how much evidence you would actually need to believe in an Omega-like entity, and as a result it confuses more than it illuminates. Re-reading some of Eliezer’s posts on it, I get the impression that he is hinting that his resolution of the issue is connected to that problem. It seems to me that it causes a lot of unnecessary confusion because humans are susceptible to stories that require suspension of disbelief in highly implausible occurrences, a suspension they would not actually grant if they encountered those occurrences in real life. This might be an example of Robin Hanson’s near/far distinction.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
It certainly does gloss over that… I mean, it has to; you’d require a lot of evidence. But the reason it does so is that the question isn’t whether Omega could exist or how we could tell when Omega shows up… those details are buried because they aren’t relevant. How does Newcomb’s problem confuse more than it illuminates? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical. I suppose it confuses in the sense that one becomes aware of a problem one wasn’t previously aware of, but that’s the kind of confusion we want.
It’s a great video and I’m grateful you linked me to it but I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
The danger is that you can use a hypothetical to illustrate a paradox that isn’t really a paradox, because its preconditions are impossible. A famous example: Suppose you’re driving a car at the speed of light, and you turn on the headlights. What do you see?
This is a danger. Good point.
It confuses because it doesn’t really show a problem/paradox. That is not obvious because of the peculiar construction of the hypothetical. If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box). They quite reasonably aren’t able to imagine themselves into such a scenario because it is wildly implausible. The paradox is just an artifact of difficulties we have mentally dealing with highly implausible scenarios.
Specifically what I had in mind was the fact that people seem to have a natural willingness to suspend disbelief and accept contradictory or wildly implausible premises when ‘story mode’ is activated. We are used to listening to stories and we become less critical of logical inconsistencies and unlikely scenarios because they are a staple of stories. Presenting a thought experiment in the form of a story containing a highly implausible scenario takes advantage of a weakness in our mental defenses which exists for story-shaped language and leads to confusion and misjudgement which we would not exhibit if confronted with a real situation rather than a story.
No. The choice is paradoxical because, no matter how much evidence you have of Omega’s omniscience, the choice you make can’t change the amount of money in the box. As such, traditional decision theory tells you to two-box, because the decision you make can’t affect the amount of money in the boxes. No matter how much money is in the boxes, you get more by two-boxing. Most educated people are causal decision makers by default, so a thought experiment where causal decision makers lose is paradox-inducing. If one-boxing were the obvious choice, people would feel the need to posit new decision theories as a result.
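(A hedged sketch of the dominance reasoning described above: holding the already-fixed contents of the opaque box constant, two-boxing always nets an extra $1,000. Standard payoffs assumed; this only illustrates why a causal decision maker two-boxes, not which choice is actually right.)

```python
# Sketch of the causal-dominance reasoning described above: holding the
# already-fixed contents of the opaque box constant, two-boxing always
# pays an extra $1,000. Standard Newcomb payoffs assumed; this shows why
# a causal decision maker two-boxes, not which answer is correct.

BIG, SMALL = 1_000_000, 1_000

for opaque_contents in (0, BIG):          # the box is already empty or full
    one_box = opaque_contents             # take only the opaque box
    two_box = opaque_contents + SMALL     # take both boxes
    print(f"opaque box holds {opaque_contents:,}: "
          f"one-box gets {one_box:,}, two-box gets {two_box:,}")
# Two-boxing wins in every row, which collides with the observation that
# one-boxers walk away richer whenever the predictor is almost never wrong.
```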
I disagree, and I think this is what Eliezer is hinting at, now that I’ve gone back and re-read Newcomb’s Problem and Regret of Rationality. If you really have had sufficient evidence to believe that Omega is either an omniscient mind reader or some kind of acausal agent such that it makes sense to one-box, then it makes sense to one-box. It only looks like a paradox because you’re failing to imagine having that much evidence. Which, incidentally, is not really a problem: an inability to imagine highly implausible scenarios in detail is not generally an actual handicap in real-world decision making.
I’m still going to two-box if Omega appears tomorrow though because there are very many more likely explanations for the series of events depicted in the story than the one you are supposed to take as given.
Curiously, what is the average utility you would estimate for belief in God? Or do you feel that trying to estimate this forces suspended disbelief in implausible scenarios?
Which god? The God Of Abraham, Isaac, And Jacob? The Christian, Muslim or Jewish flavour? It would seem this is quite important in the context of Pascal’s wager. Some gods are notoriously specific about the form my belief should take in order to win infinite utility. I don’t see any compelling evidence to prefer any of the more popular god hypotheses over any other, nor to prefer them over the infinitude of other possible gods that I could imagine.
Some of the Norse gods were pretty badass though, they might be fun to believe in.
… if I may put the question differently: what average utility do you estimate for not believing in any God?
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility. The peculiarity of Pascal’s wager, and of religious belief in general, is that you are postulating a universe in which you are rewarded for holding certain beliefs independently of your actions. In a universe with no god (which I claim is a universe much like our own), belief in god is merely a false belief, and false beliefs are generally likely to cause bad decisions and thus lead to sub-optimal outcomes.
If the belief in god is completely free-floating and has no implications for actions, then it may not have any direct negative effect on expected utility. Presumably, given the finite computational capacity of the human brain, holding non-consequential false beliefs is a waste of resources and so has slight negative utility. It strikes me that this is not the kind of belief in god that people are usually trying to defend when invoking Pascal’s wager, however.
I’m not sure that beliefs don’t generally have utility. It seems to me that beliefs (or something like beliefs) do a lot to organize action. There’s a difference between doing something because of short-term reward and punishment and doing the same thing because one thinks it’s generally a good idea.
Hmm. I think beliefs do have a utility, whether or not you can act on that utility by choosing a belief or whether or not you can accurately estimate the utility. If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do. It seems very strange to think of someone acting as though they believe something, without them actually believing it. There are exceptions, but for the most part, if someone bets on a belief, this is because they believe it.
I don’t in general agree with this. Outcomes have utility, actions have expected utility, beliefs are generally just what you use to try and determine the expected utility of actions. As a rule, true beliefs will allow you to make better estimates of the expected utility of actions.
This is true for ordinary beliefs: I believe it is raining, so I expect the action of taking my umbrella to have higher utility than if I did not believe it was raining. It is possible to imagine certain kinds of beliefs that have utility in themselves, but these are unusual, and most beliefs are not of this type. If there is a god who will reward or punish you in the afterlife partly on the basis of whether you believed in him, then ‘believing in god’ would result in an outcome with positive utility, but deciding whether you live in such a universe is a separate belief, one you would need to arrive at from evidence other than Pascal’s wager.
It is possible to imagine other beliefs that could in theory have utility in themselves for humans. For example, it is possible that believing oneself a bit more attractive and more competent than is accurate might benefit one’s happiness more than enough to compensate for the utility lost to less accurate beliefs leading to actions with sub-optimal expected utility. If this is true, however, it is a quirk of human psychology and not a property of the belief in the way Pascal’s wager works.
I don’t find it at all strange to think of someone acting as if they believe in god even though they don’t. This has been common throughout history.
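(Going back to the umbrella example above, a minimal sketch of how a belief, here just a probability of rain, enters the picture only through the expected utility of the actions it informs. The utilities are made-up placeholders.)

```python
# Minimal sketch of the umbrella example: the belief (a probability of
# rain) matters only through the expected utility of the actions it
# informs. Utilities are made-up placeholders.

def expected_utility(p_rain: float, take_umbrella: bool) -> float:
    # Carrying the umbrella is a small fixed nuisance (-2);
    # getting soaked without one is much worse (-10).
    if take_umbrella:
        return -2.0
    return p_rain * -10.0

for p_rain in (0.1, 0.5, 0.9):
    take = expected_utility(p_rain, True)
    leave = expected_utility(p_rain, False)
    choice = "take umbrella" if take > leave else "leave it at home"
    print(f"P(rain)={p_rain}: EU(take)={take}, EU(leave)={leave} -> {choice}")
# The chosen action changes with the belief about rain; that is all the
# work the belief is doing here.
```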
That looks like a good heuristic you are using—it seems related to the idea of the intuition pump.
...wow, that was a short time-to-agreement. :D
Yeah, I think I was always averse to this sort of philosophical sophistry but reading Consciousness Explained probably crystallized my objection to it at a relatively early age.
I think you’re mistaken, therefore I would like to see your proof. It would be a shame if I missed an opportunity to be more correct. ;)
They both have an element of privileging the hypothesis. If I had some reason to think I lived in a universe with an Omega/God then I might agree I should one-box/believe in god but since I don’t have any reason to think I live in such a universe why am I wasting my time even considering this particular implausible scenario?
I see what you mean, but the symmetry runs into one of two problems, depending on which form of the Wager you mean.
First, the most annoying form of Pascal’s Wager is the epistemological version: “Believing that God exists has positive expected utility, so you should do so”. This argument fails logically, for reasons SilasBarta listed, and it is usually this form being refuted when people say, “Pascal’s Wager fails”.
Second, the form of Pascal’s Wager concerning worship, “Believing in God, who is known to exist, has positive utility”, has moral complexities which are absent from Newcomb’s dilemma. Objections in this case usually arise from the normative argument that you should not believe things which are false.
I disagree that it fails logically. The argument, written as a modus ponens, is:
“If believing in God has positive expected utility, then you should do so”.
If you don’t believe that believing in God has positive expected utility, then this is not a disagreement in the logic of Pascal’s Wager. Pascal’s Wager would equally say, “If believing in God has negative expected utility, then you should not do so”.
Okay, now I think I’m starting to see the miscommunication: PW does not simply say what you’ve quoted there. It’s typically associated with an argument about how the possibility of infinite utility from believing (and perhaps infinite disutility from not believing) outweighs the small probability of it being true, and the utility of other courses of action, on account of its infinite size.
You’re taking “Pascal’s Wager” to refer only to certain premises the argument uses, not the full argument itself.
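(For concreteness, a sketch of the structure of the full argument, where an enormous payoff attached to a tiny probability swamps the expected-value comparison. Finite placeholder numbers stand in for Pascal’s infinities, and nothing here endorses the reasoning.)

```python
# Sketch of the full wager's structure: a tiny probability multiplied by
# an enormous payoff swamps the expected-value comparison. Finite
# placeholder numbers stand in for Pascal's infinities.

HUGE = 10 ** 12            # finite stand-in for an "infinite" payoff
p_payoff = 1e-9            # tiny probability the belief pays off
cost_of_believing = 100.0  # ordinary finite cost of holding the belief

eu_wager = p_payoff * HUGE - cost_of_believing   # 1000 - 100 = 900
eu_decline = 0.0

print(f"EU(wager) = {eu_wager}, EU(decline) = {eu_decline}")
# However small p_payoff is, a large enough HUGE keeps eu_wager positive,
# which is why the full argument leans on the size of the payoff rather
# than on the probability estimate.
```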
It occurred to me that you might not agree that my distillation of PW contained all the salient features. (For example, there are no infinitesimals and no infinities written in). However, I think it must have been my more general argument that PeerInfinity was referring to, because he was applying it to atheism.
Good point, I edited my form of the argument to include ‘sets of beliefs’. If having a set of beliefs maximizes your utility, then having the set is what you “should” do, I think, in the spirit of the argument.
Accepting God as a probable hypothesis has a lot of epistemic implications. This is not just one thing: everything is connected, so one thing being true implies other things being true and still others being false. After accepting such a change you won’t be seeing the world as you currently believe it to be; you will be seeing a strange, magical version of it, a version you are certain doesn’t correspond to reality. Mutilating your mind like this has enormous destructive consequences for your ability to understand the real world, and hence for your ability to make the right choices, even setting aside the hideousness of doing this to yourself. This is the part that is usually overlooked in Pascal’s wager.
(Belief in belief keeps the human believers out of most of the trouble, but that’s not what Pascal’s wager advocates! Not understanding this distinction may lead to underestimating the horror of the suggestion.)
Thank you. My response appears in another thread.