Newcomb’s problem is just a case of making decisions when someone else, who “knows you very well”, has already made a decision based on their expectation of your decision.
Which sounds a lot like Pascal’s wager to me, when your decision is whether to believe in god and god is the person who “knows you very well” and is deciding whether to let you into heaven based on whether you believe in him or not.
There are situations which I guess are what you would describe as ‘Newcomb-like’ where I would do the equivalent of one-boxing. If Omega shows up this evening, though, I will be taking both his boxes, because there is too big an epistemic gap for me to cross to reach the point of thinking that one-boxing is sensible in this universe.
But the plausibility of a hypothetical is unrelated to the correct resolution of the hypothetical. One could equally say that two-boxing implies that you should push the man off the bridge in the trolley problem—the latter is just as unphysical as Newcomb. The proper objection to unreasonable hypotheticals is to claim that they do not resemble the real-world situations one might compare them to in the relevant aspects.
I actually think that implausible hypotheticals are unhelpful and probably actively harmful, which is why I usually don’t involve myself in discussions about Omega. I wish I’d stuck with that policy now.
Why do you think implausible hypotheticals are unhelpful and probably harmful? It seems to me that they’re a lot of work for no obvious reward, but I don’t have a more complex theory.
Anyone have an example of the examination of an implausible hypothetical paying off?
I think implausible hypotheticals are often intuition pumps. If they are used as part of an attempt to convince the audience of a certain point of view I automatically get suspicious. If the point of view is correct, why can’t it be illustrated with a plausible hypothetical or a real world example? They often seem to be constructed in a way that tries to move attention away from certain aspects of the situation described and thus allow for dubious assumptions to be hidden in plain sight.
Basically, I always feel like someone is trying to pull a philosophical sleight of hand when they pull out an implausible hypothetical to make their case, and such hypotheticals often seem to be used in arguments that are wrong in subtle or hard-to-detect ways. I feel like I encounter them far more in arguments for positions that I ultimately conclude are incorrect than as support for positions I ultimately conclude to be correct.
That’s interesting, and might apply to the trolley problem which implies that people can have much more knowledge of the alternatives than they are ever likely to have.
Ethical principles and empathy (as a sort of unconscious ethical principle) are needed when you don’t have detailed knowledge, but I haven’t seen the trolley problem extended to the usual case of not knowing very many of the effects.
It might be worth crossing the trolley problem with Protected from Myself.
Taking a look at ethical intuitions with specifics: Sex, Drugs, and AIDS: the desire to only help when it will make a big difference and the desire to not help unworthy people add up to worse effects than having a less dramatic view of the world. Having AIDS drugs doesn’t mean it makes sense to slack off on prevention as much as has happened.
Yes, the trolley problems are another example of harmful implausible hypotheticals in my opinion. The different reaction many people have to the same underlying ethical question framed as a trolley problem vs. an organ donor problem is, I think, illustrative of the pernicious influence of implausible hypotheticals on clear thought.
Well, the fact that they’re implausible pretty much means the cash rewards are going to have to wait until they are plausible. But don’t we think clear thinking is its own reward?
I’ve found that such things are incredibly crucial for getting people to think clearly about personal identity. In fact I don’t know if I have any way of explaining or defending my views on personal identity to the philosophically untrained without implausible hypotheticals. Same goes for understanding skepticism, causality, maybe induction, problems with causal decision theory (obviously), anthropics, simulation...
I’m all about being aware that using implausible hypotheticals can generate error but I am bewildered by the sudden resistance to them on this thread: we use them all the time here!
I would be dead chuffed to talk about the wisdom of considering implausible hypotheticals instead, if that’s what you’d prefer to do. (:
Edit: I would be equally happy to drop the thread entirely, if that’s what you prefer.
Ok, let me try and nail down my true objection here. Is Pascal’s wager a good reason to believe in God? No. Hypothetically, if you had good reason to believe that the hypothesis of the Christian god existing were massively more likely than other hypotheses of similar complexity, would it be a good reason to believe in God? Well, not really—it doesn’t add much in that case.
Similarly, if Omega showed up at my apartment this evening would I one-box? No. Hypothetically, if I had good reason to believe that an Omega-like entity existed and did this kind of thing (which is the setup for Newcomb’s problem) would I one-box? Well, probably yes, but you’ve glossed over the rather radical change to my epistemic state required to make me believe such an implausible thing.
I guess I have a general problem with a certain kind of philosophical thought experiment that tries to sneak a truly colossal amount of implausibility into its premises, asks you not to notice, and then, whenever you keep pointing to the implausibility, tells you to ignore it and focus on the real question. Well I’m sorry, but the staggering implausibility over there in the corner is more significant than the question you want me to focus on, in my opinion… (Forgive the casual use of ‘you’ here—I’m not intending to refer to you specifically).
I don’t understand. A hypothetical can be dangerous if it keeps us from attending to aspects of the problem we’re trying to analyze, like the Chinese Room, which fails to properly convey the powers it would have to have for us to declare it conscious. The fact that a hypothetical is implausible might make it harder for us to notice that we’re not attending to certain issues, I guess. That hardly seems grounds for rejecting them outright (indeed, Dennett uses plenty of intuition pumps). And the implausibility itself really is irrelevant. No one is claiming that the hypothetical will occur, so why should the probability of its occurrence be an issue?
Using Newcomb’s problem as an example, it seems like it glosses over important details of how much evidence you would actually need to believe in an Omega-like entity and as a result confuses more than it illuminates. Re-reading some of Eliezer’s posts on it I get the impression that he is hinting that his resolution of the issue is connected to that problem. It seems to me that it causes a lot of unnecessary confusion because humans are susceptible to stories that require suspension of disbelief in highly implausible occurrences that they would not actually suspend their disbelief for if encountered in real life. This might be an example of Robin Hanson’s near/far distinction.
Tyler Cowen’s cautionary tale about the dangers of stories covers some of the same kinds of human biases that I think are triggered by implausible hypotheticals.
It certainly does gloss over that… I mean it has to, you’d require a lot of evidence. But the reason it does so is that the question isn’t whether Omega could exist or how we can tell when Omega shows up… the details are buried because they aren’t relevant. How does Newcomb’s problem confuse more than it illuminates? It illustrates a problem/paradox. We would not be aware of that paradox were it not for the hypothetical. I suppose it confuses in the sense that one becomes aware of a problem they weren’t aware of previously, but that’s the kind of confusion we want.
It’s a great video and I’m grateful you linked me to it but I don’t see where the problems with the kind of stories Cowen was discussing show up in thought experiments.
The danger is that you can use a hypothetical to illustrate a paradox that isn’t really a paradox, because its preconditions are impossible. A famous example: Suppose you’re driving a car at the speed of light, and you turn on the headlights. What do you see?
This is a danger. Good point.
It confuses because it doesn’t really show a problem/paradox. That is not obvious because of the peculiar construction of the hypothetical. If you actually had enough evidence to make it seem like one-boxing was the obvious choice then it wouldn’t seem like a paradoxical choice. The problem is people generally aren’t able to imagine themselves into such a scenario and so think they should two-box and then think there is a paradox (because you ‘should’ one-box). They quite reasonably aren’t able to imagine themselves into such a scenario because it is wildly implausible. The paradox is just an artifact of difficulties we have mentally dealing with highly implausible scenarios.
Specifically what I had in mind was the fact that people seem to have a natural willingness to suspend disbelief and accept contradictory or wildly implausible premises when ‘story mode’ is activated. We are used to listening to stories and we become less critical of logical inconsistencies and unlikely scenarios because they are a staple of stories. Presenting a thought experiment in the form of a story containing a highly implausible scenario takes advantage of a weakness in our mental defenses which exists for story-shaped language and leads to confusion and misjudgement which we would not exhibit if confronted with a real situation rather than a story.
No. The choice is paradoxical because no matter how much evidence you have of Omega’s omniscience, the choice you make can’t change the amount of money in the box. As such, traditional decision theory tells you to two-box, because the decision you make can’t affect the amount of money in the boxes, and no matter how much money is in the boxes you get more by two-boxing. Most educated people are causal decision makers by default, so a thought experiment where causal decision makers lose is paradox-inducing. If one-boxing were the obvious choice, people would feel the need to posit new decision theories as a result.
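To make the two calculations concrete, here is a minimal sketch in Python. The $1,000,000 and $1,000 payoffs are the usual ones from statements of the problem, and the predictor accuracy is a made-up parameter; nothing above fixes these numbers. The evidential calculation lets the prediction correlate with your actual choice, while the causal (dominance) calculation holds the box contents fixed.

```python
# Toy Newcomb payoffs: the opaque box holds $1,000,000 iff the predictor
# predicted one-boxing; the transparent box always holds $1,000.
# Payoff amounts and the accuracy value are illustrative assumptions.
BIG, SMALL = 1_000_000, 1_000

def evidential_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff if the prediction correlates with your actual choice."""
    p_predicted_one_box = accuracy if one_box else 1 - accuracy
    expected_opaque = p_predicted_one_box * BIG
    return expected_opaque if one_box else expected_opaque + SMALL

def causal_values(predicted_one_box: bool) -> tuple[int, int]:
    """(one-box, two-box) payoffs once the box contents are already fixed."""
    contents = BIG if predicted_one_box else 0
    return contents, contents + SMALL

accuracy = 0.99  # assumed predictor accuracy, purely for illustration
print("evidential:", evidential_value(True, accuracy), evidential_value(False, accuracy))
for predicted in (True, False):
    one, two = causal_values(predicted)
    print(f"contents fixed (predicted one-box={predicted}): one-box={one}, two-box={two}")
```

Two-boxing comes out ahead in both fixed-contents rows, while the evidential calculation favours one-boxing for any reasonably high accuracy, which is exactly the tension described above.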
I disagree, and I think this is what Eliezer is hinting towards, now that I’ve gone back and re-read Newcomb’s Problem and Regret of Rationality. If you really have sufficient evidence to believe that Omega is either an omniscient mind reader or some kind of acausal agent such that it makes sense to one-box, then it makes sense to one-box. It only looks like a paradox because you’re failing to imagine having that much evidence. Which incidentally is not really a problem—an inability to imagine highly implausible scenarios in detail is not generally an actual handicap in real-world decision making.
I’m still going to two-box if Omega appears tomorrow though because there are very many more likely explanations for the series of events depicted in the story than the one you are supposed to take as given.
Out of curiosity, what is the average utility you would estimate for belief in God? Or do you feel that trying to estimate this requires suspending disbelief in implausible scenarios?
Which god? The God Of Abraham, Isaac, And Jacob? The Christian, Muslim or Jewish flavour? It would seem this is quite important in the context of Pascal’s wager. Some gods are notoriously specific about the form my belief should take in order to win infinite utility. I don’t see any compelling evidence to prefer any of the more popular god hypotheses over any other, nor to prefer them over the infinitude of other possible gods that I could imagine.
Some of the Norse gods were pretty badass though, they might be fun to believe in.
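To put the ‘which god’ point in terms of the expected utility being asked about, here is a toy sketch. The hypotheses, the priors, and the finite stand-in for the promised infinite reward are all invented purely for illustration; nothing in this thread fixes them.

```python
# Toy expected utility of "believe in X" across mutually exclusive hypotheses,
# where each jealous-god hypothesis rewards only its own believers.
# All numbers are invented for illustration.
REWARD = 10**9          # finite stand-in for the promised "infinite" payoff
priors = {              # made-up priors over which hypothesis is true
    "christian_god": 0.010,
    "muslim_god": 0.010,
    "norse_gods": 0.001,
    "no_god": 0.979,
}

def expected_reward_of_belief(belief: str) -> float:
    """Expected afterlife payoff of believing in `belief`, ignoring all other costs."""
    return priors.get(belief, 0.0) * REWARD

for belief in ("christian_god", "muslim_god", "norse_gods"):
    print(belief, expected_reward_of_belief(belief))
```

The ranking is driven entirely by the priors, which is the point: with no evidence to prefer one god hypothesis over another (or over the infinitude of unlisted ones), the wager doesn’t tell you which belief to pick, and with a literally infinite reward every option with a nonzero prior ties at infinity anyway.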
… if I may put the question differently: what average utility do you estimate for not believing in any God?
This strikes me as a rather odd question. I thought we were more or less agreed that beliefs don’t generally have utility. The peculiarity of Pascal’s wager and religious belief in general is that you are postulating a universe in which you are rewarded for holding certain beliefs independently of your actions. In a universe with no god (which I claim is a universe much like our own) belief in god is merely false belief and generally false beliefs are likely to cause bad decisions and thus lead to sub-optimal outcomes.
If the belief in god is completely free-floating and has no implications for actions then it may not have any direct negative effect on expected utility. Presumably, given the finite computational capacity of the human brain, holding non-consequential false beliefs is a waste of resources and so has slight negative utility. It strikes me that this is not the kind of belief in god that people are usually trying to defend when invoking Pascal’s wager, however.
I’m not sure that beliefs don’t generally have utility. It seems to me that beliefs (or something like beliefs) do a lot to organize action. There’s a difference between doing something because of short-term reward and punishment and doing the same thing because one thinks it’s generally a good idea.
Hmm. I think beliefs do have a utility, whether or not you can act on that utility by choosing a belief, and whether or not you can accurately estimate it. If you believe something, you will act as though you believe it, so that believing in something inherits the utility of acting as though you do. It seems very strange to think of someone acting as though they believe something without actually believing it. There are exceptions, but for the most part, if someone bets on a belief, this is because they believe it.
I don’t in general agree with this. Outcomes have utility, actions have expected utility, beliefs are generally just what you use to try and determine the expected utility of actions. As a rule, true beliefs will allow you to make better estimates of the expected utility of actions.
This is true for ordinary beliefs: I believe it is raining, so I expect the action of taking my umbrella to have higher utility than if I did not believe it was raining. It is possible to imagine certain kinds of beliefs that have utility in themselves, but these are unusual kinds of beliefs and most beliefs are not of this type. If there is a god who will reward or punish you in the afterlife partly on the basis of whether you believed in him or not, then ‘believing in god’ would result in an outcome with positive utility, but deciding whether you live in such a universe is a different belief, one you would need to come to from other kinds of evidence than Pascal’s wager.
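A minimal sketch of that umbrella example, to show where the belief enters: it appears only as a probability over outcomes, not as a term that carries utility of its own. The probabilities and utilities below are made up purely for illustration.

```python
# The belief "it is raining" enters only as p_rain; utility attaches to
# outcomes like "walked in the rain without an umbrella". Numbers are made up.
def expected_utility(take_umbrella: bool, p_rain: float) -> float:
    utilities = {
        ("umbrella", "rain"): -1,      # carried it, stayed dry
        ("umbrella", "dry"): -1,       # carried it for nothing
        ("no_umbrella", "rain"): -10,  # got soaked
        ("no_umbrella", "dry"): 0,     # best case
    }
    action = "umbrella" if take_umbrella else "no_umbrella"
    return p_rain * utilities[(action, "rain")] + (1 - p_rain) * utilities[(action, "dry")]

for p_rain in (0.05, 0.8):  # a weak and a strong belief that it is raining
    print(p_rain, expected_utility(True, p_rain), expected_utility(False, p_rain))
```

A better-calibrated p_rain just leads to better action choices; changing the belief itself adds or removes no utility term, which is the sense in which a god who rewards the belief directly is an unusual case.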
It is possible to imagine other beliefs that could in theory have utility in themselves for humans. For example, it is possible that believing oneself a bit more attractive and more competent than is accurate might benefit one’s happiness more than enough to compensate for the lost utility due to less accurate beliefs leading to actions with sub-optimal expected utility. If this is true, however, it is a quirk of human psychology and not a property of the belief in the way that Pascal’s wager works.
I don’t find it at all strange to think of someone acting as if they believe in god even though they don’t. This has been common throughout history.
That looks like a good heuristic you are using—it seems related to the idea of the intuition pump.
...wow, that was a short time-to-agreement. :D
Yeah, I think I was always averse to this sort of philosophical sophistry but reading Consciousness Explained probably crystallized my objection to it at a relatively early age.