Ah, Pascal’s mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists.
I understand this idea—in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.
But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don’t give Pascal’s mugger any money, but I do acknowledge that in the case of cryonics you need to actually do the calculation: there’s no decision-theoretic disposition to invalidate the argument.
This is what I was afraid of: we can’t do anything about Pascal’s Mugging with respect to purely epistemic questions. (I’m still not entirely sure why, though—what prevents us from treating cryonics just like we would treat the mugger?)
I’m almost never there anymore… I know this is a dick thing to say, but it’s not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it’s been effectively replaced.
Ha, Trent’s essay was what introduced me to Bayes as well! And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page. I know what you mean, though—LW and RW have very different methods of evaluating ideas, and I’m suspicious of the heuristics RW uses sometimes. (I am sometimes suspicious here too, but I realize I am way out of my depth so I’m not quick to judge.) RW tends to use labels a bit too much—if an idea sounds like pseudoscience, then they automatically believe it is. Or, if they can find a “reliable” source claiming that someone is a fraud, then they assume he/she is.
I understand this idea—in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.
Eliezer finally published TDT a few days ago; I think it’s up at the singinst.org site by now. Perhaps we should announce it in a top level post… I think we will.
This is what I was afraid of: we can’t do anything about Pascal’s Mugging with respect to purely epistemic questions. (I’m still not entirely sure why, though—what prevents us from treating cryonics just like we would treat the mugger?)
Cryonics isn’t an agent we have to deal with. Pascal’s Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there’s no one to argue with: either cryonics works, or it doesn’t. We just have to figure it out.
The invalidity of paying Pascal’s mugger doesn’t have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, improbable or not, large or small.
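A toy sketch of that incentive point, with every payoff invented purely for illustration (the names and numbers below are assumptions, not anything from the problem statement):

```python
# Toy model: a blackmailer decides whether to make a costly threat,
# given the victim's known disposition. All numbers are made up.

COST_OF_THREAT = 1   # hypothetical resources spent making/carrying out the threat
WALLET = 10          # hypothetical gain to the mugger if the victim pays

def mugger_payoff(victim_pays: bool) -> int:
    """Mugger's payoff from issuing the threat, given the victim's disposition."""
    return (WALLET if victim_pays else 0) - COST_OF_THREAT

print(mugger_payoff(victim_pays=True))   # 9: threatening a known payer is worthwhile
print(mugger_payoff(victim_pays=False))  # -1: threatening a committed refuser is a pure loss
```

A credible disposition to refuse makes the threat unprofitable, so it never gets made and the “pay or else” choice never has to be faced.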
And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page.
Might it have been here? That’s where I was first introduced to LW and Eliezer.
(I am sometimes suspicious here too, but I realize I am way out of my depth so I’m not quick to judge.)
Any ideas/heuristics you’re suspicious of specifically? If there was a Less Wrong and an SIAI belief dichotomy I’d definitely fall in the SIAI belief category, but generally I agree with Less Wrong. It’s not exactly a fair dichotomy though; LW is a fun online social site whereas SIAI folk are paid to be professionally rational.
Come to think of it, negative sum isn’t quite the right phrase. Rational agents do all sorts of things in negative sum contexts. They do, for example, pay protection money to the thieves guild, even though robbing someone is negative sum. It isn’t the sum that needs to be negative. The payoff to the other guy must be negative AND the payoff to yourself must be negative.
Eliezer finally published TDT a few days ago; I think it’s up at the singinst.org site by now.
Excellent, that’ll be a fun read.
Cryonics isn’t an agent we have to deal with. Pascal’s Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there’s no one to argue with: either cryonics works, or it doesn’t. We just have to figure it out. The invalidity of paying Pascal’s mugger doesn’t have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.
I’m still not sure if I follow this—I’ll have to do some more reading on it. I still don’t see how the two situations are different—for example, if I was talking to someone selling cryonics, wouldn’t that be qualitatively the same as Pascal’s Mugging? I’m not sure.
Might it have been here? That’s where I was first introduced to LW and Eliezer.
Unfortunately no, it was here. I didn’t look at that article until recently.
Any ideas/heuristics you’re suspicious of specifically?
That opens a whole new can of worms that it’s far too late at night for me to address, but I’m thinking of writing a post on this soon, perhaps tomorrow.
I still don’t see how the two situations are different—for example, if I was talking to someone selling cryonics, wouldn’t that be qualitatively the same as Pascal’s Mugging?
Nah, the cryonics agent isn’t trying to mug you! (Er, hopefully.) He’s just giving you two options and letting you calculate.
In the case of Pascal’s Mugging both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don’t care. Unless they find joy in torturing people (then you’re screwed), they have no incentive to actually use up the resources to go through with it. So they leave you alone, ’cuz you won’t budge.
Cryonics is a lot simpler in its nature, but a lot harder to calculate. You have two options, and the options are given to you by reality, not an agent you can outwit. (Throwing in a cryonics agent doesn’t change anything.) When you have to choose between the binary options of cryonics versus no cryonics, it’s just a matter of seeing which decision is better (or worse). It could be that both are bad, as in the Pascal’s Mugger scenario, but in this case you’re just screwed: reality likes to make you suffer, and you have to take the best possible world. Telling reality that it can go ahead and give you tons of disutility doesn’t take away its incentive to give you tons of disutility. There’s no way out of the problem.
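Something like the toy comparison below is all that “seeing which decision is better” amounts to; every number is invented for illustration, not an actual estimate of cryonics odds or costs:

```python
# Toy expected-value comparison for the cryonics decision.
# All inputs are placeholders; only the structure of the calculation matters.

p_works = 0.05          # hypothetical probability that cryonics works
value_if_works = 1000   # hypothetical utility of revival
cost = 30               # hypothetical utility cost of signing up

ev_sign_up = p_works * value_if_works - cost   # 20.0 with these made-up numbers
ev_decline = 0.0

print("sign up" if ev_sign_up > ev_decline else "decline")
```

There is no agent to out-negotiate; whichever expected value comes out higher is simply the better of the two options reality offers.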
That opens a whole new can of worms that it’s far too late at night for me to address, but I’m thinking of writing a post on this soon, perhaps tomorrow.
Cool! Be careful not to generalize too much, though: there might be bad general trends, but no one likes to be yelled at for things they didn’t do. Try to frame it as humbly as possible, maybe. Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!
In the case of Pascal’s Mugging both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don’t care.
That works for the LW version of the problem (and I understand why it does), but not for Bostrom’s original formulation. In that version the mugger claims to have magic powers and will give Pascal quadrillions of utility if he hands over his wallet. This means that the mugger gets around the rule “ignore all threats of blackmail but accept positive-sum trades.” That’s why it looks so much like cryonics to me, and therein lies the problem.
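A back-of-the-envelope version of why this formulation bites (the numbers are purely illustrative): as long as Pascal’s disbelief doesn’t grow as fast as the promised payoff, the product can be pushed arbitrarily high.

```python
# Toy arithmetic for Bostrom's version: a tiny credence times a huge promise.
# All numbers are illustrative only.

p_magic_powers = 1e-12    # Pascal's tiny credence that the mugger is honest
promised_utility = 1e15   # "quadrillions of utility" for handing over the wallet
value_of_wallet = 10      # utility of keeping the wallet

ev_pay = p_magic_powers * promised_utility - value_of_wallet
ev_refuse = 0.0

print(ev_pay)  # 990.0: the huge promise swamps the tiny probability
```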
Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!
Will do! I obviously don’t want to sound obnoxious; there’s no reason to be rude about rationality.
In that version the mugger claims to have magic powers and will give Pascal quadrillions of utility if he hands over his wallet.
Oh, sorry! In that case all my talk was egregious. That sounds like a much better problem whose answer isn’t immediately obvious to me. I shall think about it.
That sounds like a much better problem whose answer isn’t immediately obvious to me.
Yep, that’s the problem I’ve been struggling with. Like I said, it would help if Pascal’s disbelief in the mugger’s powers scaled with the utility the mugger promises him, but there’s not always a reason for that to be so. In any case, it might help to look at Bostrom’s version. And do let me know if you come up with anything, since this one really bothers me.
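For what it’s worth, here is a sketch of that scaling idea (the 1/N rule and the constant are assumptions made up for illustration, not a claim about what the correct prior is): if the prior assigned to a promise of N utility shrinks at least as fast as 1/N, the expected gain from paying stays bounded no matter how much is promised.

```python
# Toy "disbelief scales with the promise" rule. The functional form and
# constant are invented for illustration only.

def prior_given_promise(promised_utility: float, k: float = 1.0) -> float:
    """Hypothetical prior that shrinks in proportion to the size of the claim."""
    return min(1.0, k / promised_utility)

for promised in (1e6, 1e12, 1e18):
    ev_pay = prior_given_promise(promised) * promised
    print(promised, ev_pay)  # expected gain stays at about k, however big the promise
```

The sticking point remains that nothing obviously forces the prior to shrink this way; without some such rule, bigger promises keep winning the calculation.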
In any case, it might help to look at Bostrom’s version. And do let me know if you come up with anything, since this one really bothers me.
Thanks for pointing this out; I’m shocked I hadn’t heard of it. I’ll let you know if I think up something. If I can’t, I’ll ask a decision theory veteran; they’re sure to know.
The second ‘negative sum’ seems redundant...
Are you claiming that 100% of negative sum interactions are negative sum?! 1 is not a probability! …just kidding. I meant ‘improbable or not’.
Come to think of it, negative sum isn’t quite the right phrase. Rational agents do all sorts of things in negative sum contexts. They do, for example, pay protection money to the thieves guild, even though robbing someone is negative sum. It isn’t the sum that needs to be negative. The payoff to the other guy must be negative AND the payoff to yourself must be negative.
That’s true. Negative expected value is what I really mean. I’m too lazy to edit it though.
I guess I’m not familiar enough with the positions of LW and SIAI—where do they differ?
If I can’t, I’ll ask a decision theory veteran; they’re sure to know.
I’m not so sure, but I certainly hope someone knows.