That was the reference class I was referring to, but it really doesn’t matter much in this case—after all, who wouldn’t want to live through a positive Singularity?
True, but a positive Singularity doesn’t necessarily raise the cryonic dead. I’d bet against it, for one. (Figuring out whether I agree or disagree with you is making me think pretty hard right now. At least my post is working for me! I probably agree, though almost assuredly for different reasons than yours.)
My reasons for disagreement are as follows:
(1) I am not sure that the current cryonics technology is sufficient to prevent information-theoretic death
(2) I am skeptical of the idea of “hard takeoff” for a seed AI
(3) I am pessimistic about existential risk
(4) I do not believe that a good enough seed AI will be produced for at least a few more decades
(5) I do not believe in any version of the Singularity except Eliezer’s (i.e. Moore’s Law will not swoop in to save the day)
(6) Even an FAI might not wake the “cryonic dead” (I like that term, I think I’ll steal it, haha)
(7) Cryonically preserved bodies may be destroyed before we have the ability to revive them
…and a few more minor reasons I can’t remember at the moment.
My thoughts have changed somewhat since writing this post, but that’s the general idea. It would be personally irrational for me to sign up for cryonics at the moment. I’m not sure if this extends to most LW people; I’d have to think about it more.
But even your list of low probabilities might be totally outweighed by the Pascalian counterargument: FAI is a lot of utility if it works. Why don’t you think so?
By the way, I think it’s really cool to see another RWer here! LW’s a different kind of fun than RW, but it’s a neat place.
I remember that post—it got me to think about cryonics a lot more. I agree with most of your arguments, particularly bullet point #3.
I do struggle with Pascal’s Mugging—it seems to me, intuitively, that Pascal’s Mugging can’t be true (that is, in the original scenario, Pascal should not give up his money), but I can’t find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale inversely with the amount of money the mugger offers him, but I don’t see a reason why this is always the case. So, while I can’t defuse Pascal’s Mugging, I am skeptical about its conclusion.
I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.
There is a reason to expect that it will scale in general.
To see why, first note that the most watertight formulation of the problem uses lives as its currency (this avoids issues like utility failing to scale linearly with money in the limit of large quantities). So, suppose the mugger offers to save N lives or create N people who will have happy lives (or threatens to kill N people on failure to hand over the wallet, if the target is a shortsighted utilitarian who doesn’t have a policy of no deals with terrorists), for some suitably large N that on the face of it seems to outweigh the small probability. So we are postulating the existence of N people who will be affected by this transaction, of whom I, the target of the mugging, am one.
Suppose N = e.g. a trillion. Intuitively, how plausible is it that I just happen to be the one guy who gets to make a decision that will affect a trillion lives? More formally, we can say that, given the absence of any prior reason I should be in such an unusual position, the prior probability of this is 1 in N, which does scale with N to match the increase in claimed utility.
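To make that scaling concrete, here is a minimal sketch (a toy model I’m making up for illustration, not a formal treatment): if the prior that I happen to be the pivotal decision-maker is taken as 1/N, the expected payoff stays bounded no matter how large the promised N gets.

    def expected_lives_saved(n_affected, base_credence=1.0):
        # Toy model of the scaling argument: the prior that *I* am the one person
        # whose decision affects n_affected lives is taken to be 1/n_affected,
        # so the huge claimed payoff and its improbability cancel out.
        prior_i_am_pivotal = base_credence / n_affected
        return prior_i_am_pivotal * n_affected  # bounded by base_credence for any N

    for n in (10**3, 10**6, 10**12):
        print(n, expected_lives_saved(n))  # always 1.0: bigger promises buy no extra expected utility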
Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can’t be regarded as equivalent to a thousand century-long lives chained together in sequence?
Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can’t be regarded as equivalent to a thousand century-long lives chained together in sequence?
I’m not sure if we can write this off as a technical detail because we are formulating our prior based on it. What if we assume that we are talking about money and the mugger offers to give us an amount of money that is equivalent in terms of utility to creating N happy lives (assuming he knows your utility function)? If your reasoning is correct, then the prior probability for that would have to be the same as your prior for the mugger creating N happy lives, but since totally different mechanisms would be involved in doing so, this may not be true. That, to me, seems like a problem because we want to be able to defuse Pascal’s Mugging in any general case.
Well, there is no necessary reason why all claimed mechanisms must be equally probable. The mugger could say “I’ll heal the sick with my psychic powers” or “when I get to the bank on Monday, I’ll donate $$$ to medical research”; even if the potential utilities were the same and both probabilities were small, we would not consider the probabilities equal.
Also, the utility of money doesn’t scale indefinitely; if nothing else, it levels off once the amount starts being comparable to all the money in the world, so adding more just creates additional inflation.
Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramids and lotteries, even if we were ignorant of the mechanics involved.
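As a rough sketch of that heuristic with made-up lottery numbers (an illustration only, not a real payout model): the prize grows with the number of tickets sold, but the chance of winning shrinks at the same rate, so the expected value never rises above a fraction of the ticket price.

    def lottery_expected_value(ticket_price, tickets_sold, payout_fraction=0.5):
        # Bigger jackpots come with proportionally smaller win probabilities,
        # so the expected value stays at payout_fraction * ticket_price.
        prize = payout_fraction * ticket_price * tickets_sold
        p_win = 1.0 / tickets_sold
        return p_win * prize

    for n in (10**3, 10**6, 10**9):
        print(n, lottery_expected_value(ticket_price=1.0, tickets_sold=n))  # always 0.5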
Well, there is no necessary reason why all claimed mechanisms must be equally probable.
That’s why I don’t think we can defuse Pascal’s Mugging, since we can potentially imagine a mechanism for which our probability that the mugger is honest doesn’t scale with the amount of utility the mugger promises to give. That would imply that there is no fully general solution to Bostrom’s formulation of Pascal’s Mugging. And that worries me greatly.
However:
Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramids and lotteries, even if we were ignorant of the mechanics involved.
This gives me a little bit of hope, since we might be able to use it as a heuristic when dealing with situations like these. That’s not as good as a proof, but it’s not bad.
Also:
The mugger could say “I’ll heal the sick with my psychic powers” or “when I get to the bank on Monday, I’ll donate $$$ to medical research”
Only on LessWrong does that sentence make sense and not sound funny :)
I do struggle with Pascal’s Mugging—it seems to me, intuitively, that Pascal’s Mugging can’t be true (that is, in the original scenario, Pascal should not give up his money), but I can’t find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale inversely with the amount of money the mugger offers him, but I don’t see a reason why this is always the case. So, while I can’t defuse Pascal’s Mugging, I am skeptical about its conclusion.
Ah, Pascal’s mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists. That way they have no incentive to try to terrorize you—you won’t give them what they want no matter what—and you don’t incentivize even more terrorists to show up and demand even bigger sums.
But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don’t give Pascal’s mugger any money, but I do acknowledge that in the case of cryonics, you need to actually do the calculation: no decision theoretic disposition is there to invalidate the argument.
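Here is a toy illustration of that incentive argument (numbers made up, not a formal decision-theoretic treatment): against someone known to pay, threatening is profitable; against a committed refuser, a threat only costs the mugger, so there is nothing to gain by making it.

    DEMAND = 100        # what the mugger asks for
    THREAT_COST = 5     # the mugger's cost of actually carrying out the threat

    def mugger_expected_gain(victim_pays_if_threatened):
        # A payer hands over the demand; a known refuser never pays, so the
        # mugger's only possible outcome is wasting resources on the threat.
        if victim_pays_if_threatened:
            return DEMAND
        return -THREAT_COST

    print(mugger_expected_gain(True))   # 100 -> profitable to threaten payers
    print(mugger_expected_gain(False))  # -5  -> no incentive to threaten refusers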
I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.
I’m almost never there anymore… I know this is a dick thing to say, but it’s not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it’s been effectively replaced.
Ah, Pascal’s mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists.
I understand this idea—in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.
But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don’t give Pascal’s mugger any money, but I do acknowledge that in the case of cryonics, you need to actually do the calculation: no decision theoretic disposition is there to invalidate the argument.
This is what I was afraid of: we can’t do anything about Pascal’s Mugging with respect to purely epistemic questions. (I’m still not entirely sure why, though—what prevents us from treating cryonics just like we would treat the mugger?)
I’m almost never there anymore… I know this is a dick thing to say, but it’s not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it’s been effectively replaced.
Ha, Trent’s essay was what introduced me to Bayes as well! And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page. I know what you mean, though—LW and RW have very different methods of evaluating ideas, and I’m suspicious of the heuristics RW uses sometimes. (I am sometimes suspicious here too, but I realize I am way out of my depth so I’m not quick to judge.) RW tends to use labels a bit too much—if an idea sounds like pseudoscience, then they automatically believe it is. Or, if they can find a “reliable” source claiming that someone is a fraud, then they assume he/she is.
I understand this idea—in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.
Eliezer finally published TDT a few days ago, I think it’s up at the singinst.org site by now. Perhaps we should announce it in a top level post… I think we will.
This is what I was afraid of: we can’t do anything about Pascal’s Mugging with respect to purely epistemic questions. (I’m still not entirely sure why, though—what prevents us from treating cryonics just like we would treat the mugger?)
Cryonics isn’t an agent we have to deal with. Pascal’s Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there’s no one to argue with: either cryonics works, or it doesn’t. We just have to figure it out.
The invalidity of paying Pascal’s mugger doesn’t have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, improbable or not, large or small.
And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page.
Might it have been here? That’s where I was first introduced to LW and Eliezer.
(I am sometimes suspicious here too, but I realize I am way out of my depth so I’m not quick to judge.)
Any ideas/heuristics you’re suspicious of specifically? If there were a Less Wrong and an SIAI belief dichotomy, I’d definitely fall in the SIAI belief category, but generally I agree with Less Wrong. It’s not exactly a fair dichotomy, though; LW is a fun online social site, whereas SIAI folk are paid to be professionally rational.
Eliezer finally published TDT a few days ago, I think it’s up at the singinst.org site by now.
Excellent, that’ll be a fun read.
Cryonics isn’t an agent we have to deal with. Pascal’s Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there’s no one to argue with: either cryonics works, or it doesn’t. We just have to figure it out. The invalidity of paying Pascal’s mugger doesn’t have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.
I’m still not sure if I follow this—I’ll have to do some more reading on it. I still don’t see how the two situations are different—for example, if I was talking to someone selling cryonics, wouldn’t that be qualitatively the same as Pascal’s Mugging? I’m not sure.
Might it have been here? That’s where I was first introduced to LW and Eliezer.
Unfortunately no, it was here. I didn’t look at that article until recently.
Any ideas/heuristics you’re suspicious of specifically?
That opens a whole new can of worms that it’s far too late at night for me to address, but I’m thinking of writing a post on this soon, perhaps tomorrow.
I still don’t see how the two situations are different—for example, if I was talking to someone selling cryonics, wouldn’t that be qualitatively the same as Pascal’s Mugging?
Nah, the cryonics agent isn’t trying to mug you! (Er, hopefully.) He’s just giving you two options and letting you calculate.
In the case of Pascal’s Mugging, both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don’t care. Unless they find joy in torturing people (then you’re screwed), they have no incentive to actually use up the resources to go through with it. So they leave you alone, ’cuz you won’t budge.
Cryonics is a lot simpler in nature, but a lot harder to calculate. You have two options, and the options are given to you by reality, not an agent you can outwit. (Throwing in a cryonics agent doesn’t change anything.) When you have to make the binary choice between cryonics and no cryonics, it’s just a matter of seeing which decision is better (or worse). It could be that both are bad, like in the Pascal’s mugger scenario, but in this case you’re just screwed: reality likes to make you suffer, and you have to take the best possible world. Telling reality that it can go ahead and give you tons of disutility doesn’t take away its incentive to give you tons of disutility. There’s no way out of the problem.
That opens a whole new can of worms that it’s far too late at night for me to address, but I’m thinking of writing a post on this soon, perhaps tomorrow.
Cool! Be careful not to generalize too much, though: there might be bad general trends, but no one likes to be yelled at for things they didn’t do. Try to frame it as humbly as possible, maybe. Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!
In the case of Pascal’s Mugging, both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don’t care.
That works for the LW version of the problem (and I understand why it does), but not for Bostrom’s original formulation. In that version the mugger claims to have magic powers and will give Pascal quadrillions of utility if he hands over his wallet. This means that the mugger avoids the rule “ignore all threats of blackmail but accept positive-sum trades.” That’s why it looks so much like cryonics to me, and therein lies the problem.
Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!
Will do! I obviously don’t want to sound obnoxious; there’s no reason to be rude about rationality.
In that version the mugger claims to have magic powers and will give Pascal quadrillions of utility if he hands over his wallet.
Oh, sorry! In that case all my talk was egregious. That sounds like a much better problem whose answer isn’t immediately obvious to me. I shall think about it.
That sounds like a much better problem whose answer isn’t immediately obvious to me.
Yep, that’s the problem I’ve been struggling with. Like I said, it would help if Pascal’s disbelief in the mugger’s powers scaled with the utility the mugger promises him, but there’s not always a reason for that to be so. In any case, it might help to look at Bostrom’s version. And do let me know if you come up with anything, since this one really bothers me.
In any case, it might help to look at Bostrom’s version. And do let me know if you come up with anything, since this one really bothers me.
Thanks for pointing this out, I’m shocked I hadn’t heard of it. I’ll let you know if I think up something. If I can’t, I’ll ask a decision theory veteran, they’re sure to know.
…and a few more minor reasons I can’t remember at the moment.
I’m curious, what are yours?
BTW, what does RW refer to?
RW = RationalWiki
…refusing to engage in negative sum actions of any kind, negative sum or not, large or small.
The second ‘negative sum’ seems redundant...
Are you claiming that 100% of negative sum interactions are negative sum?! 1 is not a probability! …just kidding. I meant ‘improbable or not’.
Come to think of it negative sum isn’t quite the right phrase. Rational agents do all sorts of things in negative sum contexts. They do, for example, pay protection money to the thieves guild. Even though robbing someone is negative sum. It isn’t the sum that needs to be negative. The payoff to the other guy must be negative AND the payoff to yourself must be negative.
That’s true. Negative expected value is what I really mean. I’m too lazy to edit it though.
If there were a Less Wrong and an SIAI belief dichotomy, I’d definitely fall in the SIAI belief category, but generally I agree with Less Wrong.
I guess I’m not familiar enough with the positions of LW and SIAI—where do they differ?
If I can’t, I’ll ask a decision theory veteran, they’re sure to know.
I’m not so sure, but I certainly hope someone knows.