…by multiplying out by the size of the threat, it still ought to motivate you to give the money. Some belief has to give—the belief that multiplication works, the belief that I shouldn’t pay the money, or the belief that I should be consistent all the time—and right now, consistency seems like the weakest link in the chain.
What gives is the belief that, by multiplying out by the size of the threat, it still ought to motivate me to give the money. Multiplication still works, I still shouldn’t pay the money, and I should still be consistent.
I think this is probably the sanest answer that doesn’t throw out consistency, but there are still some distinctly weird things about it. To motivate you not to give up money, a threat to inflict $RIDICULOUSNUMBER units of disutility has to be proportionately incredible—but there’s no particular reason to think that disutility is even roughly linear in 1/credibility, and a number of reasons to think it isn’t.
Straight multiplication also suggests that for any fixed ridiculous threat there’s always some amount of money that a rational agent will be willing to pay to ward it off, but I think I’d be more comfortable biting that bullet.
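To make the multiplication-versus-credibility point concrete, here’s a minimal sketch of the straight expected-utility rule under discussion; the function names and the utility figures are mine, purely illustrative, not anything from the original argument:

```python
# Straight expected-utility multiplication: pay iff the expected
# disutility of refusing exceeds the utility cost of paying.
def should_pay(payment_cost, threat_disutility, credibility):
    return credibility * threat_disutility > payment_cost

# For the decision to stay fixed as the threat grows, the threat's
# credibility must fall at least as fast as 1/disutility; the
# break-even credibility is:
def break_even_credibility(payment_cost, threat_disutility):
    return payment_cost / threat_disutility

# A 5-utilon demand backed by a 1e300-utilon threat (standing in for
# $RIDICULOUSNUMBER) is resisted only if credibility < 5e-300.
print(break_even_credibility(5, 1e300))  # 5e-300
print(should_pay(5, 1e300, 1e-290))      # True: straight multiplication says pay
```

The oddity above is exactly this break-even line: nothing guarantees that my actual credence in a threat drops in lockstep with 1/disutility as the threatened number grows.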
To motivate you not to give up money, a threat to inflict $RIDICULOUSNUMBER units of disutility has to be proportionately incredible...there’s no particular reason to think that disutility is even roughly linear in 1/credibility
It’s not at all obvious that someone threatening to inflict disutility if I don’t comply with certain demands would treat me worse if I don’t comply with the demands than if I do.
One can’t simply say “It is rational to one-box on Newcomb’s problem”, because one might live in a universe in which some entity, say Sampi (if not Omega itself), painfully executes one-boxers and rewards two-boxers.
The possibility that someone will inflict $RIDICULOUSNUMBER units of disutility on me is as latent in the question “give me money or I will inflict $RIDICULOUSNUMBER units of disutility on you” as it is in the question “paper or plastic”, and not because my choice of bag might have a significant impact on my life. If I can’t distinguish the credibility of the threat (that the speaker can and will act as they say) from zero, then I can’t distinguish it from the credibility of the opposite outcome, that they will act contrary to what they say, since I can’t distinguish that possibility from zero either.
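Here’s a minimal sketch of the cancellation being claimed, assuming (purely for illustration) symmetric, indistinguishable-from-zero probabilities for the stated outcome and its opposite:

```python
# If the probability that the speaker acts as stated and the probability
# that they act opposite (punishing compliance instead) are both
# indistinguishable from zero, the threat cancels out of the expected
# utility comparison.
def net_risk_of_refusing(threat_disutility, p_as_stated, p_opposite):
    # Extra expected disutility from refusing rather than complying.
    return threat_disutility * (p_as_stated - p_opposite)

eps = 1e-300  # both probabilities indistinguishable from zero
print(net_risk_of_refusing(1e300, eps, eps))  # 0.0: the threat carries no net weight
```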
On a personal note, the night before last I had a wild dream (no laws of physics were violated, though not so much the laws of Congress) that ended similarly to how the movie “The Game,” starring Michael Douglas, ends. Well, it actually ended with me waking up—which is even more to the point. Things oughtn’t be simply accepted at face value.
Today I misread the following no fewer than two times, and I think three, though I can’t swear to that:
Marc Hauser, the primatologist psychologist at Harvard who recently was accused of mistreating evidence and graduate students, has resigned. I am in two minds about this. His work, although I am unconvinced by some of it, was very important, and he was good at communicating to the lay reader (including philosophers). I met him and was impressed by his demeanour and generosity. On the other hand, if he did deliberately misinterpret his data, that is an offence. Whether it is a hanging offence is moot.
I read the first sentence as: “Marc Hauser, the primatologist psychologist at Harvard who recently accused me of mistreating evidence and graduate students, has resigned.” That made less and less sense as the post went on, so I took it from the top several times until I finally caught my error.
It’s far more likely that I am misunderstanding someone threatening $RIDICULOUSNUMBER units of disutility than that they can carry out their threat, and also more likely that I’ll misspeak and say “yes” when I mean “no,” say “no” when I mean “yes,” mistakenly hand over a one-dollar bill, etc., than that they can and will carry it out. Simultaneously, I’m not paralyzed by the fear of offending people, despite King Incognito being not just a possibility but a trope. I think anyone who jumps on the “it’s a compelling argument” horn of the $RIDICULOUSNUMBER argument has to give an account of how they disagree with anyone, ever, given the possible ramifications if the other person is Agent Smith, Harun al-Rashid, Peter the Great, Yoda, etc. I could easily mistake a threat of unimaginable torture for everyday speech or mild disagreement.
The obvious answer is that agreeing has indistinguishable ramifications (many dislike the teacher’s pet, which is another trope in itself)...in which case I would like to know why that same reasoning isn’t applicable when a random person actually claims such power. It is no more likely that someone claiming such power has it than that someone not claiming such power has it, and likewise for their willingness to use it.
If you disagree, upvote this or I’ll give you $RIDICULOUSNUMBER units of disutility! I’m kidding, of course. (Or would no number of disclaimers be sufficient? Should you believe that, having expressed this claim, I am more likely than not to abide by it, and that saying I kid was a half-plausible way to deny making threats? Or should you believe that, having disclaimed it, I would be displeased by acts in accordance with it? Both are plausible for humans, but, further muddling things, if I do have such power, how would my intentions likely differ from a normal human’s?)
It’s not at all obvious that someone threatening to inflict disutility if I don’t comply with certain demands would treat me worse if I don’t comply with the demands than if I do.
In support of this point, note that the ridiculous powers required to inflict $RIDICULOUSNUMBER sanctions are so far removed from our experience that we have no idea how such an agent could be expected to act. It could do the opposite of what it claims (perhaps it hates cowards) as easily as fulfill its threats, given that we know nothing of its motives.