Okay, then interpret my answer as “rape and murder are bad because they make others sad, and making others sad is bad by definition”.
The tone is well-deserved. This is a serious mistake that renders all further discussion of geometry in the post nonsensical.
You can always keep asking why. That’s not particularly interesting.
It occurs to me that we can express this problem in the following isomorphic way:
1. Omega makes an identical copy of you.
2. One copy exists for a week. You get to pick whether that week is torture or nirvana.
3. The other copy continues to exist as normal, or maybe is unconscious for a week first, and depending on what you picked in step 2, it may lose or receive lots of money.
I’m not sure how enlightening this is. But we can now tie this to the following questions, which we also don’t have answers to: is an existence of torture better than no existence at all? And is an existence of nirvana good when it does not have any effect on the universe?
Yes, this, exactly.
I do nice things for myself not because I have deep-seated beliefs that doing nice things for myself is the right thing to do, but because I feel motivated to do nice things for myself.
I’m not sure that I could avoid doing those things for myself (it might require willpower I do not have) or that I should (it might make me less effective at doing other things), or that I would want to if I could and should (doing nice things for myself feels nice).
But if we invent a new nice thing to do for myself that I don’t currently feel motivated to do, I don’t see any reason to try to make myself do it. If it’s instrumentally useful, then sure: learning to like playing chess means that my brain gets exercise while I’m having fun.
With cryonics, though? I could try to convince myself that I want it, and then I will want it, and then I will spend money on it. I could also leave things as they are, and spend that money on things I currently want. Why should I want to want something I don’t want?
I did say what I would do, given the premise that I know Omega is right with certainty. Perhaps I was insufficiently clear about this?
I am not trying to fight the hypothetical; I am trying to explain why one’s intuition cannot resist fighting it. This makes the answer I give seem unintuitive.
So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega is merely 99% accurate.
Your formulation, however, doesn’t work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don’t precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.
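For concreteness, here’s a quick expected-value sketch (assuming, as a simplification on my part, that Omega’s 1% error rate applies equally to both kinds of misprediction):

```python
# Expected-value sketch; the symmetric 1% error rate is my assumption.
p_err = 0.01

# Precommit to suicide: Omega usually predicts this and pays $1000;
# with probability p_err it asks anyway, and you die.
ev_precommit = (1 - p_err) * 1000   # $990 expected winnings
p_death = p_err                     # 1% chance of dying

# Don't precommit: with probability p_err Omega mispredicts
# and pays you $1000 for free.
ev_refuse = p_err * 1000            # $10 expected, zero death risk

print(ev_precommit, p_death, ev_refuse)  # 990.0 0.01 10.0
```

Precommitting buys $980 of extra expected winnings at the price of a 1% chance of death, so it only comes out ahead if you value your life at less than $98,000.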
Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice. Even if Omega actually is infallible, it’s hard to imagine evidence extraordinary enough to convince us that Omega is sufficiently infallible.
(I think I am willing to bite the suicide bullet as long as we’re clear that I would require truly extraordinary evidence.)
Result spoilers: Fb sne, yvxvat nypbuby nccrnef gb or yvaxrq gb yvxvat pbssrr be pnssrvar, naq gb yvxvat ovggre naq fbhe gnfgrf. (Fbzr artngvir pbeeryngvba orgjrra yvxvat nypbuby naq yvxvat gb qevax ybgf bs jngre.)
I haven’t done the responsible thing and plotted these (or, indeed, done anything else besides take whatever correlation coefficient my software has seen fit to provide me with), so take with a grain of salt.
I believe editing polls resets them, so there’s no reason to do it if it’s just an aesthetically unpleasant mistake that doesn’t hurt the accuracy of the results.
Absolutely. We’re bad at reasoning about anything that we can’t easily imagine. Probably, for many people, intuition for “torture vs. dust specks” imagines a guy with a broken arm on one side, and a hundred people saying ‘ow’ on the other.
The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn’t take the number of people saved by an intervention into account; we just picture the typical effect on a single person.
What, I wonder, are the consequences of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don’t know how bad being in prison is, but it probably becomes much worse than I imagine if you’re there for 50 years, and we don’t think about that at all when arguing (or voting) about prison sentences.
That wasn’t obvious to me. It’s certainly false that “people who use the strategy of always paying have the same odds of losing $1000 as people who use the strategy of never paying”. This means that the oracle’s prediction takes its own effect into account. When asking about my future, the oracle doesn’t ask “Will Kindly give me $1000 or die in the next week?” but “After hearing a prophecy about it, will Kindly give me $1000 or die in the next week?”
Hearing the prediction certainly changes the odds that the first clause will come true; it’s not obvious to me (and may not be obvious to the oracle, either) that it doesn’t change the odds of the second clause.
It’s true that if I precommit to the strategy of not giving money in this specific case, then as long as many other people do not so precommit, I’m probably safe. But if nobody gives the oracle money, the oracle probably just switches to a different strategy that some people are vulnerable to. There is certainly some prophecy-driven exploit that the oracle can use that will succeed against me; it’s just a question of whether that strategy is sufficiently general that an oracle will use it on people. Unless an oracle is out to get me in particular.
You’re saying that it’s common knowledge that the oracle is, in fact, predicting the future; is this part of the thought experiment?
If so, there’s another issue. Presumably I wouldn’t be giving the oracle $1000 if the oracle hadn’t approached me first; it’s only a true prediction of the future because it was made. In a world where actual predictions of the future are common, there should be laws against this, similar to laws against blackmail (even though it’s not blackmail).
(I obviously hand over the $1000 first, before trying to appeal to the law.)
Given that I remember spending a year of AP statistics only doing calculations with things we assumed to be normally distributed, it’s not an unreasonable objection to at least some forms of teaching statistics.
Hopefully people with statistics degrees move beyond that stage, though.
There are varieties of strawberries that are not sour at all, so I suppose it’s possible that you simply have limited experience with strawberries. (Well, you probably must, since you don’t like them, but maybe that’s the reason you don’t think they’re sour, as opposed to some fundamental difference in how you taste things.)
I actually don’t like the taste of purely-sweet strawberries; the slightly-sour ones are better. A very unripe strawberry would taste very sour, but not at all sweet, and its flesh would also be very hard.
Do you have access to the memory wiping mechanism prior to getting your memory wiped tomorrow?
If so, wipe your memory, leaving yourself a note: “Think of the most unlikely place where you can hide a message, and leave this envelope there.” The envelope contains the information you want to pass on.
Then, before your memory is wiped tomorrow, leave yourself a note: “Think of the most unlikely place where you can hide a message, and open the envelope hidden there.”
Hopefully, your two memory-wiped selves should be sufficiently similar that the unlikely places they think of will coincide. At the same time, the fact that there is an envelope in the unlikely place you just thought of should be evidence that it came from you.
Wouldn’t you forget the password once your memories are wiped?
In an alternate universe, Peter and Sarah could have had the following conversation instead:
P: I don’t know the numbers.
S: I knew you didn’t know the numbers.
P: I knew that you knew that I didn’t know the numbers.
S: I still don’t know the numbers.
P: Now I know the numbers.
S: Now I also know the numbers.
But I’m worried that my version of the puzzle can no longer be solved without brute force.
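For what it’s worth, here is roughly the brute-force search I have in mind, as a Python sketch. I’m assuming the classic setup (P knows the product, S knows the sum, 2 ≤ a ≤ b, a + b ≤ 100); the filters are one reasonable formalization of the six statements, and both the range and the formalization would need to be adjusted to match the actual puzzle.

```python
from collections import Counter

# Candidate pairs; the range constraint is my assumed version of the puzzle.
pairs = [(a, b) for a in range(2, 100) for b in range(a, 100) if a + b <= 100]

def prod_counts(ps): return Counter(a * b for a, b in ps)
def sum_counts(ps):  return Counter(a + b for a, b in ps)

S0 = pairs
p0 = prod_counts(S0)

# P: "I don't know the numbers" -- his product is ambiguous.
S1 = [(a, b) for a, b in S0 if p0[a * b] > 1]

# S: "I knew you didn't know" -- every pair with her sum has an ambiguous product.
good_sums = {s for s in sum_counts(S0)
             if all(p0[x * y] > 1 for x, y in S0 if x + y == s)}
S2 = [(a, b) for a, b in S1 if a + b in good_sums]

# P: "I knew that you knew that I didn't know" -- every pair with his
# product has a sum with the property above.
S3 = [(a, b) for a, b in S2
      if all(x + y in good_sums for x, y in S0 if x * y == a * b)]

# S: "I still don't know" -- her sum is still ambiguous among survivors.
s3 = sum_counts(S3)
S4 = [(a, b) for a, b in S3 if s3[a + b] > 1]

# P: "Now I know" -- his product is now unique among survivors.
p4 = prod_counts(S4)
S5 = [(a, b) for a, b in S4 if p4[a * b] == 1]

# S: "Now I also know" -- her sum is now unique among survivors.
s5 = sum_counts(S5)
print([(a, b) for a, b in S5 if s5[a + b] == 1])
```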
I believe I have it. rot13:
Sbyq naq hasbyq gur cncre ubevmbagnyyl, gura qb gur fnzr iregvpnyyl, gb znex gur zvqcbvag bs rnpu fvqr. Arkg, sbyq naq hasbyq gb znex sbhe yvarf: vs gur pbearef bs n cncre ner N, O, P, Q va beqre nebhaq gur crevzrgre, gura gur yvarf tb sebz N gb gur zvqcbvag bs O naq P, sebz O gb gur zvqcbvag bs P naq Q, sebz P gb gur zvqcbvag bs N naq Q, naq sebz Q gb gur zvqcbvag bs N naq O.
Gurfr cnegvgvba gur erpgnatyr vagb avar cvrprf: sbhe gevnatyrf, sbhe gencrmbvqf, naq bar cnenyyrybtenz. Yrg gur cnenyyrybtenz or bar cneg, naq tebhc rnpu gencrmbvq jvgu vgf bja nqwnprag gevnatyr gb znxr gur sbhe bgure cnegf.
Obahf: vs jr phg bhg nyy avar cvrprf, n gencrmbvq naq n gevnatyr pna or chg onpx gbtrgure va gur rknpg funcr bs gur cnenyyrybtenz.
Desensitization training is great if it (a) works and (b) is less bad than the problem it’s meant to solve.
(I’m now imagining Alice and Carol’s conversation: “So, alright, I’ll turn my music down this time, but there’s this great program I can point you to that teaches you to be okay with loud noise. It really works, I swear! Um, I think if you did that, we’d both be happier.”)
Treating thin-skinned people (in all senses of the word) as though they were already thick-skinned is not the same, I think. It fails criterion (a) horribly, and does not satisfy (b) by definition: it is the problem desensitization training ought to solve.
Only a single mile to the mile? I’ve seen maps in biology textbooks that were much larger than that.