The Humans Are Special trope here gives a lot of examples of this. Reputedly, it was a premise that John W. Campbell, editor of Astounding Science Fiction, was very fond of, which accounts for its prevalence.
and as such makes bullets an appropriate response to such acts, whereas they were not before.
Ah, I think I’ve misunderstood you—I thought you were talking about the initiating act (ie. that it was as appropriate to initiate shooting someone as to insult them), whereas you’re talking about the response to the act: that bullets are an appropriate response to bullets, therefore if interchangeable, they’re an appropriate response to speech too. However, I don’t think you can take the first part of that as given—many (including me) would disagree that bullets are an appropriate response to bullets, holding instead that they’re only an appropriate response in the specific case of averting an immediate threat (ie. shoot if it prevents killing, but oppose applying the death penalty once out of danger), and some pacifists may disagree even with violence to prevent other violence.
However, it seems that it’s the initiating act that’s the issue here: is causing offence any more justified than shooting someone? I think it could be argued that they are equivalent in kind, though of lesser intensity (ie. back to continua, not bright lines).
If they are interchangeable it follows that answering an argument with a bullet may be the efficient solution.
That’s clearly not the case. If they’re interchangeable, it merely means they’d be equally appropriate, but that doesn’t say anything about their absolute level of appropriateness. If neither is an appropriate response, that’s just as interchangeable as both being appropriate—and it’s clearly the more restrictive route being advocated here (ie. moving such speech into the bullet category, rather than moving the bullet category into the region of such speech).
The brits are feeling the pain of a real physical assault, under the skin.
So what distinguishes that from emotional pain? It’s all electrochemistry in the end, after all. Would things change if it were extreme emotional torment being inflicted by pictures of salmon, rather than pain receptors being stimulated? Eg. inducing a state equivalent to clinical depression, or the feeling of having been dumped by a loved one. I don’t see an inherent reason to treat these differently—there are occasions where I’d gladly have traded such feelings for a kick in the nuts, so from a utilitarian perspective they seem to be at least as bad.
The intensity in this case is obviously different (offence vs depression is a big gap), so it may be fine to say that one’s OK and the other not because it falls within a tolerable level, but that certainly moves away from the notion of a bright line towards a grey continuum.
A crucial difference is that we can change our minds about what offends us but we cannot choose not to respond to electrodes
This is a better argument (indeed it’s one brought up by the post). I’m not sure it’s entirely valid though, for the reasons Yvain gave there. We can’t entirely choose what hurts us without a much better control over our emotional state than I, at least, possess. If I were brought up in a society where this was the ultimate taboo, I don’t think I could simply choose not to be offended, any more than I could choose to be offended by such images now. You say “It is within my power to feel zero pain from anything you might say”, but I’ll tell you, it’s not within mine. That may be a failing, but it’s one shared by billions. Further, I’m not sure it would be justified to go around insulting random strangers on the grounds that they can choose to take no harm, which suggests to me that offending is certainly not morally neutral.
Personally, I think one answer we could give to why the situations are different is a more pragmatic one. Accept that causing offence is indeed a bad action, but that it’s justified collateral damage in support of a more important goal. Ie. free speech is important enough that we need to establish that even trying to prevent it will be met by an indiscriminate backlash doing the exact opposite. (Though there are also pragmatic grounds to oppose this, such as that it’s manipulable by rabble-rousers for political ends.)
Is that justified though? Suppose a subset of Brits goes about demanding restrictions on salmon-image production. Would that justify you going out of your way to promote the production of such images, making them more likely to be seen by the subset not making such demands?
But the argument here is going the other way—less permissive, not more. The equivalent analogy would be:
To hold that speech is interchangeable with violence is to hold that certain forms of speech are no more an appropriate answer than a bullet.
The issue at stake is why. Why is speech OK, but a punch not? Presumably because one causes physical pain and the other doesn’t. So, in Yvain’s salmon situation, when such speech does now cause pain, should we treat it the same as violence or differently? Why or why not? What then about other forms of mental torment, such as emotional pain, hurt feelings or offence? There are times I’ve had my feelings hurt by mere words that, frankly, I’d have gladly exchanged for a kicking, so mere intensity doesn’t seem to be the relevant criterion. So what is, and why is it justified?
To just repeat “violence is different from speech” is to duck the issue, because you haven’t answered this why question, which was the whole point of bringing it up.
Newcomb’s scenario has the added wrinkle that event B also causes event A
I don’t see how. Omega doesn’t make the prediction because you made the action—he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn’t achieve perfect prediction, but I might be able to achieve, say, 90% accuracy by studying what people say they will do on blogs about Newcomb’s paradox, and observing what such people actually do (so long as my decision criteria weren’t known to the person I was testing).
Am I violating causality by doing this? Clearly not—my prediction is caused by the blog post and my observations, not by the action. The same thing that causes you to say you’d decide one way is also what causes you to act that way. As I get better and better, nothing changes, nor do I see why anything would change if I am able to simulate you perfectly, achieving 100% accuracy (some degree of determinism is assumed there, but then it’s already in the original thought experiment if we assume literally 100% accuracy).
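To make that concrete, here is a small Monte Carlo sketch in Python (entirely my own toy model, with made-up noise levels): a hidden disposition causes both what someone writes and what they later do, and a predictor that reads only the writing still hits roughly 90%, without the action causing the prediction.

    import random

    def noisy_copy(value, noise=0.05):
        # Return value, flipped with probability `noise` (assumed noise level).
        other = "two-box" if value == "one-box" else "one-box"
        return other if random.random() < noise else value

    def trial():
        # The hidden disposition is the common cause of both events.
        disposition = random.choice(["one-box", "two-box"])
        stated = noisy_copy(disposition)   # what they say on a blog
        action = noisy_copy(disposition)   # what they actually do later
        prediction = stated                # the "Omega" stand-in reads only the post
        return prediction == action

    n = 100_000
    print(sum(trial() for _ in range(n)) / n)   # ~0.90: good prediction, no backwards causation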
Assuming I’m understanding it correctly, the same would be true for a manipulationist definition. If we can manipulate your mental state, we’d change both the prediction (assuming Omega factors in this manipulation) and the decision, thus your mental state is a cause of both. However, if we could manipulate your action without changing the underlying state in any way that would affect Omega’s prediction, our manipulation would not change the prediction. In practice, this may be impossible (it requires Omega not to factor in our manipulation, which is contradicted by assuming he is a perfect predictor), but in principle it seems valid.
I don’t see why Newcomb’s paradox breaks causality—it seems more accurate to say that both events are caused by an earlier common cause: your predisposition to choose a particular way. Both Omega’s prediction and your action are caused by this predisposition, meaning Omega’s prediction is merely correlated with, not a cause of, your choice.
It’s not actually putting it forth as a conclusion though—it’s just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.
Thinking this through a bit more, you’re right—this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)
The doomsday argument makes the assumptions that:
We are randomly selected from all the observers who will ever exist.
The number of observers increases exponentially, such that any particular generation contains around 2⁄3 of all those who have ever lived (see the quick check after this list).
They are wiped out by a catastrophic event, rather than slowly dwindling or declining in some other way.
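As a quick check on what the 2⁄3 figure implies (my own arithmetic, not part of the original argument): it corresponds to each generation being roughly twice the size of all previous generations combined, ie. about a threefold growth factor per generation.

    # Assumed growth factor of 3 per generation: with generation sizes 3^0, 3^1, ..., 3^n,
    # the latest generation's share of everyone who has ever lived tends to 2/3.
    for n in (5, 10, 20):
        sizes = [3**k for k in range(n + 1)]
        print(n, sizes[-1] / sum(sizes))   # -> 0.667...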
(Now those assumptions are a bit dubious—things change if, for instance, we develop life-extension tech or otherwise increase the rate of growth, so that a higher than 2⁄3 proportion will live in future generations (eg. if the next generation is immortal, they’re guaranteed to be the last, and we’re much less likely to be, depending on how long people survive after that). Alternatively, growth could plateau or fluctuate around the carrying capacity of a planet if most potential observers never expand beyond it.) However, assuming they hold, I think the argument is valid.
I don’t think your situation alters the argument, it just changes some of the assumptions. At point D, it reverts back to the original doomsday scenario, and the odds switch back.
At D, the point you’re made aware, you know that you’re in the proportion of people who live. Only 50% of the people who ever existed in this scenario learn this, and 99% of them are blue-doors. Only looking at the people at this point is changing the selection criteria—you’re only picking from survivors, never from those who are now dead, despite the fact that they are real people we could have been. If those could be included in the selection (as they are if you give them the information and ask them before they would have died), the situation would remain as in A-C.
Actually creating the losing potential people makes this more explicit. If we’re randomly selecting from people who ever exist, we’ll only ever pick those who get created, who will be predominantly blue-doors if we run the experiment multiple times.
The various Newcomb situations have fairly direct analogues in everyday things like ultimatum situations or promise keeping. They alter these to reduce the number of variables, so the “certainty of trusting the other party” dial gets turned up to 100% for Omega, “expectation of repeat” down to 0, etc., in order to evaluate how to think of such problems when we cut out certain factors.
That said, I’m not actually sure what this question has to do with Newcomb’s paradox / counterfactual mugging, or what exactly is interesting about it. If it’s just asking “what information do you use to calculate the probability you plug into the EU calculation?” and Newcomb’s paradox is just being used as one particular example of it, I’d say that the obvious answer is “the probability you believe it is now.” After all, that’s going to already be informed by your past estimates, and any information you have available (such as that community of rationalists and their estimates). If the question is something specific to Newcomb’s paradox, I’m not getting it.
I think the problem is that people tend to conflate intention with effect, often with dire results (eg. “Banning drugs == reducing harm from drug use”). Thus when they see a mechanism in place that seems intended to penalise guessing, they assume that it’s the same as actually penalising guessing, and that anything that shows otherwise must be a mistake.
This may explain the “moral” objection of the one student: The test attempts to penalise guessing, so working against this intention is “cheating” by exploiting a flaw in the test. With the no-penalty multiple choice, there’s no such intent, so the assumption is that the benefits of guessing are already factored in.
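To put numbers on the gap between intention and effect, here is a rough expected-value sketch in Python. I’m assuming the familiar formula-scoring rule (1 mark for a right answer, minus 1/(k−1) for a wrong one on a k-option question); the exact scheme in the original test may differ. The penalty only neutralises blind guessing—any partial knowledge still makes guessing positive-EV, which is the effect that diverges from the intention.

    def guess_ev(k_choices, k_eliminated=0, penalty=None):
        # Expected marks from guessing one k-choice question, assuming formula scoring:
        # +1 for a correct answer, -penalty for a wrong one, with the classic penalty
        # of 1/(k_choices - 1) unless given explicitly.
        if penalty is None:
            penalty = 1 / (k_choices - 1)
        remaining = k_choices - k_eliminated    # options still in play
        p_right = 1 / remaining
        return p_right * 1 + (1 - p_right) * -penalty

    print(guess_ev(5))                   # blind guess, 5 options: 0.0 (penalty exactly cancels)
    print(guess_ev(5, k_eliminated=2))   # can rule out 2 options: +0.166... (guessing now pays)
    print(guess_ev(5, penalty=0))        # no-penalty test: +0.2 (always guess)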
This may not in fact be as silly as it sounds. Suppose that the test is unrelated to mathematics, and that there is no external motive for doing well. Eg. you are taking a test on Elizabethan history with no effect on your final grade, and want to calibrate yourself against the rest of the class. Here, this kind of test is flawed, because it isn’t measuring solely what it intends to, but will be biased towards those who spot this advantage. If you are interested solely in an accurate result, and you think the rest of the class won’t realise the advantage of guessing, taking the extra marks will just introduce noise, so it is not to your advantage to take them.
For a mathematics or logic based test, the extra benefit could be considered an extra, hidden question. For something else, it could be considered as immoral as taking advantage of any other unintentional effect (a printing error that adds a detectable artifact on the right answer for instance). Taking advantage of it means you are getting extra marks for something the test is not supposed to be counting. I don’t think I’d consider it immoral (certainly not enough to forgo the extra marks in something important), but Larry’s position may not be as inconsistent as you think.
I don’t see the purpose of such thought experiments as being to model reality (we’ve already got a perfectly good actual reality for that), but to simplify it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key aspect: it reduces the number of variables.
Reality is messy, and while we have to deal with it eventually, it’s useful to consider simpler, more comprehensible models, and then gradually introduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily set certain variables (such as predictive ability) to 100% or 0% simply to remove that aspect from consideration.
This does give a fundamentally unrealistic situation, but that’s really the point—they are our equivalent of spherical cows. Dealing with all those variables at once is too hard. In the situations where it isn’t, and we have “real” cases we can fruitfully consider, there’s no need for the thought experiment in the first place. Once we understand the simpler system, we have somewhere to start from when we begin adding the complexity back in.
Ah sorry, I’d thought this was in relation to the source-available situation. I think this may still be wrong, however. Consider the pair of programs below:
A:
    return Strategy.Defect;

B:
    if (random(0, 1.0) < 0.5) { return Strategy.Cooperate; }
    while (true) {
        if (simulate(other, self) == Strategy.Cooperate) { return Strategy.Cooperate; }
    }
simulate(A,A) terminates immediately. simulate(B,B) eventually terminates. simulate(B,A) will not terminate 50% of the time.
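For what it’s worth, here is a runnable translation of that pair into Python (my own harness; a shared step budget stands in for detecting genuine non-termination, which no real harness can do):

    import random

    COOPERATE, DEFECT = "C", "D"

    class StepLimit(Exception):
        """Stands in for non-termination: the shared step budget ran out."""

    def simulate(prog, opponent, budget):
        # budget is a single-element list acting as a mutable step counter shared by
        # all nested simulations, so endless re-simulation eventually exhausts it.
        budget[0] -= 1
        if budget[0] <= 0:
            raise StepLimit()
        return prog(opponent, budget)

    def A(opponent, budget):
        # A: always defect.
        return DEFECT

    def B(opponent, budget):
        # B: cooperate outright half the time; otherwise keep re-simulating the
        # opponent against B until that simulation comes back with cooperation.
        if random.random() < 0.5:
            return COOPERATE
        while True:
            if simulate(opponent, B, budget) == COOPERATE:
                return COOPERATE

    def hangs(prog, opponent, steps=1000):
        try:
            simulate(prog, opponent, [steps])
            return False
        except StepLimit:
            return True

    trials = 10_000
    print(hangs(A, A))                                       # False: terminates immediately
    print(sum(hangs(B, B) for _ in range(trials)) / trials)  # ~0.0: B vs B terminates
    print(sum(hangs(B, A) for _ in range(trials)) / trials)  # ~0.5: B vs A hangs about half the time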
I don’t think this holds. It’s clearly possible to construct code like:
    if (other_src == my_sourcecode) { return Strategy.COOPERATE; }
    if (simulate(other_src, my_sourcecode) == Strategy.COOPERATE) {
        return Strategy.COOPERATE;
    } else {
        return Strategy.DEFECT;
    }
B is similar, with slightly different logic in the second part (even a comment difference would suffice).
simulate(A,A) and simulate(B,B) clearly terminate, but simulate(A,B) still calls simulate(B,A) which calls simulate(A,B) …
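Sketching this in Python too (again my own translation; a tag stands in for each program’s source text, which is an assumption of the sketch): two programs that differ only textually pass their own self-check but recurse on each other without bound.

    COOPERATE, DEFECT = "C", "D"

    def simulate(prog, opponent):
        return prog(opponent)

    def make_program(tag):
        # Build a "cooperate if you are textually me, otherwise simulate you against me"
        # program. The tag stands in for the program's source text.
        def prog(other):
            if other.tag == prog.tag:                  # the source-equality check
                return COOPERATE
            if simulate(other, prog) == COOPERATE:     # otherwise simulate the other against me
                return COOPERATE
            return DEFECT
        prog.tag = tag
        return prog

    A = make_program("A")   # A and B differ only in their tag (ie. a trivial textual difference)
    B = make_program("B")

    print(simulate(A, A))       # "C": the equality check short-circuits the recursion
    try:
        print(simulate(A, B))   # A simulates B, which simulates A, which simulates B, ...
    except RecursionError:
        print("A vs B never terminates (mutual recursion)")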
Type 3 is just impossible.
No—it just means it can’t be perfect. A scanner that works 99.9999999% of the time is effectively indistinguishable from a 100% one for the purpose of the problem. One that is 100% accurate except in the presence of recursion is completely identical if we can’t construct such a scanner ourselves.
My prior is justified because a workable Omega of type 3 or 4 is harder for me to imagine than 1 or 2. Disagree? What would you do as a good Bayesian?
I would one-box, but I’d do so regardless of the method being used, unless I was confident I could bluff Omega (which would generally require Omega-level resources on my part). It’s just that I don’t think the exact implementation Omega uses (or even whether we know the method) actually matters.
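For what it’s worth, the expected-value arithmetic behind that (using the standard $1,000,000 and $1,000 payoffs; the only assumption doing any work is the predictor’s accuracy, not its mechanism):

    # Expected value of each choice against a predictor with symmetric accuracy p,
    # using the standard payoffs: $1,000,000 in the opaque box, $1,000 in the visible one.
    def one_box_ev(p):
        return p * 1_000_000

    def two_box_ev(p):
        return 1_000 + (1 - p) * 1_000_000

    for p in (0.5005, 0.9, 0.999999999):
        print(p, one_box_ev(p), two_box_ev(p))
    # One-boxing wins for any accuracy above ~0.5005, so a 99.9999999% scanner, a full
    # simulation, or plain good guesswork from blog posts all point the same way.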
Aren’t these rather ducking the point? The situations all seem to be assuming that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I’d say the relevant cases are:
3b) Omega uses a scanner, but we don’t know how the scanner works (or we’d be Omega-level entities ourselves).
5) Omega is using one of the above methods, or one we haven’t thought of, but we don’t know which. For all we know, he could be reading the answers we gave on this blog post and just be really good at guessing who will stick by what they say and who won’t. Unless we actually know the method with sufficient confidence to risk losing the million, we should one-box. ([Edit]: Originally wrote two-box here—I meant to say one-box)
It doesn’t seem at all sensible to me that the principle of “acting as one would formerly have liked to have precommitted to acting” should have unbounded utility.
Mostly agreed, though I’d quibble that it does have unbounded utility, but that I probably don’t have unbounded capability to enact the strategy. If I were capable of (cheaply) compelling my future self to murder in situations where it would be a general advantage to precommit, I would.
From my perspective now, I expect the reality to be the winning case 50% of the time because we are told this as part of the question: Omega is trustworthy and said it tossed a fair coin. In the possible futures where such an event could happen, 50% of the time my strategy would have paid off, and to a greater degree than it would lose the other 50% of the time. If Omega did not toss a fair coin, then the situation is different, and my choice would be too.
There is no value in being the kind of person who globally optimizes because of the expectation to win on average.
There is no value in being such a person if they happen to lose, but that’s like saying there’s no value in being a person who avoids bets that lose on average, by pointing only to the one-in-several-million case where they would have won the lottery. On average they’ll come out ahead, just not in the specific situation that was described.
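Putting rough numbers on that (I’m assuming the usual counterfactual-mugging stakes of paying $100 against a counterfactual $10,000; substitute whatever figures the original used):

    import random

    # Expected value per encounter of the policy "pay when asked", assuming the usual
    # stakes: lose $100 on one side of a fair coin, gain $10,000 on the other.
    PAY, PRIZE = 100, 10_000

    def encounter(always_pays):
        if random.random() < 0.5:
            return PRIZE if always_pays else 0   # Omega rewards the paying disposition
        return -PAY if always_pays else 0        # Omega asks for the payment

    n = 100_000
    print(sum(encounter(True) for _ in range(n)) / n)    # ~ +4950 per encounter
    print(sum(encounter(False) for _ in range(n)) / n)   # 0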
I would expect the result to be a more accurate estimation of the chance of success, combined with more sign-ups. 2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information.
I don’t think it’s true that everyone starts from “that won’t ever work”—we know some people think it might work, and we may be inclined to some wishful thinking or susceptibility to hype, inflating our estimate above the conclusion we’d reach if we invested the time to consider the issue in more depth. It’s also worth noting that we’re not comparing the general public to those who’ve seriously considered signing up, but the lesswrong population, who are probably a lot more exposed to the idea of cryonics.
I’d agree that it’s not what I would have predicted in advance (having no more expectation for the assigned likelihood to go up than down with more research), but it would be predictable for someone proceeding from the premise that the lesswrong community overestimates the likelihood of cryonics success compared to those who have done more research.