I wasn’t sneaky about it.
I don’t think I got visibly hurt or angry. In fact, when I did it, I was feeling more tempted than angry. I was in the middle of a conversation with another guy, and her rear appeared nearby, and I couldn’t resist.
It made me seem like a jerk, which is bad, but not necessarily low status. Acting without apparent fear of the consequences, even stupidly, is often respected as long as you get away with it.
Another factor is that this was a ‘high status’ woman. I’m not sure, but she might be related to a celebrity. (I didn’t know that at the time.) Hence, any story linking me and her may be ‘bad publicity’ for me, but there is the old saying that ‘there’s no such thing as bad publicity’.
It was a single swat to the buttocks, done in full sight of everyone. There was other ass-spanking going on, between people who knew each other, done as a joke, so in context it was not so unusual. I would not have done it outside of that context, nor would I have done it if my inhibitions had not been lowered by alcohol; nor would I do it again even if they were.
Yes, she deserved it!
It was a mistake. Why? It exposed me to more risk than was worthwhile, and while I might have hoped that (aside from simple punishment) it would teach her the lesson that she ought to follow the Golden Rule, or at least should not pull the same tricks on guys, in retrospect it was unlikely to do so.
Other people (that I have talked to) seem to be divided on whether it was a good thing to do or not.
Women seem to have a strong urge to check out what shoes a man has on, and judge their quality. Even they can’t explain it. Perhaps at some unconscious level, they are guarding against men who ‘cheat’ by wearing high heels.
I can confirm that this does happen at least sometimes (USA). I was at a bar, and I approached a woman who is probably considered attractive by many (skinny, bottle blonde) and started talking to her. She soon asked me to buy her a drink. Not being well versed in such matters, I agreed, and asked her what she wanted. She named an expensive wine, which I agreed to get her a glass of. She largely ignored me thereafter, and didn’t even bother taking the drink!
(I did obtain some measure of revenge later that night by spanking her rear end hard, though I do not advise doing such things. She was not amused and her brother threatened me, though as I had apologized, that was the end of it. She did tell some other lies, so I don’t know if she is neurotypical; my impression was that she was well below average in morality, being a spoiled brat.)
But Stuart_Armstrong’s description is asking us to condition on the camera showing ‘you’ surviving.
That condition imposes post-selection.
I guess it doesn’t matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.
Wrong—it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).
I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli trials becomes arbitrarily large. As p represents the ‘one-shot probability,’ this justifies interpreting the relative frequency in the infinite limit as the ‘one-shot probability.’
You have things backwards. The “relative frequency in the infinite limit” can be defined that way (sort of, as the infinite limit is not actually doable) and is then equal to the pre-defined probability p for each shot if they are independent trials. You can’t go the other way; we don’t have any infinite sequences to examine, so we can’t get p from them, we have to start out with it. It’s true that if we have a large but finite sequence, we can guess that p is “probably” close to our ratio of finite outcomes, but that’s just Bayesian updating given our prior distribution on likely values of p. Also, in the 1-shot case at hand, it is crucial that there is only the 1 shot.
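To make the direction of inference concrete, here is a minimal sketch (the uniform Beta(1,1) prior and the trial counts are illustrative assumptions of mine, not part of the argument above): a finite sequence only yields a posterior distribution over p, and a 1-shot sequence yields almost nothing.

```python
# Sketch: a finite run of Bernoulli trials gives a posterior over p,
# not a definition of p. Assumes a uniform Beta(1,1) prior (illustrative).
import random

def posterior_mean_for_p(p_true, n_trials, seed=0):
    random.seed(seed)
    heads = sum(random.random() < p_true for _ in range(n_trials))
    tails = n_trials - heads
    # Beta(1,1) prior + binomial likelihood => Beta(heads+1, tails+1) posterior;
    # its mean only approaches p_true as n_trials grows.
    return (heads + 1) / (heads + tails + 2)

for n in (1, 10, 100, 10_000):
    print(n, posterior_mean_for_p(0.3, n))
```

With n = 1 the posterior mean is 1⁄3 or 2⁄3 no matter what p actually is; only the many-shot limit pins p down.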
It is only possible to fairly “test” beliefs when a related objective probability is agreed upon
That’s wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities.
Would you really guess “red”, or do we agree?
Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I’d be a fool not to guess red.
But I wouldn’t recommend it as a policy for others, because the situation with more people arises only in the blue case. This is a case in which the number of observers depends on the unknown, so maximizing expected average utility (which is appropriate for decision theory for a given observer) is not the same as maximizing expected total utility (which is appropriate for a class of observers).
More tellingly, once I find out the result (and obviously the result becomes known when I get paid or punished), if it is red, I would not be surprised. (Could be either, 50% chance.)
Now that I’ve answered your question, it’s time for you to answer mine: What would you vote, given that the majority of votes determines what SB gets? If you really believe you are probably in a blue room, it seems to me that you should vote blue; and it seems obvious that would be irrational.
Then if you find out it was red, would you be surprised?
The way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99.
To fix that, we need to make 2 changes:
1) Let us give any reward/punishment to a third party we care about, e.g. SB.
2) The total reward/punishment she gets won’t depend on the number of people who make the decision. Instead, we will poll all of the survivors from all trials and pool the results (or we can pick 1 survivor at random, but let’s do it the first way).
The majority decides what guess to use, on the principle of one man, one vote. That is surely what we want from our theory—for the majority of observers to guess optimally.
Under these rules, if I know it’s the 1-shot case, I should guess red, since the chance is 50% and the payoff to SB is larger. Surely you see that SB would prefer us to guess red in this case.
OTOH if I know it’s the multi-shot case, the majority will probably be blue, so I should guess blue.
In practice, of course, it will be the multi-shot case. The universe (and even the population of Earth) is large; besides, I believe in the MWI of QM.
The practical significance of the distinction has nothing to do with casino-style gambling. It is more that 1) it shows that the MWI can give different predictions from a single-world theory, and 2) it disproves the SIA.
If that were the case, the camera might show the person being killed; indeed, that is 50% likely.
Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.
Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
No, it shouldn’t—that’s the point. Why would you think it should?
Note that I am already taking observer-counting into account—among observers that actually exist in each coin-outcome-scenario. Hence the fact that P(heads) approaches 1⁄3 in the many-shot case.
Adding that condition is post-selection.
Note that “If you (being asked before the killing) will survive, what color is your door likely to be?” is very different from “Given that you did already survive, …?”. A member of the population to which the first of these applies might not survive. This changes the result. It’s the difference between pre-selection and post-selection.
This subtly differs from Bostrom’s description, which says ‘When she awakes on Monday’, rather than ‘Monday or Tuesday.’
He makes clear though that she doesn’t know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK.
So on average …
‘On average’ gives you the many-shot case, by definition.
In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). They don’t both happen.
In the 2-shot case, the four possibilities are as I listed. Now there is both uncertainty in what really happens objectively (the four possible coin results), and then given the real situation, relevant uncertainty about which of the real person-wakeups is the one she’s experiencing (upon which her coin result can depend).
The ‘selection’ I have in mind is the selection, at the beginning of the scenario, of the person designated by ‘you’ and ‘your’ in the scenario’s description.
If ‘you’ were selected at the beginning, then you might not have survived.
There are always 2 coin flips, and the results are not known to SB. I can’t guess what you mean, but I think you need to reread Bostrom’s paper.
Under a frequentist interpretation
In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.
Reading Bostrom’s explanation of the SB problem, and interpreting ‘what should her credence be that the coin will fall heads?’ as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1⁄2 however many times Sleeping Beauty’s later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin’s probability of coming up heads is 1⁄2.
I am surprised you think so because you seem stuck in many-shot thinking, which gives 1⁄3.
Maybe you are asking the wrong question. The question is, given that she wakes up on Monday or Tuesday and doesn’t know which, what is her credence that the coin actually fell heads? Obviously in the many-shot case, she will be woken up twice as often during experiments where it fell tails, so in 2⁄3 of her wakeups the coin will be tails.
In the 1-shot case that is not true, either she wakes up once (heads) or twice (tails) with 50% chance of either.
Consider the 2-shot case. Then we have 4 possibilities:
coins , days , fraction of actual wakeups where it’s heads
HH , M M , 1
HT , M M T , 1⁄3
TH , M T M , 1⁄3
TT , M T M T , 0
Now P(heads) = (1 + 1⁄3 + 1⁄3 + 0) / 4 = 5⁄12 = 0.417
Obviously as the number of trials increases, P(heads) will approach 1⁄3.
This is assuming that she is the only observer and that the experiments are her whole life, BTW.
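For anyone who wants to check the arithmetic, here is a small enumeration of the same average for more trials (my own sketch; it just re-does the table above for k coin flips, weighting each equally likely coin sequence the same):

```python
# Sketch: average, over equally likely coin sequences, of the fraction of
# wakeups that occur in heads trials (1 wakeup per heads trial, 2 per tails).
from itertools import product
from fractions import Fraction

def p_heads(num_trials):
    total = Fraction(0)
    for coins in product('HT', repeat=num_trials):
        h = coins.count('H')
        t = coins.count('T')
        total += Fraction(h, h + 2 * t)
    return total / 2 ** num_trials

for k in (1, 2, 6, 14):
    print(k, float(p_heads(k)))
# 1 -> 0.5, 2 -> 0.4167 (= 5/12), ... -> drifts toward 1/3 as k grows
```

This reproduces the 5⁄12 above and the 1-shot value of 1⁄2; it only approaches 1⁄3 as the number of trials grows.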
A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?
Presumably you heard the announcement.
This is post-selection, because pre-selection would have been “Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?”
The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong’s top-level post...?) describes a situation defined to have multiple possible outcomes, but there’s only one outcome to the question ‘what is pi’s millionth bit?’
There’s only one outcome in the 1-shot case.
The fact that there are multiple “possible” outcomes is irrelevant—all that means is that, like in the math case, you don’t have knowledge of which outcome it is.
I think talking about ‘observers’ might be muddling the issue here.
That’s probably why you don’t understand the result; it is an anthropic selection effect. See my reply to Academician above.
We could talk instead about creatures that don’t understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it’s heads, we kill the lone bacterium, otherwise we put the trillion-bacteria dish into an autoclave and kill all of those bacteria. Given that the stained bacterium survives the process, it is far more likely that it was in the trillion-bacteria dish, so it is far more likely that the coin came up heads.
That is not an analogous experiment. Typical survivors are not pre-selected individuals; they are post-selected, from the pool of survivors only. The analogous experiment would be to choose one of the surviving bacteria after the killing and then stain it. To stain it before the killing risks it not being a survivor, and that can’t happen in the case of anthropic selection among survivors.
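To see the difference between the two staining procedures numerically, here is a rough simulation (my own sketch; I shrank the trillion-bacteria dish to 99 so it runs quickly, mirroring the 99 blue doors):

```python
# Rough sketch: pre-selection (stain first, then condition on the stained
# bacterium surviving) versus post-selection (stain a random survivor
# after the killing). Dish sizes 1 and 99 are illustrative stand-ins.
import random

LONE, BIG = 1, 99

def surviving_dish():
    # heads: kill the lone bacterium; tails: autoclave the big dish
    return 'big' if random.random() < 0.5 else 'lone'

def pre_selection(n):
    hits = []
    for _ in range(n):
        stained = 'lone' if random.random() < LONE / (LONE + BIG) else 'big'
        if stained == surviving_dish():      # keep only runs where it survived
            hits.append(stained == 'big')
    return sum(hits) / len(hits)

def post_selection(n):
    hits = []
    for _ in range(n):
        stained = surviving_dish()           # stain one of the survivors
        hits.append(stained == 'big')
    return sum(hits) / len(hits)

n = 100_000
print('pre-selection  P(big dish | stained survivor):', pre_selection(n))   # ~0.99
print('post-selection P(big dish | stained survivor):', post_selection(n))  # ~0.50
```

Staining before the killing recovers the 0.99-style answer; staining a survivor afterwards gives 50⁄50, which is the structure of the post-selected (anthropic) case.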
I don’t think of the pi digit process as equivalent.
That’s because you erroneously believe that your frequency interpretation works. The math problem has only one answer, which makes it a perfect analogy for the 1-shot case.
Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong.
Huh? I haven’t been using the SIA, I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = 1⁄2 for the 1-shot case in the long-time-after limit) and noting that the SIA is inconsistent with it. The result of the SIA is well known—in this case, 0.01; I don’t think anyone disputes that.
P(R|KS) = P(R|K)·P(S|RK)/P(S|K) = 0.01·(0.5)/(0.5) = 0.01
If you still think this is wrong, and you want to be prudent about the truth, try finding which term in the equation (1) is incorrect and which possible-observer count makes it so.
Dead men make no observations. The equation you gave is fine for before the killing (for guessing what color you will be if you survive), not for after (when the set of observers is no longer the same).
So, if you are after the killing, you can only be one of the living observers. This is an anthropic selection effect. If you want to simulate it using an outside ‘observer’ (who we will have to assume is not in the reference class; perhaps an unconscious computer), the equivalent would be interviewing the survivors.
The computer will interview all of the survivors. So in the 1-shot case, there is a 50% chance it asks the red door survivor, and a 50% chance it talks to the 99 blue door ones. They all get an interview because all survivors make observations and we want to make it an equivalent situation. So if you get interviewed, there is a 50% chance that you are the red door one, and a 50% chance you are one of the blue door ones.
Note that if the computer were to interview just one survivor at random in either case, then being interviewed would be strong evidence of being the red one, because if the 99 blue ones are the survivors you’d just have a 1 in 99 chance of being picked. P(red) > P(blue). This modified case shows the power of selection.
Of course, we can consider intermediate cases in which N of the blue survivors would be interviewed; then P(blue) approaches 50% as N approaches 99.
The analogous modified MWI case would be for it to interview both the red survivor and one of the blue ones; of course, each survivor has half the original measure. In this case, being interviewed would provide no evidence of being the red one, because now you’d have a 1% chance of being the red one and the same chance of being the blue interviewee. The MWI version (or equivalently, many runs of the experiment, which may be anywhere in the multiverse) negates the selection effect.
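The arithmetic behind those interview variants, starting from the post-killing 50⁄50 credence I am arguing for (that starting point is of course the disputed part, so treat it as an assumption of this sketch):

```python
# Sketch: P(blue | interviewed) when the lone red survivor is always
# interviewed and n of the 99 blue survivors are. The 50/50 prior is the
# post-killing credence argued for above (an assumption of this snippet).
def p_blue_given_interviewed(n_blue_interviewed, p_blue_prior=0.5):
    p_red_prior = 1 - p_blue_prior
    like_blue = n_blue_interviewed / 99    # chance a given blue survivor is picked
    like_red = 1.0                         # the red survivor is always picked
    return (p_blue_prior * like_blue) / (
        p_blue_prior * like_blue + p_red_prior * like_red)

for n in (1, 10, 50, 99):
    print(n, round(p_blue_given_interviewed(n), 3))
# 1 -> 0.01 (strong evidence of red), 99 -> 0.5 (no evidence either way)
```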
If you are having trouble following my explanations, maybe you’d prefer to see what Nick Bostrom has to say. This paper talks about the equivalent Sleeping Beauty problem. The main interesting part is near the end where he talks about his own take on it. He correctly deduces that the probability for the 1-shot case is 1⁄2, and for the many-shot case it approaches 1⁄3 (for the SB problem). I disagree with his ‘hybrid model’ but it is pretty easy to ignore that part for now.
Also of interest is this paper which correctly discusses the difference between single-world and MWI interpretations of QM in terms of anthropic selection effects.
BTW, whoever is knocking down my karma, knock it off. I don’t downvote things just because I disagree with them, only comments I judge to be of low quality. By chasing me off you are degrading the Less Wrong site as well as hiding below threshold the comments of those arguing with me, whom you presumably agree with. If you have something to say then say it; don’t downvote.
Mitchell, you are on to an important point: Observers must be well-defined.
Worlds are not well-defined, and there is no definite number of worlds (given standard physics).
You may be interested in my proposed Many Computations Interpretation, in which observers are identified not with so-called ‘worlds’ but with implementations of computations: http://arxiv.org/abs/0709.0544
See my blog for further discussion: http://onqm.blogspot.com/