Miguel: it doesn’t seem to be a reference to something, but just a word for some experience an alien might have had that is incomprehensible to us humans, analogous to humour for the alien.
Psy-Kosh, my argument that Boltzmann brains go poof is a theoretical argument, not an anthropic one. Also, if we want to maximize our correct beliefs in the long run, we should commit to ignore the possibility that we are a brain with beliefs not causally affected by the decision to make that commitment (such as a brain that randomly pops into existence and goes poof). This also is not an anthropic argument.
With regard to longer-lived brains, if you expect there to be enough of them that even the ones with your experience are more common than minds in a real civilization with your experience, then you really should rationally expect to be one (although, as a practical matter, since there's nothing much a Boltzmann brain can reasonably expect to do, one might as well ignore the possibility*). If you expect there to be more long-lived Boltzmann brains than civilization-based minds in general, but not enough for ones with your experience to outnumber civilization-based minds with your experience, then your experience tips the balance in favour of believing you are not a Boltzmann brain after all.
I think your confusion is the result of your not being consistent about whether you accept self-indication, or maybe of being inconsistent about whether you think of the possible space with Boltzmann brains and no civilizations as additional to, or a substitute for, space with civilizations. Here's what different choices of those assumptions imply:
(I assume throughout that the probability of Boltzmann brains per volume in any space is always lower than the probability of minds in civilizations where they are allowed by physics)*
Assumptions → conclusion
self-indication, additional → our experience is not evidence** for or against the existence of the additional space (or evidence for its existence if we consider the possibility that we may be unusually order-observing entities in that space)
self-indication, substitute → our experience is evidence against the existence of the substitute space
instead of self-indication, assume the probability of being a given observer is inversely proportional to the number of observers in the possible universe containing that observer (this is the most popular alternative to self-indication) → our experience is evidence against the existence of the additional or substitute space (see the sketch after the footnotes)
*unless the Boltzmann brain, at further exponentially reduced probability, also obtained effective means of manipulating its environment...
** basically, define “allowed” to mean (density of minds with our experience in civ) >> (density of Boltzmann brains with our experience), and not allowed to mean the opposite (<<). One would expect the probability of a space with comparable densities to be low enough not to have a significant quantitative or qualitative effect on the conclusions.
*It seems rather unlikely that a space with our current apparent physical laws allows more long-lived B-brains than civilization-based brains. I am too tired to want to think about and write out what would follow if this is not true.
**I am using “evidence” here to mean shifts of probability relative to the outside view prior (conditional on the existence of any observers at all), which means that any experience is evidence for a larger universe (other things being equal) given self-indication, etc.
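To put rough numbers on the table above, here is a small Python sketch; the prior and all the observer counts are made up purely for illustration:

```python
# Made-up counts and prior, purely to illustrate the assumption -> conclusion table.
# "E-observer" = an observer with exactly our experience.

PRIOR_EXTRA = 0.5    # prior probability that the extra Boltzmann-only space exists
N_CIV_E     = 1e9    # E-observers living in ordinary civilizations
N_BB_E      = 1.0    # Boltzmann E-observers in the extra space (rare, per the density assumption)
N_CIV_TOTAL = 1e12   # all civilization observers
N_BB_TOTAL  = 1e40   # all Boltzmann brains in the extra space, chaotic ones included

def p_extra(weight_if_extra, weight_if_not, prior=PRIOR_EXTRA):
    """P(extra space exists | I am an E-observer), given the weight each hypothesis
    assigns to being an E-observer."""
    a = prior * weight_if_extra
    b = (1 - prior) * weight_if_not
    return a / (a + b)

# Self-indication: weight each possible world by its number of E-observers.
sia_additional = p_extra(N_CIV_E + N_BB_E, N_CIV_E)   # ~prior: not evidence either way
sia_substitute = p_extra(N_BB_E, N_CIV_E)             # ~1e-9: strong evidence against

# Alternative: probability of being a given observer is inversely proportional to the
# total number of observers in that world.
alt_additional = p_extra((N_CIV_E + N_BB_E) / (N_CIV_TOTAL + N_BB_TOTAL),
                         N_CIV_E / N_CIV_TOTAL)       # ~1e-28: evidence against
alt_substitute = p_extra(N_BB_E / N_BB_TOTAL,
                         N_CIV_E / N_CIV_TOTAL)       # ~1e-37: evidence against

print(sia_additional, sia_substitute, alt_additional, alt_substitute)
```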
Nick, do you use the normal definition of a Boltzmann brain?
It’s supposed to be a mind which comes into existence by sheer random chance. Additional complexity—such as would be required for some support structure (e.g. an actual brain), or additional thinking without a support structure—comes with an exponential probability penalty. As such, a Boltzmann brain would normally be very short lived.
In principle, though, there could be so much space uninhabitable for regular civilizations that even long-lived Boltzmann brains which coincidentally have experiences similar to minds in civilizations outnumber minds in civilizations.
It’s not clear whether you are worrying about whether you already are a Boltzmann brain, or if you think you are not one but think that if a Boltzmann brain took on your personality it would be ‘you’. If the former, I can only suggest that nothing you do as a Boltzmann brain is likely to have much effect on what happens to you, or on anything else. If the latter, I think you should upgrade your notion of personal identity. While the notion that personality is the essence of identity is a step above the notion that physical continuity is the essence of identity, by granting the notion that there is an essence of identity at all it reifies the concept in a way it doesn’t deserve, a sort of pseudosoul for people who don’t think they believe in souls.
Ultimately what you choose to think of as your ‘self’ is up to you, but personally I find it a bit pointless to be concerned about things that have no causal connection with me whatsoever as if they were me, no matter how closely they may coincidentally happen to resemble me.
Let’s suppose, purely for the sake of argument of course, that the scientists are superrational.
The first scientist chose the most probable theory given the 10 experiments. If the theories' predictions were 100% certain, it would still be the most probable after the 10 further successful experiments. So, since the second scientist chose a different theory, the predictions must have been uncertain, and the other theory must have assigned an even higher probability to these outcomes.
In reality people are bad at assessing priors (hindsight bias), leading to overfitting. But these scientists are assumed to have assessed the priors correctly, and given this assumption you should believe the second explanation.
Of course, given more realistic scientists, overfitting may be likely.
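Here is a toy illustration of that point, with purely hypothetical priors and likelihoods:

```python
# Toy numbers, purely hypothetical, to illustrate the point about the two scientists.
prior_A, prior_B = 0.68, 0.32   # theory A starts out more plausible a priori
p_A, p_B = 0.95, 0.99           # probability each theory assigned to each observed result

def prob_A_is_right(n):
    """Posterior probability of theory A after n successfully 'predicted' experiments."""
    post_A = prior_A * p_A ** n
    post_B = prior_B * p_B ** n
    return post_A / (post_A + post_B)

print(prob_A_is_right(10))   # ~0.58: after 10 experiments A is (correctly) preferred
print(prob_A_is_right(20))   # ~0.48: after 20 experiments B has overtaken it
# If both theories had predicted the results with certainty (p_A = p_B = 1),
# the ranking could never change; that B won after 20 shows the predictions
# were uncertain and B assigned the results a higher probability.
```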
It may be that most minds with your thoughts do in fact disappear after an instant. Of course if that is the case there will be vastly more with chaotic or jumbled thoughts. But the fact that we observe order is no evidence against the existence of additional minds observing chaos, unless you don’t accept self-indication.
So, your experience of order is not good evidence for your belief that more of you are non-Boltzmann than Boltzmann. But as I said, in the long term your expected accuracy will rise if you commit to not believing you are a Boltzmann brain, even if you believe that you most likely are one now.
A somewhat analogous situation may arise in AGI—AI makers can rule out certain things (e.g. the AI is simulated in a way that the simulated makers are non-conscious) that the AI cannot. Thus by having the AI rule such things out a priori, the makers can improve the AI’s beliefs in ways that the AI itself, however superintelligent, rationally could not.
Nick and Psy-Kosh: here’s a thought on Boltzmann brains.
Let’s suppose the universe has vast spaces uninhabited by anything except Boltzmann brains which briefly form and then disappear, and that any given state of mind has vastly more instantiations in the Boltzmann-brain only spaces than in regular civilizations such as ours.
Does it then follow that one should believe one is a Boltzmann brain? In the short run perhaps, but in the long run you’d be more accurate if you simply committed to not believing it. After all, if you are a Boltzmann brain, that commitment will cease to be relevant soon enough as you disintegrate, but if you are not, the commitment will guide you well for a potentially long time.
And by elementary I mean the 8 different ways W, F, and the comet hit/non hit can turn out.
Err… I actually did the math a silly way, by writing out a table of elementary outcomes… not that that’s silly itself, but it’s silly to get input from the table to apply to Bayes’ theorem instead of just reading off the answer. Not that it’s incorrect of course.
Richard, obviously if F does not imply S due to other dangers, then one must use method 2:
P(W|F,S) = P(F|W,S)P(W|S)/P(F|S)
Let’s do the math.
A comet is going to annihilate us with a probability of (1-x) (outside view) if the LHC would not destroy the Earth, but if the LHC would destroy the Earth, the probability is (1-y) (I put this change in so that it would actually have an effect on the final probability)
The LHC has an outside-view probability of failure of z, whether or not W is true
The universe has a prior probability w of being such that the LHC, if it does not fail, will annihilate us. Then:
P(F|W,S) = 1
P(F|S) = (ywz+x(1-w)z)/(ywz+x(1-w)z+x(1-w)(1-z))
P(W|S) = (ywz)/(ywz+x(1-w)z+x(1-w)(1-z))
so, P(W|F,S) = ywz/(ywz+x(1-w)z) = yw/(yw+x(1-w))
I leave it as an exercise to the reader to show that there is no change in P(W|F,S) if the chance of the comet hitting depends on whether or not the LHC fails (only the relative probability of outcomes given failure matters).
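For anyone who prefers to check this numerically rather than algebraically, here is a quick enumeration of the eight elementary outcomes (W or not, F or not, comet hit or miss), with made-up values of w, x, y and z:

```python
from itertools import product

# Made-up values, just to check the algebra; the variables follow the definitions above.
w = 0.3   # prior probability that the LHC would destroy the Earth (W)
z = 0.4   # outside-view probability that the LHC fails (F)
x = 0.6   # probability the comet misses us, given not-W (annihilation probability 1-x)
y = 0.2   # probability the comet misses us, given W (annihilation probability 1-y)

p_WFS = 0.0   # P(W and F and S)
p_FS = 0.0    # P(F and S)

for W, F, comet_misses in product([True, False], repeat=3):
    p_miss = y if W else x
    p = (w if W else 1 - w) * (z if F else 1 - z) * (p_miss if comet_misses else 1 - p_miss)
    survive = comet_misses and (F or not W)   # S: comet misses AND (LHC fails OR not W)
    if survive and F:
        p_FS += p
        if W:
            p_WFS += p

print(p_WFS / p_FS)                    # P(W|F,S) by brute-force enumeration
print(y * w / (y * w + x * (1 - w)))   # the closed form above; the two should match
# Note that z cancels: P(W|F,S) does not depend on the outside-view failure probability.
```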
Really though Richard, you should not have assumed in the first place that I was not capable of doing the math. In the future, don’t expect me to bother with a demonstration.
Allan: you’re right, I should have thought that through more carefully. It doesn’t make your interpretation correct though...
I have really already spent much more time here today than I should have...
You have another inconsistency as well. As you should have noticed in the “How many” thread, the assumptions that lead you to believe that failures of the LHC are evidence that it would destroy Earth are the same ones that lead you to believe that annihilational threats are irrelevant (after all, if P(W|S) = P(W), then Bayes’ rule leads to P(S|W) = P(S)).
Thus, given that you believe that failures are evidence of the LHC being dangerous, you shouldn’t care. Unless you’ve changed to a new set of incorrect assumptions, of course.
I might add, for the benefit of others, that self-sampling forbids playing favourites, within a single universe, over which observer you believe yourself to be (beyond what is actually justified by the available evidence), and self-indication forbids the same across possible universes.
Nominull: It's a bad habit of some people to say that reality depends on, or is relative to, observers in some way. But even though observers are not a special part of reality, we are observers, and the data about the universe that we have is the experience of observers, not an outside view of the universe. So long as each universe has no more than one observer with your experience, you can take your experience as objective evidence that you live in a universe with one such observer instead of zero (and with this evidence to work with, you don't need to talk about observers). But it's difficult to avoid talking about observers when a universe might have multiple observers with the same subjective experience.
Why do you reject self-indication? As far as I can recall the only argument Bostrom gave against it was that he found it unintuitive that universes with many observers should be more likely, with absolutely no justification as to why one would expect that intuition to reflect reality. That’s a very poor argument considering the severe problems you get without it.
I suppose you might be worried about universes with many unmangled worlds being made more likely, but I don’t see what makes that bullet so hard to bite either.
Whoops, I didn’t notice that you did specifically claim that P(W|S)=P(W).
Do you arrive at this incorrect claim via Bostrom’s approach, or another one?
Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We’re not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)
You mean you use method 2. Except you don't, or you would come to the same conclusion that I do. Are you claiming that P(W|S) = P(W)? OK, I suspect you may be applying Nick Bostrom's version of observer selection: hold the probability of each possible version of the universe fixed independent of the number of observers, then divide that probability equally amongst the observers. Well, that approach is BS whenever the number of observers differs between possible universes, since if you imagine aliens existing in the universe but causally separate from us, the probabilities would depend on whether or not they exist.
Also, does it really make sense to you, intuitively, that you should get a different result given two actually existing universes compared to two possible universes?
This could only reflect uncertainty that anthropic reasoning was valid. If you were certain anthropic reasoning were valid (I’m sure not!) then you would make no such update. In practice, after surviving a few hundred rounds of quantum suicide, would further survivals really seem to call for alternative explanations?
As I pointed out earlier, if there was even a tiny chance of the machine being broken in such a way as to appear to be working, that probability would dominate sooner or later.
One last thing: if you really believe that annihilational events are irrelevant, please do not produce any GAIs until you come to your senses.
Eliezer, I used “=>” (intending logical implication), not “>=”.
I would suggest you read my post above on this second page, and see if that changes your mind.
Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.
Sorry Richard, well of course they aren't necessarily independent; I wasn't quite sure what you were criticising. As I pointed out already, a new physical law might in principle both cause the LHC to fail and cause it to destroy the world if it did not fail. But that is not what people were arguing, and assuming such a relation is not the case, the failure of the LHC provides no information about the chance that a success would destroy the world. (And a small relation would lead to a small amount of information, etc.)
While I’m happy to have had the confidence of Richard, I thought my last comment could use a little improvement.
What we want to know is P(W|F,S)
As I pointed out, F => S, so P(W|F,S) = P(W|F)
We can legitimately calculate P(W|F,S) in at least two ways:
1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) ← the easy way
2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) ← harder, but still works
There are also ways you can get it wrong, such as:
3. P(W|F,S) != P(F|W,S)P(W)/P(F) ← what I said other people were doing last post
4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) ← what other people are probably actually doing
In my first comment in this thread, I said it was a simple application of Bayes’ rule (method 1) but then said that Eliezer’s failure was not to apply the anthropic principle enough (ie I told him to update from method 4 to method 2). Sorry if anyone was confused by that or by subsequent posts where I did not make that clear.
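For the sceptical, here is a small numerical sketch (made-up values for P(W) and P(F), with S simply meaning “F or not W”, no other dangers) showing that methods 1 and 2 agree while 3 and 4 do not:

```python
# Small numerical check of methods 1-4.  Made-up numbers; S = (F or not W).
w = 0.001   # P(W): prior probability that the LHC would destroy the Earth
z = 0.2     # P(F): outside-view probability that the LHC fails, independent of W

# Joint probabilities of the elementary outcomes:
p_W_F     = w * z               # W true, LHC fails        -> survive
p_W_noF   = w * (1 - z)         # W true, LHC operates     -> destroyed
p_noW_F   = (1 - w) * z         # W false, LHC fails       -> survive
p_noW_noF = (1 - w) * (1 - z)   # W false, LHC operates    -> survive
assert abs(p_W_F + p_W_noF + p_noW_F + p_noW_noF - 1.0) < 1e-12

p_F  = p_W_F + p_noW_F
p_S  = p_W_F + p_noW_F + p_noW_noF
p_FS = p_F                      # F implies S
p_WS = p_W_F                    # W together with S implies F

p_F_given_W  = z                # failure is independent of W by assumption
p_F_given_S  = p_FS / p_S
p_F_given_WS = p_W_F / p_WS     # = 1
p_W_given_S  = p_W_F / p_S

method_1 = p_F_given_W  * w           / p_F          # correct: equals w
method_2 = p_F_given_WS * p_W_given_S / p_F_given_S  # correct: equals w
method_3 = p_F_given_WS * w           / p_F          # wrong: equals w/z, much too big
method_4 = p_F_given_WS * w           / p_F_given_S  # wrong: also inflated

print(method_1, method_2, method_3, method_4)
# -> roughly 0.001, 0.001, 0.005, 0.005
```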
Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.
Eliezer: F ⇒ S -!-> P(X|F) = P(X|F,S)
All right, give me an example.
And yeah, anthropic reasoning is all about conditioning on survival, but you have to do it consistently. Conditioning on survival in some terms but not others = fail.
Richard: your first criticism has too small an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification. The second is wrong: the probabilities without conditioning on S are “God’s eye view” probabilities, and really are independent of selection effects.
I’m going to try another explanation that I hope isn’t too redundant with Benja’s.
Consider the events
W = the LHC would destroy Earth
F = the LHC fails to operate
S = we survive (= F OR not W)
We want to know P(W|F) or P(W|F,S), so let’s apply Bayes.
The first thing to note is that since F ⇒ S, the conjunction of F and S is just F, so P(W|F) = P(W|F,S) and we can just work out P(W|F)
Bayes:
P(W|F) = P(F|W)P(W)/P(F)
Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would destroy Earth, P(F|W) = P(F), and thus P(W|F) = P(W).
(I suppose one could argue that a failure could be caused by a new law of physics that would also lead the LHC to destroy the Earth, but that isn’t what is being argued here—at least so I think; my apologies to anyone who is arguing that)
In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.
Benja, I also think of it that way intuitively. I would like to add though that it doesn’t really matter whether you have branches or just a single nondeterministic world—Bayes’ theorem applies the same either way.
Robinson, I could try to nitpick all the things wrong with your post, but it’s probably better to try to guess at what is leading your intuition (and the intuition of others) astray.
Here’s what I think you think:
Either the laws of physics are such that the LHC would destroy the world, or not.
Given our survival, it is guaranteed that the LHC failed if the universe is such that it would destroy the world, whereas if the universe is not like that, failure of the LHC is not any more likely than one would expect normally.
Thus, failure of the LHC is evidence for the laws of physics being such that the LHC would destroy the world.
This line of argument fails because when you condition on survival, you need to take into account the different probabilities of survival given the different possibilities for the laws of the universe. As an analogy, imagine a quantum suicide apparatus. The apparatus has a 1⁄2 chance of killing you each time you run it and you run it 1000 times. But, while the apparatus is very reliable, it has a one in a googol chance of being broken in such a way that every time it will be guaranteed not to kill you, but appear to have operated successfully and by chance not killed you. Then, if you survive running it 1000 times, the chance of it being broken in that way is over a googol squared times more likely than the chance of it having operated successfully.
Here’s what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking “well, of course I would experience survival”.
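For concreteness, the arithmetic behind the googol-squared claim, using the same figures as the example:

```python
# Same figures as in the example: a one-in-a-googol chance the apparatus is broken
# in the "never kills, looks like it worked" way, and 1000 runs at 1/2 survival each.
p_broken = 1e-100
p_survive_1000_if_working = 0.5 ** 1000   # about 1e-301

odds_broken_vs_working = p_broken / ((1 - p_broken) * p_survive_1000_if_working)
print(odds_broken_vs_working)   # ~1e201, comfortably more than a googol squared (1e200)
```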
Finally a note about the anthropic principle: it is simply the application of normal probability theory to situations where there are observer selection effects, not a special separate rule.
Just for the sake of devil’s advocacy:
4) You want to attribute good things to your ethics, and thus find a way to interpret events that enables you to do so.