Belief in the Implied Invisible
One generalized lesson not to learn from the Anti-Zombie Argument is, “Anything you can’t see doesn’t exist.”
It’s tempting to adopt that as a general rule. It would make the Anti-Zombie Argument much simpler, on future occasions, if we could take it as a premise. But unfortunately that’s just not Bayesian.
Suppose I transmit a photon out toward infinity, not aimed at any stars, or any galaxies, pointing it toward one of the great voids between superclusters. Based on standard physics, in other words, I don’t expect this photon to intercept anything on its way out. The photon is moving at light speed, so I can’t chase after it and capture it again.
If the expansion of the universe is accelerating, as current cosmology holds, there will come a future point where I don’t expect to be able to interact with the photon even in principle—a future time beyond which I don’t expect the photon’s future light cone to intercept my world-line. Even if an alien species captured the photon and rushed back to tell us, they couldn’t travel fast enough to make up for the accelerating expansion of the universe.
Should I believe that, in the moment where I can no longer interact with it even in principle, the photon disappears?
No.
It would violate Conservation of Energy. And the second law of thermodynamics. And just about every other law of physics. And probably the Three Laws of Robotics. It would imply the photon knows I care about it and knows exactly when to disappear.
It’s a silly idea.
But if you can believe in the continued existence of photons that have become experimentally undetectable to you, why doesn’t this imply a general license to believe in the invisible?
(If you want to think about this question on your own, do so before the jump...)
Though I failed to Google a source, I remember reading that when it was first proposed that the Milky Way was our galaxy—that the hazy river of light in the night sky was made up of millions (or even billions) of stars—Occam’s Razor was invoked against the new hypothesis. Because, you see, the hypothesis vastly multiplied the number of “entities” in the believed universe. Or maybe it was the suggestion that “nebulae”—those hazy patches seen through a telescope—might be galaxies full of stars, that got the invocation of Occam’s Razor.
Lex parsimoniae: Entia non sunt multiplicanda praeter necessitatem.
That is the traditional formulation of the law of parsimony, long attributed to Ockham: entities should not be multiplied beyond necessity.
If you postulate billions of stars that no one has ever believed in before, you’re multiplying entities, aren’t you?
No. There are two Bayesian formalizations of Occam’s Razor: Solomonoff Induction, and Minimum Message Length. Neither penalizes galaxies for being big.
Which they had better not do! One of the lessons of history is that what-we-call-reality keeps turning out to be bigger and bigger and huger yet. Remember when the Earth was at the center of the universe? Remember when no one had invented Avogadro’s number? If Occam’s Razor had weighed against the multiplication of entities every time, we’d have to start doubting Occam’s Razor, because it would have consistently turned out to be wrong.
In Solomonoff induction, the complexity of your model is the amount of code in the computer program you have to write to simulate your model. The amount of code, not the amount of RAM it uses, or the number of cycles it takes to compute. A model of the universe that contains billions of galaxies containing billions of stars, each star made of a billion trillion decillion quarks, will take a lot of RAM to run—but the code only has to describe the behavior of the quarks, and the stars and galaxies can be left to run themselves. I am speaking semi-metaphorically here—there are things in the universe besides quarks—but the point is, postulating an extra billion galaxies doesn’t count against the size of your code, if you’ve already described one galaxy. It just takes a bit more RAM, and Occam’s Razor doesn’t care about RAM.
Why not? The Minimum Message Length formalism, which is nearly equivalent to Solomonoff Induction, may make the principle clearer: If you have to tell someone how your model of the universe works, you don’t have to individually specify the location of each quark in each star in each galaxy. You just have to write down some equations. The amount of “stuff” that obeys the equation doesn’t affect how long it takes to write the equation down. If you encode the equation into a file, and the file is 100 bits long, then there are 2^100 other models that would be around the same file size, and you’ll need roughly 100 bits of supporting evidence. You’ve got a limited amount of probability mass; and a priori, you’ve got to divide that mass up among all the messages you could send; and so postulating a model from within a model space of 2^100 alternatives means you’ve got to accept a 2^-100 prior probability penalty—but having more galaxies doesn’t add to this.
Postulating billions of stars in billions of galaxies doesn’t affect the length of your message describing the overall behavior of all those galaxies. So you don’t take a probability hit from having the same equations describing more things. (So long as your model’s predictive successes aren’t sensitive to the exact initial conditions. If you’ve got to specify the exact positions of all the quarks for your model to predict as well as it does, the extra quarks do count as a hit.)
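As a rough sketch of the bookkeeping in the last two paragraphs (the 100-bit figure is the post’s illustrative number, not a measurement of any real model): an MML-style prior gives a hypothesis that takes L bits to state about 2^-L of the prior mass, so singling it out from the roughly 2^L other messages of that length takes on the order of L bits of evidence, and describing more galaxies with the same equations adds nothing to L.

```python
import math

def mml_prior(description_length_bits):
    """MML/Solomonoff-style prior: each extra bit of description halves the prior mass."""
    return 2.0 ** (-description_length_bits)

theory_bits = 100                        # the post's illustrative 100-bit model
prior = mml_prior(theory_bits)           # 2**-100, about 7.9e-31

# Evidence needed (in bits, i.e. log2 of the likelihood ratio) to pay off the penalty:
evidence_bits = math.log2(1.0 / prior)   # 100.0

# Extra galaxies governed by the same equations add 0 bits to the description,
# so the prior penalty is unchanged.
print(prior, evidence_bits)
```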
If you suppose that the photon disappears when you are no longer looking at it, this is an additional law in your model of the universe. It’s the laws that are “entities”, costly under the laws of parsimony. Extra quarks are free.
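To make the code-versus-RAM point concrete, here is a minimal toy sketch in Python (the “law” and the numbers are invented for illustration, not taken from any real physics): simulating more objects under the same rule costs only memory, while a special-case rule like “the photon disappears when unobserved” costs extra lines of program, which is the kind of cost Occam’s Razor actually counts.

```python
# Toy illustration only: a made-up "law" applied uniformly to every object.
def step(state):
    """One update rule for every object. These few lines are the whole 'program'
    whose length Solomonoff induction / MML measures."""
    return [x * 0.5 + 1.0 for x in state]

def simulate(n_objects, n_steps):
    """More objects means more RAM, not more code: the rule above is unchanged."""
    state = [0.0] * n_objects
    for _ in range(n_steps):
        state = step(state)
    return state

small = simulate(n_objects=10, n_steps=5)          # tiny universe
huge = simulate(n_objects=1_000_000, n_steps=5)    # same program, more memory

# By contrast, a special-case law is extra code, and that is what the Razor penalizes:
def step_with_disappearance(state, observed):
    """Same rule, plus an additional clause: unobserved objects vanish."""
    state = [x * 0.5 + 1.0 for x in state]
    return state if observed else []               # the costly extra 'entity' is this clause
```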
So does it boil down to, “I believe the photon goes on existing as it wings off to nowhere, because my priors say it’s simpler for it to go on existing than to disappear”?
This is what I thought at first, but on reflection, it’s not quite right. (And not just because it opens the door to obvious abuses.)
I would boil it down to a distinction between belief in the implied invisible, and belief in the additional invisible.
When you believe that the photon goes on existing as it wings out to infinity, you’re not believing that as an additional fact.
What you believe (assign probability to) is a set of simple equations; you believe these equations describe the universe. You believe these equations because they are the simplest equations you could find that describe the evidence. These equations are highly experimentally testable; they explain huge mounds of evidence visible in the past, and predict the results of many observations in the future.
You believe these equations, and it is a logical implication of these equations that the photon goes on existing as it wings off to nowhere, so you believe that as well.
Your priors, or even your probabilities, don’t directly talk about the photon. What you assign probability to is not the photon, but the general laws. When you assign probability to the laws of physics as we know them, you automatically contribute that same probability to the photon continuing to exist on its way to nowhere—if you believe the logical implications of what you believe.
It’s not that you believe in the invisible as such, from reasoning about invisible things. Rather the experimental evidence supports certain laws, and belief in those laws logically implies the existence of certain entities that you can’t interact with. This is belief in the implied invisible.
On the other hand, if you believe that the photon is eaten out of existence by the Flying Spaghetti Monster—maybe on this just one occasion—or even if you believed without reason that the photon hit a dust speck on its way out—then you would be believing in a specific extra invisible event, on its own. If you thought that this sort of thing happened in general, you would believe in a specific extra invisible law. This is belief in the additional invisible.
The whole matter would be a lot simpler, admittedly, if we could just rule out the existence of entities we can’t interact with, once and for all—have the universe stop existing at the edge of our telescopes. But this requires us to be very silly.
Saying that you shouldn’t ever need a separate and additional belief about invisible things—that you only believe invisibles that are logical implications of general laws which are themselves testable, and even then, don’t have any further beliefs about them that are not logical implications of visibly testable general rules—actually does seem to rule out all abuses of belief in the invisible, when applied correctly.
Perhaps I should say, “you should assign unaltered prior probability to additional invisibles”, rather than saying, “do not believe in them.” But if you think of a belief as something evidentially additional, something you bother to track, something where you bother to count up support for or against, then it’s questionable whether we should ever have additional beliefs about additional invisibles.
There are exotic cases that break this in theory. (E.g., the epiphenomenal demons are watching you, and will torture 3^^^3 victims for a year, somewhere you can’t ever verify the event, if you ever say the word “Niblick”.) But I can’t think of a case where the principle fails in human practice.
Added: To make it clear why you would sometimes want to think about implied invisibles, suppose you’re going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe’s expansion will have accelerated too much for them to ever send a message back. Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy? Or do you think the spaceship blips out of existence before it gets there? This could be a very real question at some point.
“The whole matter would be a lot simpler, admittedly, if we could just rule out the existence of entities we can’t interact with, once and for all—have the universe stop existing at the edge of our telescopes. But this requires us to be very silly.”
Why? If I believe that the universe doesn’t exist outside my future and past light cones, then I don’t expect my experiences to differ from yours, so I really don’t see what the point of arguing about it is.
“In Solomonoff induction, the complexity of your model is the amount of code in the computer program you have to write to simulate your model. The amount of code, not the amount of RAM it uses, or the number of cycles it takes to compute.”
What!? Are you assuming that everyone has the exact same data on the positions of the quarks of the universe stashed in a variable? The code/data divide is not useful: code can substitute for data, and data for code (interpreted languages).
Let us say I am simulating the quarks and stuff for your region of space, and I would like my friend Bob to be able to make the same predictions (although most likely they would be postdictions, as I wouldn’t be able to make them faster than real time) about you. I send him my program (sans quark positions), but he still can’t simulate you. He needs the quark positions; they are as much code for the simulator as the physical laws.
Or to put it another way, quark positions are to physics simulators as the initial state of the tape is to a UTM simulator. That is code. Especially as physics simulations are computationally universal.
I personally don’t put much stock in Occam’s Razor.
You confuse data, which should absolutely be counted (compressed) as complexity, with required RAM, which (EY asserts) should not.
I am well convinced that RAM requirements shouldn’t be counted exclusively, and fairly well convinced that it shouldn’t be counted similarly to rules; I am not convinced it shouldn’t be counted at all. A log*(RAM) factor in the prior wouldn’t make a difference for most judgements, but might tip the scale on MWI vs collapse. That said, I am not at all confident it does weigh in.
In reality, all the computer program specifies is the simulation of a QM wave function (a complex scalar field in an infinite-dimensional Hilbert space, with space curvature or something like that), along with the minimum message of the conditions of the big bang.
This is what I thought at first, but on reflection, it’s not quite right.
Could you explain a little more the distinction between the position preceding this remark and that following it? They seem like different formulations of the same thing to me.
I’ll give it a shot. Solomonoff induction doesn’t even mention photons, so the statement about the photon doesn’t follow directly from it. Solomonoff induction just tells you about the general laws, which then you can use to talk about photons. So “belief in the implied invisible” means you’re going through this two-step process, rather than directly computing probabilities about photons.
Silly? That’s an awfully subjective criticism.
Perhaps you could explain for us all what the difference is between ‘destroying’ a photon and causing it to become unable to affect you in any fashion.
Caledonian: the difference is that if you know that the photon was sent out, you have infinite computing power, and you want to know the exact subjective probability distribution you should hold for what a particular brontosaur ate for lunch 123 million years ago, you need to take that photon into account. You can then test that by looking for the fossilized dung in exactly the right set of places and you will probably find the relevant dung faster than your competition who started without knowledge of the photon in question.
Re: Vassar– that’s not quite right, since Eliezer is proposing you knew about the photon being there when it was on Earth. When it leaves your light cone, you don’t care about it, since it will never affect you and never affect any event that ever affects you.
Or if you are going to be on that spaceship, are you worried that the Earth will blip out of existence on your journey?
Dan: I’m not sure what exactly is being proposed. Actually I think that there is some confusion in the fundamental physics here, as well as in the positivistic assumptions being invoked by Caledonian. If physics is reversible, I don’t think that something can ever go from being part of my light cone to not being part of it. The photon’s future doesn’t impact me past some point, but the past of the future of that photon does. I suspect that when you use causality diagrams, or just do the math, any confusion here goes away.
In Eliezer’s example, the colony is in the future light cone of your current self, but no future version of you is in its future light cone.
One problem is that the ‘you’ that can be affected by things you expect to interact with in the future is in principle no different from those space colonists that are sent out. You can’t interact with future-you. All decisions that we are making form the future with which we don’t directly interact. Future-you is just a result of one more ‘default’ manufacturing process, where laws of physics ensure that there is a physical structure very similar to what was in the past. Hunger is a drive that makes you ‘manufacture’ a fed-future-you, compassion is a drive that makes you ‘manufacture’ a good-feeling-other-person, and so on.
I don’t see any essential difference between decisions that produce ‘observable’ effect and those that produce ‘invisible’ effect. What makes you value some of the future states and not others is your makeup, ‘thousand shards of desire’ as Eliezer put it, and among these things there might as well be those that imply value for physical states that don’t interact with decision-maker’s body.
If I put a person in a black box, and program it to torture that person for 50 years, and then automatically destroy all evidence, so that no tortured-person state can ever be observed, isn’t it as ‘invisible’ as sending a photon away? I know that person is being tortured, and likewise I know that photon is flying away, but I can’t interact with either of them. And yet I assign a distinct negative value to invisible-torture box. It’s one of the stronger drives inbuilt in me.
1) The Second Law is a non-sequitur. It simply isn’t relevant. The loss of a photon due to universal expansion does not violate that principle at all.
2) The First Law was formulated when we found that, in our attempts to examine situations where it was asserted substance was created or destroyed, substance was always conserved. It exists on empirical grounds; it’s not some sacred revelation that cannot be questioned or even discarded if necessary. Citing the First Law against the idea that a bit of mass-energy could be destroyed is simply invalid, because if that substance could be destroyed, we’d have to abandon the Law.
3) The idea that “the photon knows when to disappear” is based on a mistaken understanding of existence. It is not an inherent property of a thing, but a relationship between two or more things. The photon doesn’t keep track of how far it’s gotten from Eliezer and then lose the “existence” property when it’s distant enough. Its existence relative to Eliezer ends when it passes forever out of the universe in which things interact with Eliezer.
There is no difference between saying that a photon that travels far enough away from Eliezer is destroyed, and saying that a photon that travels far enough from Eliezer is no longer part of the set closed under interaction that includes him. Knowing the properties of the photon would no longer be necessary to completely represent Eliezer and the things that interact with him.
The photon is no more. It has ceased to be! Relative to Eliezer, at least. Whether it exists relative to other things is undefined—and utterly irrelevant.
1) The Second Law is a non-sequitur. It simply isn’t relevant. The loss of a photon due to universal expansion does not violate that principle at all.
The photon had some entropy. If it vanishes with no effect, that entropy is gone.
Citing the First Law against the idea that a bit of mass-energy could be destroyed is simply invalid, because if that substance could be destroyed, we’d have to abandon the Law.
More than that, actually.
Let’s drag this back to purpose. What’s your answer to Eliezer’s question at the end?
the colony is in the future light cone of your current self, but no future version of you is in its future light cone.
Right, and if anyone’s still confused how this is possible: wikipedia and a longer explanation
There’s an even simpler computer program that generates your present experiences: the program that runs all programs (each one a little bit at a time), the Universal Dovetailer. But this program does have the potential Occamian disadvantage of creating all possible universes, in addition to the one you see around you. Is this Multiplying Entities Beyond Necessity? Or merely a matter of more RAM?
Hal, some people make the argument that that is just more RAM, and therefore that Ockham’s Razor requires that we assert that all possible universes actually exist; i.e. the simplest claim that will result in your experiences is that all possible experiences are real.
The problem with this is that one can disprove it empirically by anthropic reasoning. If all possible universes are real—i.e. including ones with special coding for specific miracles in the next ten seconds—we should conclude with virtual certainty that the laws of physics will be violated in the next ten seconds. Since this does not typically happen, we can conclude that not all possible universes are real.
I actually use a slightly different principle for statements like that.
I call it the “preferred action principle” (or Reaper’s Law when I’m feeling pretentious).
If a possible model of reality doesn’t give me a preferred action, i.e., if all actions, including inaction, are equally reasonable (and therefore all actions are of relative utility 0) in that model, I reject that model out of hand. Not as false, but as utterly useless.
Even if it’s 3^^^3 9s certain that that is the real world, I might as well ignore that possibility, because it puts no weight into the utility calculations.
If all possible universes are real—i.e. including ones with special coding for specific miracles in the next ten seconds—we should conclude with virtual certainty that the laws of physics will be violated in the next ten seconds.
“Virtual certainty” is a statement of probability, which can’t be resolved without placing relative weights on different possible universes.
So what? That has nothing to do with the Second Law, which describes how closed systems become disordered, probabilistically speaking.
The system Eliezer described 1) is not closed, 2) does not have an increase in order as a result of the photon disappearing. The amount of ‘available work’ in fact decreases as a result of that loss—which doesn’t contradict the Second Law at all.
At Constant, is there a ‘natural’ probability measure on the set of all possible existences? Otherwise it has to be included in the ‘program’ and not the ‘RAM’.
I don’t think it’s possible to get outside Earth’s light cone by travelling less than the speed of light, is it? I’m not well-educated about such things, but I thought that leaving a light cone was possible only during the very early stages (e.g., the first several seconds) after the big bang. Of course, that was said back when people believed the universe’s expansion was slowing down. But unless the universe’s expansion allows things to move out of Earth’s light cone—and I suspect that allowing that possibility would allow violation of causality, because it seems it would require a perceived velocity wrt Earth past the speed of light—then the entire exercise may be moot; the notion of invisibles may be as incoherent as the atomically-identical zombies.
I’m pretty sure it is possible to escape Earth’s light cone at sublight speeds. You can go arbitrarily far from Earth (if you’re patient). Eventually, you will get to a point where your distance from Earth times the Hubble constant is greater than the speed of light (you are now more than a Hubble length from Earth). At this point, a photon you shoot straight towards Earth will not approach Earth, because the distance in between is expanding at the speed of light.
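A rough back-of-the-envelope check of that distance (a sketch only; it assumes a present-day Hubble constant of about 70 km/s/Mpc, and the true cosmological event horizon in an accelerating universe is a subtler quantity than the Hubble length):

```python
# Rough estimate of the Hubble length: the distance at which Hubble's law
# gives a recession velocity equal to the speed of light.
H0_km_s_per_Mpc = 70.0          # assumed value; measurements cluster around 67-74
c_km_s = 299_792.458            # speed of light in km/s
LY_PER_MPC = 3.262e6            # light-years per megaparsec

hubble_length_Mpc = c_km_s / H0_km_s_per_Mpc
hubble_length_Gly = hubble_length_Mpc * LY_PER_MPC / 1e9

print(f"Hubble length ~ {hubble_length_Mpc:.0f} Mpc ~ {hubble_length_Gly:.1f} billion light-years")
# roughly 4300 Mpc, or about 14 billion light-years
```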
I’m not 100% convinced—even after reading Eliezer’s articles—that one interpretation of quantum mechanics is necessarily better than the other (my gut reaction would be to say “a plague on both your houses”), but this article looks like an argument in favor of many-worlds over Copenhagen.
In Copenhagen, the extra configurations “magically” collapse out of existence at some ill-defined point when the system decoheres to the point that we can’t get to see them even in principle. In many-worlds, the macroscopic system decoheres instead. The existence of innumerable and undetectable “extra worlds” is not a violation of Occam’s Razor as defined in this article: as long as it follows from just taking the laws of quantum mechanics to their logical conclusion, there is no extra information needed to describe this law, and the extra worlds are irrelevant to our description in the same sense as extra galaxies are, as it’s only a question of extra RAM rather than extra meaningful information as long as they obey the same fundamental laws.
I’m pretty sure the spaceship doesn’t actually seem to blip out of existence. It’s just that, from your point of reference, time passes slower and slower for it.
I could be wrong, though.
In either case, you never get to observe the spaceship after a certain point in ship time.
I think this is right… Crossing a cosmological horizon is very similar to crossing a black hole event horizon.
In the reference frame of an observer outside the black hole, the spaceship would never enter the black hole. Rather it just hovers on the edge of the horizon, getting more and more red-shifted. If the black hole evaporates (due to Hawking radiation) then the spaceship’s state is returned in scrambled form by the radiation, so there is no net loss of information from the region outside the black hole.
The same applies to a spaceship crossing our cosmological horizon… From the reference frame of an Earthbound observer, it never does, but (probably) a scrambled ghost image of the spaceship eventually returns in Hawking radiation from the horizon.
At first I was angry with myself for being afraid to say “Niblick”
but then when I said it I was angry with myself because Eliezer had manipulated me into saying it via reverse psychology.
My human mind cannot resist something that you taboo so hard which is so easy to do! Damn it! Damn it all!
“Dur? What’s that, God? Don’t what? Eat the apples from the Tree of Knowledge!? Well, if you insist, I’d love to! OM NOM NOM NOM NOM NOM....”
I’M SO SORRY, ALL OF YOU 3^^^3 PEOPLE!
AND I APPARENTLY DOOMED MY DESCENDANTS WITH ORIGINAL SIN, TOO! WILL THE CONSEQUENCES OF MY ACTIONS OF A SINGLE AFTERNOON NEVER END!?
Why, oh god, whyyyyyy!?
If it has the wrong energy, it would veeery likely eventually interact with a photon from one of the diffuse radiation backgrounds, producing an electron-positron pair. A neutrino would be a much better example.
Conservation laws or not, you ought to believe in the existence of the photon because you continue having the evidence of its existence—it’s your memory of having fired the photon! Your memory is entangled with the state of the universe, not perfectly, but still, it’s Bayesian evidence. And if your memory got erased, then indeed, you’d better stop believing that the photon exists.
“So does it boil down to, “I believe the photon goes on existing as it wings off to nowhere, because my priors say it’s simpler for it to go on existing than to disappear”?

This is what I thought at first, but on reflection, it’s not quite right. (And not just because it opens the door to obvious abuses.)

I would boil it down to a distinction between belief in the implied invisible, and belief in the additional invisible.”
Eliezer, what are these obvious abuses?