Edit 2: I’m now fairly confident that this is just the Presumptuous Philosopher problem in disguise, which is explained clearly in Section 6.1 here: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u
This is my first post ever on LessWrong. Let me explain my problem.
I was born in a unique situation — I shall omit the details of exactly what this situation was, but for my argument’s sake, assume I was born as the tallest person in the entire world. Or instead suppose that I was born into the richest family in the world. In other words, take as an assumption that I was born into a situation entirely unique relative to all other humans on an easily measurable dimension such as height or wealth (i.e., not some niche measure like “longest tongue”). And indeed, my unique situation is perhaps more immediate and obvious to myself and others than even height or wealth.
For that reason, I’ve always had an unconscious fear that I’m living in a fake, or simulated, world. That fear recently entered my awareness. I reasoned a couple of days ago that the fear is motivated by an implicit use of anthropic reasoning. Something along the lines of, “I could have been any human, so the fact that I’m this particular one, this unique human, means there’s ‘something wrong’ with my world. And therefore I’m in a simulation.” Something like that. I read through various posts on this site related to anthropic reasoning, including when to use SSA and SIA, but none of them seem to address my concern specifically. Hopefully someone reading this can help me.
To be clear, the question I want answered is the following: “Based on the theory of anthropic reasoning as it is currently understood, from my perspective alone (not your perspective, as the person responding to me, but my own), is my distinctiveness strong evidence for being in a simulation? And if it is, by how much should I ‘update’ my belief in the simulation given my own observation of my distinctiveness?”
Please let me know if you need any clarifications on this question. The question matters a lot to me, so thank you to anyone who responds.
Edit: In particular, I wonder if the following Bayesian update is sound:
As rough estimates, let Pr(I’m in a simulation) = 0.01, Pr(I’m distinct | I’m not in a simulation) = 0.0001, Pr(I’m distinct | I’m in a simulation) = 0.5 — a high probability since I assume simulated observers are quite likely to be ‘special’ or ‘distinct’ with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error. Should I be applying SIA here to argue that this latter probability is much smaller? Because simulated worlds in which the other observers are real and not ‘illusory’ would have a low probability of distinctiveness and far more observers? I don’t know if this is sound. Or should I be using SSA instead here to make an entirely separate argument?)
From these estimates, we calculate Pr(I’m distinct) ≈ 0.0051, and then using Bayes’ theorem, we find Pr(I’m in a simulation | I’m distinct) ≈ 0.98. So even with a quite small 0.01 prior on being in a simulation, the fact that I’m distinct gives me a 98% chance that I’m in a simulation.
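For concreteness, here is a minimal sketch of that update in Python, using exactly the rough estimates above (the three inputs are nothing more than my guesses stated earlier):

```python
# Minimal sketch of the Bayesian update described above.
p_sim = 0.01                 # prior: Pr(I'm in a simulation)
p_d_given_sim = 0.5          # Pr(I'm distinct | in a simulation)
p_d_given_not_sim = 0.0001   # Pr(I'm distinct | not in a simulation)

# Total probability of observing my own distinctness.
p_d = p_d_given_sim * p_sim + p_d_given_not_sim * (1 - p_sim)

# Bayes' theorem.
p_sim_given_d = p_d_given_sim * p_sim / p_d

print(f"Pr(distinct)        ~= {p_d:.4f}")          # ~0.0051
print(f"Pr(sim | distinct)  ~= {p_sim_given_d:.2f}") # ~0.98
```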
Yes, it is your main error. Think about how justified this assumption is according to your knowledge state. How much evidence do you actually have? Have you checked many simulations before generalizing that principle? Or are you just speculating based on total ignorance?
For your own sake, please don’t. Both SIA and SSA are likewise unjustified assumptions pulled out of nowhere, and they lead to even more counterintuitive conclusions.
Instead consider these two problems.
Problem 1:
Problem 2:
Are you justified in believing that Problem 2 has the same answer as Problem 1? That you can simply assume that half of the balls in the blue bag are blue? Not after you went and checked a hundred random blue bags and found that in all of them half the balls were blue, but just a priori? And likewise with the grey bag. Where would these assumptions come from?
You can come up with some plausible-sounding just-so story. That people who were filling the bag felt the urge to put blue balls in a blue bag. But what about the opposite just-so story, where people were disincentivized to put blue balls in a blue bag? Or where people paid no attention to the color of the bag? Or all the other possible just-so stories? Why do you prioritize this one in particular?
Maybe you imagine yourself tasked with filling two bags with balls of different colors. And when you inspect your thinking process in such a situation, you feel the urge to put a lot of blue balls in the blue bag.
But why would the way you’d fill the bags be entangled with the actual causal process that filled these bags in the general case? You don’t know that the bags were filled by people with your sensibilities. You don’t know that they were filled by people to begin with.
Or spin it the other way. Suppose you could systematically produce correct reasoning by simply assuming things like that. What would be the point in gathering evidence then? Why spend extra energy on checking the way blue bags and grey bags are organized if you can confidently deduce it a priori?
But, on second thought, why are you confident that the way I’d fill the bags is not “entangled with the actual causal process that filled these bags in a general case?” It seems likely that my sensibilities reflect at least in some manner the sensibilities of my creator, if such a creator exists.
Actually, in addition, my argument still works if we only consider simulations in which I’m the only human and I’m distinct (on my aforementioned axis) from other human-seeming entities. So the 0.5 probability becomes identically 1, and I sidestep your argument. So if I assign any non-zero prior on this theory whatsoever, the observation that I’m distinct makes this theory way way way more likely.
The only part of your comment I still agree with is that SIA and SSA may not be justified. Which means my actual error may have been to set Pr(I’m distinct | I’m not in a sim)=0.0001 instead of identically 1 — since 0.0001 assumes SSA. Does that make sense to you?
But thank you for responding to me; you are clearly an expert in anthropic reasoning, as I can see from your posts.
Most ways of reasoning are not entangled with most causal processes. When we do not have much reason to think that a particular way of reasoning is entangled, we don’t expect it to be. It’s possible to simply guess correctly, but it’s not probable. That’s not the way to systematically arrive at truth.
Even if it’s true, how could you know that it’s true? Where does this “seeming” come from? Why do you think it’s more likely that a creator would imprint their own sensibilities in you, rather than literally any other possibility?
If you are in a simulation, you are trying to speculate about the reality outside of simulation, based on the information from inside the simulation. None of this information is particularly trustworthy, unless you already know for a fact that properties of simulation represent the properties of base reality.
Have you heard about Follow-The-Improbability game?
I recommend you read the linked post and think for a couple of minutes about how it applies to your comment before reading my answer further. Try to track the flow of improbability yourself and understand why the total value doesn’t decrease when you consider only a specific type of simulation.
So.
You can indeed consider only a specific type of simulation. But if you don’t have actual evidence that justifies prioritizing this hypothesis over all the others, the overall improbability stays the same; you just pass the buck to other factors.
Consider Problem 2 once again.
You can reason conditionally on the assumption that all the balls in the blue bag are blue while balls in the grey bag have random colors. That would give you a very strong update in favor of the blue bag… conditionally on your assumption being true.
The prior probability that this assumption is true is very low, low in exact proportion to how strongly you updated in favor of the blue bag conditional on it, so that when you calculate the total probability, it stays the same.
Only when you have observed actual evidence in favor of your assumption does the improbability go somewhere. And the more improbable the observation you got, the more improbability is removed.
There is no free energy in the engine of cognition.
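To see that bookkeeping concretely, here is a toy sketch in Python. The specific hypotheses and priors below are my own illustrative choices, not anything established: conditional on a “blue bags get blue balls” assumption you update strongly toward the blue bag, but with a symmetric prior over that assumption and its mirror image, the total posterior does not move at all.

```python
# Toy hypotheses about how the bags were filled (illustrative numbers only).
# Each hypothesis gives Pr(drawn ball is blue | it came from that bag).
hypotheses = {
    "H_blue_favors": {"blue_bag": 1.0, "grey_bag": 0.5},  # blue bag all blue
    "H_grey_favors": {"blue_bag": 0.5, "grey_bag": 1.0},  # the mirror story
    "H_ignorant":    {"blue_bag": 0.5, "grey_bag": 0.5},  # color of bag irrelevant
}
prior_h = {"H_blue_favors": 0.25, "H_grey_favors": 0.25, "H_ignorant": 0.5}
prior_bag = {"blue_bag": 0.5, "grey_bag": 0.5}

# Posterior that a blue ball came from the blue bag, conditional on each hypothesis.
for name, like in hypotheses.items():
    num = prior_bag["blue_bag"] * like["blue_bag"]
    denom = sum(prior_bag[b] * like[b] for b in prior_bag)
    print(f"P(blue bag | blue ball, {name}) = {num / denom:.3f}")

# Total posterior, marginalizing over the hypotheses: the strong conditional
# update under H_blue_favors is exactly cancelled by its low, symmetric prior.
joint = sum(prior_h[h] * prior_bag["blue_bag"] * hypotheses[h]["blue_bag"]
            for h in hypotheses)
p_blue_ball = sum(prior_h[h] * prior_bag[b] * hypotheses[h][b]
                  for h in hypotheses for b in prior_bag)
print(f"P(blue bag | blue ball) = {joint / p_blue_ball:.3f}")  # stays at 0.5
```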
Thank you Ape, this sounds right.
For what it’s worth I do think observers that observe themselves to be highly unique in important axes rationally should increase their credence in simulation hypotheses.
Everyone is unique, given enough dimensions of measurement. Humans as a species are unique, as far as we can tell. “Unique on a common, easy metric” is … rare, but there are still lots of metrics to choose from, so there are likely many who can say that. If you’re one in a million, there are one or two of you in Manhattan and about 1,400 of you in China.
The problem with anthropic calculations is the same as any singleton observation—your prior is going to be the main determinant of the posterior. The problem with this specific calculation is why in the simulator’s green earth you’d think the chance of uniqueness on this dimension is greater if you’re simulated than if you’re not. If they can simulate you, they can simulate billions or trillions, right?
I don’t think anything observable is useful evidence for or against simulation.
Good questions. Firstly, let’s just take as an assumption that I’m very distinct, not just unique. In my calculation, I set Pr(I’m distinct | I’m not in a simulation) = 0.0001 to account for this (1 in 10,000 people), but honestly I think the real probability is much, much lower than this figure (maybe 1 in a million), so I was even being generous to your point there.
To your second question, the reason why, in my simulator’s earth, I imagine the chance of uniqueness to be larger is that if I’m in a simulation then there could be what I will call “NPCs”: people who seem to exist but are really just figments of my mind. (Whereas the probability of NPCs existing if I’m not in a simulation is basically 0.) At least that’s my intuition. There might even be a way of formalizing that intuition; for example, saying that in a simulated world, the population of Earth is only an upper bound on the number of “true observers” (the rest being NPCs), whereas in the real world, everyone is a “true observer.” Is there something wrong with this intuition?
Note that if your prior is “it’s much cheaper to simulate one person and have most of the rest of the universe be NPC/rougher-than-reality”, then you being unique doesn’t change it by much. This would STILL be true if you were superficially similar to many NPCs.
True, but that wasn’t my prior. My assumption was that if I’m in a simulation, there’s quite a high likelihood that I would be made to be so ‘lucky’ as to be the highest on this specific dimension. Like a video game in which the only real character has the most HP.
No, I don’t think it is.
Imagine a scenario in which the people running the simulation decided to simulate every human on Earth as an actual observer.
In this case, Pr(I’m distinct | I’m not in a simulation) = Pr(I’m distinct | I’m in a simulation) because no special treatment has been shown to you. If you think it is very unlikely that you just happened to be distinct in a real world, then in this scenario, you ought to think that it is very unlikely that you just happened to be distinct in a simulated world.
I think what you are actually thinking about is a scenario where only you are an actual observer, whereas everyone else is a p-zombie (or an NPC if you wish).
But this scenario also raises a few questions. Why would the simulators make you a real observer and everyone else a p-zombie? Apparently, p-zombies are able to carry out any tasks that are useful for observation just as well as actual observers.
But even leaving that aside, it is unclear why your Pr(I’m distinct | I’m in a simulation) is so high. Computer-game-style simulations where you are “the main character” are plausible, but so are other types of simulations. For example, imagine a civilization wanting to learn about its past and running a simulation of its history. Or imagine a group of people who want to run an alt-history simulation. Perhaps they want to see what could’ve happened had Nazi Germany won WW2 (the specifics are irrelevant here). Clearly, in the alt-history simulation, there would be no “main characters,” so it would be plausible that everyone would be an actual observer (as opposed to there being only one actual observer). And let’s also imagine for a second that the alt-history simulation has one hundred billion (10¹¹) observers in total. The chances of you being the only real observer in this world vs. being one of the 10¹¹ real observers in the alt-history world are 1:10¹¹.
And from here, we can generalize. If it is plausible that simulators would ever run a simulation with 10¹¹ observers (or any other large number), then it would require 10¹¹ (or any other large number) simulations with only one observer to match the odds of you being in a “one observer” simulation.
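To sketch that bookkeeping numerically (this is SSA-style counting over observers assumed to actually exist; the setup is just a toy version of the scenario above):

```python
# Toy setup: one alt-history simulation with 1e11 observers actually exists,
# plus some number of one-observer simulations.
alt_history_observers = 10**11

def odds_lone_observer_sim(num_one_observer_sims: int) -> float:
    """Odds of being in a one-observer sim vs. the alt-history sim,
    treating yourself as a random sample of all existing observers."""
    return num_one_observer_sims / alt_history_observers

print(odds_lone_observer_sim(1))        # 1e-11: one lone-observer sim barely registers
print(odds_lone_observer_sim(10**11))   # 1.0: you need ~1e11 such sims for even odds
```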
Other people here have responded in similar ways to you; but the problem with your argument is that my original argument could also just consider only simulations in which I am the only observer. In which case Pr(I’m distinct | I’m in a simulation)=1, not 0.5. And since there’s obviously some prior probability of this simulation being true, my argument still follows.
I now think my actual error is saying Pr(I’m distinct | I’m not in a simulation)=0.0001, when in reality this probability should be 1, since I am not a random sample of all humans (i.e., SSA is wrong), I am me. Is that clear?
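To see what that fix does to the numbers, here is a minimal sketch reusing the Bayes step from my original edit: with both likelihoods set to 1, the observation of distinctness carries no information and the posterior collapses back to the prior.

```python
def posterior_sim(prior_sim, p_distinct_given_sim, p_distinct_given_not_sim):
    """Posterior Pr(sim | distinct) via Bayes' theorem."""
    p_distinct = (p_distinct_given_sim * prior_sim
                  + p_distinct_given_not_sim * (1 - prior_sim))
    return p_distinct_given_sim * prior_sim / p_distinct

print(posterior_sim(0.01, 0.5, 0.0001))  # ~0.98: my original calculation
print(posterior_sim(0.01, 1.0, 1.0))     # 0.01: no update at all
```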
Lastly, your final paragraph is akin to the SSA + SIA response to the doomsday paradox, which I don’t think is widely accepted since both those assumptions lead to a bunch of paradoxes.
But then this turns Pr(I’m in a simulation) into Pr(I’m in a simulation) × Pr(only simulations with one observer exist | simulations exist). It’s not enough that a simulation with only one observer exists. It also needs to be the case that simulations with multiple observers don’t exist. For example, if there is just one simulation with a billion observers, it heavily skews the odds in favor of you not being in a simulation with just one observer.
And I am very much willing to say that Pr(I’m in a simulation) × Pr(only simulations with one observer exist | simulations exist) is going to be lower than Pr(I’m distinct | I’m not in a simulation).
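As a rough numeric sketch of that comparison (every figure below is a placeholder guess, included only to show the structure of the inequality):

```python
# Placeholder guesses, only to show the shape of the comparison.
p_sim = 0.01                       # Pr(I'm in a simulation)
p_only_lone_observer_sims = 0.001  # Pr(only one-observer sims exist | sims exist): a guess
p_distinct_given_not_sim = 0.0001  # Pr(I'm distinct | not in a simulation)

lhs = p_sim * p_only_lone_observer_sims
print(lhs, "<", p_distinct_given_not_sim, "?", lhs < p_distinct_given_not_sim)  # True here
```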
That answer seems reasonable to me. However, I think that there is value in my answer as well: it works even if SSA (the “least favorable” assumption) is true.
I think you are overlooking that your explanation requires BOTH SSA and SIA, but yes, I understand where you are coming from.
Can you please explain why my explanation requires SIA? From a quick Google search: “The Self-Sampling Assumption (SSA) states that we should reason as if we’re a random sample from the set of actual existent observers.”
My last paragraph in my original answer was talking about a scenario where simulators have actually simulated a) a world with 1 observer AND b) a world with 10¹¹ observers. So the set of “actual existent observers” includes 1 + 10¹¹ observers. You are randomly selected from that set, giving you 1:10¹¹ odds of being in the world where you are the only observer. I don’t see where SIA comes into play here.
This is what I was thinking:
If simulations exist, we are choosing between two potentially existing scenarios, either I’m the only real person in my simulation, or there are other real people in my simulation. Your argument prioritizes the latter scenario because it contains more observers, but these are potentially existing observers, not actual observers. SIA is for potentially existing observers.
I have a kind of intuition that something like my argument above is right, but tell me if that is unclear.
And note: one potential problem with your reasoning is that if we take it to its logical extreme, it would be 100% certain that we are living in a simulation with infinite invisible observers, because infinity dominates all the finite possibilities.
But the thing is that there is a matter of fact about whether there are other observers in our world if it is simulated. Either you are the only observer or there are other observers, but one of them is true. Not just potentially true, but actually true.
The same is true of my last paragraph in the original answer (although perhaps I could’ve used clearer wording). If, as a matter of fact, there actually exist 10¹¹ + 1 observers, then you are more likely to be in the 10¹¹ group as per SSA. We don’t know if there are actually 10¹¹ + 1 observers, but that is merely an epistemic gap.
You are describing the SIA assumption to a T.
The way I understand it, the main difference between SIA and SSA is the fact that in SIA “I” may fail to exist. To illustrate what I mean, I will have to refer to “souls” just because it’s the easiest thing I can come up with.
SSA: There are 10¹¹ + 1 observers and 10¹¹ + 1 souls. Each soul gets randomly assigned to an observer. One of the souls is you. The probability of you existing is 1. You cannot fail to exist.
SIA: There are 10¹¹ + 1 observers and a very large (much larger than 10¹¹ + 1) number of souls. Let’s call this number N. Each soul gets assigned to an observer. One of the souls is you. However, in this scenario, you may fail to exist. The probability of you existing is (10¹¹ + 1)/N.
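For what it’s worth, here is a toy sketch (my own framing, not a claim about how you have to set it up) of how the two assumptions, as I understand them, treat a choice between a one-observer world and a 10¹¹-observer world:

```python
# Toy sketch: two candidate worlds, equal prior over which one is actual.
#   world A: 1 observer;  world B: 10**11 + 1 observers.
n_a, n_b = 1, 10**11 + 1
prior_a, prior_b = 0.5, 0.5

# SSA: reason as a random sample from the observers who actually exist in
# whichever world is actual; merely finding yourself existing as an observer
# does not favor either world.
ssa_post_a = prior_a
print("SSA: P(world A | I exist as an observer) =", ssa_post_a)   # 0.5

# SIA: weight each candidate world by how many observers it contains.
sia_post_a = (prior_a * n_a) / (prior_a * n_a + prior_b * n_b)
print("SIA: P(world A | I exist as an observer) =", sia_post_a)   # ~1e-11
```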
This is an interesting observation which may well be true, I’m not sure, but the more intuitive difference is that SSA is about actually existing observers, while SIA is about potentially existing observers. In other words, if you are reasoning about possible realities in the so-called “multiverse of possibilities,” then you are using SIA. Whereas if you are only considering a single reality (e.g., the non-simulated world) and you select a reference class from that reality (e.g., humans), you may choose to use SSA to say that you are a random observer from that class (e.g., a random human in human history).
I guess the word “reality” is kind of ambiguous, and maybe that’s why we’ve been disagreeing for so long.
For example, imagine a scenario where we have 1) a non-simulated base world (let’s say 10¹² observers in it) AND 2) a simulated world with 10¹¹ observers AND 3) a simulated world with 1 observer. All three worlds actually, concretely exist. People from world #1 just decided to run two simulations (#2 and #3). Surely, in this scenario, as per SSA, I can say that I am a randomly selected observer from the set of all observers. As far as I see, this “set of all observers” would include 10¹² + 10¹¹ + 1 observers, because all of these observers actually exist and I could’ve been born as any one of them.
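Counting it out explicitly for that three-world scenario (a direct sketch of the SSA-style tally, nothing beyond the numbers above):

```python
# The three worlds from the scenario, all assumed to actually exist.
observers = {
    "base_world": 10**12,   # non-simulated base reality
    "sim_big":    10**11,   # simulation with many observers
    "sim_lone":   1,        # simulation with a single observer
}
total = sum(observers.values())
for world, n in observers.items():
    print(f"P(I'm in {world}) = {n / total:.3e}")
# P(sim_lone) ~ 9.1e-13: being the lone simulated observer is overwhelmingly unlikely.
```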
Edit 1: I noticed that you edited one of your replies to include this:
I don’t actually think this is true. My reasoning only really says that we are most likely to exist in the world with the most observers as compared to other actual worlds, not other possible worlds.
The most you can get out of this is that, conditional on a simulation with infinite observers existing, we are most likely in that simulation. However, because of the weirdness of actual infinity, because of the abysmal computational costs (it’s one thing to simulate billions of observers and another to simulate an infinity of observers), and because it is probably physically impossible, I put an incredibly low prior on a simulation with infinite observers actually existing. And if it doesn’t exist, then we are not in it.
Edit 2: You don’t even need to posit a 10¹¹ simulation for it to be unlikely that you are in an “only one observer” simulation. It is enough that the non-simulated world has multiple observers. To illustrate what I mean, imagine that a society in a non-simulated world with 10¹² observers decides to make a simulation with only 1 observer. The odds are overwhelming that you’d be among 10¹² mundane, non-distinct observers in the non-simulated world.
The answer is yes, trivially, because under a wide enough conception of computation, basically everything is simulatable, so everything is evidence for the simulation hypothesis because it includes effectively everything.
It will not help you infer anything else though.
More below:
http://www.amirrorclear.net/academic/ideas/simulation/index.html
https://arxiv.org/abs/1806.08747
In a large universe, you, and everyone else, exists both in and not in simulations. That is: The pattern you identify with exists in both basement reality (in many places) and also in simulations (in many places).
There is a question of what proportion of the you-patterns exist in basement reality, but it has a slightly different flavour, I think. It seems to trigger some deep evolved patterns (around fakeness?) less than the kind of existential fear that simulations under the naive conception of identity sometimes bring up.
But to answer that question: Maybe simulators tend to prefer “flat” simulations, where the entire system is simulated evenly to avoid divergence from the physical system it’s trying to gather information about. Maybe your unique characteristic is the kind of thing that makes you more likely to be simulated in higher fidelity than the average human, and simulators prefer uneven simulations. Or maybe it’s unusual but not particularly relevant for tactical simulations of what emerges from the intelligence explosion (which is probably where the majority of the simulation compute goes).
But, either way, that update is probably pretty small compared to the background high rate of simulations of “humans around at the time of the singularity”. Bostrom’s paper covers the general argument for simulations generally outnumbering basement reality due to ancestor simulations: https://simulation-argument.com/simulation.pdf
However, even granting all of the background assumptions that go into this: not all observers who are you live in a simulation. You exist in both types of places. Simulations don’t reduce your weight in the basement reality; they can only give you more places in which you exist.
Why are you so sure it’s a computer simulation? How do you know it’s not a drug trip? A fever dream? An unfathomable organism plugging its senses into some kind of pseudo-random pattern generator (relative to its particular phenomenology), from which it hallucinates or infers the experience of OP?
How could we falsify the simulation hypothesis?
From the way things sure seem to look, the universe is very big, and has room for lots of computations later on. A bunch of plausible rollouts involve some small fraction of those very large resources going on simulations.
You can, if you want, abandon all epistemic hope and have a very very wide prior. Maybe we’re totally wrong about everything! Maybe we’re Boltzmann brains! But that’s not super informative or helpful, so we look around us and extrapolate assuming that’s a reasonable thing to do, because we ain’t got anything else we can do.
Simulations are very compatible with that. The other examples aren’t so much, if you look up close and have some model of what those things are like and do.
I don’t understand how the assumption that we are living in a simulation so convincing as to be indistinguishable from a non-simulation is any more useful than the Boltzmann brain, or a brain in a vat, or a psychedelic trip, or that we’re all just the fantasy of the boy at the end of St. Elsewhere: by virtue of being a convincing simulation, it has no characteristic that would let us distinguish it from a non-simulation. In fact, some of those others would be more useful if true, because they would point to phenomena which would better explain the world.
How are the other examples not compatible? What fact could only be true in a simulation but not in a psychedelically induced hallucination? Or a fever dream? What do you mean by “look up close”? Close to what, exactly?