Simulation theology: a practical aspect.
In this post, I elaborate a little on the simulation argument by Nick Bostrom and discuss what practical, observable consequences it may have. The paper is excellent, but in case you haven't read it, I write everything in a self-contained manner, so you should be able to get the point without it.
Consider a highly advanced technological civilization (call it a parent civilization) that can create other civilizations (child civilizations). This can be done in multiple ways. Bostrom focuses only on computer simulation of other civilizations, which is most likely the easiest way to do it. It is also possible to terraform a planet and populate it with humans (for simplicity, let's call the agents in the parent civilization humans), effectively placing it in the period just before written history. Finally, one may consider an intermediate solution, something like the movie "The Matrix", with all the people perceiving not actual reality but an artificial one. A parent civilization may itself have a parent. If a civilization does not have a parent, let's call it an orphan.
In principle, a parent civilization can create child civilizations similar to its own earlier stages of development. One may then ask: how can you tell whether you are a child civilization or an orphan? In the case of computer simulation, one could argue that simulated beings have qualitatively different perceptions, or no perception at all. However, for the other two types of child civilizations (other planet and Matrix), the humans are the same as in the parent civilization and should have the same kind of perception. Thus, in the earlier stages of history, it may not be possible to distinguish being a child from being an orphan, provided the parent aims to conceal its existence.
However, with technological development, some progress on this question can be made. I am not talking about distinguishing "fake reality" from "real reality": if the parent is much more technologically advanced, it should be able to make them indistinguishable for the child. I am talking about reaching the stage of becoming a parent yourself. Then one can create many children that are copies of one's own earlier stages of development and see what fraction of them reach the parent stage in turn (so the civilization exploring the question becomes a grandparent). If this fraction is not infinitesimal, and the resources of the grandparent are large enough, the number of children who became parents will be large. Then, by the Copernican principle, the probability that the grandparent civilization is itself someone's child is significantly higher than the probability that it is an orphan. You can find more detailed argumentation in the original paper.
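To make the counting explicit, here is a minimal back-of-the-envelope version of this argument; the symbols $f$ and $N$ are my own illustrative notation, not taken from Bostrom's paper. If a fraction $f$ of civilizations reach the parent stage, and each parent creates on average $N$ children resembling its own early history, then among civilizations at our stage of development, children outnumber orphans roughly $fN$ to $1$:

$$P(\text{child}) \approx \frac{fN}{fN + 1}.$$

Even for a modest $f = 0.01$ and $N = 10^4$, this gives $P(\text{child}) \approx 0.99$.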
Can a civilization infer anything about its parent? In some sense, yes. As we saw, a large share of the children will be copies of the parent civilization at its early stages: this is necessary for answering the question of whether a civilization is a child or an orphan. Of course, some children will be different, but those do not need to be produced in large numbers. The process will in some sense resemble evolution, with children resembling their parents except for mutations. Of course, some civilizations will not follow this pattern and will generate arbitrary children. However, pattern transmission (making children like oneself) is a stable attractor: once a civilization that prefers to make children like itself appears, this preference is preserved across generations. Thus, it is safe to assume that the parent civilization resembles its child; it is not guaranteed, but it is likely.
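As a toy illustration of why pattern transmission is an attractor, here is a minimal sketch (my own model with made-up parameters, not anything from the post or from Bostrom): civilizations that copy their own strategy into their children come to dominate the population even when they start as a tiny minority, because copying is the only strategy that reproduces itself reliably.

```python
import random

def step(population):
    """One generation: every civilization creates two children.

    "copier" civilizations pass their strategy on exactly;
    "random" civilizations give their children arbitrary strategies.
    """
    children = []
    for strategy in population:
        for _ in range(2):
            if strategy == "copier":
                children.append("copier")  # pattern transmission
            else:
                children.append(random.choice(["copier", "random"]))
    return children

# Start with a single copier among 99 arbitrary-strategy civilizations.
population = ["random"] * 99 + ["copier"]
for _ in range(12):
    population = step(population)

share = population.count("copier") / len(population)
print(f"share of copiers after 12 generations: {share:.3f}")  # -> close to 1.0
```

The only fixed point of this process is a population of copiers, which is why assuming that one's parent resembles oneself is a reasonable default.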
Now we can turn to ourselves. What is the likelihood that we will one day start to create child civilizations? Is it infinitesimal? I don't think so. It may be small (the Precipice is near), but I think there is a nonzero chance we will be able to avoid it. If that is the case, then we are very likely to be a child of a civilization that is, in some sense, Future-We: we after avoiding the Precipice, we after the Long Reflection.
Should the parent civilization create a child and then avoid interacting with it, for a proper study of the simulation argument? Not necessarily. Interaction is allowed if the child civilization cannot observe it (or observes it but treats it as natural phenomena, ancient myths, etc.). That is, as long as the scientific community in the child civilization cannot say that this or that is an interaction with another civilization, the parent is safe from being discovered. Such interaction may increase or decrease the chance that the child civilization reaches the parenting stage itself, but since the parent civilization has no information about whether its own parents (if they exist) interact with it, it should certainly explore multiple options.
Future-We can also interact with us. If our morals do not change completely, Future-We will be biased toward benevolent interactions rather than negative ones; that is, they will try to improve our well-being and decrease suffering wherever it is possible without revealing themselves. It is quite easy to see that the best cover for them would be a religion: it allows them to do a lot while all witnesses remain biased and untrustworthy, so scientists will just move on and not put much weight on it.
And now, finally, the practical aspect that I promised. I want my well-being improved, so can Future-We help me with it? Why not? Maybe if I just ask (since Future-We would play the role of God in a religion, one may say: pray), there will be some help? It will not prove that Future-We exist, because everyone around can just say that it is a placebo or a coincidence. However, the placebo will work only as long as I believe in it, which is why I needed all the theoretical part above. So I tried, and I do feel quite an improvement, which is the most important result of this theoretical construction for me.
“It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way—without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning.”
From here
The ability to change the probability of future events in favor of your wishes is not proof of simulation, because there is a non-simulation alternative where it is also possible.
Imagine that natural selection in the quantum multiverse worked in the direction of favoring the survival of beings capable of influencing probabilities in their favor. Even the slightest ability to affect probabilities would give an enormous increase in measure, so anthropics favors you being in such a world, and this anthropic shift may be even stronger than the shift in the simulation direction.
In that case, it is perfectly reasonable to expect that your wishes (in your subjective timeline) will have a higher probability of being fulfilled. A technological example of such a probability shift was discussed in The Anthropic Trilemma by EY.
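To see how strong this amplification can be, here is a worked example with my own illustrative numbers (not from the comment above). Suppose a lineage that can bend probabilities survives each generation with probability $p(1+\varepsilon)$ instead of $p$. After $n$ generations, its measure relative to a baseline lineage is

$$\left(\frac{p(1+\varepsilon)}{p}\right)^{n} = (1+\varepsilon)^{n},$$

so even $\varepsilon = 0.01$ over $n = 1000$ generations yields a factor of roughly $2 \times 10^{4}$.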
That is exactly my point: there should be no proof of simulation unless the simulators want it. Namely, there should be no observable (for us) difference between a universe governed simply by the laws of Nature and one with intervention from the simulators. We can't look at any effect and say: this happens, therefore we are in a simulation.
The point was the opposite. Assume we are in a simulation with benevolent simulators (which, according to what I wrote in the theoretical part of the post, is highly likely). What can they do so that we are still unable to classify their intervention as something outside the laws of nature, while our well-being is nevertheless improved? What are the practical consequences of this for us?
By the way, we do not even have to require the ability to change probabilities. Just the placebo effect is good enough. Consider a person who was suffering from depression, or addiction, or akrasia, and is now much better. Can a strong placebo (like a very strong religious experience) do that? Well, yes, there have been multiple such cases. Does it improve well-being? Certainly yes. So the practical point is that if such an intervention, masquerading as a placebo, can help, it is certainly worth trying. Of course, one can say that I am just tricking myself into believing it and then the placebo simply works, but the point is that I have reasons to believe in it (see the theoretical part), and this is what makes the placebo work.
Thank you for directing my attention to the post; I will certainly read it.
A placebo could work because it has some evolutionary fitness, like the ability to suppress pain when activity is needed.
Benevolent simulators could impose an upper limit on subjectively perceived pain, for example by turning off the qualia of pain while the screaming continues. This would be scientifically unobservable.
Of course, the placebo effect is useful from the evolutionary point of view, and it is the subject of quite a lot of research. (The main idea: it is energetically costly to keep your immune system always on high alert, so you boost it at particular moments correlated with pleasure, usually from eating/drinking/sex, which is when germs usually enter the body. If you are interested, I will find the link to the research paper where this is discussed.)
I am afraid I have still failed to explain what I mean. I am not trying to deduce from observation that we are in a simulation; I don't think that is possible (unless the simulators decide to allow it).
I am trying to see how the belief that we are in a simulation with benevolent simulators can change my subjective experience. Notice that I can't just trick myself into believing something merely because it is healthy to believe it. This is why I needed all the theory above: to show that benevolent simulators are indeed highly likely. Then, and only then, can I hope for the placebo effect (or for a real intervention masquerading as the placebo effect), because now I believe that it may work. If I could simply make myself believe whatever I needed, of course I would not need all these shenanigans; but after being a faithful LW reader for a while, that is really hard, if possible at all.
Ok. But what if there are other, more effective methods of starting to believe in things which are known to be false? For example, hypnosis is effective for some.
Hmmm, but I am not saying that the benevolent-simulators hypothesis is false and that I just choose to believe in it because it brings a positive effect. Rather the opposite: I think that benevolent simulators are highly likely (more than a 50% chance). So it is not a method of "believing in things which are known to be false". It is rather an argument for why they are likely to be true (of course, I may be wrong somewhere in this argument, so if you find an error, I will appreciate it).
In general, I don’t think people here want to believe false things.