Theodicy and the simulation hypothesis, or: The problem of simulator evil

Philosophy Bear here. At the moment I’m composing an anthology of all the work I’ve done on the topic of AI. As I edit those works for the anthology, I thought it would be a good idea to simultaneously crosspost them here, as I’ve never shared any of them on Less Wrong before. The version I’ve posted as text is edited (improved) from the version at the attached link. I’ll be posting the book at my Philosophy Bear Substack at some point.
I’ve been going through Chalmers’s book Reality+. It’s a good refresher on some of the more interesting implications of simulation theory and he has some fascinating new takes as well. I noticed that he’d come to many similar conclusions to me on a variety of topics, so I figured I’d best get what remains of my thinking on these topics into print as quickly as possible :-).
In particular, I wanted to home in on a question- a kind of modern update on the problem of evil. If we are in a simulation, does it follow that our simulators are bad people?
A brief summary of the argument we’re in a simulation
Readers who are already aware of the simulation argument can skip this:
Why think we might be in a simulation? This is my version of the argument, which draws elements from both Bostrom & Chalmers. It’s a little closer to Bostrom than Chalmers because I find Bostrom’s version more persuasive for reasons I won’t get into here. My version of the argument is not as technically complete or comprehensive as it could be, because it is designed to be accessible. Nonetheless, it is, I think, in essence, right, at least on the basis of the evidence available to us at the moment.
1. What it “feels like” to be in a simulation is the same as what it feels like to be outside a simulation. Two people in the same situation (but one simulated) with the same past (but one simulated) will have the exact same experiences.
2. If humans survive the next few hundred years (at the most), human nature being what it is, it seems likely we will create many simulations, including simulations of humans. These will include simulations of our past- before we gained the capacity to create detailed simulations. Call these “ancestor simulations”.
3. The capacity to create simulations is abundant- potential computational power is vast. Our curiosity and desire for entertainment are also abundant. It is therefore likely that, if we start creating ancestor simulations, we will create a vast number of such simulations of our history, containing many times more simulated people than the number of people who have ever actually existed.
4. Since, by (1), we have no other evidence that would discriminate whether we are in a simulation, we need to fall back on the baseline probabilities.
5. By (2) and (3), the baseline probability that we are in a simulation is higher than the baseline probability that we are not.
Ergo, we are probably in a simulation. Note that this is not Descartes’ classical argument that we may be being deceived by a demon, inasmuch as Descartes sought merely to show that it is possible that we are in an illusory world created by a demon, whereas this argument attempts to give us positive reason to think that it is probable that we are in a simulation.
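To make the step from (4) and (5) to the conclusion a little more concrete, here is a toy version of the underlying indifference reasoning. The 1,000-to-1 ratio is purely an illustrative assumption, not part of the argument:

```python
# Toy sketch of the indifference reasoning behind the simulation argument.
# The 1000:1 ratio of simulated to non-simulated people is an illustrative
# assumption; the argument only needs the ratio to be large.
n_real = 1        # non-simulated people (normalized)
n_sim = 1000      # simulated people per non-simulated person, per (2) & (3)

# By (1), our evidence can't distinguish the two cases, so we fall back on
# sheer numbers: the chance we're simulated tracks the fraction of people
# who are simulated.
p_simulated = n_sim / (n_sim + n_real)
print(f"P(we are simulated) ≈ {p_simulated:.3f}")   # ≈ 0.999
```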
Chalmers on the case that our simulators are divine
As Chalmers notes, simulation theory has been called the most interesting new argument for theism of modern times. If we are in a simulation, then our simulators are:
● Our creators
● Enormously powerful with respect to us.
● At least capable of being enormously knowledgeable about our lives, even if they don’t choose to exercise that capacity.
These features can be seen as corresponding to traditional divine attributes. God(s) are generally thought to be creators and immensely powerful. Many, though not all, traditions hold that God(s) know all things or at least a vast amount. Thus the simulation argument can be seen as generating a kind of limited theism.
Our simulators have other interesting features as well in this regard- for example, being outside time and space with respect to our simulation, corresponding to Boethian concepts of deity.
The problem of simulator theodicy
But there’s another divine attribute, particularly important in the Abrahamic religions (though not only in those): omnibenevolence. It’s far from clear that, if the simulation argument is true, our simulators are omnibenevolent. In fact, you might worry that they are evil- or perhaps somehow beyond good and evil (which is to say, in practical terms, evil). There are two arguments one might use to derive the conclusion that our simulators are evil:
The argument from suffering (and the absence of bliss): this world is filled with suffering. A good simulator would not create beings that suffer, and would create beings that experience more bliss than us. Note that this can be extended to other evils besides suffering- for example, a lack of freedom.
The argument from deception: a good simulator would not deceive. This world, in some sense, tends to deceive us into thinking that we are not simulated; ergo, our simulators have created a deceptive world.
Our question then is: suppose our world is a simulation. Is the way the world is compatible with our simulators being good people who have made the world this way deliberately?
By good person, I don’t necessarily mean anything particularly demanding. Certainly not omnibenevolent. Perhaps the best definition of what I mean in this context is:
A good person is a person who does not cause substantial harm to others without a justification strong enough to excuse that harm.
A lot of this is going to come down to divergent values. My personal sense is that the argument from deception is relatively weak- ceteris paribus our simulators would owe us the knowledge we are in a simulation, but even a relatively modest justification could get them off the hook for not telling us we’re in a simulation. Thus we’ll focus on the argument from suffering (and other evils).
This is not just an abstract philosophical question. Though we probably cannot do much about it, it is possible that no question matters more. Our simulator could well be omnipotent with respect to us. They could turn us off, create disasters, wipe us from history, or send us to virtual heavens or hells.
Does our simulator owe us any more than a better-than-even lifetime balance of good over bad?
One of the best defenses of our simulator’s moral goodness is to try and lower the bar for goodness as low as possible.
We should take seriously the idea that perhaps all our simulators owe us is more good than evil across our lifespan. One could even lower the bar further, and argue that all they owe us is for humanity as a whole to experience more good than evil across its lifespan- or for the simulation as a whole to generate more good than evil. Suppose you were speaking to your simulator, and had a dialogue with her reminiscent of the Book of Job- accusing her of badly mistreating you.
To this she replied:
“Would you prefer you’d never existed?”
“No, but you could have made things so much better!”
“Yes, but I’m not running a simulation of paradise, I’m running a simulation to find out about something, and having all simulated beings in a state of perpetual bliss would interfere with that. Nonetheless, I’ve taken steps to ensure that all lives in my simulation are worth living [ed: this could be achieved by running only a sparse simulation of the most miserable lives, or perhaps through a simulated afterlife for those who found earthly life worse than not existing at all]. Or, at the very least, I have taken steps to ensure the total experience of the simulated human species is more positive than negative. I get the data I want. You get lives that are worth living- either individually or at least in the aggregate. In what sense can I be said to have wronged you?”
“You could easily make things better, but you choose not to, that’s wrong.”
“I can’t make things better easily. I have a limited computational budget for simulations.”
“Why aren’t you spending your computational budget on creating blissful lives?”
“This simulation is being run for some kind of purpose in my world- perhaps science, perhaps even entertainment- I won’t get into the details. I have the budget I do contingent on meeting that goal. If I just created blissful lives, my funding would be taken away. Thus your choices are non-existence or the lives I give you. On the whole, I think this benefits both of us, and doesn’t make me evil.”
Whether this is an adequate response is going to depend on your ethical views. However, I think it’s clear that there is at least a coherent conception of the good on which what our simulator does in this scenario is defensible. Thus we can’t be sure that our simulator is malign.
Is it immoral to switch off a world, or to permanently terminate a simulated person’s consciousness at death? This depends on whether death is harmful.
One of the more terrifying implications of the simulation hypothesis is the possibility that the simulator could turn it off at any time. An interesting question, then, is whether benign simulators would be obliged not to turn us off- at least not without our consent.
There is an ancient debate in philosophy over whether or not death is a kind of harm. That is to say, if someone dies, is that, in and of itself, harmful for them? The answer to this question will establish whether or not our simulators could count as benign, and still turn us off. Epicurus, for example, thought that death was not harmful. This, I think, is just going to come down to personal intuitions on death and harm. I won’t go through the philosophical arguments here. My sense is that the majority of people, if they thought carefully about it, would come to the conclusion that dying is bad for the deceased.
If our simulators are benign and regard involuntary death as harmful, this has interesting implications beyond the question of whether they can turn the world off as a whole. It would tend to suggest that we could expect that death is not the end, and that the dead are spirited away to some sort of afterlife. Alternatively, perhaps our simulators think that death, while tragic, is necessary for some reason, in a way that justifies their allowing it.
Even if death is not intrinsically harmful, it might be held that dying after an unsatisfactory life that you would be better off never having lived is a sort of harm. Simulators might have a special duty to correct this through an afterlife. A similar argument might be made about premature death- although what counts as “premature” from the point of view of a god-like simulator might be difficult to assess.
Can we know that the various evils we complain about exist?
One thing we need to consider is that if we are in a simulation, our evidential basis for judging our creator is sketchy. Granted, the epistemological and metaphysical issues are complex, as Chalmers discusses, but it seems to me that if we’re in a simulation we can’t be confident, for example, that the past of that simulation happened the way it appears to have happened.
Any given awful experience that you might hold against your simulator might never have actually happened. The scope of evils for which the simulator is responsible might be far smaller than it initially seemed (or larger!).
Even the basis of our reasoning is suspect. It could be that inferences that appear plausible to us are the result of manipulation by our simulator. For an omnipotent simulator, how easy would it be to manipulate us so that we all think 2+2=4, when really it equals five?
These kinds of skeptical doubts start tearing up the very bases on which we came to the simulation argument. This leads to an argument that skepticism is self-undermining.
I do tend to think that, past a certain point, skeptical doubts become self-undermining, but theorizing exactly where this point is is difficult. Chalmers quotes one of my favorite philosophical arguments by a physicist, Sean Carroll’s argument that the idea we are Boltzmann brains {one of the most extreme skeptical hypotheses} is self-defeating- I tend to agree with Carroll on this. On the other hand, I’m sure that some philosophers will try to argue that the idea we are in a simulation undermines any evidence we might present for it, and thus that any version of the simulation argument is self-defeating, but I find this implausibly broad.
The truth about where to draw the line- past which doubts become futile and self-undermining- probably lies somewhere between the Boltzmann brain hypothesis and ordinary simulationism. In our inquiry into the moral character of our simulators, I see little option but to proceed on the basis that, while our world may be simulated, things happen in the simulation broadly as they appear to, while expanding the error bars around our conclusions.
What if we live in an ethically driven project- Diversity Utilitarianism
Another possibility that we need to consider is that if we are in a simulation, we may be in an ethically driven project. By “ethically driven project” I mean a project that exists for our own good, and/or the good of humanity. So long as our simulators have similar ethical values to us (a big if), this would be a fantastic outcome. There are many different possible ethical projects we could be a part of; in the next two sections I’ll consider two of them.
But would our simulators put us through pain and suffering if they are working for our own good?
Suppose I gave you vast, though not unlimited, computing power and put you in an otherwise empty universe. What would you do? If you’re anything like me, you’d want to create numerous beings and let them live blissful lives. Perhaps humans, because we’re biased.
You might also feel like these beings have to be genuinely distinct from each other, and live varied lives. A vast number of copies of a being experiencing a single blissful moment over and over would be unsatisfactory.
Call this position diversity utilitarianism. A diversity utilitarian holds that total value is equal to the sum of the utility of individuals. However, this value is diversity-weighted in some way. If there are two beings, Don & Nod, and they are quite distinct from each other, total value equals the sum of their utilities. If they are identical, total value is perhaps equal to half their summed happiness, or just a little over half. If they are very similar, but not identical, perhaps there is some penalty to how much their aggregated utility is worth.
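To make the idea a touch more concrete, here is one crude way a diversity weighting could work- offered purely as an illustrative sketch, not as the scheme an actual diversity utilitarian (or simulator) would have to use:

```python
# A toy diversity-weighted aggregation, using the Don & Nod example.
# The specific weighting (1 - similarity) is an illustrative assumption.

def diversity_weighted_total(utilities, similarities):
    """utilities: each person's individual utility.
    similarities: for each person, how similar they are (0..1) to the people
    already counted; 0 = fully distinct, 1 = an exact duplicate."""
    total = 0.0
    for u, s in zip(utilities, similarities):
        total += (1.0 - s) * u   # duplicates add little or nothing extra
    return total

# Don and Nod, each with utility 10:
print(diversity_weighted_total([10, 10], [0.0, 0.0]))  # fully distinct -> 20.0
print(diversity_weighted_total([10, 10], [0.0, 1.0]))  # Nod a copy of Don -> 10.0,
# i.e. half their summed happiness; a small floor on the weight would give
# "a little over half", as in the text.
```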
Personally, I find diversity utilitarianism plausible, at least in so far as tiling the universe with identical simulated people experiencing bliss doesn’t sound that attractive. If our simulator is a diversity utilitarian- or something similar- they will need to generate not just as much bliss as possible, but diverse bliss.
How do you create numerous different humans, genuinely distinct from each other? Well, it’s possible that the most efficient way, or possibly even the only feasible way, to create a human personality- especially a range of different personalities- is to simulate the biological and social processes of human life. Our world could thus be a diversity-utilitarian people-generating ground.
But why not generate these future citizens of blisstopia in a blissful world? If you want the humans you create to be diverse, just raise them in diverse blissful worlds. Tolstoy said that all happy families are alike and it’s the unhappy ones that differ, but surely, Tolstoy aside, there are uncountable possible utopias.
I grant that, if you’re motivated by the ethical goal of increasing total human flourishing, you’d start by creating blissful lives. But a posthuman civilization might have vast computational power- so much that they could simulate all sufficiently psychologically distinct beings that grew up in blissful conditions. Thus they might turn to simulating people who grew up in less than blissful conditions. After those people died, or reached a certain age, or something like that, they would be harvested out of the simulation and set up in a nice afterlife.
In other words, if this speculation is correct, we are the product of an attempt to balance psychological diversity with psychological bliss, after the low-hanging fruit of people raised in utopias has been exhausted.
That scenario probably sounds absurd, or like wishful thinking, but it first occurred to me not when thinking about this problem, but when thinking about what I’d do if you gave me vast computational power. It has a degree of independent plausibility.
What if we live in an ethically driven project- Nikolai Fyodorovich Fyodorovism
Nikolai Fyodorovich Fyodorov is my favorite non-Marxist Russian philosopher. Nikolai believed that the greatest source of alienation in our lives is the alienation of the living from the dead. We are cut off from ancestors and friends alike by that dread scythe. Nikolai, however, had a can-do attitude. Where a lesser, perhaps saner, philosopher would simply bemoan the tragedy of death, he proposed its abolition. But he went beyond the normal transhumanist desire to eliminate death- for he wanted to eliminate it retrospectively. Nikolai wanted to raise everyone who had ever died from the dead. Another reason you might simulate people with less than blissful lives, then, is if you wanted to complete Fyodorov’s universal resurrection project: you wanted to recreate every human who had ever lived because you thought you had a duty to resurrect the dead. Since historical information is partial, in order to be sure of creating a good psychological approximation of everyone, you’d have to make a vast array of attempts. Certainly, there is enough mass and energy for a vast number of attempts, although just how many is a little unclear.
And so, on this theodicy, the bad stuff we experience is, in a strange sense, formative. It is necessary to bring us back into being.
Now you might be wondering- in both the Fyodorovism and diversity utilitarianism cases- “couldn’t they just skip the experiences and create people without actually simulating the life history?” The answer may very well be no. It could be that there is no way- or at least no computationally efficient way- of creating the rich personality-memory complexes that are humans without running through a simulation of that personality’s history.
The problem of quantitative theodicy
Scott Alexander presents a kind of theodicy that converges with what we called diversity utilitarianism, but in a non-simulator context. Essentially, God aims to create as much (net) good as possible. First God creates all possible completely good worlds, and then, when he runs out, he creates worlds that have some good and some evil in them.
This makes me wonder. Chalmers claims that there is enough capacity in a kilogram of matter to simulate 100 years of life for 10 billion people. The mass of the galaxy is around 1.5 trillion solar masses, which is roughly 3 × 10^42 kilograms. Is it plausible that, using the mass of the galaxy to create simulations, one would run out of diverse, blissful lives and have to resort to mixed lives like our own?
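To put rough numbers on that question- a back-of-the-envelope sketch that simply takes Chalmers’s per-kilogram figure and the galaxy-mass estimate above at face value:

```python
# Back-of-the-envelope arithmetic; all inputs are the rough figures from the text.
SOLAR_MASS_KG = 2e30                           # approximate mass of the Sun
galaxy_mass_kg = 1.5e12 * SOLAR_MASS_KG        # ~3e42 kg

lives_per_kg = 1e10           # Chalmers: 100 years of life for 10 billion people per kg
century_lives = galaxy_mass_kg * lives_per_kg  # ~3e52 hundred-year lives

print(f"{century_lives:.0e} simulated hundred-year lives")   # ~3e+52
```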
Now theodicy is reduced to a strange sort of maths problem, albeit an insoluble one, since we do not have any quantitative sense of how much diversity is required, or a way to quantify diversity.
We also don’t really know how much matter our simulators have. Perhaps they have far more than a galaxy’s worth, perhaps they have far less.
Consent theodicy- the virtual contract
Years ago I outlined a consent theodicy. I argued that it’s possible that we consented to live in a world containing evil, or that our creator knew that, in the counterfactual in which we were asked “do you want to live in this world?” and given the full reasons for our living in it, we would say yes. Hence we suffer evil because we have agreed to it. Why would we agree? Well, perhaps because it’s essential for our development in some respect. Obviously, such a consent theodicy can be combined with sim-theism. It is possible that you are in a simulation right now that you agreed to be in*. Alternatively, it is also possible that your simulator would justify their treatment of you on the counterfactual that, if you understood the full situation, you would consent to be in the simulation.
* [Although this raises prickly questions about in what sense the person who agreed to be in the simulation really is you, I think there are at least plausible permutations of the conditions on which this turns out to be true.]
Evidential decision theory and the simulation hypothesis- or why there’s at least a modest case you shouldn’t mistreat sims
Does our consideration of simulator theodicy have any practical implications? Well, an argument can be made that it gives us reason not to create simulations maliciously, or to mistreat the beings within them.
Quoting Wikipedia, evidential decision theory holds that:
“The best action is the one which, conditional on one’s having chosen it, gives one of the best expectations for the outcome.”
Evidential decision theory is controversial. Its most prominent rival is causal decision theory, which holds that you should act in a way that is likely to cause the best outcome. Nonetheless, let’s stick with evidential decision theory for the moment.
Now our world, as we see it, is compatible with a variety of simulators, some of them benign, some of them callously indifferent, some of them actively cruel.
It seems quite possible that our simulator is what we might term our value function descendant (it may seem paradoxical to hold that our simulators are our descendants, but remember, our earlier argument was that it is plausible that we are an ancestor simulation). A value function descendant of humanity is a being that has roughly our value function, perhaps extrapolated to remove inconsistencies and/or clarified. The argument for this is that, so long as malign AI doesn’t take over the planet, it is likely that the simulations we create and run will be run either by our value function descendants or by artificial intelligence under the control of our value function descendants.
Thus, if it turns out that we mistreat the sims in the simulations we create, the likelihood that we are in a simulation in which we are going to be mistreated goes up. Therefore the action that gives the best expectation of outcome is not to mistreat any sims we create, because it’s reasonably likely that our simulators have similar values to us. If we commit sim abuse, it’s more likely our simulators are willing to commit sim abuse. Thus, according to evidential decision theory, we have a reason not to.
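As a toy illustration of that evidential reasoning- with entirely made-up probabilities and payoffs, since the real ones are unknowable- compare the expected outcome conditional on each policy:

```python
# Toy evidential-decision-theory comparison; all numbers are illustrative only.
# The conditional probabilities encode the assumed correlation between how we
# treat our sims and how beings like us (our possible simulators) treat theirs.

p_abusive_simulator_if_we_abuse = 0.5      # assumed for illustration
p_abusive_simulator_if_we_refrain = 0.1    # assumed for illustration

u_mistreated = -100      # how bad it is for us if our simulator mistreats us
u_left_alone = 0
gain_from_sim_abuse = 5  # whatever we'd gain by mistreating our own sims

def expected_outcome(p_abusive, bonus):
    # EDT: evaluate the expectation conditional on having chosen the action.
    return p_abusive * u_mistreated + (1 - p_abusive) * u_left_alone + bonus

print(expected_outcome(p_abusive_simulator_if_we_abuse, gain_from_sim_abuse))  # -45.0
print(expected_outcome(p_abusive_simulator_if_we_refrain, 0))                  # -10.0
# Conditional on refraining, the expectation is better, so EDT says refrain.
```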
Excursus- if you think our simulators are either humans or the descendants of humans implanted with our values, our probable situation depends on a kind of ethics exam at the end of history
If our simulators are human, or value function descendants of humans- and not aberrant or rogue actors but representatives of their civilization(s)- then there’s a sense in which our simulated humanity will get what it deserves. People like us are choosing our fate in an ethics exam at the end of history: we will have done unto us what we would do unto others.
I’ve long wondered whether the evils of the world reflect mistakes or conflicts of interest. This is why I introduced the language of conflict versus mistake theory all those years ago. The answer of course is both but in a very subtle way, with malice and mistake interpenetrating in a dizzying web.
Suppose that, due to super-intelligent AI, we eliminated the possibility of mistakes. Do you have confidence that faced with genuine knowledge of the consequences of their actions, humans would choose to do the right thing? If yes, then rejoice because our simulators are probably not malicious {assuming humanity is still in charge}. If not, then there’s less comfort to be had.
What about the possibility that, even if humanity as a whole is good, we in particular have the misfortune to be in a simulation run by a rogue evil individual? It’s possible but unlikely; I tend to think there would be only a fraction as many such illegal simulations as legal ones.
More disturbing is the possibility that humanity as a whole is (or was) good, but a clique of evil people managed to “win” history. The priors on these various scenarios are very hard to assess. All we can really do is act as if we aren’t in a simulation, and act so that if sapient beings everywhere in all worlds acted like us, the rate of “bad” simulations would be zero.
Excursus- What would you do if you were powerful?
I think a useful exercise in pondering this stuff- not necessarily in arriving at truth, but in getting a sense of the dizzying scope of possibilities- is to consider what you would do if you were very powerful- say, if I gave you a billion dollars.
Having done that, consider what you would do if you were even more powerful- say, if I gave you the capacities of Superman. What would you do if you were so mighty that you exceeded the power of all governments?
Now, having considered that, let’s up the power level again. Suppose that you were not just mightier than all governments, but also had a super-intelligent AI that would advise you on the best way to achieve your goals- whatever they were. What would you do then? What values would you steer humanity towards?
Now we come to the highest pinnacle. What would you do if I gave you vast computing power- enough to create simulations of whatever you liked- and AI assistance in creating those simulations? What worlds would you create?
Excursus- Some broad value frameworks omnipotent simulators could have
Here’s a smattering of different value systems simulators could subscribe to. Almost any of these value systems, in at least a partial form, can overlap with almost any of the others, and this isn’t a formal classification, but it’s a starting point for discussion. For most of these value functions, I can imagine some possible way that our experience thus far could be compatible with a simulator holding this value function, but I’ll leave thinking it through as an exercise for the reader.
Selfishness- hedonistic type: A simulator of the hedonistic type is dedicated to the satisfaction of their aesthetic, culinary, sensual, and/or sexual appetites. They may, for example, run numerous simulations to try and create the most exquisite and fascinating people to have sex with.
Selfishness- megalomaniac type: A selfish simulator of the megalomaniacal type wants to be worshipped, and to exercise their power according to their own strange whims for self-glorification.
Selfishness- aesthetic type: A selfish simulator of the aesthetic type views the whole universe as an artwork of some sort.
Selfishness- scientific type: A selfish simulator of the scientific type is running the universe to answer some scientific question- regardless of whether it hurts the simulated.
Sadism: The worst possible scenario would be if we were in a simulation created by a sadist. This could come in several different forms- for example, they might be a general sadist, or they might be seeking revenge on a specific person or group, and thus be recreating them in order to torture them.
Liberalism: A simulator of the liberal type wants to give us, above all, freedom of some kind. Exactly what that freedom amounts to will depend on the simulator.
‘Crude’ utilitarianism: A crude utilitarian simulator wants to maximize pleasure, or desire satisfaction or something like that, and so is running simulations to do so. We can be reasonably confident that we are not in such a simulation due to the existence of suffering.
“Diversity utilitarianism”: As described above. A diversity utilitarian wants to maximize utility minus disutility. However, they weigh repetitive good experiences or good lives as worth less than non-repetitive good experiences or lives.
Humanism: A humanistic simulator sees its primary goal as the flourishing of people. It’s a eudaemonist. Freedom and happiness, at least to some degree, are likely both parts of this goal, but neither is the full object. A humanistic simulator might need diversity for similar reasons the diversity utilitarian does- e.g. a flourishing life counts for less if it is a copy of an already existing one.
Fyodorovian: As above, a project to resurrect the dead.
Tribalism: A tribalist simulator is like a selfish one, but they dedicate themselves to a group, rather than just themselves. We are sadly not in the group.
Social Darwinist/Nietzschean: A social Darwinist simulator wants to create strong creatures, for some value of strong, even if it requires great suffering.
Primitivism: The primitivist simulator is leery of technology, and wishes to constrain it. This might sound like a bizarre or unlikely view for a simulator to take, but although I do not agree, I think it makes a certain sense. In the novel Consider Phlebas, by the late Iain Banks, the Idirans fight a war against the Culture because they view the Culture as devoid of human agency- AI does everything. “Human” (organic sapient) striving and struggle is necessary for a meaningful existence, argue the Idirans. We can imagine a primitivist simulator who has put us in our world- just before the invention of artificial intelligence that can take over human functioning- for exactly this reason.
Moralism: A moralistic simulator wants to create good worlds, but their idea of goodness is laden with ideas that some might consider outmoded. Drugs are bad, promiscuity is bad, everyone must worship God, that sort of thing. It’s hard for me to see how our world is compatible with that, unless it’s combined with other factors (e.g., people must choose morality “of their own free will”).
Radical aporia
I’d like to give a personal coda to all this simulation stuff, building on the brief discussion of skepticism, and branching out from there.
How are we meant to think about cosmology, and on a more personal level, the meaning and value of our lives, in light of the simulation argument? We face both radical uncertainty about whether we are in a simulation and radical uncertainty about the implications if we are in one. For example, what is the risk of being turned off? What does the future hold for us, given that we don’t know the purpose of the simulation? Does life end at death, or do our simulators continue us on? If we are in a simulation, how can we be sure the past happened anything like the way we remember it, given that our simulators could just tweak our memories? But if we go down this road, how can we know anything about our situation, including the things that led us to posit we’re probably in a simulation in the first place? How can we even trust our own a priori reasoning, given that it would be trivial to interfere with that?
This all reminds me of Neurath’s boat. As Neurath put it:
“We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood the ship can be shaped entirely anew, but only by gradual reconstruction.”
In truth, we’ve never known our own situation in the grander scheme of things. A lot of us thought that we had it figured out with a kind of vague, cosmological materialism, but we never had the full picture filled in on that story. There were always questions about the standard 19th-century materialist framework- the mystery of qualia (as Chalmers of all people has pushed), the Fermi paradox, etc.
Even the idea we are in a simulation only represents a guess given our current level of technology. Who knows what stuff we’ll be pondering with the technology, social structure and speculative philosophy of the future? The simulation argument seems kind of persuasive with the tech of today, but perhaps the technology of tomorrow will suggest wholly different cosmic possibilities. To put it tautologically, we are conditioned by our conditions- things that seem like very good arguments to us now might seem like poor arguments in the future. Things that seem like poor arguments now, or that haven’t even occurred to us, might seem compelling in the future.
In other words, I’m urging you to apply the skeptical meta-induction to speculative metaphysics. If it has power in the realm of science, how much more so in philosophy. Given how unstable our ideas have proven, not just about our cosmic situation, but even about what the possible alternatives are, we know nothing. We can’t trust the simulation argument, can’t trust the opposite, can’t really trust anything.
So we don’t know where we are in the logical space of possible worlds, not even approximately, and as best I can tell we have no way of figuring it out. The only way to cope is to accept that you don’t know, and will very possibly never know, even the basics of your situation. Having accepted this, resolve to live by your values in a way that carries meaning even in an absurd and unknowable space of possibilities.
In an increasingly bizarre world, the thought that no one has ever proven it’s not going to turn out alright can be a source of comfort. We’re swimming over an abyss on a black night, and it’s natural to worry a Leviathan might be rushing up to devour us. That’s possible, but hands might be rising up to cradle us as well. Who the fuck knows?
Theodicy and the simulation hypothesis, or: The problem of simulator evil
Link post
Philosophy Bear here. At the moment I’m composing an anthology of all the work I’ve done on the topic of AI. Simultaneously, as I edit those works for the anthology, I thought it would be a good idea to crosspost the here, as I’ve never shared any of them on less wrong before. The version I’ve posted as text is edited (improved) from the version at the attached link. I’ll be posting the book at my Philosophy Bear Substack at some point.
I’ve been going through Chalmers’s book Reality+. It’s a good refresher on some of the more interesting implications of simulation theory and he has some fascinating new takes as well. I noticed that he’d come to many similar conclusions to me on a variety of topics, so I figured I’d best get what remains of my thinking on these topics into print as quickly as possible :-).
In particular, I wanted to hone in on a question- a kind of modern update on the problem of evil. If we are in a simulation, does it follow our simulators are bad people?
A brief summary of the argument we’re in a simulation
Readers who are already aware of the simulation argument can skip this:
Why think we might be in a simulation? This is my version of the argument, which draws elements from both Bostrom & Chalmers. It’s a little closer to Bostrom than Chalmers because I find Bostrom’s version more persuasive for reasons I won’t get into here. My version of the argument is not as technically complete or comprehensive as it could be, because it is designed to be accessible. Nonetheless, it is, I think, in essence, right, at least on the basis of the evidence available to us at the moment.
1.What it “feels like” to be in a simulation is the same as what it feels like to be outside a simulation. Two people in the same situation (but one simulated) with the same past (but one simulated) will have the exact same experiences.
2. If humans survive the next few hundred years (at the most), human nature being what it is, it seems likely we will create many simulations, including simulations of humans. These will include simulations of our past- before we gained the capacity to create detailed simulations. Call these “ancestor simulations”.
3. The capacity to create simulations is abundant- potential computational power is vast. Our curiosity and desire for entertainment is also abundant. It is therefore likely that, if we start creating ancestor simulations, we will create a vast number of such simulations of our history, many times the number of simulated people than the number of people who ever existed.
Since by (1) we have no other evidence that would discriminate whether we are in a simulation, we need to fall back on the baseline probabilities.
By (2 & 3) the baseline probability that we are in a simulation is higher than the baseline probability that we are not in a simulation,
Ergo we are probably in a simulation. Note that this is not Descartes classical argument that we may be being deceived by a demon, insomuch as Descartes sought merely to show that it is possible that we are in an illusory world created by a demon, whereas this argument attempts to give us positive reason to think that it is probable that we are in a simulation.
Chalmers on the case that our simulators are divine
As Chalmers notes, simulation theory. has been called the most interesting new argument for theism of modern times. If we are in a simulation, then our simulators are:
● Our creators
● Enormously powerful with respect to us.
● Have at least the capacity to be enormously knowledgeable about our lives, even
if they don’t choose to exercise it.
These features can be seen as corresponding to traditional divine attributes. God(s) are generally thought to be creators and immensely powerful. Many, though not all, traditions hold that God(s) know all things or at least a vast amount. Thus the simulation argument can be seen as generating a kind of limited theism.
Our simulators have other interesting features as well in this regard- for example, being outside time and space with respect to our simulation, corresponding to Boethian concepts of deity.
The problem of simulator theodicy
But there’s another divine attribute, particularly important in the Abrahamic religions (though not only those), the attribute of omnibenevolence. It’s far from clear that if the simulation argument is true, our simulators are omnibenevolent. In fact, you might worry they are evil- or perhaps somehow beyond good and evil (which is to say, in practical terms, evil). There are two arguments one might use to derive the conclusion that our simulators are evil:
The argument from suffering (and the absence of bliss). This world is filled with suffering. A good simulator would not create beings that suffer and would create beings that experience more bliss than us. Note that this can be extended to other evils besides suffering- for example, a lack of freedom.
The argument from deception, a good simulator would not deceive. This world, in some sense, tends to deceive us into thinking that we are not simulated, ergo, our simulators have created a deceptive world.
Our question then is: suppose our world is a simulation. Is the way the world is compatible with our simulators being good people who have made the world this way deliberately?
By good person, I don’t necessarily mean anything particularly demanding. Certainly not omnibenevolent. Perhaps the best definition of what I mean in this context is:
A good person is a person who does not cause substantial harm to others without a justification strong enough to excuse that harm.
A lot of this is going to come down to divergent values. My personal sense is that the argument from deception is relatively weak- ceteris paribus our simulators would owe us the knowledge we are in a simulation, but even a relatively modest justification could get them off the hook for not telling us we’re in a simulation. Thus we’ll focus on the argument from suffering (and other evils).
This is not just an abstract philosophical question. Though we probably cannot do much about it, it is possible that no question matters more. Our simulator could well be omnipotent with respect to us. They could turn us off, create disasters, wipe us from history, or send us to virtual heavens or hells.
Does our simulator owe us any more than a greater than even lifetime balance of good over bad?
One of the best defenses of our simulator’s moral goodness is to try and lower the bar for goodness as low as possible.
We should take seriously the idea that perhaps all our simulators owe us is more good than evil across our lifespan. One could even lower it further, and argue that all they owe us is for humanity as a whole to experience more good than evil across its lifespan- or for the simulation as a whole generate more good than evil. Suppose you were speaking to your simulator. You had a dialogue with her reminiscent of the book of Job- accusing her of badly mistreating you.
To this she replied:
“Would you prefer you’d never existed?”
“No, but you could have made things so much better!”
“Yes, but I’m not running a simulation of paradise, I’m running a simulation to find out about something, and having all simulated beings in a state of perpetual bliss would interfere with that. Nonetheless, I’ve taken steps to ensure that all lives in my simulation are worth living [ed: this could be achieved by running only a sparse simulation of the most miserable lives, or perhaps through a simulated afterlife for those who found earthly life worse than not existing at all] Or at the very least I have taken steps to ensure the total experience of the simulated human species is more positive than negative. I get the data I want. You get lives that are worth living- either individually or at least in the aggregate. in what sense can I be said to have wronged you?”
“You could easily make things better, but you choose not to, that’s wrong.”
“I can’t make things better easily. I have a limited computational budget for simulations.”
“Why aren’t you spending your computational budget on creating blissful lives?”
“This simulation is being run for some kind of purpose in my world- perhaps science, perhaps even entertainment- I won’t get into the details. I have the budget I do contingently on meeting that goal. If I just created blissful lives my funding would be taken away. Thus your choices are non-existence or the lives I give you. On the whole, I think this benefits both of us, and doesn’t make me evil ”
Whether this is an adequate response is going to depend on your ethical views. However, I think it’s clear that there is at least a coherent conception of the good on which what our simulator does in this scenario is defensible. Thus we can’t be sure that our simulator is malign.
Is it immoral to switch off a world, or to permanently terminate a simulated person’s consciousness at death? This depends on whether death is harmful.
One of the more terrifying implications of the simulation hypothesis is the possibility that the simulator could turn it off at any time. An interesting question then is if our simulators are benign are they be obliged not to turn us off? At least without our consent?
There is an ancient debate in philosophy over whether or not death is a kind of harm. That is to say, if someone dies, is that, in and of itself, harmful for them? The answer to this question will establish whether or not our simulators could count as benign, and still turn us off. Epicurus, for example, thought that death was not harmful. This, I think, is just going to come down to personal intuitions on death and harm. I won’t go through the philosophical arguments here. My sense is that the majority of people if they thought carefully about it, would come to the conclusion that dying is bad for the deceased.
If our simulators are benign and regard involuntary death as harmful, this has interesting implications beyond the question of whether they can turn the world off as a whole. It would tend to suggest that we could expect that death is not the end, and the dead are spirited away to some sort of afterlife. Alternatively perhaps our simulators think that death is, while tragic, necessary for some reason in a way that justifies our simulators allowing it.
Even if death is not intrinsically harmful, it might be held that dying after an unsatisfactory life that you would be better off never having lived is a sort of harm. Simulators might have a special duty to correct this through an afterlife. A similar argument might be made about premature death- although what counts as “premature” from the point of view of a god-like simulator might be difficult to assess.
Can we know that the various evils we complain about exist?
One thing we need to consider is that if we are in a simulation, our evidential basis for judging our creator is sketchy. Granted, the epistemological and metaphysical issues are complex, as Chalmers discusses, but it seems to me that if we’re in a simulation we can’t be confident, for example, that the past of that simulation happened the way it appears to have happened.
Any given awful experience that you might hold against your simulator might have never actually happened. The scope of evils for which the simulator is responsible might be far smaller than it initially seemed (or larger!)
Even the basis of our reasoning is suspect. It could be that inferences that appear plausible to us are the result of manipulation by our simulator. For an omnipotent simulator, how easy would it be to manipulate us so that we all think 2+2=4, when really it equals five?
These kinds of skeptical doubts start tearing up the very bases on which we came to the simulation argument. This leads to an argument that skepticism is self-undermining.
I do tend to think that, past a certain point, skeptical doubts become self-undermining, but theorizing exactly where this point is is difficult. Chalmers quotes one of my favorite philosophical arguments by a physicist, Sean Caroll’s argument that the idea we are Boltzmann brains {one of the most extreme skeptical hypotheses} is self-defeating- I tend to agree with Caroll on this. On the other hand, I’m sure that some philosophers will try to argue that the idea we are in a simulation undermines any evidence we might present for it, thus any version of the simulation argument is self-defeating, but I find this implausibly broad.
The truth of where to draw a line against doubts as futile and self-undermining probably lies somewhere between Boltzmann brain and ordinary simulationism. In our inquiry into the moral character of our simulators, I see little option but to proceed on the basis that, while our world may be simulated, things happen in the simulation broadly as they appear to while expanding the error bars around our conclusions.
What if we live in an ethically driven project- Diversity Utilitarianism
Another possibility that we need to consider is that if we are in a simulation, we may be in an ethically driven project. By “ethically driven project” I mean a project that exists for our own good, and/or the good of humanity. So long as our simulators have similar ethical values to us (a big if) this would be a fantastic outcome. There are many different possible ethical projects we could be a part of, in the next two sections I’ll consider two of them.
But would our simulators put us through pain and suffering if they are working for our own good?
Suppose I gave you vast, though not unlimited, computing power and put you in an otherwise empty universe, what would you do? If you’re anything like me, you’d want to create numerous beings, and let them live blissful lives. Perhaps humans, because we’re biased.
You might also feel like these beings have to be genuinely distinct from each other, and live varied lives. A vast number of copies of a being experiencing a single blissful moment over and over would be unsatisfactory.
Call this position diversity utilitarianism. A diversity utilitarian holds that total value is equal to the sum of the utility of individuals. However, this value is diversity weighted in some way. If there are two beings, Don & Nod, and they are quite distinct from each other, total utility equals the sum of their utilities. If they are identical, total value is maybe equal to half their total happiness, or perhaps just a little over half their total value. If they are very similar, but not identical, perhaps there is some penalty to how much their aggregated utility is worth.
Personally, I find diversity utilitarianism plausible, at least in so far as tiling the universe with identical simulated people experiencing bliss doesn’t sound that attractive. If our simulator is a diversity utilitarian- or something similar- they will need to generate not just as much bliss as possible, but diverse bliss.
How do you create numerous different humans, genuinely distinct from each other? Well, it’s possible that the most efficient way, or possibly even the only feasible way, to create a human personality—especially a range of different personalities- is to simulate the biological and social processes of human life. Our world could thus be a diversity utilitarian people generating ground.
But why not generate these future citizens of blisstopia in a blissful world? If you want the humans you create to be diverse, just raise them in diverse blissful worlds. Chekov said that all happy families are the same, it’s the unhappy ones that are different, but surely Chekov aside, there are uncountable possible utopias.
I grant that, if you’re motivated by the ethical goal of increasing total human flourishing, you’d start by creating blissful lives. But a posthuman civilization might have vast computational power- so much that they could simulate all sufficiently psychologically distinct beings that grew up in blissful conditions. Thus they might turn to simulating people who grew up in less than blissful conditions. After they died, or at a certain age, or something, you’d harvest them out of the simulation and set them up in a nice afterlife.
In other words, if this speculation is correct, we are the product of an attempt to balance psychological diversity with psychological bliss, after the low-hanging fruit of people raised in utopias has been exhausted.
That scenario probably sounds absurd, or wishful thinking, but it first occurred to me not when thinking about this problem, but when thinking about what I’d do if you gave me vast computational power. It has a degree of independent plausibility.
What if we live in an ethically driven project- Nikolai Fyodorovich Fyodorovism
Nikolai Fyodorovich Fyodorov is my favorite non-Marxist Russian philosopher. Nikolai believed that the greatest source of alienation in our lives is the alienation of the living from the dead. We are cut off from ancestors and friends alike by that dread scythe. Nikolai, however, had a can-do attitude. Where a lesser, perhaps saner, philosopher would simply bemoan the tragedy of death, he proposed its abolition. But he went beyond the normal transhuman desire to eliminate death- for he wanted to eliminate it retrospectively. Nikolai wanted to raise everyone who had ever died from the death. Another reason you might simulate people with less than blissful lives is if you wanted to complete Nikolai Fydrov Fydrovich’s universal resurrection project. You wanted to recreate every human that had ever lived because you thought you had a duty to resurrect the dead. Since historical information is partial, in order to be sure of creating a good psychological approximation of everyone, you’d have to make a vast array of attempts. Certainly, there is enough mass and energy for a vast number of attempts, although just how many is a little unclear.
And so, on this theodicy, the bad stuff we experience is in a strange sense, formative. It is necessary to bring us back into being.
Now you might be wondering- in both the Fyodorovism and diversity utilitarianism cases- “couldn’t they just skip the experiences and create people without actually simulating the life history?” The answer may very well be no. It could be that there is no way- or at least no computationally efficient way- of creating the rich personality-memory complexes that are humans without running through a simulation of that personality’s history.
The problem of quantitative theodicy
Scott Alexander presents a kind of Theodicy that converges with what we called diversity utilitarianism but in a non-simulator context. Essentially, God aims to create as much (net) good as possible. First God creates all possible completely good worlds, and then when he runs out he creates worlds that have some good and some evil in them.
This makes me wonder. Chalmers claims that there is enough capacity in a kilogram of matter to simulate 100 years of life for 10 billion people. The mass of the galaxy is 1.5 trillion solar masses, which I think is about 10^40 kilograms. Is it plausible that using the mass of the galaxy to create simulations, one would run out of diverse, blissful lives, and have to resort to mixed lives like our own?
Now theodicy is reduced to a strange sort of maths problem, albeit an insoluble one, since we do not have any quantitative sense of how much diversity is required, or a way to quantify diversity.
We also don’t really know how much matter our simulators have. Perhaps they have far more than a galaxy’s worth, perhaps they have far less.
Consent theodicy- the virtual contract
Years ago I outlined a consent theodicy. I argued that it’s possible that we consented to live in a world with evil, or that our creator knew that in the counter-factual in which we were asked “do you want to live in this world” and the full reasons we were living in this world were given, we would say yes. Hence we suffer evil because we have agreed to it? Why? Well, perhaps because it’s essential for our development in some respect. Obviously, such a consent theodicy can be combined with sim-theism. It is possible that you are in a simulation right now that you agreed to be in*. Alternatively, it is also possible that your simulator would justify their treatment of you on the counterfactual that if you understood the full situation you would consent to be in the simulation. *- [although this raises prickly questions about in what sense the person who agreed to be in the simulation really is you, I think there are at least plausible permutations of the conditions on which this turns out to be true]
Evidential decision theory and the simulation hypothesis- or why there’s at least a modest case you shouldn’t mistreat sims
Does our consideration of simulator theodicy have any practical implications? Well an argument can be made that it gives us reason not to create simulations maliciously, or mistreat them.
Quoting Wikipedia, evidential decision theory holds that:
“The best action is the one which, conditional on one’s having chosen it, gives one of the best expectations for the outcome.”
Evidential decision theory is controversial. Its most prominent rival is causal decision theory, which holds that you should act in a way that is likely to cause the best outcome. Nonetheless, let’s stick with evidential decision theory for the moment.
Now our world, as we see it, is compatible with a variety of simulators, some of them benign, some of them callously indifferent, some of them actively cruel.
It seems quite possible that our simulator is what we might term our value function descendant (it may seem paradoxical to hold that our simulators are our descendants, but remember our earlier argument was that it is plausible that we are an ancestor simulation). A value function descendant of humanity is a being that has roughly our value function but is perhaps extrapolated out to remove inconsistencies and/or clarified. The argument for this is that, so long malign AI doesn’t take over the planet, it is likely that simulations we create and run will be run either by our value function descendants or by artificial intelligence under the control of our value function descendants.
Thus, if it turns out that we mistreat simulations in the simulations we create, the likelihood that we are in a simulation in which we are going to be mistreated goes up. Therefore the action that gives the best expectations of outcome is not to mistreat any sims we create, because it’s reasonably likely that our simulators have similar values to us. If we commit sim abuse, it’s more likely our simulators are willing to commit sim abuse. Thus, according to evidential decision theory, we have a reason not to.
Excursus- if you think our simulators are either humans or the descendants of humans implanted with our values, our probable situation depends on a kind of ethics exam at the end of history
If our simulators are human or value function descendants of humans -and not aberrant or rogue actors but representatives of their civilization(s)-, then there’s a sense in which our simulated humanity will get what it deserves. People like us are choosing our fate in an ethics exam at the end of history, we will have done unto us what we would do unto others.
I’ve long wondered whether the evils of the world reflect mistakes or conflicts of interest. This is why I introduced the language of conflict versus mistake theory all those years ago. The answer of course is both but in a very subtle way, with malice and mistake interpenetrating in a dizzying web.
Suppose that, due to super-intelligent AI, we eliminated the possibility of mistakes. Do you have confidence that, faced with genuine knowledge of the consequences of their actions, humans would choose to do the right thing? If yes, then rejoice, because our simulators are probably not malicious (assuming humanity is still in charge). If not, then there’s less comfort to be had.
What about the worry that, even if humanity as a whole is good, we in particular have had the misfortune to be in a simulation run by a rogue, evil individual? It’s possible but unlikely; I tend to think there would be only a fraction as many such illegal simulations as legal ones.
More disturbing is the possibility that humanity as a whole is (or was) good, but a clique of evil people managed to “win” history. The priors on these various scenarios are very hard to assess. All we can really do is act as if we aren’t in a simulation, and act such that, if sapient beings everywhere in all worlds acted like us, the rate of “bad” simulations would be zero.
Excursus- What would you do if you were powerful?
I think a useful exercise in pondering this stuff- not necessarily for arriving at truth, but for getting a sense of the dizzying scope of possibilities- is to consider what you would do if you were very powerful- say, if I gave you a billion dollars.
Having done that, consider what you would do if you were even more powerful- say, if I gave you the capacities of Superman. What would you do if you were so mighty that you exceeded the power of all governments?
Now, having considered that, let’s up the power level again. Suppose that you were not just mightier than all governments, but also had a super-intelligent AI that would advise you on the best way to achieve your goals, whatever they were. What would you do then? What values would you steer humanity towards?
Now we come to the highest pinnacle. What would you do if I gave you vast computing power- enough to create simulations of whatever you liked- and AI assistance in creating those simulations? What worlds would you create?
Excursus- Some broad value frameworks omnipotent simulators could have
Here’s a smattering of different value systems simulators could subscribe to. Almost any of these value systems, in at least a partial form, can overlap with almost any of the others, and this isn’t a formal classification, but it’s a starting point for discussion. For most of these value functions, I can imagine some possible way that our experience thus far could be compatible with a simulator holding that value function, but I’ll leave thinking it through as an exercise for the reader.
Selfishness- hedonistic type: A simulator of the hedonistic type is dedicated to the satisfaction of their aesthetic, culinary, sensual, and/or sexual appetites. They may, for example, run numerous simulations to try and create the most exquisite and fascinating people to have sex with.
Selfishness- megalomaniac type: A selfish simulator of the megalomaniacal type wants to be worshipped, and to exercise their power according to their own strange whims for self-glorification.
Selfishness- aesthetic type: A selfish simulator of the aesthetic type views the whole universe as an artwork of some sort.
Selfishness- scientific type: A selfish simulator of the scientific type is running the universe to answer some scientific question- regardless of whether it hurts the simulated.
Sadism: The worst possible scenario would be if we were in a simulation created by a sadist. This could come in several different forms- for example, the sadist might be a generalized sadist, or they might be seeking revenge on a specific person or group, recreating them in order to torture them.
Liberalism: A simulator of the liberal type wants to give us, above all, freedom of some kind. Exactly what that freedom amounts to will depend on the simulator.
‘Crude’ utilitarianism: A crude utilitarian simulator wants to maximize pleasure, or desire satisfaction or something like that, and so is running simulations to do so. We can be reasonably confident that we are not in such a simulation due to the existence of suffering.
‘Diversity utilitarianism’: As described above. A diversity utilitarian wants to maximize utility minus disutility. However, they weigh repetitive good experiences or good lives as worth less than non-repetitive good experiences or lives.
Humanism: A humanistic simulator sees its primary goal as the flourishing of people. It’s a eudaemonist. Freedom and happiness, at least to some degree, are likely both parts of this goal, but neither is the full object. A humanistic simulator might need diversity for similar reasons the diversity utilitarian does- e.g. a flourishing life counts for less if it is a copy of an already existing one.
Fyodorovian: As above, a project to resurrect the dead.
Tribalism: A tribalist simulator is like a selfish one, but they dedicate themselves to a group, rather than just themselves. We are sadly not in the group.
Social Darwinist/Nietzschean: A social Darwinist simulator wants to create strong creatures, for some value of “strong”, even if it requires great suffering.
Primitivism: The primitivist simulator is leery of technology and wishes to constrain it. This might sound like a bizarre or unlikely view for a simulator to take, but although I do not agree with it, I think it makes a certain sense. In the novel Consider Phlebas, by the late Iain M. Banks, the Idirans fight a war against the Culture because they view the Culture as devoid of human agency- AI does everything. “Human” (organic sapient) striving and struggle is necessary for a meaningful existence, argue the Idirans. We can imagine a primitivist simulator who has put us in our world- just before the invention of artificial intelligence that can take over human functioning- for exactly this reason.
Moralism: A moralistic simulator wants to create good worlds, but their idea of goodness is laden with ideas that some might consider outmoded. Drugs are bad, promiscuity is bad, everyone must worship God, that sort of thing. It’s hard for me to see how our world is compatible with that, unless it’s combined with other factors (e.g., people must choose morality “of their own free will”).
Radical aporia
I’d like to give a personal coda to all this simulation stuff, building on the brief discussion of skepticism, and branching out from there.
How are we meant to think about cosmology, and, on a more personal level, the meaning and value of our lives, in light of the simulation argument? We face both radical uncertainty about whether we are in a simulation and radical uncertainty about the implications if we are. For example, what is the risk of being turned off? What does the future hold for us, given that we don’t know the purpose of the simulation? Does life end at death, or do our simulators continue us on? If we are in a simulation, how can we be sure the past happened anything like the way we remember it, given that our simulators could just tweak our memories? But if we go down this road, how can we know anything about our situation, including the things that led us to posit that we’re probably in a simulation in the first place? How can we even trust our own a priori reasoning, given that it would be trivial to interfere with it?
This all reminds me of Neurath’s boat. As Neurath put it:
“We are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom. Where a beam is taken away a new one must at once be put there, and for this the rest of the ship is used as support. In this way, by using the old beams and driftwood, the ship can be shaped entirely anew, but only by gradual reconstruction.”
In truth, we’ve never known our own situation in the grander scheme of things. A lot of us thought that we had it figured out with a kind of vague, cosmological materialism, but we never had the full picture filled in on that story. There were always questions about the standard 19th-century materialist framework- the mystery of qualia (as Chalmers of all people has pushed), the Fermi paradox, etc.
Even the idea we are in a simulation only represents a guess given our current level of technology. Who knows what stuff we’ll be pondering with the technology, social structure and speculative philosophy of the future? The simulation argument seems kind of persuasive with the tech of today, but perhaps the technology of tomorrow will suggest wholly different cosmic possibilities. To put it tautologically, we are conditioned by our conditions- things that seem like very good arguments to us now might seem like poor arguments in the future. Things that seem like poor arguments now, or that haven’t even occurred to us, might seem compelling in the future.
In other words, I’m urging you to apply the skeptical meta-induction to speculative metaphysics. If it has power in the realm of science, how much more so in philosophy. Given how unstable our ideas have proven, not just about our cosmic situation, but even about what the possible alternatives are, we know nothing. We can’t trust the simulation argument, can’t trust the opposite, can’t really trust anything.
So we don’t know where we are in the logical space of possible worlds, not even approximately, and as best I can tell we have no way of figuring it out. The only way to cope is to accept that you don’t know, and very possibly will never know, even the basics of your situation. Having accepted this, resolve to live by your values in a way that carries meaning even in an absurd and unknowable space of possibilities.
In an increasingly bizarre world, the thought that no one has ever proven it’s not going to turn out alright can be a source of comfort. We’re swimming over an abyss on a black night, and it’s natural to worry a Leviathan might be rushing up to devour us. That’s possible, but hands might be rising up to cradle us as well. Who the fuck knows?