You know, I was musing on the “Universe as Matrix” idea a while back, and I came to some interesting conclusions.
I realized first that, given sufficiently attentive creator(s), any attempt to prove that the Universe was a simulation must inevitably fail: if such a proof were found, the proverbial Dark Lords could simply pause the sim, patch out the error that revealed the discrepancy, and roll back to before the discovery. Similarly, proof that we weren’t in a Matrix should be equally impossible, since any evidence establishing that impossibility could simply be falsified by the system to maintain the illusion.
At this point my train of thought switched to a different track: if we did know that we were living in a simulated universe, what should we do about it? After some pondering, I concluded that we would have spent our entire existence living in the sim anyway, so there wouldn’t be much need for massive upheavals of human life. And if the Dark Lords were indeed trying to enforce a “realistic” simulation, then attempting to communicate with them would be fruitless, since they would not respond. But...
For whatever reason, the creators would have created this universe. It seems to me that if you were going to create a universal simulation, you would do it because you wanted to see what would happen inside. And we humans have a rather strong attachment to existing, so we should try to continue that state of affairs as far as possible. Therefore, in this scenario, every human being would have a solemn duty to make the world as interesting as possible.
It was at that moment that I realized that I had created a religion.
Robin Hanson has written on this topic.
Huh. I’m certain that I hadn’t read this before.
Obviously he gave it a little more thought than my own shower-musings received.
You’re assuming that the Dark Lords are aware of our existence and care.
Given the fraction of the universe that we occupy, I’m not betting on it.
What, can’t they search for local reversals of entropy?
Interesting as in “interesting times”?
I think the Dark Lords already designed us to be interest maximizers. They probably lead dull lives, the poor things...
Therefore, in this scenario, every human being would have a solemn duty to make the world as interesting as possible.
Great post, but this is where you lost me. I have a hard time prioritizing “interesting” over reducing suffering, and I find it repugnant that some beings created a universe where quintillions of sentient creatures have been suffering and dying for half a billion years on this planet alone. OK, maybe the creators had the decency to “shortcut” all the suffering so it wasn’t actually experienced; that’s the upside of the thought.
Hmm, that makes for a good religion too: you only remember the suffering. During the actual moments you were zombified, so you’re just misremembering!
My trouble was in figuring out what “interesting” means to the beings which can model a universe.
I find it repugnant that some beings created a universe where quintillions of sentient creatures have been suffering and dying for half a billion years on this planet alone.
Meanwhile, they find something utterly alien about solar fusion repugnant yet utterly fascinating.
Yes, in which case their evaluation doesn’t correspond to any first-person evaluations other than their own (because solar fusion likely doesn’t have any), whereas my evaluation reflects all the first-person perspectives out there. I’m being altruistic, they aren’t. Sure, they might not care about that, and indeed, if the creators themselves aren’t capable of suffering, they might not even realize they’re being a**holes, but otherwise they’d obviously be total jerks in a very objective sense—for whatever that’s worth.
What if they have first-person perspectives which are objectively comparable to us in the same way that we are comparable to solar fusion?
What are the necessary and sufficient conditions to be “total jerks” in any objective sense?
Then, if I understand the question correctly, the creators would be partially altruistic, which we’d mistake for non-altruism because we don’t understand that solar fusion can suffer.
“Not taking other-regarding reasons for actions seriously” makes you a total jerk. “Others” are beings with a first-person perspective, the only type of entities for which things can go well or not well in a sense that is more than just metaphorical. You could say that it is “bad for a rock” if the rock is split into parts, but there isn’t anything there to mind the splitting so at best you’re saying that you find it bad if rocks are split.
The above view fits into LW-metaethics the following way: No matter their “terminal values”, everyone can try to answer which action-guiding set of principles best reflects what is good or bad for others. So once you specify what the goalpost of ethics in this sense is, everyone can play the game. Some agents will however state that they don’t care about ethics if defined like that, which implies that their “terminal value” doesn’t include altruism (or at least that they think it doesn’t, which may sometimes happen if people are too quick to declare things their “terminal value”—it’s kind of a self-fulfilling prophecy if you think about it).
Would it be immoral to fully simulate a single human with brain cancer if there was an expected return of saving more than one actual human with brain cancer? What if there was an expectation of saving less than one actual human? (Say, a one-in-X chance of saving fewer than X patients, so the expected number saved comes out below one.) What if there was no chance of saving an actual patient at all as a result of the simulation? Assume that simulating the human and the cancer well enough requires, among other things, that the simulated human say that he is self-aware.
I’ve never quite understood, in cases like this, how “fully simulate a single human with brain cancer” and “create a single human with brain cancer” are supposed to differ from one another. Because boy do my intuitions about the situation change when I change the verb.
I find it repugnant that some beings created a universe where quintillions of sentient creatures have been suffering and dying for half a billion years on this planet alone.
Isn’t that an inevitable conclusion of the basic “the universe is a simulation” premise?
I’m a different sort of Dark Lord, I suppose—if my sims found out they were living in a simulation, I’d be fascinated to see what they’d do next.
Wrong post.
It seems to me that if you were going to create a universal simulation, you would do it because you wanted to see what would happen inside. And we humans have a rather strong attachment to existing, so we should try to continue that state of affairs as far as possible. Therefore, in this scenario, every human being would have a solemn duty to make the world as interesting as possible.
That seems to share some ideas with Neal Stephenson’s fictional religion of Kelx, as described in Anathem.
Ah. This one, I’ve read.
Thank you, by the way. I had actually remembered that as I was typing this up (in a sort of “speaking of religions with unusual premises...” way), but had forgotten what it was called and who came up with it. I had speculated that it might have been from a Heinlein novel, since the half-remembered premise of “lone protagonist is saved from arctic peril and then gets to listen to someone politely explain their philosophy” sounded vaguely Heinleinish.
Therefore, in this scenario, every human being would have a solemn duty to make the world as interesting as possible.
If you look at the number of different sorts of stories created by humans, who are probably less complex than whoever made our universe, I think it’s fair to say we have no idea what counts as interesting to them.