No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in “return”. A host’s duty to his guests doesn’t go away just because that host had a poor experience when he himself was a guest at some other person’s house.
If our simulators don’t care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.
If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will redound to our benefit.
If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.
Of course, there’s always the possibility that our simulations are much more similar to us than we think.
But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way “in return” by those simulating you—using a rather strange meaning of “in return”?
Some people interpret the Newcomb’s boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities whenever there is some relationship between their behavior and yours, even if there is no obvious causal relationship, and even if the other entities already made their decisions back in the past.
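To put rough numbers on that interpretation, here is a toy expected-value comparison for Newcomb's problem under the evidential reading, where your choice is treated as evidence about what the predictor already did. The payoffs are the ones usually quoted for the puzzle, and the predictor accuracy is just an assumed figure for illustration.

```python
# Illustrative expected-value comparison for Newcomb's problem under an
# "evidential" reading: your choice is treated as evidence about what the
# (already-made) prediction was. Predictor accuracy is an assumed figure.

BIG = 1_000_000    # opaque box, filled only if one-boxing was predicted
SMALL = 1_000      # transparent box, always present
ACCURACY = 0.9     # assumed probability that the predictor got you right

def expected_value(one_box: bool) -> float:
    if one_box:
        # With probability ACCURACY the predictor foresaw one-boxing
        # and filled the opaque box.
        return ACCURACY * BIG
    else:
        # With probability (1 - ACCURACY) the predictor wrongly expected
        # one-boxing, so the opaque box is full anyway; the small box is
        # a sure thing either way.
        return (1 - ACCURACY) * BIG + SMALL

print("one-box :", expected_value(True))    # 900000.0
print("two-box :", expected_value(False))   # 101000.0
```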
The Newcomb’s boxes paradox is essentially about reference classes: it could be argued that every time you make a decision, that decision tells you a lot about the reference class of entities identical to you, and it also tells you something (though perhaps not much, in some situations) about entities merely similar to you, because you are part of their reference class as well.
Now, if we apply such reasoning: if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical. (Of course, that is how it appears from your perspective; in reality, it is more that your behavior was dictated by your membership in the reference class, but you don't experience decision-making from that perspective.) The same goes for deciding to be unethical.
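If you want to see that "my decision is evidence about the reference class" move in miniature, here is a toy Bayesian sketch with made-up numbers: your own choice is treated as a single observation about how often members of your reference class act ethically.

```python
# Toy sketch (assumed prior, invented numbers): treating your own decision
# as one observation about the reference class of agents similar to you.
# A Beta(a, b) prior over "probability that a reference-class member acts
# ethically" is updated by a single Bernoulli observation: your own choice.

def update(a: float, b: float, acted_ethically: bool) -> tuple[float, float]:
    return (a + 1, b) if acted_ethically else (a, b + 1)

a, b = 2.0, 2.0                      # assumed prior: mean 0.5
print("prior mean :", a / (a + b))   # 0.5

a, b = update(a, b, acted_ethically=True)
print("posterior mean after your ethical act :", a / (a + b))  # 0.6
```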
You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos, such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality, perhaps even as increasing the proportion of ethical acts within an infinity of acts. It might also make temporal discounting a bit less disturbing (to anyone bothered by it): if you act ethically with regard to the parts of reality you can observe, predict, and control, your "effect" on the reference class means you can consider yourself to be making it more likely that other entities, beyond the range of your direct observation, prediction, or control, are also behaving ethically within their local environments.
I want to be clear here that I am under no illusion that there is some kind of "magical causal link". We might say this is simply about how our decisions are really determined anyway: deciding as if "the decision" influences the distant past, another galaxy, another world in some expansive cosmology, or a higher level in a computer-simulated reality is no different, qualitatively, from deciding as if "your decision" affects anything else in everyday life, when in fact your decision is determined by things outside you.
This may be uncomfortably close to certain Buddhist ideas, though a Buddhist, if one comes along, might have more to say on that; I promise any such similarity wasn't deliberate.
One weird idea relating to this: the more beings, civilizations, and so on that you know about, the more their behavior will dominate your reference class. If you live in a Star Trek reality, with aliens all over the place, what you know about the ethics of those aliens will carry most of the weight, and your own behavior will be only a small part of it: you will reduce the amount of "non-causal influence" that you attribute to your decisions. On the other hand, if you don't know of any aliens, your own behavior might be telling you much more about the behavior of other civilizations.
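The same toy model as above makes the dilution point concrete: when your own behavior is the only data point, it moves the estimate a lot; once you have already observed hundreds of civilizations, it barely moves it at all. All the counts here are invented for illustration.

```python
# Same toy Beta-Bernoulli model, used to show the "dilution" point: the more
# reference-class members you have already observed, the less your own
# behavior shifts the estimate. Counts are invented for illustration.

def posterior_mean(ethical: int, unethical: int,
                   a: float = 1.0, b: float = 1.0) -> float:
    # Beta(a, b) prior plus observed ethical/unethical acts.
    return (a + ethical) / (a + b + ethical + unethical)

# Lone observer: your own ethical act is the only data point.
before = posterior_mean(0, 0)          # 0.5
after  = posterior_mean(1, 0)          # ~0.667
print("alone         :", before, "->", after)

# Star Trek case: 500 observed civilizations, 300 behaving ethically.
before = posterior_mean(300, 200)      # ~0.5996
after  = posterior_mean(301, 200)      # ~0.6004
print("many observed :", before, "->", after)
```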
P.S. Remember that anyone who votes this comment down is influencing the reference class of users on Less Wrong who will be reading your comments. Likewise for anyone who votes it up. :) Hurting me only hurts yourselves! (All right—only a bit, I admit.)