Maybe it’s just the particular links I have been following (acausal trade and blackmail, AI boxes you, the Magnum Innominandum), but I keep coming across the idea that the self should care about the well-being (it always seems to come back to torture) of one, or of a googolplex, of simulated selves. I can’t find a single argument or proof of why this should be so. I accept that perfectly simulated sentient beings can be seen as morally equal in value to meat sentient beings (or, if we accept Bostrom’s reasoning, that beings in a simulation other than our own can be seen as morally equal to us). But why value the simulated self over the simulated other? I accept that I can care in a blackmail situation where I might unknowingly be one of the simulations (à la Dr. Evil, or the AI that boxes me), but that’s not the same as inherently caring about (or having nightmares about) what may happen to a simulated version of me in the past, present, or future.
Any thoughts on why thou shalt love thy simulation as thyself?
There is Bostrom’s argument—but there’s also another take on these types of scenario, which you may be confusing with the Bostrom argument. In those takes, you’re not sure whether you’re the simulation or the original—and since there are billions of simulations and only one original, the odds are overwhelming that you are one of the ones who would be tortured.
Just make sure you’re not pattern matching to the first type of argument when it’s actually the second.
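For concreteness, here is the arithmetic behind that second take as I read it (assuming one original and $N$ indistinguishable copies, none of whom can tell which they are):

```latex
P(\text{you are the original}) = \frac{1}{N+1}, \qquad
P(\text{you are a simulation}) = \frac{N}{N+1} \approx 1 - 10^{-9} \quad \text{for } N = 10^{9}.
```

On that reading, the threat bites because you are almost certainly one of the copies.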
I appreciate the reply. I recognize both of those arguments, but I am asking something different. If Omega tells me to give him a dollar or he will torture a simulation (a being separate from me, with no threat that I might be that simulation; I’m also thinking of the Basilisk here), why should I care whether that simulation is of me rather than of any other sentient being?
I see them as equally valuable. Both are not-me. Identical-to-me is still not-me. If I am a simulation and I meet another simulation of me in Thunderdome (Omega is an evil bastard), I’m going to kill that other guy just the same as if he were someone else. I don’t get why sim-self is of greater value than sim-other. Everything I’ve read here (admittedly not too much) seems to treat this as self-evident, but I can’t find a basis for it. Is the “it could be you who is tortured” just implied in all of these examples and I’m not up on the convention? I don’t see it specified, and in “The AI boxes you” the “it could be you” is a tacked-on threat in addition to the “I will torture simulations of you”, implying that the simulation-torture threat alone is supposed to be enough to give pause.
If you love your simulation as you love yourself, they will love you as they love themselves (and if you don’t, they won’t). You can choose to have enemies or allies with your own actions.
You and a thousand simulations of you play a game where pressing a button gives the presser $500 but takes $1 from each of the other players. Do you press the button?
I don’t play; craps is the only sucker bet I enjoy engaging in. But if coerced to play, I press with non-sims and don’t press with sims. Not out of love, but out of an intimate knowledge of my opponents’ expected actions, and out of my status as a reliable predictor in this unique circumstance.
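For concreteness, a minimal sketch of the payoff arithmetic behind that answer (the split into “copies decide alike” versus “others decide independently” is my reading of the reply, and the helper names are made up):

```python
# Button game from the comment above: 1001 players (you plus a thousand
# simulations); pressing earns the presser $500 and costs every other player $1.
N_PLAYERS = 1001
GAIN = 500
LOSS_PER_OTHER_PRESS = 1

def my_payoff(i_press: bool, other_presses: int) -> int:
    """Your total payoff given your choice and how many of the others press."""
    return (GAIN if i_press else 0) - LOSS_PER_OTHER_PRESS * other_presses

# Case 1: the other 1000 players are exact copies of you, so they choose
# whatever you choose (the "reliable predictor" reasoning in the reply).
print("copies, all press:  ", my_payoff(True, N_PLAYERS - 1))   # 500 - 1000 = -500
print("copies, none press: ", my_payoff(False, 0))              # 0

# Case 2: the others decide independently of you, so your choice does not
# change how many of them press -- pressing is worth +$500 whatever they do.
for others in (0, 500, 1000):
    print(f"{others} independent pressers: "
          f"press {my_payoff(True, others)}, refrain {my_payoff(False, others)}")
```

With copies, not pressing beats pressing by $500 per player; against independent players, pressing beats refraining by the same $500, which is the asymmetry the reply is pointing at.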
My take on ethics is that it breaks into two parts: Individual ethics and population ethics.
Population ethics, in the general sense of action toward the greater good of the population under consideration (however large). Action here means action by the population, i.e. choosing among the actions available to a population—which must take into account that not all beings in the population are equal, or willing to contribute equally.
Individual ethics, on the other hand, are ethics that individual beings can potentially be convinced of (by others or by themselves).
These two interplay. More altruistically minded individuals might (try to) adopt a sensible population ethics as their maxim, some might simply adopt the ethics of their peers, and others might adopt egocentric or tribal ethics.
I do not see any of these as wrong, or some as better than others (OK, I admit I do; personally, but not abstractly). People are different and I accept that. Populations have to deal with that. Also note that people err. You might, for example, (try to) follow a specific population ethics because you don’t see a difference between population and individual ethics.
This can feel quite natural, because many people have a tendency to contribute toward the greater good of their population. This is an important aspect, because it is what gives population ethics a chance to work at all: it couples population ethics to individual ethics (my math mind kicks in and wonders about a connection coefficient between the two: whether and how it could be measured, how it depends on the level of altruism present in a population, and how to measure that...).
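Purely as an illustration of that parenthetical musing (every definition below is my own assumption, not something from the thread): one crude way to put a number on the connection is to give each individual an altruism weight, let them choose between a selfish and a group-benefiting action, and report how often their choices agree with what the population-level view would prefer.

```python
# A made-up, minimal formalization of the "connection coefficient" mused about
# above -- none of these definitions come from the thread. Each person gets an
# altruism weight a in [0, 1] and picks the action maximizing
# (1 - a) * own_payoff + a * average_payoff_to_population; population ethics
# prefers whichever action has the higher average payoff. The coefficient is
# simply the fraction of individuals whose choice agrees with that preference.
import random

# Two available actions as (own_payoff, average_payoff_to_population):
SELFISH = (3.0, -1.0)      # good for me, mildly bad for everyone on average
COOPERATIVE = (1.0, 2.0)   # worse for me, better for the population

def individual_choice(altruism: float):
    def score(action):
        own, avg = action
        return (1 - altruism) * own + altruism * avg
    return max((SELFISH, COOPERATIVE), key=score)

def connection_coefficient(altruism_levels):
    population_pick = max((SELFISH, COOPERATIVE), key=lambda a: a[1])
    agree = sum(1 for a in altruism_levels if individual_choice(a) == population_pick)
    return agree / len(altruism_levels)

random.seed(0)
for mean_altruism in (0.2, 0.5, 0.8):
    # crude toy population: altruism scattered around the mean, clipped to [0, 1]
    levels = [min(1.0, max(0.0, random.gauss(mean_altruism, 0.15)))
              for _ in range(1000)]
    print(f"mean altruism {mean_altruism:.1f} -> "
          f"connection coefficient {connection_coefficient(levels):.2f}")
```

As average altruism in the toy population rises, the coefficient climbs toward 1, which is one way of cashing out the claim that individual altruism is what gives population ethics a chance to work.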
What about my ethics? I admit that some people are more important to me than others. I invest more energy in the well-being of my children, myself, and my family, and in an emergency I’d take greater risks to rescue them (and me) than unrelated strangers. I believe there is such a thing as emotional distance to other people, and that it is partly hard-wired and partly socialized. I’m convinced that whom and what one feels empathy with depends to a large degree on socialization. For example, if you were often part of large crowds of totally different people, you might learn to feel empathy toward strangers—and resent the idea of treating, e.g., foreigners differently. So populations could presumably shape this. And you could presumably hack yourself (or your neighbor) too.
But assume a given individual with a given willingness to do something for the greater good of their population, and the capability to do so (including the ability to reason about it). What should they do? I think a ‘should’ always involves a tension between what one wants and what the population demands (or, more generally, the forces of the environment). Therefore I split this in two again. First: what is the most sensible thing to do given what the individual wants? That might be a fusion of population ethics (to satisfy the desire for the greater good) and individual ethics (the parts that differentiate between people). Second: what is the most sensible thing to do given what the population demands? That depends on many factors and probably involves trade-offs; it seems to me that it shifts behavior toward conforming more closely with some population ethics.
And as to your question: in my framework there can’t be a proof that you individually ‘should’ care for the other selves. You may care about them to some degree, and society might tell you that you should, but that doesn’t mean you have to rewire yourself to do so (though you may decide that you want to, and risk erring in that).
Totally tangential point: do you sometimes have the feeling that you could continue the thought of a sentence along different paths and want to convey both? The best way to convey each idea is to pick the thought up right at that sentence (or, at the least, conveying both thoughts requires some redundancy to re-establish the context later on). Is there a literary device to deal with this? Or should I just embrace more repetition?
Thanks for the reply. I’m not sure that your reasoning (sound as it is) is what lies behind the tendency I think I’ve identified for LWers to overvalue simulated selves in the examples I’ve cited, though. I suppose that by population ethics you should value the more altruistic simulation, whoever that may be. But then, in a simulated universe devoted to nothing but endless torture, I’m not sure how much individual altruism counts.
Re “Totally tangential point”: I believe footnotes do the job best. The fiction of David Foster Wallace is a masterwork of portraying OCD through this technique. I am an idiot at formatting on all media, though, and can offer no specifics as to how to do it.
I think if people don’t make the distinction I proposed it is easy to choose an ethics that overvalues other selves compared to a mixed model.
Thanks for the idea to use footnotes, though yes, it is difficult with some media.
What you’re calling population ethics is very similar to what most people call politics; indeed, I see politics as the logical extension of ethics when generalized to groups of people. I’m curious about whether there is some item in your description that would invalidate this comparison.
Ethics is a part of philosophy; political philosophy, also being a part of philosophy, would be a better analogy than politics itself, I think.
I did look up Wikipedia on population ethics and considered it to be matching if you generalize by substituting “number of people” with “well-being of people”. But I admit that politics (“the practice and theory of influencing other people”, “the making of a common decision for a group of people”, “a uniform decision applying in the same way to all members of the group”) does match choosing among the available actions of a group for the benefit of that group. The main difference from what I meant (ahem) is that politics describes the real thing, with its unequal power, whereas population ethics prescribes independently of the decision makers’ power.