I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner. Such destruction is worse the more valuable the thing destroyed, the longer it took to create, and the harder it is to replace.

[...]

Deleting the last copy of an em in existence should be prosecuted as murder, not because doing so snuffs out some inner light of consciousness (who is anyone else to know?), but rather because it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences, precisely as murdering a human would.
I don’t think this can be right. Suppose the universe consists of just two computers that are totally isolated from each other, except that one computer can send a signal to the other to blow it up. A different virtual creature lives in each computer, and one of the creatures can press a virtual button that blows up the other computer and also earns him a small reward. Then according to this theory, he should think, “I should press this button, because deleting the other creature doesn’t deprive me of any unique, irreplaceable store of knowledge and experiences, because I never had any access to it in the first place.”
ETA: Or consider the above setup except that creature A does have read-only access to creature B’s knowledge and experiences, and creature B is suffering terribly with no end in sight. According to this theory, creature A should think, “I shouldn’t press this button, because what’s important about B is that he provides me with a unique, irreplaceable store of knowledge and experiences. Whether or not creature B is suffering doesn’t matter.”
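To make the structure of the two counterexamples explicit, here is a minimal sketch (my own formalization, not anything from Scott’s paper) of the knowledge-deprivation criterion as a decision rule, applied to both setups; the function name and all numeric values are made-up placeholders.

```python
# A toy formalization (my own, not from the paper) of the "deprives the rest
# of society of a unique, irreplaceable store of knowledge and experiences"
# criterion, applied to the two thought experiments above.

def press_button(reward, knowledge_lost_to_society):
    """Under the knowledge-deprivation criterion, press the button exactly
    when the reward outweighs the knowledge the rest of society loses.
    The deleted creature's experiences (positive, negative, or whether it
    is conscious at all) never appear as an input."""
    return reward > knowledge_lost_to_society

# Setup 1: the computers are totally isolated, so deleting B costs the rest
# of society (i.e., A) nothing; any positive reward says "press".
print(press_button(reward=1, knowledge_lost_to_society=0))   # True

# Setup 2 (ETA): A has read-only access to B, so B is a nonzero store of
# knowledge for A; the criterion says "don't press" however much B suffers.
print(press_button(reward=1, knowledge_lost_to_society=10))  # False
```

Either way, B’s suffering cannot change the verdict, which is the point of both counterexamples.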
I didn’t claim that the quoted passage universalizes into a version of negative utilitarianism in all imaginable cases, just that it makes sense intuitively in a variety of real-life situations as well as in many cases not usually considered, like the ones you mentioned, the case of reversible destruction Scott talks about, human cloning, or…
And we can see that in your constructed setup, the rationale for preserving the variety (“it deprives the rest of society of a unique, irreplaceable store of knowledge and experiences”) no longer holds.
I don’t think it makes sense intuitively in the cases I mentioned, because intuitively I think we probably should consider the conscious experiences the creatures are having (whether those experiences are positive or negative, or whether the creatures are conscious at all), and Scott’s theory seems to be saying that we shouldn’t consider that. So I think the correct answer to my question 1 is probably something like “yes, but only if the creatures will have more negative than positive experiences over the rest of their lives (and their value to society as a store of knowledge/experience does not make up for that)” rather than the “no” given by Scott’s theory. And the answer to 3 might be “no, if overall the creatures will have more positive experiences, because by shutting down you’d be depriving them of those experiences”. Of course I’m really unsure about all of this, but I don’t see how we can confidently conclude that the answer to 3 is “yes”.
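For contrast, here is the same kind of sketch (again my own gloss on the rule proposed above, with hypothetical placeholder quantities) of the experience-sensitive answer to question 1: delete only if the creatures’ remaining lives hold more negative than positive experience and their value to society does not make up the difference.

```python
# A sketch (my own gloss on the rule proposed above) of an experience-
# sensitive deletion criterion; all quantities are hypothetical placeholders.

def should_delete(expected_positive, expected_negative, value_to_society):
    """Delete only if expected future negative experience outweighs expected
    future positive experience AND the creatures' knowledge/experience value
    to society does not make up the difference."""
    net_experience = expected_positive - expected_negative
    return net_experience + value_to_society < 0

# Unlike the knowledge-deprivation criterion, which reduces to checking
# value_to_society alone, this rule changes its verdict with the creatures'
# experiences:
print(should_delete(expected_positive=0, expected_negative=100, value_to_society=5))   # True
print(should_delete(expected_positive=100, expected_negative=0, value_to_society=5))   # False
```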
It looks like you’re talking about this from pages 28-29 of https://www.scottaaronson.com/papers/giqtm3.pdf.
Hmm, if you ask Scott directly, odds are he will reply to you :)