I think another important point is how simulations are treated ethically. This is currently moot, since we only have the one level of reality we are aware of, but once AGIs exist it will become an entirely new field of ethics.
Do simulated people have the same ethical value as real ones?
When an AGI merely thinks about a less sophisticated sophont in detail, can its internal representation of that entity become complex enough to fall under ethical criteria in its own right? (This would mean it would be unethical for an AGI even to imagine humans being harmed, if the thoughts are too detailed.)
What are the ethical implications of copies in simulations? Do a million identical simulations carry the same ethical importance as a single one? A million times as much? Something in between? What if the simulations are not identical, but very similar? What differences would be important here?
And perhaps most importantly: when people disagree on how these questions should be answered, how do you react? You can’t really find a middle ground here, since the decision about which views to follow itself determines which entities’ ethical views get considered in future deliberations, creating something like a feedback loop.
Yeah, that’s an important topic we’re going to have to think about. I think it’s our natural inclination to give the same rights to simulated brains as to us meatbags, but there are some really odd, perverse outcomes of that to consider too. Basically, virtual people could become tools for real people to exploit our legal and ethical systems—creating virtual populations for voting and so on. I’ve written a little on that halfway down this article: http://citizensearth.wordpress.com/2014/08/23/is-placing-consciousness-at-the-heart-of-futurist-ethics-a-terrible-mistake-are-there-alternatives/
I think we’ll need to have some sort of split system—some new system of virtual rights in the virtual world for virtual people, and meatbag-world rights for us meatbags, basically just to account for the profound physical differences between the two worlds. That way we can preserve the species and still have an interesting virtual world. Waaay easier said than done, though. This is probably going to be one of the trickiest problems since someone first said, “so, this democracy thing, how’s it going to work exactly?”