My reservations aren’t only moral; they are also psychological: that is, I think it likely (whether or not I am “right” to have the moral reservations I do, and whether or not that’s even a meaningful question) that if there were a lot of Wakers, some of them would come to think that they were responsible for billions of deaths, or at least would worry that they might be. And I think that would be a horrific outcome.
When I read a good book, I am not interacting with its characters as I interact with other people in the world. I know how to program a computer to describe a person who doesn’t actually exist in a way indistinguishable from a description of a real ordinary human being. (I.e., take a naturalistic description such as a novelist might write, and just type it into the computer and tell it to write it out again on demand.) The smartest AI researchers on earth are a long way from knowing how to program a computer to behave (in actual interactions) just like an ordinary human being. This is an important difference.
It is at least arguable that emulating someone with enough fidelity to stand up to the kind of inspection our hypothetical “Waker” would be able to give to (let’s say) at least dozens of people requires a degree of simulation that would necessarily make those emulated someones persons. Again, it doesn’t really matter that much whether I’m right, or even whether it’s actually a meaningful question; if a Waker comes to think that those emulated people are persons, then they’re going to see themselves as a mass-murderer.
[EDITED to add: And if our hypothetical Waker doesn’t come to think that, then they’re likely to feel that their entire life involves no real human interaction, which is also very very bad.]