Why must destroying a conscious model be considered cruel if it wouldn’t have been created otherwise, and it dies painlessly? I understand the visceral revulsion to the idea, but that sort of utilitarian ethos is the only one that makes sense to me rationally.
Furthermore, given our current knowledge of the universe, I don’t think we can possibly know whether a computational model is even capable of producing consciousness, so it is really only a guess. The whole idea seems near-metaphysical, much like the multiverse hypothesis. Granted, the non-zero probability of these models being conscious is still significant given the massive future utility at stake, but considering the enormity of our ignorance you might as well start talking about the non-zero probability of rocks being conscious.
I don’t think anyone has answered Doug’s question yet: “Would a human, trying to solve the same problem, also run the risk of simulating a person?”
I have heard of carbon chauvinism, but perhaps there is a bit of binary chauvinism going on?
“Should your parents have the right to kill you now, if they do so painlessly?”
Yes, according to that logic. Also, from a negative utilitarian standpoint, it was actually the act of creating me that they had no right to perform, since that made them responsible for all the pain I have ever suffered.
I’m not saying I live my life by utilitarian ethics; I’m just saying I haven’t found any way to refute them.
That said, non-existence doesn’t frighten me. I’m not so sure non-existence is even an option, though, if the universe is eternal or infinite. That might be a very good thing or a very bad thing.