Why must destroying a conscious model be considered cruel if it wouldn’t have been created otherwise and it dies painlessly? I mean, I understand the visceral revulsion to this idea, but that sort of utilitarian ethos is the only one that makes rational sense to me.
Furthermore, given our current knowledge of the universe, I don’t think we can possibly know whether a computational model is even capable of producing consciousness, so it is really only a guess. The whole idea seems near-metaphysical, much like the multiverse hypothesis. Granted, the nonzero probability of these models being conscious is still significant considering the massive future utility at stake, but given the enormity of our ignorance, you might as well start talking about the nonzero probability of rocks being conscious.
I don’t think anyone has answered Doug’s question yet: “Would a human, trying to solve the same problem, also run the risk of simulating a person?”
I have heard of carbon chauvinism, but perhaps there is a bit of binary chauvinism going on?
I’m not sure it’s actually possible for someone to die painlessly. My idea was to base happiness on classical conditioning: if something causes you to stop doing what you were doing, you dislike it. If it stops you from doing everything you do while you’re alive, it must be very painful indeed.
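For what it’s worth, that rule can be phrased as a tiny scoring function. Here is a minimal Python sketch, purely illustrative and with made-up names (nothing here comes from the thread): an event’s unpleasantness is the fraction of your ongoing activities it interrupts, so death, which interrupts all of them, scores as maximally aversive.

```python
# Toy model of the conditioning rule above. All names are hypothetical:
# an event's aversiveness is the fraction of ongoing activities it stops.

def aversiveness(activities: set[str], interrupted: set[str]) -> float:
    """Return 0.0 (stops nothing) to 1.0 (stops everything)."""
    if not activities:
        return 0.0
    return len(interrupted & activities) / len(activities)

alive = {"breathing", "thinking", "walking", "digesting"}

print(aversiveness(alive, {"walking"}))  # 0.25 -- a minor interruption
print(aversiveness(alive, alive))        # 1.0  -- death stops everything
```

On this toy scoring, even an instantaneous death is maximally “painful” by construction, which is just the comment’s point restated: painlessness would require an event that interrupts nothing, and death interrupts everything.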