Only sentient beings have a first-person point of view; only for them can states of the world be good or bad.
Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?
I don’t see the relevance of this question, but judging by the upvotes it received, it seems that I’m missing something.
I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view, anyway; it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot either is conscious or isn’t. If it is conscious, I will try to reduce its suffering. If the only way to do that involves doing “weird” things, I will do weird things.
The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you’re willing to bite the bullet here, then at least you’re being consistent (although I’m no longer sure that consistency is such a great property of a moral system for humans).