No, they are not. Animals can feel e.g. happiness as well.
Yeah, but the problem here is that we perceive happiness in animals only insofar as it looks like our own happiness. Did you notice that the closer an animal is to a human, the more likely we are to agree it can feel emotions? An ape can definitely display something like human happiness, so we're pretty sure it can experience it. A dog can display something mostly like human happiness, so most likely it can feel it too. A lizard? Meh, maybe, but probably not. An insect? Most people would say no. Maybe I'm wrong and there is an argument that animals can experience happiness which is not based on their similarity to us; in that case I'm very curious to see that argument.
For the record, I believe we do have at least a crude mechanistic model of how consciousness works in general, and yes, even of what's going on with the hard problem of consciousness in particular (the latter being a bit of a wrong question).
Otherwise, I actually think it somewhat answers my question. One qualm of mine would be that sentience does seem to come on a spectrum, but that can in theory be addressed by some scaling factor. The bigger issue for me is that it implies a hardcore total utilitarian would be fine with a future populated by trillions of sentient but otherwise completely alien AIs successfully achieving their alien goals (e.g. maximizing paperclips) and experiencing a desirable state of consciousness about it. But I think some hardcore utilitarians would bite this bullet, and it wouldn't be the biggest bullet for a utilitarian to bite either.