>Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.
You can say that a robot toy has a goal of following a light source, or that a thermostat has a goal of keeping the room at a certain temperature. But I have yet to hear anyone count those things toward total utility calculations.
Of course, a counterargument would be "but those are not actual goals, those are the goals of the humans who set it" — but then you've just hidden all the references to humans inside the word "goal" and are back to square one.
Yeah, but the problem here is that we perceive happiness in animals only insofar as it looks like our own happiness. Have you noticed that the closer an animal is to a human, the more likely we are to agree it can feel emotions? An ape can definitely display something like human happiness, so we're pretty sure it can experience it. A dog can display something mostly like human happiness, so most likely it can feel it too. A lizard — meh, maybe, but probably not. An insect — most people would say no. Maybe I'm wrong and there's an argument that animals can experience happiness which is not based on their similarity to us; in that case I'm very curious to see it.
For the record, I believe we do have at least a crude mechanistic model of how consciousness works in general, and that goes for the hard problem of consciousness in particular (though the latter is a bit of a wrong question).
Otherwise, I actually think it somewhat answers my question. One qualm of mine would be that sentience does seem to come on a spectrum — but that can in theory be addressed by some scaling factor. The bigger issue for me is that it implies a hardcore total utilitarian would be fine with a future populated by trillions of sentient but otherwise completely alien AIs successfully achieving their alien goals (e.g. maximizing paperclips) and experiencing a desirable state of consciousness about it. But I think some hardcore utilitarians would bite this bullet, and it wouldn't be the biggest bullet for a utilitarian to bite, either.