Before one could even consider a utility of a human’s (or a nematode’s) existence
No. Utility is a thing agents have.
‘one’ in that case refers to an agent who’s trying to value feelings that physical systems have.
I think there’s some linguistic confusion here. As an agent valuing that there’s no enormous torture camp set up in a region of space, I’d need to have a utility function over space, which gives the utility of that space.
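Something like this toy sketch is what I have in mind; the region description and the feature name in it are invented purely for illustration, not a proposal for how such a function would really be built:

```python
# Toy sketch: a utility function defined over descriptions of a region of
# space, rather than over the agent's own experiences.

def region_utility(region: dict) -> float:
    # "contains_torture_camp" is an invented feature of the (hypothetical)
    # region description; a real description would be far richer.
    if region.get("contains_torture_camp"):
        return -1e6  # strongly disvalue any region configured that way
    return 0.0

print(region_utility({"contains_torture_camp": True}))  # -1000000.0
print(region_utility({}))                                # 0.0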
‘one’ in that case refers to an agent who’s trying to value feelings that physical systems have.
I see what you’re doing, then. I’m thinking of a real-life limited agent like me, who has little idea how the inside of a nematode or human works. I have a model of each, and I make a guess at how to weigh them in my utility function based on observations of them. You’re thinking of an ideal agent that has a universal utility function that applies to arbitrary reality.
Still, though, the function is at least as likely to start its evaluation top-down (partitioning the world into objects) as bottom-up.
I don’t understand your overall point. It sounds to me like you’re taking a long way around to agreeing with me, yet phrasing it as if you disagreed.
I think (and private_messaging should feel free to correct me if I’m wrong) that what private_messaging is saying is, in effect, that before you can assign utilities to objects or worldstates or whatever, you’ve got to be able to recognize those objects/worldstates/whatever. I may value “humans”, but what is a “human”? Since the actual reality doesn’t have a “human” as an ontologically fundamental category—it simply computes the behavior of particles according to the laws of physics—the definition of the “human” which I assign utility to must be given by me. I’m not going to get the definition of a “human” from the universe itself.
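A toy sketch of that point, with every name in it invented purely for illustration: the category “human” lives in the recognizer the agent brings along, not in the particle-level description the universe provides.

```python
# Toy sketch: the world comes to us only as low-level state (here, labeled
# "particles"); the category "human" exists only in the agent's own model.
from typing import Callable, List

Particle = dict          # e.g. {"kind": ..., "position": ...} -- an invented format
WorldState = List[Particle]

def make_utility(recognize_humans: Callable[[WorldState], list],
                 value_per_human: float) -> Callable[[WorldState], float]:
    """Build a utility function over raw world-states.

    The recognizer is supplied by the agent; physics only supplies particles.
    """
    def utility(world: WorldState) -> float:
        humans = recognize_humans(world)      # the agent's own carving-up of the world
        return value_per_human * len(humans)  # toy valuation: count what was recognized
    return utility

# A deliberately crude stand-in for the agent's model of "human":
def crude_recognizer(world: WorldState) -> list:
    return [p for p in world if p.get("kind") == "human-shaped-clump"]

u = make_utility(crude_recognizer, value_per_human=1.0)
print(u([{"kind": "human-shaped-clump"}, {"kind": "rock"}]))  # -> 1.0
```

Swap in a different recognizer and the “same” utility function values the same world differently, which is exactly why the definition has to come from the agent rather than from the universe.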
Okay. I don’t understand his point, then. That doesn’t seem relevant to what I was saying.