The basic issue is whether the utility weights (“description lengths”) reflect the agent’s subjective preferences. If they do, it’s an entirely different kettle of fish. If they don’t, I don’t see why “my wife” should get much more weight than “the girl next to me on a bus”.
I think real people have preferences whose weights decay with distance—geographical, temporal and conceptual. I think it would be reasonable for artificial agents to do likewise. Whether the particular mode of decay I describe resembles real people’s, or would make an artificial agent tend to behave in ways we’d want, I don’t know. As I’ve already indicated, I’m not claiming to be doing more than sketch what some kinda-plausible bounded-utility agents might look like.
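Just to make the decay idea concrete, here’s a minimal toy sketch of what such a weighting could look like. Everything in it is my own hypothetical choice, not a claim about the proposal above: the way the three distances are mixed, the exponential decay, and the decay constant are all arbitrary illustrations.

```python
import math

# Toy sketch of a bounded-utility agent whose concern for an entity
# decays exponentially with a crude combined "distance": geographical,
# temporal, and conceptual. All names and constants are hypothetical.

def weight(geo_km: float, years: float, concept_hops: float,
           decay: float = 0.1) -> float:
    """Weight in (0, 1]: equals 1 at zero distance, decays exponentially."""
    distance = 0.001 * geo_km + years + concept_hops  # arbitrary mixing
    return math.exp(-decay * distance)

def bounded_utility(entities) -> float:
    """Sum of per-entity values, each discounted by its distance weight.

    `entities` is an iterable of (value, geo_km, years, concept_hops).
    Exponentially decaying weights keep the total bounded even over
    ever more entities at ever greater distances.
    """
    return sum(v * weight(g, t, c) for v, g, t, c in entities)

# "My wife" and "the girl next to me on a bus" are equally close
# geographically, but differ greatly in conceptual distance:
print(weight(geo_km=0.0, years=0.0, concept_hops=0.0))    # ~1.0
print(weight(geo_km=0.001, years=0.0, concept_hops=20.0)) # much smaller
```

Under this kind of scheme, the wife/bus asymmetry comes entirely from the conceptual axis; whether that axis tracks anything like a description length, or anything like real human preference, is exactly the open question above.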