“You have a matrix …”: correct. “And then …”: whether that’s correct depends on what you mean by “in the process”, but it’s certainly not entirely unlike what I meant :-).
Your last paragraph is too metaphorical for me to work out whether I share your concerns. (My description was extremely handwavy so I’m in no position to complain.) I think the scaffolding required is basically just the agent’s knowledge. (To clarify a couple of points: not necessarily minimum description length, which of course is uncomputable, but something like “shortest description the agent can readily come up with”; and of course in practice what I describe is way too onerous computationally but some crude approximation might be manageable.)
The basic issue is whether the utility weights (“description lengths”) reflect the agent’s subjective preferences. If they do, it’s an entirely different kettle of fish. If they don’t, I don’t see why “my wife” should get much more weight than “the girl next to me on a bus”.
I think real people have preferences whose weights decay with distance—geographical, temporal and conceptual. I think it would be reasonable for artificial agents to do likewise. Whether the particular mode of decay I describe resembles real people’s, or would make an artificial agent tend to behave in ways we’d want, I don’t know. As I’ve already indicated, I’m not claiming to be doing more than sketch what some kinda-plausible bounded-utility agents might look like.
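To make the sketch a little more concrete, here is a toy illustration (in Python) of the general kind of weighting being discussed: each welfare term weighted by 2^(-L), where L stands in for “shortest description the agent can readily come up with”. Every name, number, and function here is invented purely for illustration; it is not the commenter’s actual proposal, just one possible mode of decay.

```python
# Toy sketch of description-length-weighted utility (illustrative only).
# Each entity's welfare is weighted by 2^(-L), where L is a crude stand-in
# for the length in bits of the shortest description the agent can readily
# come up with. Shorter descriptions ("my wife") get exponentially more
# weight than longer ones ("the girl next to me on a bus").

from math import fsum


def weight(description_length_bits: float) -> float:
    """Utility weight that decays exponentially with description length."""
    return 2.0 ** (-description_length_bits)


def weighted_utility(entities: list[tuple[str, float, float]]) -> float:
    """
    entities: (name, description_length_bits, welfare) triples.
    Returns the weighted sum of welfare terms. If the descriptions form a
    prefix-free code, the weights sum to at most 1 (Kraft inequality), so
    bounded welfare values keep the total bounded as well.
    """
    return fsum(weight(bits) * welfare for _, bits, welfare in entities)


if __name__ == "__main__":
    # Made-up description lengths: conceptually/geographically/temporally
    # closer entities get shorter descriptions and hence larger weights.
    people = [
        ("my wife", 3.0, 1.0),
        ("my neighbour", 8.0, 1.0),
        ("the girl next to me on a bus", 20.0, 1.0),
    ]
    for name, bits, _ in people:
        print(f"{name}: weight = {weight(bits):.6f}")
    print("total weighted utility:", weighted_utility(people))
```

The choice of 2^(-L) is just one convenient decay rule; whether it resembles how real people’s weights fall off with distance, or would make an artificial agent behave as we’d want, is exactly the open question raised above.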