Let me see if I understand you correctly.
You have a matrix of (number of individuals) x (number of time-slices). Each matrix cell has a value (“happiness”) that’s constrained to lie in the [-1..1] interval. You call the cell value “local utility”, right?
And then you, basically, sum up the cell values, re-scale the sum to fit into a pre-defined range and, in the process, add a transformation that makes sure the bounds are not sharp cut-offs, but rather limits which you approach asymptotically.
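(To make sure I'm picturing the same thing, here's a minimal sketch of that aggregation step. The use of tanh as the softening transformation, and the function and parameter names, are only my guesses at "limits which you approach asymptotically", not necessarily what you meant.)

```python
import numpy as np

def global_utility(local_utility, weights, bound=100.0):
    """Aggregate an (individuals x time-slices) matrix of local utilities,
    each in [-1, 1], into one bounded number.

    `weights` has the same shape as `local_utility`; `bound` is the
    hypothetical pre-defined range.  tanh keeps the result in
    (-bound, +bound), approached asymptotically rather than by hard clipping.
    """
    raw = np.sum(weights * local_utility)   # weighted sum over all cells
    return bound * np.tanh(raw / bound)     # soft re-scaling, no sharp cut-off

# toy example: 3 individuals, 4 time-slices, equal weights
rng = np.random.default_rng(0)
u = np.clip(rng.normal(size=(3, 4)), -1.0, 1.0)
w = np.ones((3, 4))
print(global_utility(u, w))
```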
As to the second part, I have trouble visualising a description language in which the description lengths would work as you want. It seems to me it would have to involve a lot of scaffolding, which might collapse under its own weight.
“You have a matrix …”: correct. “And then …”: whether that’s correct depends on what you mean by “in the process”, but it’s certainly not entirely unlike what I meant :-).
Your last paragraph is too metaphorical for me to work out whether I share your concerns. (My description was extremely handwavy so I’m in no position to complain.) I think the scaffolding required is basically just the agent’s knowledge. (To clarify a couple of points: not necessarily minimum description length, which of course is uncomputable, but something like “shortest description the agent can readily come up with”; and of course in practice what I describe is way too onerous computationally but some crude approximation might be manageable.)
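For concreteness, one crude, readily-computable stand-in for "shortest description the agent can readily come up with" would be compressed length. This is purely my illustration of the kind of approximation I mean, not a claim about the right description language, and it is genuinely not Kolmogorov complexity.

```python
import zlib

def crude_description_length_bits(description: str) -> int:
    # Length in bits of the zlib-compressed description: a computable,
    # cheap proxy for description length.  The choice of zlib and of
    # what counts as a "description" are assumptions for illustration.
    return 8 * len(zlib.compress(description.encode("utf-8")))

print(crude_description_length_bits("the person I share a house and a surname with"))
print(crude_description_length_bits("the third passenger from the front on the 8:14 bus, in a green coat"))
```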
The basic issue is whether the utility weights (“description lengths”) reflect subjective preferences. If they do, it’s an entirely different kettle of fish. If they don’t, I don’t see why “my wife” should get much more weight than “the girl next to me on a bus”.
I think real people have preferences whose weights decay with distance—geographical, temporal and conceptual. I think it would be reasonable for artificial agents to do likewise. Whether the particular mode of decay I describe resembles real people’s, or would make an artificial agent tend to behave in ways we’d want, I don’t know. As I’ve already indicated, I’m not claiming to be doing more than sketch what some kinda-plausible bounded-utility agents might look like.
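A toy version of the decay I have in mind, with exponential fall-off in description length; the particular 2^-L rule and the numbers are made up purely for illustration, not part of any claim about how real people or well-behaved artificial agents actually weight things:

```python
def weight(description_length_bits: float) -> float:
    # Exponential decay: someone the agent can describe in L bits
    # (relative to its own knowledge) gets weight 2**-L, so weight
    # falls off with geographical, temporal and conceptual distance
    # as the agent sees it.
    return 2.0 ** (-description_length_bits)

# hypothetical description lengths, chosen only to show the shape of the decay
for who, bits in [("spouse", 4), ("next-door neighbour", 9), ("stranger on the bus", 18)]:
    print(f"{who}: {weight(bits):.6g}")
```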