yes, it is not written in the universe how to weight the aggregation
I think it’s written, but not in behavior.
Imagine two people whose behavior is described by the same utility function: they both behave as if they valued chocolate at 1 and vanilla at 2. But internally, the first person feels very strongly about all of their preferences, while the second is very even-tempered and mostly feels ok no matter what. (They would also climb the same number of flights of stairs to get vanilla: the second person cares less about vanilla, but they are also less bothered by climbing stairs, so the trade-off comes out the same.) Then we want to give them different weights in the aggregation, even though they have the same utility function. That means the correct weighting has to be inferred from internal feelings, not from behavior alone.
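To make the behavioral-equivalence point concrete, here is a minimal sketch (the numbers, the stair-climbing cost, and the "felt intensity" scale are all invented for illustration): behavior only depends on comparisons between felt utilities and felt costs, which are unchanged when both are rescaled by the same positive constant, so the two agents act identically, while a valence-weighted aggregation (one possible normative choice, not something readable off behavior) treats them very differently.

```python
# Toy sketch: two agents share the same behavioural utility function, but
# agent B's internal "felt" scale is one tenth of agent A's. All numbers here
# are invented for illustration.

behavioural_utility = {"chocolate": 1.0, "vanilla": 2.0}
behavioural_cost_per_flight = 0.5          # effort of climbing one flight of stairs
felt_scale = {"A": 1.0, "B": 0.1}          # internal intensity (not observable)

def flights_worth_climbing(agent, option):
    """How many flights the agent will climb for the option (pure behaviour)."""
    k = felt_scale[agent]
    felt_utility = k * behavioural_utility[option]
    felt_cost = k * behavioural_cost_per_flight
    # The common factor k cancels, so behaviour cannot reveal it.
    return round(felt_utility / felt_cost)

for agent in ("A", "B"):
    print(agent, {o: flights_worth_climbing(agent, o) for o in behavioural_utility})
# A {'chocolate': 2, 'vanilla': 4}
# B {'chocolate': 2, 'vanilla': 4}   <- behaviourally indistinguishable

# A valence-weighted aggregation would nonetheless weight A ten times as heavily:
for option in behavioural_utility:
    total = sum(k * behavioural_utility[option] for k in felt_scale.values())
    print(option, total)   # chocolate 1.1, vanilla 2.2
```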
Another, more drastic thought experiment: imagine a box that exhibits no behavior at all, but in fact there is a person inside. You have to decide whether to send resources into the box, and for that you need to know what is inside and what feelings it contains.
I swiftly edited that comment, but your reply obviously beat me to it! I agree, there is plausibly some ‘actual valence magnitude’ which we ‘should’ normatively account for in aggregations.
In behavioural practice, it comes down to what cooperative/normative infrastructure is giving rise to the cooperative gains which push toward the Pareto frontier (see the sketch after this list), e.g.
explicit instructions/norms (fair or otherwise)
‘exchange rates’ between goods or directly on utilities
marginal production returns on given resources
starting state/allocation in dynamic economy-like scenarios (with trades)
differential bargaining power/leverage
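Here is a minimal toy sketch of that idea (the utility functions, the grid search, and the bargaining weights are all invented for illustration): on the same Pareto frontier, a plain utilitarian sum and an asymmetric Nash bargaining solution, where the exponent stands in for differential leverage, pick out visibly different splits.

```python
# Toy sketch (setup invented for illustration): one unit of a resource is
# split between two agents. Agent 1's utility is x, agent 2's is sqrt(1 - x)
# (diminishing returns), so every split is Pareto-efficient but different
# "infrastructure" selects different points on the frontier.

import math

def u1(x): return x
def u2(x): return math.sqrt(1.0 - x)

grid = [i / 10_000 for i in range(10_001)]

# (a) straight utilitarian sum
util = max(grid, key=lambda x: u1(x) + u2(x))

# (b) asymmetric Nash bargaining from disagreement point (0, 0), where the
#     exponent alpha is a stand-in for relative bargaining power/leverage
def nash(alpha):
    return max(grid, key=lambda x: (u1(x) ** alpha) * (u2(x) ** (1 - alpha)))

print(round(util, 3))        # ~0.75
print(round(nash(0.5), 3))   # ~0.667  (symmetric bargaining)
print(round(nash(0.8), 3))   # ~0.889  (agent 1 has more leverage)
```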
In discussion I have sometimes used the ‘ice cream/stabbing game’ as an example:
either you get ice cream and I get stabbed
or neither of those things
neither of us is concerned with the other’s preferences
It’s basically a really extreme version of your chocolate and vanilla case. But they’re preference-isomorphic!
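One way to read the ‘preference-isomorphic’ point, as a minimal sketch (the felt magnitudes and the per-agent range-normalisation are invented for illustration): behavioural utilities are only defined up to a positive affine transform per player, so once each player's utilities are rescaled, wanting ice cream and wanting not to be stabbed register as the same size of preference, and a range-normalised utilitarian sum is indifferent between the two outcomes.

```python
# Toy sketch (numbers invented): per-agent behavioural utilities are only
# defined up to a positive affine transform, so once each player's utilities
# are normalised to [0, 1], "you get ice cream" and "I avoid being stabbed"
# look like exactly the same strength of preference.

def normalise(utilities):
    """Rescale one agent's utilities over the outcomes to span [0, 1]."""
    lo, hi = min(utilities), max(utilities)
    return tuple((u - lo) / (hi - lo) for u in utilities)

# Outcomes: (ice cream + stabbing, neither)
felt = {
    "you": (5.0, 0.0),          # mild gain from ice cream
    "me": (-1_000_000.0, 0.0),  # being stabbed is very, very bad
}

behavioural = {agent: normalise(us) for agent, us in felt.items()}
print(behavioural)
# {'you': (1.0, 0.0), 'me': (0.0, 1.0)}  <- preference-isomorphic players

# A range-normalised utilitarian sum is indifferent between the outcomes:
totals = [sum(us[i] for us in behavioural.values()) for i in range(2)]
print(totals)  # [1.0, 1.0]
```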