At the end you still had to talk about the low level states again to say they should compromise on b
“Compromising on b” is an implementation detail that can easily be omitted. The load-bearing part is “both would be happy enough with any low-level state that gets mapped to the high-level state x”.
For example, the policy of randomly sampling any l such that f(l)=x is something both utility functions can agree on, and it doesn’t require any additional comparison of low-level preferences once the high-level state has been agreed upon. A rising tide lifts all boats, etc.
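To make that policy concrete, here is a minimal sketch in Python, assuming a toy abstraction f represented as an explicit dictionary (the state names and the mapping are illustrative, not from the discussion above):

```python
import random

# Toy abstraction f: each low-level state maps to a high-level state.
# The state names and the mapping are illustrative assumptions.
f = {
    "l1": "x", "l2": "x", "l3": "x",  # low-level states that realize high-level state x
    "l4": "y", "l5": "y",             # low-level states that realize some other state y
}

def sample_low_level_state(high_level_state, abstraction, rng=random):
    """Uniformly sample a low-level state from the preimage f^{-1}(high_level_state)."""
    preimage = [l for l, hl in abstraction.items() if hl == high_level_state]
    return rng.choice(preimage)

# Once the agents have agreed on the high-level state x, this policy needs no
# further comparison of their low-level preferences.
print(sample_low_level_state("x", f))  # one of l1, l2, l3, chosen uniformly
```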
Suppose the two agents are me and a flatworm.
a = ideal world according to me
b = status quo
c = ideal world according to the flatworm
d, e, f = various deliberately-bad-to-both worlds
I’m not going to stop trying to improve the world just because the flatworm prefers the status quo, and I wouldn’t be “happy enough” if we ended up in flatworm utopia.
What bargains I would agree to, and how I would feel about them, are not safe to abstract away.
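To make the flatworm example concrete, here is a small sketch with purely illustrative utility numbers (my own assumptions, chosen only to reproduce the qualitative claims above), assuming the abstraction lumps every not-deliberately-bad world into a single high-level state x:

```python
# Purely illustrative utilities for the flatworm example; the numbers are assumptions.
my_utility       = {"a": 10, "b": 5, "c": 1,  "d": 0, "e": 0, "f": 0}
flatworm_utility = {"a": 1,  "b": 5, "c": 10, "d": 0, "e": 0, "f": 0}

# Suppose the abstraction maps every not-deliberately-bad world to one
# high-level state x, so the preimage f^{-1}(x) = {a, b, c}.
preimage_of_x = ["a", "b", "c"]

# Expected value, for each agent, of uniformly sampling from f^{-1}(x):
my_expected = sum(my_utility[w] for w in preimage_of_x) / len(preimage_of_x)
flatworm_expected = sum(flatworm_utility[w] for w in preimage_of_x) / len(preimage_of_x)

print(my_expected, flatworm_expected)  # ~5.33 each, barely above the status quo b = 5
print(my_utility["c"])                 # 1: flatworm utopia beats the deliberately-bad
                                       # worlds (0) but is much worse for me than b or a,
                                       # which is exactly the detail the abstraction hides.
```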
I wouldn’t be “happy enough” if we ended up in flatworm utopia
You would, presumably, be quite happy compared to “various deliberately-bad-to-both worlds”.
I’m not going to stop trying to improve the world just because the flatworm prefers the status quo
Because you don’t care about the flatworm, and because you don’t perceive the flatworm as having enough bargaining power to make you bend to its preferences.
In addition, your model rules out more fine-grained ideas like “the cubic mile of terrain around the flatworm remains unchanged while I get the rest of the universe”, which is plausibly what CEV would result in: everyone gets their own safe garden, with the only concession being the knowledge that everyone else’s safe gardens also exist.