Not even if those people independently keep going higher in the abstraction hierarchy: they’ll never converge to the same object, because there’s always that inequivalence in how they’re translated back to the low-level description.
I mean, that’s clearly not how it works in practice? Take the example in the post literally: two people disagree on food preferences, but can agree on the “food” abstraction and on both of them having a preference for subjectively tasty ones.
I agree with the part of what you just said that’s the NAH, but disagree with your interpretation.
Both people can recognize that there’s a good abstraction here, where what they care about is subjectively tasty food. But this interpersonal abstraction is no longer an abstraction of their values; it simply happens to be about their values, sometimes. It can no longer be cashed out into specific recommendations of real-world actions in the way someone’s values can[1].

[1] For certain meanings of “values,” ofc.

Okay, let’s build a toy model.
We have some system with a low-level state l, which can take on one of six values: {a,b,c,d,e,f}.
We can abstract over this system’s state and get a high-level state h, which can take on one of two states: {x,y}.
We have an objective abstracting-up function f(l)=h.
We have the following mappings between states:
∀l∈{a,b,c}:f(l)=x
∀l∈{d,e,f}:f(l)=y
We have a utility function U_A(l) with a preference ordering of a > b > c ≫ d ≈ e ≈ f, and a utility function U_B(l) with a preference ordering of c > b > a ≫ d ≈ e ≈ f.
We translate both utility functions to h and get the same utility function U(h), whose preference ordering is x > y.
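A minimal sketch of this setup in code; the concrete utility numbers and the min-over-preimage translation rule are illustrative assumptions, since only the preference orderings and the mapping f are specified above:

```python
# Toy model sketch. The concrete utility numbers and the min-over-preimage
# translation rule are illustrative assumptions; only the preference
# orderings and the mapping f are given above.

LOW_STATES = ["a", "b", "c", "d", "e", "f"]

def f(l):
    """Objective abstracting-up function f(l) = h."""
    return "x" if l in {"a", "b", "c"} else "y"

# Orderings a > b > c >> d ~ e ~ f and c > b > a >> d ~ e ~ f.
U_A = {"a": 10, "b": 9, "c": 8, "d": 0, "e": 0, "f": 0}
U_B = {"a": 8, "b": 9, "c": 10, "d": 0, "e": 0, "f": 0}

def translate(U, h):
    """Translate a low-level utility function to a high-level state by
    taking the minimum utility over that state's preimage (assumed rule)."""
    return min(U[l] for l in LOW_STATES if f(l) == h)

for name, U in [("U_A", U_A), ("U_B", U_B)]:
    print(name, {h: translate(U, h) for h in ("x", "y")})
# Both print {'x': 8, 'y': 0}: the translated utilities agree on x > y,
# even though U_A and U_B disagree about which low-level state is best.
```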
Thus, both U_A(l) and U_B(l) can agree on which high-level state they would greatly prefer. No low-level state would maximally satisfy both of them, but they both would be happy enough with any low-level state that gets mapped to the high-level state of x. (b is the obvious compromise.)

Which part of this do you disagree with?
I disagree that translating to x and y lets you “reduce the degrees of freedom” or otherwise get any sort of discount lunch. At the end you still had to talk about the low-level states again to say they should compromise on b (or not compromise and fight it out over c vs. a; that’s always an option).
At the end you still had to talk about the low-level states again to say they should compromise on b
“Compromising on b” is a more detailed implementation that can easily be omitted. The load-bearing part is “both would be happy enough with any low-level state that gets mapped to the high-level state of x”.
For example, the policy of randomly sampling any l such that f(l) = x is something both utility functions can agree on, and it doesn’t require doing any additional comparisons of low-level preferences once the high-level state has been agreed upon. A rising tide lifts all boats, etc.
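A sketch of that sampling policy, with the same assumed utility numbers (again, the specific numbers are illustrative, not from the comment):

```python
# Sketch of the "randomly sample any l such that f(l) = x" policy,
# with the same assumed (illustrative) utility numbers as above.
import random

U_A = {"a": 10, "b": 9, "c": 8, "d": 0, "e": 0, "f": 0}
U_B = {"a": 8, "b": 9, "c": 10, "d": 0, "e": 0, "f": 0}

def f(l):
    return "x" if l in {"a", "b", "c"} else "y"

# All low-level states that realize the agreed-upon high-level state x.
preimage_of_x = [l for l in U_A if f(l) == "x"]

def sample_policy():
    """Pick a low-level state uniformly at random from the preimage of x,
    without comparing the agents' low-level preferences any further."""
    return random.choice(preimage_of_x)

print("example draw:", sample_policy())

# Expected utility under the policy: 9.0 for both agents, far above the 0
# either gets from any state mapping to y, so both can endorse the policy.
for name, U in [("U_A", U_A), ("U_B", U_B)]:
    print(name, sum(U[l] for l in preimage_of_x) / len(preimage_of_x))
```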
Suppose the two agents are me and a flatworm.
a = ideal world according to me
b = status quo
c = ideal world according to the flatworm
d, e, f = various deliberately-bad-to-both worlds
I’m not going to stop trying to improve the world just because the flatworm prefers the status quo, and I wouldn’t be “happy enough” if we ended up in flatworm utopia.
What bargains I would agree to, and how I would feel about them, are not safe to abstract away.
I wouldn’t be “happy enough” if we ended up in flatworm utopia
You would, presumably, be quite happy compared to “various deliberately-bad-to-both worlds”.
I’m not going to stop trying to improve the world just because the flatworm prefers the status quo
Because you don’t care about the flatworm, and you don’t perceive the flatworm as having enough bargaining power to make you bend to its preferences.
In addition, your model rules out more fine-grained ideas like “the cubic mile of terrain around the flatworm remains unchanged while I get the rest of the universe”, which is plausibly what CEV would result in: everyone gets their own safe garden, with the only concession being the knowledge that everyone else’s safe gardens also exist.