I’m going to try thinking about this by applying the reversal heuristic.
If a smarter and/or less evil person had magicked house elves into existence, so that they were mentally incapable of understanding what freedom would entail, instead of just enjoying servitude, should we change them? Equivalently, if we have a world where everyone is happier than this one because their desires are eusocial and fully compatible with each other, but liberty and prestige are literally impossible to conceive of, should we change back? If that world existed, or we found those aliens, should they be “freed” to make them appreciate liberty, when the concept never occurred to them?
OK, now we can ask the question—should we change from our world to one where people are not culturally molded to appreciate any of our current values? Let’s say cultural pressures didn’t exist, and values emerged from allowing people, starting from when they are babies, to have whatever they want. This is accomplished by non-sentient robots that can read brainwaves and fulfill desires immediately. Is that better, or should we move towards a future where we continue to culturally engineer our children to have a specific set of desires, those we care about—for pathos, prestige, freedom, etc.?
Or should we change the world from our current one to one where people’s values are eusocial by design? Where being sacrificed for the greater good was pleasant, and the idea of selfishness was impossible?
At the end of this, I’m left with a feeling that yes, I agree that these are actually ambiguous, and “an explicit level of status quo bias to our preferences” is in fact justified.
had magicked house elves into existence [...] should we change them?
I’m explicitly arguing that even though we might not want to change them, we could still prefer they not exist in the first place.
should we change from our world to one where people are not culturally molded to appreciate any of our current values?
I’m trying to synthesise actual human values, not hypothetical other values that other beings might have. So in this process, our current values (or our current meta-preferences for our future values) get a special place. If we had different values currently, the synthesis would be different. So that would-be change is, from our perspective, a loss.
Agreed. I’m just trying to think through why we should or should not privilege the status quo. I notice I’m confused about this, since the reversal heuristic implies we shouldn’t. If we take this approach to an extreme, aren’t we locking in the status quo as a base for allowing only Pareto improvements, rather than overall utilitarian gains?
(I’ll note that Eric Drexler’s Pareto-topia argument explicitly allows for this condition—I’m just wondering whether it is ideal, or a necessary compromise.)
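To make the distinction I mean by “only Pareto improvements, rather than overall utilitarian gains” concrete, here is a toy sketch; the numbers and helper names are purely illustrative assumptions of mine, not anything from the agenda.

```python
# Toy two-person utility vectors, chosen only for illustration.

def is_utilitarian_gain(before, after):
    """True if total utility goes up, regardless of who gains or loses."""
    return sum(after) > sum(before)

def is_pareto_improvement(before, after):
    """True if nobody is worse off and at least one person is better off."""
    return (all(a >= b for a, b in zip(after, before))
            and any(a > b for a, b in zip(after, before)))

status_quo = (5, 5)
option_a = (9, 3)   # total utility rises from 10 to 12, but person 2 loses
option_b = (6, 5)   # total utility rises and nobody is worse off

print(is_utilitarian_gain(status_quo, option_a), is_pareto_improvement(status_quo, option_a))  # True False
print(is_utilitarian_gain(status_quo, option_b), is_pareto_improvement(status_quo, option_b))  # True True
```

The worry is whether privileging the status quo as the baseline restricts us to option_b-style moves and rules out option_a-style ones, even when the total gain is large.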
It’s locking in the moral/preference status quo; once that’s done, non-Pareto overall gains are fine.
Even when locking in that status quo, it explicitly trades off certain values against others, so there is no “only Pareto” restriction.
I have a research agenda to be published soon that will look into these issues in more detail.
I’m trying to synthesise actual human values, not hypothetical other values that other beings might have.
To be clear, when you say “actual human values”, do you mean anything different than just “the values of the humans alive today, in the year 2019”? You mention “other beings”—is this meant to include other humans in the past who might have held different values?
The aim is to be even more specific—the values of a specific human at a specific time. What we then do with these syntheses (how much change to allow, etc.) is a further question. Including other humans from the past is a choice that we would then need to make, or not.
I see, thank you. So then, would you say this doesn’t & isn’t intended to answer any question like “whose perspective should be taken into account?”, but that it instead assumes some answer to that question has already been specified, & is meant to address what to do given this chosen perspective?
It doesn’t intend to answer those questions; but those questions become a lot easier to answer once this issue is solved.