I think getting to “good enough” on this question should pretty much come for free once the hard problems are solved. For example, any common-sense statement like “Maximize flourishing as described in the UN convention on human rights” is IMO likely to get us to a good place, provided the agent is honest, remains aligned to those values, and interprets them reasonably intelligently. (Each of those three prerequisites is way harder than picking a non-harmful value function.)
If our AGIs, after delivering utopia, tell us we need to start restricting childbearing rights, I don’t see that as problematic. Long before we require that step we will have revolutionized society, so most people will buy into the requirement.
Honestly, I think there are plenty of great outcomes that don’t preserve (1) either. A world of radical abundance with no ownership, property, or ability to form companies/enterprises could still be dramatically better than the no-AGI counterfactual trajectory, even if it happens not to be most people’s preferred outcome ex ante.
For sci-fi, I’d say Iain M. Banks’ Culture series presents one of the more plausible (as in plausibly stable, not most probable ex ante) AGI-led utopias. (It’s what Musk is referring to when he says AGIs will keep us around because we are interesting.)