Instead of creating a superintelligent AGI to optimize some arbitrary task and watching it allocate all the Earth's resources (and, later, the universe's, though we won't be around to watch) toward that goal, we decide to give it the one task that justifies that kind of power and control: ruling over humanity.
The AGI is more competent than any human leader, but we wouldn't want a human leader whose values we disagree with, however competent, and the same applies to robotic overlords. So, we implement something like Futarchy, except:
Instead of letting the officials generate policies, the AGI will do it.
Instead of using betting markets, we let the AGI decide which policy best fulfills the values.
Instead of voting for representatives that’ll define the values, the AGI will talk with each and every one of us to build a values profile, and then use the average of all our values profiles to build the values profile used for decision making.
Even better: if it has enough computation power, it can store all the values profiles individually, calculate the utility of each candidate decision according to each profile, calculate how much each decision will affect each voter, and take a weighted average (see the sketch below).
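To make the aggregation step concrete, here is a minimal sketch in Python. It assumes each voter's values profile has already been reduced to a utility vector over candidate policies, and that "how much the decision affects each voter" is available as a matrix of impact weights; the function name and data shapes are illustrative, not a real specification of how an AGI would do this.

```python
from typing import Optional
import numpy as np

def choose_policy(utilities: np.ndarray, impact: Optional[np.ndarray] = None) -> int:
    """Pick the policy with the highest aggregate utility.

    utilities: (n_voters, n_policies) array; utilities[v, p] is how well
        policy p fulfills voter v's values profile.
    impact: optional (n_voters, n_policies) array; impact[v, p] is how much
        policy p affects voter v. If given, it weights the average.
    """
    if impact is None:
        # Simple variant: average all values profiles, then maximize.
        scores = utilities.mean(axis=0)
    else:
        # Weighted variant: each voter's utility counts in proportion to
        # how much the decision affects them, normalized per policy.
        scores = (impact * utilities).sum(axis=0) / impact.sum(axis=0)
    return int(np.argmax(scores))

# Toy example: 3 voters, 2 candidate policies.
utilities = np.array([[0.9, 0.2],
                      [0.1, 0.8],
                      [0.2, 0.7]])
impact = np.array([[1.0, 0.1],   # policy 0 mostly affects voter 0
                   [0.1, 1.0],
                   [0.1, 1.0]])
print(choose_policy(utilities))          # 1: simple average favors the majority
print(choose_policy(utilities, impact))  # 0: weighting favors the most-affected voter
```

Note that the two variants can disagree, as in the toy example: the plain average picks whatever the majority mildly prefers, while the impact-weighted version defers to the voter the decision actually touches.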
So the AGI takes over, but humanity is still deciding what it wants.