> “Maximize the utility of this one guy” isn’t much easier than “Maximize the utility of all humanity” when the real problem is defining “maximize utility” in a stable way.
It seems quite a bit easier to me! Maybe not 7 billion times easier—but heading that way.
> If it were, you could create a decent (though probably not recommended) approximation to the Friendly AI problem just by saying “Maximize the utility of this one guy here who’s clearly very nice and wants what’s best for humanity.”
That would work—if everyone agreed to trust that one person, and if that trust turned out to be justified. There doesn’t seem to be much chance of that happening, though.