Not really. “Maximize the utility of this one guy” isn’t much easier than “Maximize the utility of all humanity” when the real problem is defining “maximize utility” in a stable way. If it were, you could create a decent (though probably not recommended) approximation to Friendly AI just by saying “Maximize the utility of this one guy here who’s clearly very nice and wants what’s best for humanity.”
There are some serious problems with getting something that takes interpersonal conflicts into account in a reasonable way, but that’s not where the majority of the problem lies.
I’d even go so far as to say that if someone built a successful IBM-CEO-utility-maximizer it’d be a net win for humanity, compared to our current prospects. With absolute power there’s not a lot of incentive to be an especially malevolent dictator (see Moldbug’s Fhnargl thought experiment for something similar) and in a post-scarcity world there’d be more than enough for everyone including IBM executives. It’d be sub-optimal, but compared to Unfriendly AI? Piece of cake.
If somebody were going to build an IBM profit AI (of the sort of godlike AI that people here talk about), it would almost certainly end up doubling as the IBM CEO Charity Foundation AI.
“Maximize the utility of this one guy” isn’t much easier than “Maximize the utility of all humanity” when the real problem is defining “maximize utility” in a stable way.
It seems quite a bit easier to me! Maybe not 7 billion times easier—but heading that way.
If it were, you could create a decent (though probably not recommended) approximation to Friendly AI just by saying “Maximize the utility of this one guy here who’s clearly very nice and wants what’s best for humanity.”
That would work—if everyone agreed to trust them and their faith was justified. However, there doesn’t seem to be much chance of that happening.
Fnargl.
[Yvain crosses “get corrected on spelling of ‘Fnargl’” off his List Of Things To Do In Life]
Glad to be of service!