To build a superintelligence that actually maximizes IBM’s share price in a normal way that the CEO of IBM would approve of would require solving the Friendly AI problem and then changing a couple of lines of code.
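A minimal sketch of that claim, with every name hypothetical and the hard parts deliberately left unimplemented: if the Friendliness machinery existed, retargeting it at IBM’s share price really would be a couple-of-lines change, because only the objective handed to the optimizer differs.

```python
# Toy illustration only; nothing here is a real design, and every function is a placeholder.

def humanitys_utility(world_state):
    """The hard, unsolved part: a stable formalization of what humanity values."""
    raise NotImplementedError("This is the Friendly AI problem.")

def ibm_share_price(world_state):
    """The 'couple of lines' you would swap in for the IBM version."""
    raise NotImplementedError("Needs the same safe machinery below either way.")

def safe_superintelligent_optimize(objective):
    """Also hard and unsolved: pursue the objective without trashing everything
    else of value along the way."""
    raise NotImplementedError("This is the rest of the Friendly AI problem.")

# Friendly AI:
#     safe_superintelligent_optimize(humanitys_utility)
# IBM-share-price AI; the only diff is which objective gets passed in:
#     safe_superintelligent_optimize(ibm_share_price)
```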
That assumes that being Friendly to all of humanity is just as easy as being Friendly to a small subset.
Surely it’s much harder to make all of humanity happy than to make IBM’s stockholders happy? I mean, a FAI that does the latter is far less constrained, but it’s still not going to convert the universe into computronium.
Not really. “Maximize the utility of this one guy” isn’t much easier than “Maximize the utility of all humanity” when the real problem is defining “maximize utility” in a stable way. If it were, you could create a decent (though probably not recommended) approximate solution to the Friendly AI problem just by saying “Maximize the utility of this one guy here who’s clearly very nice and wants what’s best for humanity.”
There are some serious problems with getting something that takes interpersonal conflicts into account in a reasonable way, but that’s not where the majority of the problem lies.
I’d even go so far as to say that if someone built a successful IBM-CEO-utility-maximizer it’d be a net win for humanity, compared to our current prospects. With absolute power there’s not a lot of incentive to be an especially malevolent dictator (see Moldbug’s Fhnargl thought experiment for something similar) and in a post-scarcity world there’d be more than enough for everyone including IBM executives. It’d be sub-optimal, but compared to Unfriendly AI? Piece of cake.
Fnargl.
[Yvain crosses “get corrected on spelling of ‘Fnargl’” off his List Of Things To Do In Life]
Glad to be of service!
If somebody was going to build an IBM profit AI (of the sort of godlike AI that people here talk about), it would almost certainly end up doubling as the IBM CEO Charity Foundation AI.
“Maximize the utility of this one guy” isn’t much easier than “Maximize the utility of all humanity” when the real problem is defining “maximize utility” in a stable way.
It seems quite a bit easier to me! Maybe not 7 billion times easier—but heading that way.
If it were, you could create a decent (though probably not recommended) approximate solution to the Friendly AI problem just by saying “Maximize the utility of this one guy here who’s clearly very nice and wants what’s best for humanity.”
That would work—if everyone agreed to trust them and their faith was justified. However, there doesn’t seem to be much chance of that happening.
Surely it’s much harder to make all of humanity happy than to make IBM’s stockholders happy?
It is more work for the AI to make all of humanity happy than a smaller subset, but it is not really more work for the human development team. They have to solve the same Friendliness problem either way.
For a greatly scaled down analogy: I wrote a program that analyzes stored procedures in a database and generates web services that call those stored procedures. Whenever we make a release, I run that program on our database, which currently has around 1800 public procedures. Writing that program was the same amount of work for me as it would have been if there were 500 or 5000 web services to generate instead of 1800; it is the program that has to do more or less work when there are more or fewer procedures.
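A hedged sketch of that kind of generator (Python here purely for illustration; the connection string, catalog query, and the call_procedure helper in the emitted stubs are all assumptions, not the commenter’s actual code). The point it illustrates: the generator’s source is the same size whether the catalog holds 500, 1800, or 5000 procedures; the loop scales, not the developer’s effort.

```python
# Illustrative generator: reads the procedure catalog and emits one service stub per procedure.
# Assumes a SQL Server-style INFORMATION_SCHEMA; "call_procedure" is a hypothetical helper.
import pyodbc

STUB = '''\
def {name}_service(**params):
    """Generated wrapper for stored procedure {schema}.{name}."""
    return call_procedure("{schema}.{name}", params)
'''

def generate_services(connection_string: str) -> str:
    rows = pyodbc.connect(connection_string).cursor().execute(
        "SELECT ROUTINE_SCHEMA, ROUTINE_NAME "
        "FROM INFORMATION_SCHEMA.ROUTINES "
        "WHERE ROUTINE_TYPE = 'PROCEDURE'"
    ).fetchall()
    # Writing this function is a fixed cost; the loop below does more or less
    # work depending on how many procedures the database happens to contain.
    return "\n".join(STUB.format(schema=s, name=n) for s, n in rows)
```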