If capitalism is the evolutionary engine that leads to AI, then the advent of AI cannot be separated from the larger economic consequences of AI. In my judgment, the single most realistic way to design God-AI that is friendly is to evolve such AI directly out of the economy that succeeds human capitalism, i.e. as an economic servant to human needs. While this is not a guarantee of friendly AI in itself, any attempt to make AI friendly purely on the basis of absolute, unchanging principles is doomed to ultimate failure because this is exactly how human intelligence, at its best, does not work.
Mitchell Heisman, Suicide Note p315
Seconding CronoDAS, that’s an awful book, but I found one funny sentence in it (the one quoted above).
I looked at that site. The guy writes like a crackpot.
Specific quote is a bit reminiscent of Social Justice Warriors who oppose capitalism, but see capitalism as being defined by oppression, inequality, and other bad stuff rather than by capital.
… as opposed to Libertarian Warriors who support capitalism, but see capitalism as being defined by freedom of speech, at-will employment, and legalized drugs and prostitution rather than by capital?
(Blue, Green, let’s call the whole thing ao.)
Yeah, kinda.
The Battle for Capitalism has always seemed a bit unusual here, though. Especially as it kind of looks like he did just try to make an AI with unchanging principles.
The problem is that understanding the economy is probably harder than understanding human intelligence. After all, the global economy is the product of over 6 billion human brains interacting with each other and their environment.
What does ‘understanding the economy’ mean? Routinely economists point out missed opportunities which the market then exploits (IIRC one of the standard examples was a paper which discovered a small average rise on Mondays), or simple models outperform the economists’ predictions of the future.
By “understand”, I mean have a sufficiently good model to make high quality predictions about what key economic variables are going to do. And I wouldn’t call papers like the one on the Monday effect routine, though they do happen.
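To make the “Monday effect” reference concrete: papers of that sort essentially ask whether average daily returns differ by weekday. Here is a minimal illustrative sketch (my own, not from the thread or the paper in question) of how one might run that check on a series of daily closing prices; the function names and the synthetic data are purely hypothetical.

```python
# Sketch: check for a day-of-week ("Monday") effect in daily returns.
# Assumes a pandas Series of daily closing prices indexed by date.
import numpy as np
import pandas as pd
from scipy import stats

def weekday_mean_returns(prices: pd.Series) -> pd.Series:
    """Mean daily return for each weekday."""
    returns = prices.pct_change().dropna()
    return returns.groupby(returns.index.day_name()).mean()

def monday_t_test(prices: pd.Series) -> tuple[float, float]:
    """Welch t-test of Monday returns against all other weekdays."""
    returns = prices.pct_change().dropna()
    monday = returns[returns.index.day_name() == "Monday"]
    other = returns[returns.index.day_name() != "Monday"]
    t, p = stats.ttest_ind(monday, other, equal_var=False)
    return float(t), float(p)

# Demo on synthetic random-walk data, just to show the mechanics:
rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", periods=750)
prices = pd.Series(100 * np.cumprod(1 + rng.normal(0, 0.01, len(dates))), index=dates)
print(weekday_mean_returns(prices))
print(monday_t_test(prices))
```

On synthetic data like this, any weekday difference is pure noise; the point is only that “understanding” in the sense above means a model whose out-of-sample predictions hold up, not a one-off pattern found in historical data.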