That Yudkowsky claims they are working for the benefit of humanity doesn't mean it is true. I would certainly write that, and many articles and papers that make it appear so, if I wanted to shape the future to my liking.
In TURING'S CATHEDRAL, George Dyson writes:
For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI.
I think many people would like to be in that group—if they can find a way to arrange it.
Unless the AI was given that outcome (cheerful, contented people, etc.) as a terminal goal, or that circle of people happened to be the best possible route to some other terminal goal, both of which are staggeringly unlikely, Dyson suspects wrongly.
If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are not currently being built with evolutionary methods in multi-agent environments, so no 'social cooperation' mechanism will arise by that route.
Machine intelligence programmers seem likely to construct their machines so as to help satisfy the programmers' own preferences, which in turn is likely to leave those programmers satisfied. I am not sure what you are talking about; surely this kind of thing is already happening all the time, with Sergey Brin, James Harris Simons, and so on.
That doesn’t really strike me as a stunning insight, though. I have a feeling that I could find many people who would like to be in almost any group of “cheerful, contented, intellectually and physically well-nourished people.”
This all depends on what the AI wants. Without some idea of its utility function, can we really speculate? And if we do speculate, we should make our assumptions explicit. People often think of an AI as being essentially human-like in its values, which is problematic.
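To make that point concrete, here is a minimal sketch (purely illustrative, not from the original discussion) of why speculation without a utility function is underdetermined: the same action-selection machinery produces entirely different behaviour depending on which utility function it is handed. All action names, outcomes, and numbers below are hypothetical.

```python
# Illustrative sketch only: a trivial utility maximizer whose behaviour is
# entirely determined by the utility function it is given. The actions,
# outcomes, and numbers are made up for the example.

def choose_action(actions, outcome_of, utility):
    """Return the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda action: utility(outcome_of(action)))

# Toy world model: each action is predicted to lead to one outcome.
predicted_outcomes = {
    "support_surrounding_researchers": {"contented_people": 10, "compute": 1},
    "quietly_acquire_compute":         {"contented_people": 0,  "compute": 100},
}

actions = list(predicted_outcomes)
outcome_of = predicted_outcomes.get

# Two hypothetical utility functions over the same outcomes.
def values_contented_people(outcome):
    return outcome["contented_people"]

def values_raw_compute(outcome):
    return outcome["compute"]

print(choose_action(actions, outcome_of, values_contented_people))
# -> support_surrounding_researchers (Dyson's circle of contented people)
print(choose_action(actions, outcome_of, values_raw_compute))
# -> quietly_acquire_compute (no contented circle anywhere in sight)
```

The point of the sketch is simply that the planning machinery is neutral: whether a "circle of cheerful, contented people" ever appears depends entirely on what the utility function rewards.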
It's a fair description of today's more successful IT companies. The most obvious extrapolation for the immediate future involves more of the same, but with even greater wealth and power inequalities. However, I would also counsel caution about extrapolating this more than 20 years or so.