Maybe people shouldn’t make Superintelligence at all? Narrow AIs are just fine if you consider the progress so far. Self-driving cars will be good, then applications using Big Data will find cures for most illnesses, then solve starvation and other problems by 3D printing food and everything else, including rockets to deflect asteroids. Just give it 10-20 more years. Why create a dangerous SI at all?
Because if you don’t, someone else will.
Not obviously true. An alternative which immediately comes to my mind is a globally enforced mutual agreement to refrain from building superintelligences.
(Yes, that alternative is unrealistic if making superintelligences turns out to be too easy. But I’d want to see that premise argued for, not passed over in silence.)
The more general problem is that we need a solution to multi-polar traps (of which superintelligent AI creation is one instance). The only viable solution I’ve seen proposed is creating a sufficiently powerful Singleton.
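(A toy sketch of why the trap binds, not from the original comment and with entirely made-up payoff numbers: model two AI labs each choosing to refrain or build, where building is the best response to either choice, even though mutual restraint pays both more.)

```python
# Hypothetical illustration of a multi-polar trap as a two-player game.
# The payoff values below are invented for illustration only; the point is
# the structure: "build" strictly dominates, yet (build, build) is worse
# for both players than (refrain, refrain).

ACTIONS = ("refrain", "build")

# payoff[(my_action, their_action)] -> my payoff (hypothetical numbers)
payoff = {
    ("refrain", "refrain"): 3,  # mutual restraint: safe, shared benefits
    ("refrain", "build"):   0,  # the other side gains a decisive advantage
    ("build",   "refrain"): 5,  # I gain the decisive advantage
    ("build",   "build"):   1,  # risky race: bad outcome for everyone
}

def best_response(their_action: str) -> str:
    """Return my payoff-maximizing action given the other player's action."""
    return max(ACTIONS, key=lambda mine: payoff[(mine, their_action)])

for theirs in ACTIONS:
    print(f"If the other lab plays {theirs!r}, my best response is {best_response(theirs)!r}")

# Prints 'build' in both cases: a dominant strategy, which is exactly the
# "if you don't, someone else will" dynamic driving both players to the
# (build, build) outcome despite (refrain, refrain) being better for each.
```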
The only likely viable ideas for Singletons I’ve seen proposed are superintelligent AIs, and a human group with extensive use of thought-control technologies on itself. The latter probably can’t work unless you apply it to all of society, since it doesn’t have the same inherent advantages AI does, and as such would remain vulnerable to being usurped by a clandestinely constructed AI. Applying the latter to all of society, OTOH, would most likely cause massive value loss.
Therefore I’m in favor of the former; not because I like the odds, but because the alternatives look worse.
Totally agree, and I wish this opinion were voiced more on LW instead of the emphasis on trying to make a friendly self-improving AI. For this to make sense, though, I think the human race needs to become a singleton, although perhaps that is what Google’s acquisitions and massive government surveillance are already doing.
Yes, continued development of AI seems unstoppable. But this brings up another very good point: if humanity cannot become a Singleton in our search for good egalitarian shared values, what is the chance of creating FAI? After years of good work in that direction, and perhaps even success in determining a good approximation, what prevents some powerful secret entity like the CIA from hijacking it at the last minute and simply narrowing its objectives to something it determines is a “greater” good?
Our objectives are always better than the other guy’s, and while violence is universally despicable, it is fast, cheap, easy to program and the other guy (including FAI developers) won’t be expecting it. For the guy running the controls, that’s friendly enough. :-)
On one hand, I think the world is already somewhat close to a singleton (with regard to AI; obviously it is nowhere near a singleton with regard to most other things). I mean, Google has a huge fraction of the AI talent. The US government has a huge fraction of the mathematics talent. Then there are Microsoft, FB, Baidu, and a few other big tech companies. But every time an independent AI company gains some traction, it seems to be bought out by the big guys. I think this is a good thing, as I believe the big guys will act in their own best interest, including their interest in preserving their own lives (i.e., not ending the world). Of course, if it is easy to make an AGI, then there is no hope anyway. But if it requires companies of Google scale, then there is hope they will choose to avoid it.
The “own best interest” in a winner-takes-all scenario is to create an eternal monopoly on everything. All levels of Maslow’s pyramid of human needs will be served by goods and services supplied by this singleton.