Thanks for saying what (I assume) a lot of people were thinking privately.
I think the problem is that Elon Musk is an entrepreneur, not a philosopher, so he has a bias for action, a “fail fast” mentality, etc. And he’s too high-status for people to feel comfortable pointing out when he’s making a mistake (as in the case of OpenAI). (I’m generally an admirer of Mr. Musk, but I am really worried that the intuitions he’s honed through entrepreneurship will turn out to be completely wrong for AI safety.)
And now think about some visionary entrepreneur/philosopher coming along in the past with OpenTank, OpenRadar, OpenRocket, OpenNuke… or OpenNanobot in the future.
Certainly the public will ensure proper control of the new technology.
How about do-it-yourself genetic engineering?
Musk does believe that ASI will be dangerous, so sometimes I wonder, quite seriously, whether he started OpenAI to put himself in a position where he can, uh, get in the way the moment real dangers start to surface. If you wanted to decrease openness in ASI research, the first thing you would need to do is take power over the relevant channels and organizations. It’s easy to do that when you have the benefit of living in ASI’s past, however many decades back, when those organizations were still small, weak, and pliable.
Hearing this, you might burp out a reflexive “people aren’t really these Machiavellian geniuses who go around plotting decade-long games to-” and I have to stop you there. People generally aren’t, but Musk isn’t people. Musk has lived through growth, power, and creating giants he might regret (PayPal). Musk would think of it, and follow through; and the moment dangers present themselves, so long as he hasn’t become senile or otherwise mindkilled, I believe he’ll notice them, and I believe he’ll try to mitigate them.
(The question is: will the dangers present themselves early enough for shutting down OpenAI to be helpful, or will they just foom?)
Note that OpenAI is not doing much ASI research in the first place, nor is it expected to; by and large, “AI” research is focused on comparatively narrow tasks that are nowhere near human-level ‘general intelligence’ (AGI), let alone broadly capable superintelligence (ASI)! And the ASI research that it does do is itself narrowly focused on the safety question. So, while I might agree that OpenAI is not really about making ASI research more open, I also think that OpenAI and Musk are being quite transparent about this!
Your AGI is ASI in embryo. There’s basically no difference. Once AI gets to “human level” generally, it will already have far surpassed humans in many domains. It’s also interesting that many of the “narrow tasks” are handled by basically the same deep-learning techniques, which have proven to be very general in scope.
I agree. But then again, that’s true by definition of ‘AGI’ and ‘ASI’.
However, it’s not even clear that the ‘G’ in ‘AGI’ is a well-defined notion in the first place. What does it even mean to be a ‘general’ intelligence? Usually people use the term to mean something like the old definition of ‘Strong AI’, i.e. something that equates to human intelligence in some sense—but even the task human brains implement is not “general” in any real sense. It’s just the peculiar task we call ‘being a human’, the result of an extraordinarily capable aggregate of narrow intelligences!
I agree with this. This also indicates one of the problems with the AI risk idea. If there is an AI going around that people call “human level,” it will actually be better than humans in many ways. So how come it can’t or doesn’t want to destroy the world yet? Suppose there are 500 domains left in which it is inferior to humans.
Eliezer says that “superintelligence” for the purposes of our bet only counts if the thing is better than humans in basically every domain. But this seems to imply that at some point, as those 500 areas slowly disappear, the AI will suddenly acquire magical powers. If not, then it will eventually surpass humans in all 500 areas, and so be a superintelligence, and the world will still be going on as usual.