From their website, it looks like they’ll be doing a lot of deep learning research and making the results freely available, which doesn’t sound like it would accelerate Friendly AI relative to AI as a whole. I hope they’ve thought this through.
Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]
Please don’t use “they don’t know what they’re doing” as a synonym for “I don’t agree with their approach”.
That interview is indeed worrying. I’m surprised by some of the answers.
Like this?

If I’m Dr. Evil and I use it, won’t you be empowering me?

Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

Altman: There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails, or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.
The first one is a non-answer; the second suggests that the proper response to Dr. Evil building a machine that transforms the planet into grey goo is Anonymous building another machine which… transforms the grey goo into a nicer color of goo, I guess?
If you don’t believe that a foom is the most likely outcome (a common and not unreasonable position), then it’s probably better to have lots of weakly-superhuman AIs than a single weakly-superhuman AI.
Even in that case, whichever actor has the most processors would have the largest “AI farm”, with commensurate power projection.
I think the second one suggests that they don’t believe the future AI will be a singleton.
Their statement accords very well with the Hansonian vision of AI progress.
If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.
Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?
They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increased speedups later may be the most important thing: e^x vs. 2+x, etc.).
Note that if the deep learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it already has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). If some of that connectedness comes through collaboration with MIRI, this could very well advance AI safety research relative to AI research as a whole (via tighter integration of the research programs and of choices about architecture and research direction; this seems especially important for how things play out in the endgame).
In summary, this could actually be really good, it’s just too early to tell.
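A toy sketch of the e^x vs. 2+x point above (the growth rates and head-start constant below are made-up illustrative values, not anything from the comment): a one-time additive head start barely registers next to even a small increase in the exponential growth rate, which is the shape of the argument for caring more about avoiding later speedups than about a modest speedup now.

```python
import math

# Toy model: research capability grows roughly exponentially, exp(r * t).
# "2 + x": a one-time additive head start of c on top of the baseline.
# "e^x":   an arms race that nudges the growth rate from r to r + dr.
# The values of r, dr, and c are illustrative assumptions only.

def with_head_start(t, r=0.5, c=2.0):
    """Baseline exponential progress plus a constant head start."""
    return math.exp(r * t) + c

def with_faster_rate(t, r=0.5, dr=0.1):
    """Same baseline, but with a slightly higher growth rate."""
    return math.exp((r + dr) * t)

for t in (1, 5, 10, 20):
    print(f"t={t:2d}  head start: {with_head_start(t):12.1f}  "
          f"faster rate: {with_faster_rate(t):12.1f}")
```

Even with these small made-up numbers, the rate change overtakes the head start within a few time steps and then dwarfs it.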
Maybe the apparent incompetence is a publicity game, and they do actually know what they’re doing?
Heh. Keep in mind, we’ve been through this before.
What the hell? There’s no sign in that interview that Musk and Altman have read Bostrom, or that they understand the concept of an intelligence explosion.
Musk has read it and has repeatedly and publicly agreed with its key points.
It seems that they consider a soft takeoff more likely than a hard takeoff, which is still compatible with understanding the concept of an intelligence explosion.
Yeah, the best argument I can think of for this course is something like: a soft takeoff is more likely, and even if a hard takeoff is possible, preparing for it is so terrifically difficult that it doesn’t make sense to even try. So let’s optimize for the scenario where a soft takeoff is what happens.