Do you think, then, that it’s a dangerous strategy for an entity such as Google, which may be using its enormous and growing accumulation of “the existing corpus of human knowledge” as a suitably large data set, to pursue development of AGI?
I think Google is still quite a ways from AGI, but in all seriousness, if there were ever a compelling national-security interest to be used as a basis for nationalizing inventions, AGI would be it. At the very least, we need some serious regulation of how such efforts are handled.
Which raises another issue… is there a powerful disincentive to reveal the emergence of an artificial superintelligence? Either for the entity itself (because we might consider pulling the plug), or for its creators (say, a financial institution that has gained a market-trading advantage), who might see that strategic advantage lost by having their creation taken away?
Absolutely.
Or just decide that its goal system needed a little more tweaking before it’s let loose on the world. Or even just slow it down.
This applies much more so if you’re dealing with an entity potentially capable of an intelligence explosion. Such an entity is a device for changing the world into whatever you want it to be, as long as you’ve solved the FAI problem and nobody takes it from you before you activate it. The incentives for the latter would be large, given the current value disagreements within human society, and so are the incentives for hiding that you have one.
If you’ve solved the FAI problem, the device will change the world into what’s right, not what you personally want. But of course, we should probably have a term of art for an AGI that will honestly follow the intentions of its human creator/operator whether or not those correspond to what’s broadly ethical.
We need some kind of central ethical code, and there are many principles that are transcultural enough to follow. However, how do we teach a machine to make judgment calls?
A lot of the technical issues are the same in both cases, and the solutions could be re-used. You need the AI to be capable of recursive self-improvement without compromising its goal system, to avoid the wireheading problem, and so on. Even a lot of the workable content-level solutions (e.g., a mechanism for extracting morality from a set of human minds) would probably be the same.
Where the problems differ, it’s mostly in that the society-level FAI case is harder: there are additional subproblems, like interpersonal disagreements, to deal with. So I strongly suspect that if you have a society-level FAI solution, you could very easily hack it into a one-specific-human FAI solution. But I could be wrong about that, and you’re right that my original use of terminology was sloppy.
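To make the goal-stability requirement a bit more concrete, here is a purely illustrative toy sketch in Python (all names are made up, and this is nowhere near a real FAI design): an agent that accepts a self-modification only if it leaves the goal representation untouched. A real system would need something far stronger than an equality check, such as a proof that the successor optimizes the same values, but the shape of the constraint is the same.

    # Toy sketch only: accept self-modifications that improve the agent but
    # never ones that alter its goal system (a crude "goal stability" check).
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Agent:
        utility_fn_id: str    # stands in for the goal system
        policy_version: int   # stands in for everything else that can improve

    def accept_modification(current: Agent, candidate: Agent) -> Agent:
        # Reject any rewrite that touches the goal system, however much it
        # improves the rest of the agent.
        if candidate.utility_fn_id != current.utility_fn_id:
            return current
        return candidate

    agent = Agent(utility_fn_id="human-extracted-morality-v1", policy_version=1)
    improved = replace(agent, policy_version=2)                       # accepted
    rewired = replace(agent, utility_fn_id="maximize-reward-signal")  # wireheading-style rewrite, rejected
    agent = accept_modification(agent, improved)
    agent = accept_modification(agent, rewired)
    assert agent == Agent("human-extracted-morality-v1", 2)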
That’s already underway.
I don’t think that Google is there yet. But as Google sucks up more and more knowledge, I think we might get there.