Why do you think expert systems cannot handle anything cross-disciplinary? I even say that expert systems can generate new ideas, by more or less the same process that humans do. An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance.
Expert systems generally need very narrow problem domains to function. I’m not sure how you would expect an expert system to have an understanding of three very broad topics. Moreover, I don’t know exactly how humans come up with new ideas (sometimes when people ask me, I tell them that I bang my head against the wall. That’s not quite true, but it does reflect that I only understand at a very gross level how I construct new ideas. I’m bright but not very bright, and I can see that much smarter people have the same trouble). So it’s not at all clear to me why you’re convinced that expert systems could construct new ideas.
To be sure, there has been some limited work on computer systems coming up with new, interesting ideas. There’s been some limited success with computers in my own field; see, for example, Simon Colton’s work. There’s also been similar work in geometry and group theory. But none of these systems were expert systems as that term is normally used. Moreover, none of the ideas they’ve come up with have been that impressive. The only exception I’m aware of is the proof of the Robbins conjecture. So even in narrow areas we’ve had very little success using specialized AIs. Are you using a more general definition of expert system than is standard?
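For what it’s worth, here is a minimal sketch (in Python, with made-up placeholder rules) of what “expert system” normally denotes: a fixed set of hand-written if-then rules plus a forward-chaining inference engine. Nothing below comes from a real chip-design system; the point is only that every conclusion such a system can reach has to be anticipated in advance by a human expert, which is why these systems stay confined to narrow domains.

```python
# Hypothetical rules for illustration: each is (facts that must hold, conclusion to add).
RULES = [
    ({"clock_skew_high", "long_trace"}, "shorten_trace"),
    ({"power_density_high", "no_heat_spreader"}, "add_heat_spreader"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions are all known facts until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"clock_skew_high", "long_trace"}))
# {'clock_skew_high', 'long_trace', 'shorten_trace'} -- only conclusions a human already wrote down
```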
The reason why NO general AI is better than friendly (general) AI is very simple. IF general AI is an existential threat, then no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.
Multiple problems with that claim. First, the existential threat may be low. There’s some tiny risk, for example, that the LHC will destroy the Earth in some very fun way. There’s also some risk that work with genetic engineering might give fanatics the skill to make a humanity-destroying pathogen. And there’s a chance that nanotech might turn everything into purple-with-green-stripes goo (this is much more likely than gray goo, of course). There’s even some risk that proving the wrong theorem might summon Lovecraftian horrors. All events have some degree of risk. Moreover, general AI might actually help mitigate some serious threats, such as making it easier to track and deal with rogue asteroids or other catastrophic threats.
Also, even if one accepted the general outline of your argument, one would conclude that that’s a reason why organizations shouldn’t try to make general friendly AI. It isn’t a reason that having no AI at all is actually better than having friendly AI.
“First, the existential threat [of AGI] may be low.”
Let me trace back the argument tree for a second. I originally asked for a defense of the claim that “SIAI is tackling the world’s most important task.” Michael Porter responded, “The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way?” So NOW in this argument tree, we’re assuming that unfriendly AI IS an existential threat, enough that preventing it is the “world’s most important task.”
Now in this branch of the argument, I assumed (but did not state) the following: If unfriendly AI is an existential threat, friendly AI is an existential threat, as long as there is some chance of it being modified into unfriendly AI. Furthermore, I assert that it’s a naive notion that any organization could protect friendly AI from being subverted.
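To make that assumed step concrete, here is a toy calculation with invented probabilities; the specific numbers mean nothing, only the multiplication does.

```python
# Pure assumptions, for illustration only: if a "friendly" AI has some chance of being
# subverted into an unfriendly one, building it inherits a share of the unfriendly-AI risk.
p_subversion = 0.10            # assumed chance the FAI's goals get modified
p_doom_given_unfriendly = 0.5  # assumed chance an unfriendly AI destroys humanity

p_doom_from_building_fai = p_subversion * p_doom_given_unfriendly
print(f"{p_doom_from_building_fai:.0%}")  # 5% -- nonzero whenever subversion is possible
```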
AIs, including ones with Friendly goals, are apt to work to protect their goal systems from modification, as this will prevent their efforts from being directed towards things other than their (current) aims. There might be a window while the AI is mid-FOOM where it’s vulnerable, but not a wide one.
Let me posit that FAI may be much less capable than unfriendly AI. The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain. Therefore, a low-grade FAI might be quite vulnerable to human antagonists, while its unrestricted version could be orders of magnitude more dangerous. In short, FAI could be low-reward, high-risk.
There are plenty of resources that an FAI could ethically obtain, and with a lead time of less than a day, it could grow enough to be vastly more powerful than an unfriendly seed AI.
Really, asking which AI wins going head to head is the wrong question. The goal is to get an FAI running before unfriendly AGI is implemented.
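To put rough numbers on that head-start claim, here is a back-of-the-envelope sketch; the growth rates and the one-day lead are assumptions for illustration, not figures from this discussion.

```python
# Assumed numbers: both systems self-improve exponentially, the unfriendly one somewhat
# faster (it takes resources by force), but the FAI launches a day earlier.

def capability(hours_running, doublings_per_hour):
    """Capability relative to the starting seed, doubling at a fixed rate."""
    return 2 ** (doublings_per_hour * hours_running)

lead_hours = 24        # FAI gets a one-day head start
friendly_rate = 1.0    # assumed doublings per hour on ethically obtained resources
unfriendly_rate = 1.5  # assumed faster growth from seizing resources

t = 48  # hours since the FAI launched
ratio = capability(t, friendly_rate) / capability(t - lead_hours, unfriendly_rate)
print(f"FAI is {ratio:.0f}x more capable at t={t}h")  # 4096x with these numbers
```

Of course, with these made-up rates the faster-growing system eventually catches up (around t = 72 hours here), which is one way of reading the point above that the real question is simply which AI gets running first.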
The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain.
Wrong. FAI will take whatever unethical steps it must, as long as that is, on net, the best path it can see, taking into account both the (ethically harmful) instrumental actions and their expected outcomes. There is no general disadvantage that comes with an AI being Friendly. Not that I expect any need for such drastic measures (at least not in any apparent way), especially considering the likely first-mover advantage it’ll have.
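As a toy rendering of that decision rule, with invented payoffs purely for illustration: the agent does not refuse harmful instrumental actions categorically; it picks whichever option is best on net once the harm of the action itself is counted.

```python
# Invented numbers, not a claim about any real system.
candidate_actions = {
    # action: (expected value of the outcome, expected harm of the action itself)
    "acquire_resources_ethically": (10.0, 0.0),
    "seize_resources":             (50.0, 45.0),
    "do_nothing":                  (0.0, 0.0),
}

def net_value(action):
    outcome, harm = candidate_actions[action]
    return outcome - harm

best = max(candidate_actions, key=net_value)
print(best)  # 'acquire_resources_ethically' -- unethical steps only win when they pay off on net
```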
How are you going to protect the source code before you run it?
A Friendly AI ought to protect itself from being subverted into an unfriendly AI.