You make some good points about economic and political realities. However, I’m deeply puzzled by some of your other remarks. For example, you make the claim that general AI wouldn’t provide any benefits above expert systems. This puzzles me, since expert systems are by nature highly limited. Expert systems cannot construct new ideas, nor can they handle anything that’s even vaguely cross-disciplinary. No number of expert systems will be able to engage in the same degree of scientific productivity as a single bright scientist.
You also claim that no general AI is better than friendly AI. This is deeply puzzling. It makes sense only if one is fantastically paranoid about the loss of jobs. But new technologies are often economically disruptive. There are all sorts of jobs that don’t exist now that were around a hundred years ago, or even fifty years ago. And yes, people lost jobs. But overall, they are better for it. You would need to make a much stronger case if you are trying to establish that no general AI is somehow better than friendly general AI.
Why do you think expert systems cannot handle anything cross-disciplinary? I would even say that expert systems can generate new ideas, by more or less the same process that humans do. An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance. If you’re talking about revolutionary, paradigm shifting ideas—we are probably already saturated with such ideas. The main bottleneck inhibiting paradigm shifts is not the ideas but the infrastructure and economic need for the paradigm shift. A company that can produce a 10% better product can already take over the market; a 200% better product is overkill, and is especially unnecessary if there are substantial costs in overhauling the production line.
The reason why NO general AI is better than friendly (general) AI is very simple. IF general AI is an existential threat, then no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.
*save for mitigating an even larger risk of annihilation, of course
Why do you think expert systems cannot handle anything cross-disciplinary? I would even say that expert systems can generate new ideas, by more or less the same process that humans do. An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance.
Expert systems generally need very narrow problem domains to function. I’m not sure how you would expect an expert system to have an understanding of three very broad topics. Moreover, I don’t know exactly how humans come up with new ideas (sometimes when people ask me, I tell them that I bang my head against the wall. That’s not quite true, but it does reflect that I only understand at a very gross level how I construct new ideas. I’m bright but not very bright, and I can see that much smarter people have the same trouble). So it is not at all clear to me how you can be so confident that expert systems could construct new ideas.
To be sure, there has been some limited work on computer systems coming up with new, interesting ideas. There’s been some limited success with computers in my own field. See for example Simon Colton’s work. There’s also been similar work in geometry and group theory. But none of these systems were expert systems as that term is normally used. Moreover, none of the ideas they’ve come up with have been that impressive. The only exception I’m aware of is the proof of the Robbins conjecture. So even in narrow areas we’ve had very little success using specialized AIs. Are you using a more general definition of expert system than is standard?
The reason why NO general AI is better than friendly (general) AI is very simple. IF general AI is an existential threat, then no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.
Multiple problems with that claim. First, the existential threat may be low. There’s some tiny risk, for example, that the LHC will destroy the Earth in some very fun way. There’s also some risk that work with genetic engineering might give fanatics the skill to make a humanity-destroying pathogen. And there’s a chance that nanotech might turn everything into purple-with-green-stripes goo (this is much more likely than gray goo, of course). There’s even some risk that proving the wrong theorem might summon Lovecraftian horrors. All events have some degree of risk. Moreover, general AI might actually help mitigate some serious threats, such as making it easier to track and deal with rogue asteroids or other catastrophic threats.
Also, even if one accepted the general outline of your argument, one would conclude that that’s a reason why organizations shouldn’t try to make general friendly AI. It isn’t a reason that actually having no AI is better than having friendly AI.
“First, the existential threat [of AGI] may be low.”
Let me trace back the argument tree for a second. I originally asked for a defense of the claim that “SIAI is tackling the world’s most important task.” Michael Porter responded, “The real question is, do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way?” So NOW in this argument tree, we’re assuming that unfriendly AI IS an existential threat, enough that preventing it is the “world’s most important task.”
Now in this branch of the argument, I assumed (but did not state) the following: If unfriendly AI is an existential threat, friendly AI is an existential threat, as long as there is some chance of it being modified into unfriendly AI. Furthermore, I assert that it’s a naive notion that any organization could protect friendly AI from being subverted.
AIs, including ones with Friendly goals, are apt to work to protect their goal systems from modification, as this will prevent their efforts from being directed towards things other than their (current) aims. There might be a window while the AI is mid-FOOM where it’s vulnerable, but not a wide one.
Let me posit that FAI may be much less capable than unfriendly AI. The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain. Therefore, a low-grade FAI might be quite vulnerable to human antagonists, while its unrestricted version could be orders of magnitude more dangerous. In short, FAI could be low-reward, high-risk.
There are plenty of resources that an FAI could ethically obtain, and with a lead time of less than 1 day, it could grow enough to be vastly more powerful than an unfriendly seed AI.
Really, asking which AI wins going head to head is the wrong question. The goal is to get an FAI running before unfriendly AGI is implemented.
The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain.
Wrong. An FAI will take whatever unethical steps it must, as long as that is on net the best path it can see, taking into account both the (ethically harmful) instrumental actions and their expected outcome. There is no such general disadvantage that comes with an AI being Friendly. Not that I expect any need for such drastic measures (at least not in any apparent way), especially considering the likely first-mover advantage it’ll have.
An expert system only needs an understanding of manufacturing, physics, and chemistry to design better computer chips, for instance.
If a program can take an understanding of those subjects and design a better computer chip, I don’t think it’s just an “expert system” anymore. I would think it would take an AI to do that. That’s an AI-complete problem.
If you’re talking about revolutionary, paradigm shifting ideas—we are probably already saturated with such ideas. The main bottleneck inhibiting paradigm shifts is not the ideas but the infrastructure and economic need for the paradigm shift.
Are you serious? I would think the exact opposite would be true: we have an infrastructure starving for paradigm-shifting ideas. I’d love to hear some of these revolutionary ideas that we’re saturated with. I think we have some insights, but these insights need to be fleshed out and implemented, and figuring out how to do that is the paradigm shift that needs to occur.
no organization claiming to put humans first could justify being pro-AGI (friendly or not), since no possible benefit* can justify the risk of destroying humanity.
Wait a minute. If I could press a button now with a 10% chance of destroying humanity and a 90% chance of solving the world’s problems, I’d do it. Everything we do has some risks. Even the LHC had an (extremely minuscule) risk of destroying the universe, but doing a cost-benefit analysis should reveal that some things are worth minor chances of destroying humanity.
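To put rough numbers on that, here is a minimal expected-value sketch. The probabilities come from the button hypothetical, but the utility figures are made-up placeholders, not anything anyone in this thread has argued for:

```python
# Toy cost-benefit calculation for the hypothetical button.
# The utilities below are made-up placeholders, purely for illustration.

P_SUCCESS = 0.90        # chance the button solves the world's problems
P_CATASTROPHE = 0.10    # chance the button destroys humanity

U_STATUS_QUO = 0.0         # baseline: don't press
U_PROBLEMS_SOLVED = 100.0  # assumed value of solving the world's problems
U_EXTINCTION = -500.0      # assumed (very negative) value of extinction

ev_press = P_SUCCESS * U_PROBLEMS_SOLVED + P_CATASTROPHE * U_EXTINCTION
ev_wait = U_STATUS_QUO

print(f"expected value of pressing: {ev_press}")  # 90 - 50 = 40
print(f"expected value of waiting:  {ev_wait}")   # 0
print("press" if ev_press > ev_wait else "don't press")

# With these numbers the button looks worth pressing; make U_EXTINCTION
# -2000 instead and the same arithmetic says to leave it alone.
```

Whether pressing is rational hinges entirely on how negative that extinction term is assumed to be, which is exactly where the disagreement lies.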
“If a program can take an understanding of those subjects and design a better computer chip, I don’t think it’s just an “expert system” anymore. I would think it would take an AI to do that. That’s an AI-complete problem.”
What I had in mind was some sort of combinatorial approach to designing chips, i.e. take these materials and randomly generate a design, test it, and then start altering the search space based on the results. I didn’t mean “understanding” in the human sense of the word, sorry.
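Concretely, the loop I have in mind is something like this sketch (the design encoding, the scoring function, and the mutation rule are all made-up stand-ins, not anything from a real chip-design tool):

```python
import random

# Toy generate-test-refine search over abstract "chip designs".
# A design is just a vector of parameters; score() stands in for whatever
# simulation or prototype test would evaluate a real chip.

N_PARAMS = 20      # number of abstract design choices
POPULATION = 50    # candidates kept per round
ROUNDS = 100

def random_design():
    return [random.random() for _ in range(N_PARAMS)]

def score(design):
    # Placeholder objective; a real system would run a circuit simulation
    # or build and test a prototype here.
    return -sum((x - 0.5) ** 2 for x in design)

def mutate(design, sigma=0.1):
    # Narrow the search around promising designs with small random tweaks.
    return [x + random.gauss(0, sigma) if random.random() < 0.3 else x
            for x in design]

population = [random_design() for _ in range(POPULATION)]
for _ in range(ROUNDS):
    population.sort(key=score, reverse=True)
    survivors = population[: POPULATION // 5]  # keep the best 20%
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

best = max(population, key=score)
print("best score found:", score(best))
```

Each call to score() stands in for a test that could be very slow or expensive on real hardware, which is where the efficiency objection below comes in.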
“I’d love to hear some of these revolutionary ideas that we’re saturated with. I think we have some insights, but these insights need to be fleshed out and implemented, and figuring out how to do that is the paradigm shift that needs to occur”
Example: many aspects of the legal and political systems could be reformed, and it’s not difficult to come up with ideas on how they could be reformed. The benefit is simply insufficient to justify spending much of the limited resources we have on solving those problems.
“Wait a minute. If I could press a button now with a 10% chance of destroying humanity and a 90% chance of solving the world’s problems, I’d do it. ”
So you think there’s a >10% chance that the world’s problems are going to destroy humanity in the near future?
What I had in mind was some sort of combinatorial approach to designing chips, i.e. take these materials and randomly generate a design, test it, and then start altering the search space based on the results. I didn’t mean “understanding” in the human sense of the word, sorry.
Given the very large number of possibilities and the difficulty of making prototypes, this seems like an extremely inefficient process without more thought going into it.
What I had in mind was some sort of combinatorial approach to designing chips
Oh, okay, fair enough, though I’m still not sure I would call that an “expert system” (this time for the opposite reason that it seems too stupid).
many aspects of the legal and political systems could be reformed, and it’s not difficult to come up with ideas on how they could be reformed. The benefit is simply insufficient to justify spending much of the limited resources we have on solving those problems.
Ah. I was thinking of designing an AI, probably because I was primed by your expert system comment. Well, in those cases, I think the issue is that our legal and political systems were purposely set up to be difficult to change: change requires overturning precedents, obtaining majority or 3⁄5 or 2⁄3 votes in various legislative bodies, passing constitutional amendments, and so forth. And I can guarantee you that for any of these reforms, there are powerful interests who would be harmed by the reforms, and many people who don’t want reform: this is more of a persuasion problem than an infrastructure problem. But yes, you’re right that there are plenty of revolutionary ideas about how to reform, say, the education system: they’re just not widely accepted enough to happen.
So you think there’s a >10% chance that the world’s problems are going to destroy humanity in the near future?
I’m confused by this sentence. I’m not sure if I think that, but what does it have to do with the hypothetical button that has a 10% chance of destroying humanity? My point was that it’s worth taking a small risk of destroying humanity if the benefits are great enough.
AIs, including ones with Friendly goals, are apt to work to protect their goal systems from modification, as this will prevent their efforts from being directed towards things other than their (current) aims.
How are you going to protect the source code before you run it?
A Friendly AI ought to protect itself from being subverted into an unfriendly AI.