This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an incoming asteroid, a threat that has hung over us for millions of years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without disease, where we all live as long as we like and have essentially unlimited resources.
It’s also worth asking whether slowing technology would even help; cultural advancement seems somewhat dependent upon technological advancement. It’s not clear to me that, had we taken another 100 years to get nuclear weapons, we would have used them any more responsibly; perhaps it simply would have taken that much longer to achieve the Long Peace.
In any case, I don’t really see any simple intervention that would slow technological advancement without causing an enormous amount of collateral damage. So unless you’re quite sure that the benefit of slowing down dangerous technologies like unfriendly AI outweighs the cost of slowing down beneficial technologies, I don’t think slowing down technology is the right approach.
Instead, find ways to establish safeguards and to create incentives for developing beneficial technologies faster. To some extent we already do this: Nuclear research continues at CERN and Fermilab, but when we learn that Iran is working on similar technologies, we are concerned, because we don’t think Iran’s government is trustworthy enough to deal with these risks. There aren’t enough safeguards against unfriendly AI or incentives to develop friendly AI, but that’s something the Singularity Institute or similar institutions could very well work on: lobby for legislation on artificial intelligence, or raise funds for an endowment that supports friendliness research.
To be clear, the question is not whether we should divert resources from FAI research to trying to slow world economic growth; that seems risky and ineffectual. The question is whether, as a good and ethical person, I should avoid any opportunity to join ensembles trying to increase world economic growth.
Follow-up: If you are part of an ensemble generating ideas for increasing world economic growth, how much information will that give you about the specific ways in which economic growth will manifest, compared to not being part of that ensemble? How easily leveraged is that information towards directly controlling or exploiting a noticeable fraction of the newly-grown economy?
As a single example: how much money could you get from judicious investments if you knew where things were going next? How useful would those funds be toward mitigating UFAI risks and optimizing FAI research, relative to the increased general risk of UFAI caused by the economic growth itself?
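One crude way to frame that ratio is a back-of-envelope sketch; every quantity below is a made-up placeholder rather than an estimate of anything.

```python
# Toy framing of the question above. Every number is invented for illustration;
# nothing here is an estimate of anything.

investment_gain = 5e6            # dollars earned by investing on foreknowledge of where growth goes
risk_reduced_per_dollar = 1e-12  # assumed reduction in P(UFAI catastrophe) per dollar of FAI funding
risk_added_by_growth = 2e-6      # assumed increase in P(UFAI catastrophe) from the extra growth itself

risk_bought_down = investment_gain * risk_reduced_per_dollar
net_change = risk_added_by_growth - risk_bought_down

print(f"risk bought down by redirected gains: {risk_bought_down:.1e}")
print(f"risk added by the growth itself:      {risk_added_by_growth:.1e}")
print(f"net change in P(catastrophe):         {net_change:+.1e}")
# On this framing, joining the ensemble only pays off when risk_bought_down
# exceeds risk_added_by_growth, i.e. when the ratio asked about above is > 1.
```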
That’s why I keep telling people about Scott Sumner, market monetarism, and NGDP level targeting—it might not let you beat the stock market indices, but you can end up with some really bizarre expectations if you don’t know about the best modern concepts of “tight money” and “loose money”. E.g., all the people who were worried about hyperinflation when the Fed lowered interest rates to 0.25% and started printing huge amounts of money, while the market monetarists were saying “You’re still going to get sub-trend inflation; our indicators say there isn’t enough money being printed.”
Beating the market is hard. Not being stupid with respect to the market is doable.
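For readers unfamiliar with what “tight money” and “loose money” mean on the NGDP-level view, a minimal sketch may help. The 5% trend path and the NGDP figures below are invented; the only point is that money counts as tight whenever nominal GDP runs below its target path, no matter how low the policy rate is.

```python
# Illustrative NGDP level targeting check. Trend rate and NGDP figures are
# made up; only the logic of the comparison matters.

def ngdp_target_path(base_ngdp, trend_growth, years):
    """Target path: base NGDP compounded at the trend growth rate."""
    return [base_ngdp * (1 + trend_growth) ** t for t in range(years)]

def money_stance(actual_ngdp, target_path):
    """Call money 'loose' when actual NGDP is above the target path, 'tight' when below."""
    stances = []
    for actual, target in zip(actual_ngdp, target_path):
        gap = (actual - target) / target
        stances.append(("loose" if gap > 0 else "tight", gap))
    return stances

# Hypothetical economy on a 5% trend that suffers a demand shock after year 1.
target = ngdp_target_path(base_ngdp=100.0, trend_growth=0.05, years=5)
actual = [100.5, 105.2, 104.0, 106.0, 109.0]

for year, (stance, gap) in enumerate(money_stance(actual, target)):
    print(f"year {year}: NGDP gap {gap:+.1%} -> money is {stance}")
# Years 2-4 count as tight money on this view even if the central bank's policy
# rate is near zero, which is the point about sub-trend expectations above.
```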
Perhaps a better question would be “If my mission is to save the world from UFAI, should I expend time and resources attempting to determine what stance to take on other causes?” No matter how capable you are of learning multiple subjects, investing that time and energy into FAI would, in theory, result in a better outcome for FAI. I am becoming increasingly aware that there are limits to how good I can be at subjects I haven’t specialized in, and if you think about it, you may realize that you have limitations as well. One of the most intelligent people I’ve ever met said to me (on a different subject):
“I don’t know enough to do it right. I just know enough to get myself in trouble.”
If you could spend the time and effort this ensemble would require of you (to make a quality decision and to participate in its activities) on anything at all, what would make the biggest difference?
But on the other hand, they might be the solution to an incoming asteroid, a threat that has hung over us for millions of years.
Not much of a point in nukes’ favor since there are so many other ways to redirect asteroids; even if nukes had a niche for taking care of asteroids very close to impact, it’d probably be vastly cheaper to just put up a better telescope network to spot all asteroids further off.
Nukes and bioweapons don’t FOOM in quite the way AGI is often thought to, because there’s a nontrivial proliferation step following the initial development of the technology. (Perhaps they resemble Oracle AGI in that respect; subsequent to being created, the technology has to unlock itself, either suddenly or by a gradual increase in influence, before it can have a direct catastrophic impact.)
I raise this point because the relationship between technology proliferation and GDP may differ from that between technology development and GDP. Moreover, global risks tied to poverty (regional conflicts resulting in biological or nuclear war; poor sanitation resulting in pandemic diseases; etc.) may compete with ones tied to prosperity.
Of course, these risks might be good things if they provided the slowdown Eliezer wants, gravely injuring civilization without killing it. But I suspect most non-existential catastrophes would have the opposite effect. Long-term thinking and careful risk assessment are easier when societies (and/or theorists) feel less immediately threatened; post-apocalyptic AI research may be more likely to be militarized, centralized, short-sighted, and philosophically unsophisticated, which could actually speed up UFAI development.
Two counter-arguments to the anti-apocalypse argument:
A catastrophe that didn’t devastate our intellectual elites would make them more cautious and sensitive to existential risks in general, including UFAI. An AI-related crisis (that didn’t kill everyone, and came soon enough to alter our technological momentum) would be particularly helpful.
A catastrophe would probably favor strong, relatively undemocratic leadership, which might make for better research priorities, since it’s easier to explain AI risk to a few dictators than to a lot of voters.
So unless you’re quite sure that the benefit of slowing down dangerous technologies like unfriendly AI outweighs the cost of slowing down beneficial technologies
As an alternative to being quite sure that the benefits somewhat outweigh the risks, you could somewhat less confidently believe that the benefits overwhelmingly outweigh the risks. In the end, inaction requires just as much moral and evidential justification as action.
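A quick expected-value comparison shows why the lower-confidence position can still carry the argument; the probabilities and payoffs here are arbitrary placeholders, not estimates.

```python
# Expected-value comparison of the two epistemic positions above.
# Probabilities and payoff magnitudes are arbitrary placeholders.

def expected_value(p_right, payoff_if_right, payoff_if_wrong):
    return p_right * payoff_if_right + (1 - p_right) * payoff_if_wrong

# "Quite sure the benefits somewhat outweigh the risks":
quite_sure_small_edge = expected_value(p_right=0.9, payoff_if_right=1, payoff_if_wrong=-1)

# "Somewhat less confident the benefits overwhelmingly outweigh the risks":
less_sure_big_edge = expected_value(p_right=0.6, payoff_if_right=100, payoff_if_wrong=-1)

print(f"{quite_sure_small_edge:.1f}")  # 0.8
print(f"{less_sure_big_edge:.1f}")     # 59.6
# Lower confidence in a much larger net benefit can still dominate, which is why
# inaction needs its own justification rather than winning by default.
```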
If the ideas for increasing world economic growth can be traced back to you, might the improvement in your reputation increase the odds of FAI?
Sounds like a rather fragile causal pathway. Especially if one is joining an ensemble.