In this post I argued that an AI-induced point of no return would probably happen before world GDP starts to noticeably accelerate. You gave me some good pushback about the historical precedent I cited, but what is your overall view? If you can spare the time, what is your credence in each of the following PONR-before-GDP-acceleration scenarios, and why?
1. Fast takeoff
2. The sorts of skills needed to succeed in politics or war are easier to develop in AI than the sorts needed to accelerate the entire world economy, and/or have less deployment lag. (Maybe it takes years to build the relevant products and industries to accelerate the economy, but only months to wage a successful propaganda campaign to get people to stop listening to the AI safety community)
3. We get an “expensive AI takeoff” in which AI capabilities improve enough to cross some threshold of dangerousness, but this improvement happens in a very compute-intensive way that makes it uneconomical to automate a significant part of the economy until the threshold has been crossed.
4. Vulnerable world: Thanks to AI and other advances, a large number of human actors get the ability to make WMDs.
5. Persuasion/propaganda tools get good enough and are widely used enough that they significantly degrade the collective epistemology of the relevant actors (corps, governments, maybe even our community). (I know you’ve said at various times that probably AI-designed persuasive content will be banned or guarded against by other AIs, but what if this doesn’t happen? We don’t currently do much to protect ourselves from ordinary propaganda or algorithmically-selected content...)
6. Tech hoarding (The leading project(s) don’t deploy their AI to improve the world economy, but nevertheless stay in the lead, perhaps due to massive investment, or perhaps due to weak or stifled competition)
I don’t know if we ever cleared up ambiguity about the concept of PONR. It seems like it depends critically on who is returning, i.e. what is the counterfactual we are considering when asking if we “could” return. If we don’t do any magical intervention, then it seems like the PONR could be well before AI since the conclusion was always inevitable. If we do a maximally magical intervention, of creating unprecedented political will, then I think it’s most likely that we’d see 100%+ annual growth (even of say energy capture) before PONR. I don’t think there are reasonable definitions of PONR where it’s very likely to occur before significant economic acceleration.
I don’t think I consider most of the listed scenarios to be necessarily-PONR-before-GDP-acceleration scenarios, though many of them could permit PONR-before-GDP if AI were broadly deployed before it started adding significant economic value.
All of these probabilities are obviously pretty unreliable and made up on the spot:
1. Fast takeoff
Defined as a 1-year doubling starting before a 4-year doubling finishes: maybe 25%?
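For concreteness, here is a minimal sketch of how one could check that condition against an annual GWP series; the numbers and the helper function below are illustrative placeholders, not real data or an agreed operationalization:

```python
# Illustrative sketch (made-up numbers): does the first 1-year doubling of world
# output start before the first 4-year doubling finishes?
gwp = [100, 103, 107, 112, 118, 130, 150, 200, 420, 900]  # hypothetical annual GWP

def first_doubling_start(series, window):
    """Start index of the first `window`-year interval over which output at least
    doubles, or None if no such interval exists."""
    for start in range(len(series) - window):
        if series[start + window] >= 2 * series[start]:
            return start
    return None

one_year = first_doubling_start(gwp, 1)   # start of the first 1-year doubling
four_year = first_doubling_start(gwp, 4)  # start of the first 4-year doubling

# Fast takeoff on this reading: the 1-year doubling begins before the 4-year
# doubling has finished (or a 4-year doubling never completes at all).
fast_takeoff = one_year is not None and (four_year is None or one_year < four_year + 4)
print(fast_takeoff)  # True for this made-up series
```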
2. The sorts of skills needed to succeed in politics or war are easier to develop in AI than the sorts needed to accelerate the entire world economy, and/or have less deployment lag. (Maybe it takes years to build the relevant products and industries to accelerate the economy, but only months to wage a successful propaganda campaign to get people to stop listening to the AI safety community)
I definitely don’t know what PONR means in this scenario (who is returning?). So to clarify the other terms: by “accelerate the entire world economy” I think you mean “generate enough value to meaningfully accelerate GWP growth”, and by “succeed in politics or war” you mean “allow a small group of humans to take over the rest of the world”? (If you just mean “undermine attempts at AI alignment in the actual world,” I don’t even understand why the presence of the AI is important—can’t we have a PONR if social tides just turn against concern about AI safety?)
For my maybe-stronger definitions, maybe 10%? I expect most of that comes from “takeoff could have been fast but we don’t really roll stuff out in a timely way” and I don’t know if it’s right to describe it as “the sort of skills.” (The main structural advantage of taking over the world is that fewer people need to roll it out.)
3. We get an “expensive AI takeoff” in which AI capabilities improve enough to cross some threshold of dangerousness, but this improvement happens in a very compute-intensive way that makes it uneconomical to automate a significant part of the economy until the threshold has been crossed.
I don’t think I quite understand this scenario. It sounds quite similar to #2, where the main point is that we reach a dangerous-in-the-sense-of-taking-over-the-world threshold before an economically-useful threshold? Or maybe they are simultaneous, and so this is kind of like the extension of #1+#2 where it’s a tie or nearly a tie between taking over the world and accelerating GDP growth?
4. Vulnerable world: Thanks to AI and other advances, a large number of human actors get the ability to make WMDs.
Are you saying that this happens before economic acceleration, or just anytime in our future?
I think the probability of this happening before economic acceleration is maybe 5%? If you mean ever, it really depends on what “get the ability” means and how you distinguish actors and so on; maybe I think there is a 50% chance that at some point the state of the world’s collective know-how is such that, absent any regulation about the use of destructive technologies, a very large number of small actors would each be able to unilaterally destroy the world?
5. Persuasion/propaganda tools get good enough and are widely used enough that they significantly degrade the collective epistemology of the relevant actors (corps, governments, maybe even our community). (I know you’ve said at various times that probably AI-designed persuasive content will be banned or guarded against by other AIs, but what if this doesn’t happen? We don’t currently do much to protect ourselves from ordinary propaganda or algorithmically-selected content...)
Depends on the threshold. One version: what’s the probability that at some point in the development of AI, prior to significant economic acceleration, it has a net negative effect on the quality of the average importance-weighted actor’s beliefs (because propaganda outweighs epistemically productive uses of AI). Maybe I’d be at like 50%? It then gets smaller if you ask for it to be true on average over the period or if you ask for a larger negative effect.
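Reading “the average importance-weighted actor’s beliefs” as an importance-weighted average over actors, one illustrative way to write the quantity (hypothetical notation, not defined anywhere above) is

$$\bar{q} \;=\; \frac{\sum_i w_i \, q_i}{\sum_i w_i},$$

where $w_i$ is actor $i$’s importance and $q_i$ is some measure of the quality of their beliefs; the 50% above is then the chance that AI’s net effect on $\bar{q}$ is negative at some point prior to significant economic acceleration.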
6. Tech hoarding (The leading project(s) don’t deploy their AI to improve the world economy, but nevertheless stay in the lead, perhaps due to massive investment, or perhaps due to weak or stifled competition)
Is this including things like export controls from the US in an attempt to win a war with China? I guess not, the relevant threshold is something like “These technologies are deployed sufficiently narrowly that they do not meaningfully accelerate GWP growth.” I think this is fairly hard for me to imagine (since their lead would need to be very large to outcompete another country that did deploy the technology to broadly accelerate growth), perhaps 5%?
“These technologies are deployed sufficiently narrowly that they do not meaningfully accelerate GWP growth.” I think this is fairly hard for me to imagine (since their lead would need to be very large to outcompete another country that did deploy the technology to broadly accelerate growth), perhaps 5%?
I think there is a reasonable way it could happen even without an enormous lead. You just need one of the following:
1. It’s very hard to capture a significant fraction of the gains from the tech.
2. Tech progress scales very poorly in money.
For example, suppose it is obvious to everyone that AI will be really powerful in a few years’ time. Several teams with lots of funding are set up. If progress is researcher-bound, and researchers are ideologically committed to the goals of the project, then top research talent might be extremely difficult to buy. (They are already well paid; for the next year they will be working almost all day; after that, the world is mostly shaped by which project won.)
Compute could be hard to buy if there were hard bottlenecks somewhere in the chip supply chain, most of the world’s new chips were already being used by the AI projects, and an attitude of “our chips, and we’re not selling” was prevalent.
Another possibility: suppose deploying a tech means letting the competition know how it works. Then if one side deploys, they are pushing the other side ahead. So the question is: does deploying one unit of research give you the resources to do more than one unit?
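To put that question in slightly more explicit terms (hypothetical notation, not from the discussion above): suppose deploying one unit of research reveals roughly that unit to the competition while generating resources that fund $g$ further units of your own research. Each unit deployed then changes your lead by about $g - 1$ units, so hoarding preserves the lead whenever $g < 1$, and deployment only extends it when $g > 1$.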