As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc. (I was claiming a few years’ warning in the piece you are responding to, which is pretty minimal). Do you think there are counterexamples? You are claiming that something completely unprecedented will happen with very high probability. If you don’t think that requires strong arguments to justify, then I am confused, and if you think you’ve provided strong arguments I’m confused too.
I agree that AI has the potential to develop extremely quickly, in a way that only a handful of other technologies did. As far as I can tell the best reason to suspect that AI might be a surprise is that it is possible that only theoretical insights are needed, and we do have empirical evidence that sometimes people will be blindsided by a new mathematical proof. But again, as far as I know that has never resulted in a surprising economic impact, not even a modest one (and even in the domain of proofs, most of them don’t blindside people, and there are strong arguments that AI is a harder problem than the problems that one person solves in isolation from the community—for example, literally thousands of times more effort has been put into it). A priori you might say “well, writing better conceptual algorithms is basically the same as proofs—and also sometimes blindsides people—and the total economic value of algorithms is massive at this point, so surely we would sometimes see huge jumps” but as far as I know you would be wrong.
There seems to be a big gap between the sort of problem on which progress is rapid and surprising, and the sort of problem on which progress would have an economic impact. There are a number of reasons to suspect this a priori (lots of people work on economically relevant problems, lots of people try to pay attention to developments in those areas because they actually matter, economically relevant problems tend to have lots of moving pieces and require lots of work to get right, lots of people create working intermediate versions because those tend to also have economic impact, etc.), and it seems to be an extremely strong empirical trend.
Like I said, I agree that AI has the potential to develop surprisingly quickly. I would say that 10% is a reasonable probability for such a surprising development (we have seen only a few cases of tech developments which could plausibly have rapid scale-up in economic significance; we also have empirical evidence from the nature of the relationship between theoretical progress and practical progress on software performance). This is a huge deal and something that people don’t take nearly seriously enough. But your position on this question seems perplexing, and it doesn’t seem surprising to me that most AI researchers dismiss it (and most other serious observers follow their lead, since your claim appears to be resting on a detailed view about the nature of AI, and it seems reasonable to believe people who have done serious work on AI when trying to evaluate such claims).
Making clear arguments for more moderate and defensible conclusions seems like a good idea, and the sort of thing that would probably cause reasonable AI researchers to take the scenario more seriously.
As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc.
Is the thesis here that the surprisingness of atomic weapons does not count because there was still a 13-year delay from there until commercial nuclear power plants? It is not obvious to me that the key impact of AI is analogous to a commercial plant rather than an atomic weapon. I agree that broad economic impacts of somewhat-more-general tool-level AI may well be anticipated by some of the parties with a monetary stake in them, but this is not the same as anticipating a FOOM (X), endorsing the ideals of astronomical optimization (Y) and deploying the sort of policies we might consider wise for FOOM scenarios (Z).
Regarding atomic weapons: they took many years to develop, and the prospect was widely understood amongst people who knew the field (I agree that massive wartime efforts to keep things secret are something of a special case, in terms of keeping knowledge from spreading from people who know what’s up to other people).
Once you can make nuclear weapons you still have a continuous increase in destructive power; did it start from a level much higher than conventional bombing?
I do think this example is good for your case and unusually extreme, but if we are talking about a few years I think it still isn’t surprising (except perhaps because of military secrecy).
but this is not the same as anticipating a FOOM (X), endorsing the ideals of astronomical optimization (Y) and deploying the sort of policies we might consider wise for FOOM scenarios (Z).
I don’t think people will suspect a FOOM in particular, but I think they are open to the possibility to the extent that the arguments suggest it is plausible. I don’t think you have argued against that much.
I don’t think that people will become aggregative utilitarians when they think AI is imminent, but that seems like an odd suggestion at any rate. The policies we consider wise for a FOOM scenario are those that result in people basically remaining in control of the world rather than accidentally giving it up, which seems like a goal they basically share. Again, I agree that there is likely to be a gap between what I do and what others would do—e.g., I focus more on aggregate welfare, so am inclined to be more cautious. But that’s a far cry from thinking that other people’s plans don’t matter, or even that my plans matter much more than everyone else’s taken together.
I think I may be missing a relevant part of the previous discussion between you and Eliezer.
As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc.
By “people” do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what’s the strategic relevance of the question?
(I was claiming a few years’ warning in the piece you are responding to, which is pretty minimal).
Which piece?
There seems to be a big gap between the sort of problem on which progress is rapid and surprising, and the sort of problem on which progress would have an economic impact.
Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you’re probably right, as it’s hard to think of another similar example. There was some discussion about this here.)
By “people” do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what’s the strategic relevance of the question?
I mean that if you suggested “Technology X will have a huge economic impact in the near future” to a smart person who knew something about the area, they would think that was plausible and have reasonable estimates of the magnitude of that impact.
The question is whether AI researchers and other elites who take them seriously will basically predict that human-level AI is coming, so that there will be good-faith attempts to mitigate impacts. I think this is very likely, and that improving society’s capability to handle problems they recognize (e.g. to reason about them effectively) has a big impact on improving the probability that they will handle a transition to AI well. Eliezer tends to think this doesn’t much matter, and that if lone heroes don’t resolve the problems then there isn’t much hope.
Which piece?
On my blog I made some remarks about AI, in particular saying that in the mainline, people will expect human-level AI before it happens. But I think the discussion makes sense without that.
Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you’re probably right, as it’s hard to think of another similar example. There was some discussion about this here.)
The economic impact of Bitcoin to date is modest, and I expect it to increase continuously over a scale of years rather than jumping surprisingly.
I don’t think people would have confidently predicted no digital currency prior to Bitcoin, nor would they predict that now. So if, e.g., the emergence of digital currency was associated with big policy issues which warranted a pre-emptive response, and this was actually an important issue, I would expect that people arguing for that policy response would get traction.
Bitcoin is probably still unusually extreme.
If Bitcoin precipitated a surprising shift in the economic organization of the world, then that would count.
I guess this part does depend a bit on context, since “surprising” depends on timescale. But Eliezer was referring to predictions of “a few years” of warning (which I think is on the very short end, and he thinks is on the very long end).
But Eliezer was referring to predictions of “a few years” of warning (which I think is on the very short end, and he thinks is on the very long end).
My own range would be a few years to a decade, but I guess unlike you I don’t think that is enough warning time for the default scenario to turn out well. Does Eliezer think that would be enough time?
For what it’s worth, I think that (some fraction of) AI researchers are already cognizant of the potential impacts of AI. I think a much smaller number believe in FOOM scenarios, and might reject Hansonian projections as too detailed relative to the amount of uncertainty, but would basically agree that human-level AI changes the game.
Could we get a link to this? Maybe EY could add it to the post?