I think I may be missing a relevant part of the previous discussion between you and Eliezer.
As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc.
By “people” do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what’s the strategic relevance of the question?
(I was claiming a few years’ warning in the piece you are responding to, which is pretty minimal.)
Which piece?
There seems to be a big gap between the sort of problem on which progress is rapid and surprising, and the sort of problem on which progress would have an economic impact.
Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you’re probably right, as it’s hard to think of another similar example. There was some discussion about this here.)
By “people” do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what’s the strategic relevance of the question?
I mean that if you suggested “Technology X will have a huge economic impact in the near future” to a smart person who knew something about the area, they would find that plausible and have reasonable estimates of the magnitude of that impact.
The question is whether AI researchers and other elites who take them seriously will basically predict that human-level AI is coming, so that there will be good-faith attempts to mitigate impacts. I think this is very likely, and that improving society’s capability to handle problems they recognize (e.g. to reason about them effectively) has a big impact on improving the probability that they will handle a transition to AI well. Eliezer tends to think this doesn’t much matter, and that if lone heroes don’t resolve the problems then there isn’t much hope.
Which piece?
On my blog I made some remarks about AI, in particular saying that in the mainline scenario people expect human-level AI before it happens. But I think the discussion makes sense without that.
Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you’re probably right, as it’s hard to think of another similar example. There was some discussion about this here.)
The economic impact of Bitcoin to date is modest, and I expect it to increase continuously over a scale of years rather than jumping surprisingly.
I don’t think people would have confidently predicted no digital currency prior to Bitcoin, nor do I think they would predict that now. So if, e.g., the emergence of digital currency were associated with big policy issues that warranted a pre-emptive response, and this were actually an important issue, I would expect people arguing for that policy response to get traction.
Bitcoin is probably still unusually extreme.
If Bitcoin precipitated a surprising shift in the economic organization of the world, then that would count.
I guess this part does depend a bit on context, since “surprising” depends on timescale. But Eliezer was referring to predictions of “a few years” of warning (which I think is on the very short end, and he thinks is on the very long end).
But Eliezer was referring to predictions of “a few years” of warning (which I think is on the very short end, and he thinks is on the very long end).
My own range would be a few years to a decade, but I guess, unlike you, I don’t think that is enough warning time for the default scenario to turn out well. Does Eliezer think that would be enough time?
For what it’s worth, I think that (some fraction of) AI researchers are already cognizant of the potential impacts of AI. I think a much smaller number believe in FOOM scenarios, and might reject Hansonian projections as too detailed relative to the amount of uncertainty, but would basically agree that human-level AI changes the game.