As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc.
Is the thesis here that the surprisingness of atomic weapons does not count because there was still a 13-year delay from there until commercial nuclear power plants? It is not obvious to me that the key impact of AI is analogous to a commercial plant rather than an atomic weapon. I agree that the broad economic impacts of somewhat-more-general tool-level AI may well be anticipated by some of the parties with a monetary stake in them, but this is not the same as anticipating a FOOM (X), endorsing the ideals of astronomical optimization (Y), and deploying the sort of policies we might consider wise for FOOM scenarios (Z).
Regarding atomic weapons:
- Took many years, and the prospect was widely understood amongst people who knew the field (I agree that massive wartime efforts to keep things secret are something of a special case, in terms of keeping knowledge from spreading from people who know what’s up to other people).
- Once you can make nuclear weapons you still have a continuous increase in destructive power; did it start from a level much higher than conventional bombing?
- I do think this example is good for your case and unusually extreme, but if we are talking about a few years I think it still isn’t surprising (except perhaps because of military secrecy).
I don’t think people will suspect a FOOM in particular, but I think they are open to the possibility to the extent that the arguments suggest it is plausible. I don’t think you have argued against that much.
I don’t think that people will become aggregative utilitarians when they think AI is imminent, but that seems like an odd suggestion at any rate. The policies we consider wise for a FOOM scenario are those that result in people basically remaining in control of the world rather than accidentally giving it up, which seems like a goal they basically share. Again, I agree that there is likely to be a gap between what I do and what others would do—e.g., I focus more on aggregate welfare, so am inclined to be more cautious. But that’s a far cry from thinking that other people’s plans don’t matter, or even that my plans matter much more than everyone else’s taken together.