I would perhaps urge Tyler Cowen to consider other theories of sudden leaps in status, then? To actually reason out what the consequences of such technological advancements would be, to ask what happens?
At a guess, people resist doing this because predictions about technology are already very difficult, and doing lots of them at once would be very very difficult.
But would it be possible to treat increasing AI capabilities as an increase in model or Knightian uncertainty? It feels like questions of the form “what happens to investment if all industries become uncertain at once? If uncertainty increases randomly across industries? If uncertainty increases according to some distribution across industries?” should definitely be answerable. My gut says the obvious answer is that investment shifts out of the most uncertain industries and into AI, but how much, how fast, and at what thresholds are all things we want to predict.
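To make that intuition concrete, here is a minimal toy sketch of my own (not anything Cowen or anyone else has proposed): treat each industry as a risky asset, proxy rising model/Knightian uncertainty by inflating return variances, and watch how a simple mean-variance investor reallocates. The industry names, expected returns, variances, and the "uncertainty factor" are all made-up assumptions for illustration; real Knightian uncertainty is of course not reducible to a known variance.

```python
# Toy illustration only: proxy rising model/Knightian uncertainty as
# inflated return variance and see how a mean-variance allocator responds.
# All numbers below are assumptions, not estimates.
import numpy as np

def mean_variance_weights(mu, sigma2, risk_aversion=3.0):
    """Mean-variance weights for uncorrelated industries, normalized to
    sum to 1, with a crude long-only constraint via clipping."""
    raw = mu / (risk_aversion * sigma2)   # classic mu / (gamma * sigma^2)
    raw = np.clip(raw, 0.0, None)
    return raw / raw.sum()

# Hypothetical industries: "ai" plus three ordinary sectors.
names  = ["ai", "manufacturing", "retail", "services"]
mu     = np.array([0.08, 0.05, 0.04, 0.05])   # expected returns (assumed)
sigma2 = np.array([0.04, 0.02, 0.02, 0.02])   # baseline variances (assumed)

print("baseline:", dict(zip(names, mean_variance_weights(mu, sigma2).round(3))))

# Scenario: AI progress makes the *other* industries more uncertain,
# modeled here by multiplying their variances by an uncertainty factor.
for factor in [1.5, 3.0, 6.0]:
    shocked = sigma2.copy()
    shocked[1:] *= factor
    w = mean_variance_weights(mu, shocked)
    print(f"uncertainty x{factor}:", dict(zip(names, w.round(3))))
```

Under these made-up numbers the allocation to the uncertain sectors shrinks and the residual flows to "ai", which matches the gut answer; the interesting economics is in how fast and at what thresholds, which a toy like this cannot tell you.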