Interesting that this essay gives both a 0.4% probability of transformative AI by 2043, and a 60% probability of transformative AI by 2043, for slightly different definitions of “transformative AI by 2043”. One of these is higher than the highest probability given by anyone on the Open Phil panel (~45%) and the other is significantly lower than the lowest panel member probability (~10%). I guess that emphasizes the importance of being clear about what outcome we’re predicting / what outcomes we care about trying to predict.
The 60% is for “We invent algorithms for transformative AGI”, which I guess means that we have the tech that can be trained to do pretty much any job. And the 0.4% is the probability for the whole conjunction, which sounds like it’s for pervasively implemented transformative AI: AI systems have been trained to do pretty much any job, and the infrastructure has been built (chips, robots, power) for them to be doing all of those jobs at a fairly low cost.
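To make the conjunction point concrete, here is a minimal sketch of the multiplication involved. The stage names and all probabilities except the 60% first step are hypothetical placeholders (the essay's actual cascade has more steps and different values); the point is just how quickly a conjunction of individually plausible stages shrinks:

```python
from math import prod

# Hypothetical stage probabilities for illustration only. Of the essay's
# actual cascade, only the 60% first step ("we invent algorithms for
# transformative AGI") and the ~0.4% final product are quoted above;
# every other value here is a made-up placeholder.
stages = {
    "invent AGI algorithms": 0.60,    # from the essay
    "cheap inference": 0.30,          # placeholder
    "cheap capable robots": 0.50,     # placeholder
    "scale chips and power": 0.50,    # placeholder
    "no regulatory derailment": 0.80, # placeholder
    "no other derailment": 0.70,      # placeholder
}

joint = prod(stages.values())
print(f"joint probability: {joint:.2%}")
# Six stages, none below 30%, already multiply out to ~2.5%;
# a ~10-step cascade can easily land near the essay's 0.4%.
```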
It’s unclear why the 0.4% number is the headline. What is the question, or the thing we care about, such that this fully-deployed scenario is the outcome worth forecasting? For example, I think that many paths to extinction don’t route through this scenario: IIRC Eliezer has written that it’s possible that AI could kill everyone before we have widespread self-driving cars. And other sorts of massive transformation don’t depend on having all the infrastructure in place so that AIs/robots can be working as loggers, nurses, upholsterers, etc.