“My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI not having been developed by 2075 or even 2100… seems too low”
Should Bostrom trust his own opinion on this more than the aggregated judgement of a large group of AI experts?
Certainly he has thought about the question at considerably more length than many of the respondents. Even if he weren't at the absolute tail of the distribution, he might justifiably be skeptical of the aggregates. But based on public discourse, it seems quite possible that he has thought about the question much more than almost any of the respondents, and so does have something to add over and above their aggregate.
(I am inclined to agree that predicting a 90% probability of developing broadly human-replacement AI by 2075 is essentially indefensible, and 2100 also seems rash. It's not totally clear that their definition of HLMI implies human replacement, but it seems likely. I think that most onlookers, and probably most AI researchers, would agree.)
There is also the fact that Bostrom has been operating under unusually long-term incentives for many years. He has been thinking about (and being paid, funded, and accorded status for thinking about) the long, really long-term future for quite a while. AI scientists, on the other hand, are usually more focused on their own lifetimes, the length of their grants, and other more mundane, shorter-term concerns of scientific life.
Most people have little to no mental representation of the world more than two decades after they die, and I see no reason why AI scientists would be different.