...you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years
Hit the brakes on that line of reasoning! That’s not what the question asked. It asked WILL it, not COULD it.
If I have a statement “X will happen” and ask people to assign a probability to it, then when the probability is <=50% I believe it isn’t too much of a stretch to paraphrase “X will happen with probability <=50%” as “It could be that X will happen”. Looking at the survey data: of the 163 people who gave a probability estimate, only 15 assigned a probability >50% to the possibility that a superhuman intelligence greatly surpassing human performance will emerge within 2 years of the creation of a human-level intelligence.
That said, my use of the word “could” in my comment wasn’t deliberate; it was just an unintentional inaccuracy. If you think that is a big deal, then I am sorry. I’ll try to be more careful in the future.
The difference here is that you took this position to strictly imply a rejection of the possibility of an intelligence explosion.
But one can consider an intelligence explosion a real risk and take steps to prevent it, which is precisely why the resulting probability estimate ends up low.