Ah yes, the good ol’ “If someone disagrees with me, they must be stupid or lying”
Rude of me to jump to that oh-so-self-flattering conclusion, yes. And certainly my saying so should not be taken as any sort of evidence in support of my view.
Instead you should judge my view by:
My willingness to make an explicit, concrete prediction and put money on it. Admittedly a trivial amount of money on this, but I’ve made much larger bets on the topic in the past.
The fact that my views are self-consistent and have remained fairly stable in response to evidence gathered over the past two years about AI progress. Stable views aren’t necessarily a good thing; they could mean I’m failing to update! In this case, though, the evidence of the past two years confirms the predictions I publicly stated before that time, so the stability of my prediction is a point in its favor.
Contrast this with the dramatic change in the predictions I was criticizing, which came about because recent evidence strongly contradicted their previous views.
Note that my prediction of “AGI < 10 years” is consistent with my further prediction that we should expect lots of far-reaching changes, and novel dangers which will need careful measurement and regulation. Compare this with the views of many of the ML experts who say “AGI is > 15 years away”, and who also say things like “the changes will be relatively small, on the same order of change as the printing press and the Internet” and “the risks aren’t very high; everything will probably be fine, and even if things go wrong, we can easily iteratively fix the problems with only minor negative consequences”.
I would argue that even if one held the view that AGI is more than 15 years away (but less than 50), it would still not make sense to be so unworried about the potential consequences. I claim that that set of views is “insufficiently thought through”, and that if forced to specify all the detailed pieces of their predictions in a lengthy written debate, those views would show themselves to be self-contradictory. I believe that my set of predictions would be relatively much more self-consistent.