LessWrong & EA are inundated with the same old arguments for AI x-risk repeated in a hundred different formats. Could this really be the difference?
Besides, aren't superforecasters supposed to be the Kung Fu masters of doing their own research? ;-)
I agree with you that a crux is base rate relevancy. Since there is no base rate for x-risk, I'm unsure how to translate this into superforecaster language, though.
Thank you! Glad you liked it. ☺️
Well, what base rates can inform the trajectory of AGI?
dominance of H. sapiens over other hominids
historical errors in forecasting AI capabilities/timelines
impacts of new technologies on animals they have replaced
an analysis of what base rates AI has already violated
rate at which bad actors have shaped world history
analysis of similarity of AI to the typical new technology that doesn’t cause extinction
success of terrorist attacks
impacts of COVID
success of smallpox eradication
Would be an interesting exercise to flesh this out.
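One way to start fleshing it out: treat each item above as a reference class, assign it a base rate and a relevance weight, and blend them into a prior. The sketch below is minimal and purely illustrative; every class name, base rate, and weight is a made-up placeholder, not an estimate from this thread.

```python
# Hypothetical sketch of reference-class forecasting for AGI risk.
# All reference classes, base rates, and relevance weights are
# placeholder values for illustration only.

reference_classes = {
    # name: (base_rate, relevance_weight)
    "new technology causing extinction": (0.001, 0.3),
    "forecasting errors on AI timelines": (0.05, 0.2),
    "dominant species displacing rivals": (0.5, 0.1),
}

def weighted_base_rate(classes):
    """Combine base rates into a single prior, weighted by relevance."""
    total_weight = sum(w for _, w in classes.values())
    return sum(rate * w for rate, w in classes.values()) / total_weight

if __name__ == "__main__":
    prior = weighted_base_rate(reference_classes)
    print(f"Blended prior from reference classes: {prior:.3f}")
```

The hard part, of course, is the relevance weights themselves, which is exactly the crux about base rate relevancy raised above.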