I want to pursue this slightly. Before recent evidence (which caused me to update, in a vague way, towards shorter timelines), my uncertainty looked like a near-uniform distribution over the next century, with 5% reserved for the rest of time (conditional on us surviving to AGI). That prior obviously assigns less than a 10% probability to the claim “5-10 years to strong AI” and the likely destruction of humanity at that point. Are you really arguing for something lower, or are you “confident” the way people were certain (~80%) that Hillary Clinton would win?
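For concreteness, here is a minimal sketch of the arithmetic behind that prior, assuming “5-10 years” is read as the five-year window from year 5 to year 10 and the remaining 5% of mass sits beyond year 100 (both readings are my assumptions, not spelled out above):

```python
# Formalizing the stated prior (an assumed reading): 95% of probability
# mass spread uniformly over the next 100 years, 5% reserved for all
# time after that, conditional on surviving to AGI.
mass_this_century = 0.95
per_year = mass_this_century / 100        # ~0.95% per year

# Probability the prior assigns to "strong AI in 5-10 years", treating
# the claim as a five-year window (years 5 through 10).
p_window = per_year * (10 - 5)
print(f"P(strong AI in 5-10 years) = {p_window:.2%}")  # 4.75%, under 10%
```

So under that near-uniform prior the window gets roughly 5%, which is why the question is whether you are arguing for something lower still.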