I guess I don’t disagree with the “no fire alarm” thing. I have a policy that if it looks like I might be somebody’s villain, I should show up and make myself available to get smited.
Good point re: talking to Andreas; I may do that one of these days.
I want to pursue this slightly. Before recent evidence, which caused me to update in a vague way towards shorter timelines, my uncertainty looked like a near-uniform distribution over the next century, with 5% reserved for the rest of time (conditional on us surviving to AGI). That distribution obviously gives less than a 10% probability to the claim of "strong AI in 5-10 years," with the likely destruction of humanity at that point. Are you really arguing for something lower, or are you "confident" the way people were certain (~80%) that Hillary Clinton would win?
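(For concreteness, here is the arithmetic behind "less than 10%," as a rough sketch assuming the 95% mass is spread exactly uniformly over the next 100 years, so a five-year window carries about 5 times 0.95% of the probability:)

$$ P(\text{strong AI in 5--10 years}) \approx \frac{10 - 5}{100} \times 0.95 = 0.0475 < 0.10 $$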