I was initially going to comment “yeah I meant to put 1% on ‘already happened’ but at the time I made my distribution the option wasn’t there”, and then I reread my prior reasoning and saw the 0.1%. Not sure what happened there; I agree that 0.1% is way too confident.
On Laplace’s rule: as with most outside views, it’s tricky to say what your reference class should be. You could go with the Dartmouth conference, but given that we’re talking about the AI safety community influencing the AI community, you could also go with the publication of Superintelligence in 2014 (which feels like the first real attempt to communicate with the AI community), and then you would be way more optimistic. (I might be neglecting lots of failed attempts by SIAI / MIRI, but my impression is that they didn’t try to engage the academic AI community very much.)
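For concreteness, here is a minimal sketch of how much the reference-class choice moves the Laplace estimate. It assumes one trial per year and, purely for illustration, 2020 as the current year; neither assumption comes from the discussion above.

```python
# Laplace's rule of succession: after n failures and no successes,
# P(success on the next trial) = 1 / (n + 2).
def laplace_next_year(first_year: int, current_year: int) -> float:
    n_failures = current_year - first_year
    return 1 / (n_failures + 2)

# Reference class: Dartmouth conference (1956) -> ~1.5% per year
print(laplace_next_year(1956, 2020))
# Reference class: publication of Superintelligence (2014) -> ~12.5% per year
print(laplace_next_year(2014, 2020))
```

So the two reference classes differ by roughly an order of magnitude in the implied per-year probability, which is the sense in which the 2014 starting point makes you "way more optimistic."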
I don’t buy the point about there being good heuristics against x-risk: the premise of my reasoning was that we get warning shots, which would negate many (though not all) of the heuristics.
+1 for long tails