Personally I haven’t thought about how strong the analogy to GoF is, but another thing that feels worth noting is that there may be a bunch of other cases where the analogy is similarly strong and where major government efforts aimed at risk-reduction have occurred. And my rough sense is that that’s indeed the case, e.g. some of the examples here.
In general, at least for important questions worth spending time on, it seems very weird to say “You think X will happen, but we should be very confident it won’t because in analogous case Y it didn’t”, without also either (a) checking for other analogous cases or other lines of argument or (b) providing an argument for why this one case is far more relevant evidence than any other available evidence. I do think it totally makes sense to flag the analogous case and to update in light of it, but stopping there and walking away feeling confident in the answer seems very weird.
I haven’t read any of the relevant threads in detail, so perhaps the arguments made are stronger than I imply here, but my guess is they weren’t. And it seems to me that it’s unfortunately decently common for AI risk discussions on LessWrong to involve this pattern I’m sketching here.
(To be clear, all I’m arguing here is that these arguments often seem weak, not that their conclusions are false.)
(This comment is raising an additional point to Jan’s, not disagreeing.)
Update: Oh, I just saw that Steve Byrnes also said the following in this thread, which I totally agree with:
“[Maybe one could argue] “It’s all very random—who happens to be in what position of power and when, etc.—and GoF is just one example, so we shouldn’t generalize too far from it” (OK maybe, but if so, then can we pile up more examples into a reference class to get a base rate or something? and what are the interventions to improve the odds, and can we also try those same interventions on GoF?)”