Do you think this comparison is a good specific exemplar for the AI case, such that you’d suggest they should have the same answer, or do you bring it up simply to check calibration? I do agree that it’s a valid calibration to check, but I’m curious whether you’re claiming capabilities research is the same order of magnitude of horrific.
I am bringing it up for calibration. As to whether it’s the same magnitude of horrific: in some ways, it’s higher magnitude, no? Even Nazis weren’t going to cause human extinction—of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn’t easily forgive a drunk driver who runs over a child...
Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?
No, but intentional malice is much harder to dissuade nonviolently.