This is a good question to ask, and my general answer is a combination of two points: defense doesn't trump offense in all domains, but offense and defense are much more balanced than LWers tend to think, with the exception of bio, and that domain mostly doesn't produce existentially risky products. Regulation is necessary for superintelligence, but I don't think this is anywhere near true:
If so, wouldn’t the same regulation be able to stop superintelligence altogether?
No, primarily because misuse and structural issues demand very different responses, and a lot of the policymaking aimed at superintelligence relies on the assumption that AI is hard to control.
Much more generally, I wish people would distinguish between existential risk caused by lack of control, existential risk caused by misuse, and mass death caused by structural forces, since each of these stems from very different causes. That matters because the policy conclusions are very different, and sometimes even opposed to each other.
Open sourcing is a case in point. It's negative in cases where misuse or loss of control is the dominant risk factor, but turns into a positive if we instead assume structural forces are at work, like in dr_s's story here:
https://www.lesswrong.com/posts/2ujT9renJwdrcBqcE/the-benevolence-of-the-butcher
More generally, policies designed for one AI risk scenario will not automatically work for other scenarios; it has to be evaluated case by case.