I feel like I broadly agree with most of the points you make, but I also feel like accident vs misuse are still useful concepts to have.
For example, disasters caused by guns could be seen as:
Accidents, e.g. killing people by mistaking real guns for prop guns, which may be mitigated with better safety protocols
Misuse, e.g. school shootings, which may be mitigated with better legislation, better security, etc.
Other structural causes (?), e.g. guns used in wars, which may be mitigated with better international relations
Nevertheless, all of the above are complex and structural in different ways, and it is often counterproductive or plainly misleading to assign blame (or credit) to the causal node directly upstream of the disaster (in this case, guns).
While I agree that the majority of AI risks are caused by neither accidents nor misuse, and that the two shouldn’t be seen as a dichotomy, I do feel that the distinction may still be useful in some contexts, e.g. in suggesting what the mitigation approaches could look like.
Yes, it may be useful in some very limited contexts, but I can’t recall a time I have seen it in writing and felt that it was not a counterproductive framing.
AI is highly non-analogous with guns.
Yes, especially for consequentialist AIs that don’t behave like tool AIs.