Likewise, I appreciate the response!
I would say (3). Societal resilience is mandatory as threat systems proliferate and grow in power. You would need positive systems to counter them.
Regarding your points on writing in a dystopian tone, I don’t disagree. But it’s easier to highlight an idea via narrative than via bullet points. I personally like Mr. Smiles; he’s my new mascot for when I inevitably give up trying to solve AI alignment and turn to villainy.
A few comparisons/contrasts on allowing vs. not allowing the creation of bad systems:
The major point, as above, is that disallowing the creation of out-of-control systems requires significant power in surveillance and control, whereas allowing their creation and preventing the worst effects requires significantly less. I can protect my system from viruses, but I can’t stop a script kiddie from releasing one from their personal PC.
I think non-optimal agents are key to the diversity of any ecosystem. Further, I think it’s important that the human genome allows for antisocial, even evil humans. In my mind, minimizing a trait, rather than disallowing it, is of fundamental importance to the long-term survival of any adaptive collective. It just becomes especially important that the ecosystem/culture/society/justice system is robust to the negative externalities of that diversity.
We humans have a justice system based on actions committed, rather than on an individual’s characteristics. It’s illegal to murder, not to be on the ASPD spectrum. I think there’s a lot more merit to that than first glance would suggest. I also think it will be similarly difficult to decide whether a system is inherently “out-of-control,” just as it is difficult to determine whether a given person with ASPD will commit a crime in the future.
Is a system that optimizes for destruction an optimizing system?