90% awful idea: “Genetic diversity” in computer programs for resistance to large-scale cyberattacks.
The problem: Once an attacker has found the right security hole in Tesla’s software (say, by breaking into a server used to deliver software updates), they can use it to install malicious code on all 5 million Teslas in the field (or maybe just one model, so perhaps 1 million cars), and probably make them all crash simultaneously, causing a catastrophe.
The solution: There will probably come a point where we can go through the codebase, pick random functions, and say, “Claude, write a specification of what this function does”, then “Claude, take this specification and write a new function implementing it”, and end up with different functions that accomplish the same task but are likely to have different bugs. Have every Tesla do this to its own software. Then a virus or program that breaks into some Teslas will likely fail on the others.
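The regenerate-and-check step can be sketched as differential testing: run the original and an AI-regenerated variant on random inputs and verify they agree. Everything here is hypothetical for illustration; `regenerated_clamp` stands in for whatever Claude would produce from the specification of `original_clamp`.

```python
import random

def original_clamp(x, lo, hi):
    """Original implementation: clamp x into the range [lo, hi]."""
    return max(lo, min(hi, x))

def regenerated_clamp(x, lo, hi):
    """Hypothetical AI-regenerated variant, written from a spec of the
    function above: same behavior, structurally different code, and
    therefore (in the general case) different bugs."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def agree_on_random_inputs(f, g, trials=10_000, seed=0):
    """Differential test: do f and g agree on randomly sampled inputs?"""
    rng = random.Random(seed)
    for _ in range(trials):
        lo, hi = sorted(rng.uniform(-100, 100) for _ in range(2))
        x = rng.uniform(-200, 200)
        if f(x, lo, hi) != g(x, lo, hi):
            return False
    return True
```

In the real scheme each car would run this comparison (or a proper test suite) before swapping in its locally regenerated function.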
One reason this is horrible is that you would need an exceptionally high success rate when writing those replacement functions, or else the process would introduce lots of mundane bugs, which might well cause crashes of their own. Failing that, you’d need a very extensive set of unit tests to catch all such bugs, so extensive that writing it would probably eat up most of your engineers’ time. Though perhaps AIs could do that part too.
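The unit-test safeguard amounts to a simple gate: deploy a regenerated function only if it passes the same suite as the original, and otherwise fall back to the known-good code. A minimal sketch, where the suite format and the buggy candidate are both invented for illustration:

```python
def passes_suite(candidate, suite):
    """Run candidate against (args, expected) pairs; any mismatch or
    exception rejects it."""
    for args, expected in suite:
        try:
            if candidate(*args) != expected:
                return False
        except Exception:
            return False
    return True

def gate(original, candidate, suite):
    """Keep the regenerated candidate only if it passes the unit tests;
    otherwise retain the original implementation."""
    return candidate if passes_suite(candidate, suite) else original

# Illustrative suite for an absolute-value function.
abs_suite = [((3,), 3), ((-3,), 3), ((0,), 0)]

def good_abs(x):
    return x if x >= 0 else -x

def buggy_abs(x):
    return x  # a mundane regeneration bug: forgets the negative case
```

The obvious weakness, as noted above, is that the gate is only as good as the suite: a sparse suite lets mundane bugs through, and a thorough one is expensive to write.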
The political version of the question isn’t functionally the same as the skin cream version, because the former isn’t a randomized intervention: cities that decided to add gun control laws likely had other crime-related events and law changes happening at the same time, which could produce a spurious result in either direction. So it’s quite reasonable to say, “My opinion is determined by my priors, and the evidence didn’t appreciably affect my position.”