That was a lot of words to say “I don’t think anything can be learned here”.
Personally, I think something can be learned here.
No, it was a lot of words describing why your strategy of modelling stuff as more/less “dangerous”, and then trying to calibrate how scared to be of “dangerous” stuff, doesn’t work.
The better strategy, if you want to pursue this general line of argument, is to make the strongest argument you can for what makes e.g. Bitcoin so dangerous and how horrible the consequences will be. Then, since your sense of danger overestimates how dangerous Bitcoin will be, you can go in and empirically investigate where your intuition went wrong, by seeing which predictions of your intuitive argument failed and what obstacles caused them to fail.
Maybe I was unclear in my original post, because you seem confused here. I’m not claiming the thing we should learn is “dangerous things aren’t dangerous”. I’m claiming: here are a bunch of domains that have problems of adverse selection and inability to learn from failure, and yet humans successfully negotiate these domains. We should figure out what strategies humans are using and how far they generalize because this is going to be extremely important in the near future.
My original response contained numerous strategies that people were using:
Keeping one’s cryptocurrency in cold storage rather than easily usable (see the sketch after this list)
Using alternatives to software with known vulnerabilities
Just letting relatively-trusted/incentive-aligned people use the insecure systems
Using mutual surveillance to de-escalate destructive weaponry
Using aggression to prevent the weak from building destructive weaponry
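To make the cold-storage item concrete, here is a minimal sketch of the pattern in Python. Everything in it is illustrative: the names are hypothetical, and an HMAC stands in for a real wallet’s ECDSA signature so the sketch runs with the standard library alone. The point is the structure, not the cryptography: the secret key exists only on a machine that never touches the network, so compromising the online machine doesn’t let an attacker spend the funds.

```python
import hashlib
import hmac
import secrets

# --- Air-gapped ("cold") machine: the secret key is generated and lives only here. ---
# Illustrative stand-in: a real wallet would hold an ECDSA keypair; HMAC is used
# purely so this sketch runs with the standard library.
cold_key = secrets.token_bytes(32)  # never copied to a networked machine

def sign_offline(transaction: bytes) -> bytes:
    """Sign a serialized transaction on the air-gapped machine."""
    return hmac.new(cold_key, transaction, hashlib.sha256).digest()

# --- Online ("hot") machine: drafts unsigned transactions, broadcasts signed ones. ---
tx = b"pay 0.1 BTC to <address>"  # drafted online, carried across the air gap (USB/QR)
sig = sign_offline(tx)            # signed offline, carried back

# Only (tx, sig) ever crosses the gap. An attacker who compromises the online
# machine can observe transactions but cannot forge new ones without cold_key.
print(tx, sig.hex())
```

The shape is the same in every real cold-storage setup: secrets stay offline, only signed artifacts go online, so attackers can probe the hot machine indefinitely without ever touching the key.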
You dismissed these as “just-so stories”, but I think they genuinely are the explanations for why stuff works in these cases, and if you want to find general rules, you are better off collecting stories like this from many different domains than trying to find The One Unified Principle. Plausibly, somewhere between 5 and 100 such stories would taxonomize all the usable methods, and you could develop a theory through this sort of investigation.
That sounds like something we should work on, I guess.
I think tailcalled’s point here is an important one. You’ve got very different domains with very different dynamics, and it’s not a priori obvious that the same general principle is involved in making all of these at-first-glance-dangerous systems relatively safe. It’s not even clear to me that they are safer than you’d expect, though of course that depends on how safe you’d expect them to be.
Many people have lost money to crypto scams. Catastrophic nuclear war hasn’t happened yet, but it seems we may have had some close calls, and looked at on a chance-per-year basis it still seems we’re in a bad equilibrium; it’s not at all clear that nuclear weapons are safer than we’d naively assume. Cybersecurity issues haven’t destroyed the global economy, but, for instance, on the order of a hundred billion dollars of pandemic relief funds was stolen by scammers.
That said, if I were looking for a general principle that might be at play in all of these cases, I’d look at something like offense/defense balance.
Offense/defense balance can be handled just by ensuring security via offense rather than via defense.
I guess, as a side note, that it’s better to study oxidation, the habitable zone, famines, the dodo’s extinction, etc., if one needs something beyond the basic “dangerous domains” mentioned in the OP.