How do you deal with the knowledge problem? Typically, the actual, experienced pain in steps 2 and 3 is what drives the safety measures implemented in step 3 and enjoyed in step 4. Progress isn't delayed to address every possible problem, but the worst of them do get addressed: the incentive to be safe (to reduce the pain) aligns with the incentive to use the technology at all.
This works for pain, i.e., risk that is short-term enough for its cost and incidence to be measured. It's not clear that it works for rarer but more severe risks (x-risk, or simply enormous economic risk).
In other words, the regulators are part of the technology in the first place. What's the guarantee (or even the mechanism to start) that the regulators are addressing only the critical risks?