This analogy might not work for all the things “dragons” is standing in for in this thread … but if I have a good statistical bound showing the risk posed by dragons is low (even though I cannot, strictly speaking, rule out their existence entirely), I may conclude that a residual 1E-5 chance of running into one is an acceptable risk.
So if I see verified reports of AI causing a mass casualty incident with more than $500 million in damage (or whatever the threshold in the California bill is), I shall consider that evidence on a par with seeing Lake-Town get toasted by Smaug, and update accordingly.