My brief complaints about that article (from twitter here):
My complaints about that essay would be:

(1) talking about factors that might bias people’s p(doom) too high, but not mentioning factors that might bias people’s p(doom) too low;

(2) implicitly treating “p(doom) is unknowable” as evidence for “p(doom) is very low”;

(3) dismissing the possibility of object-level arguments.

E.g., for (2), they say “govts should adopt policies that are compatible with a range of possible estimates of AI risk, and are on balance helpful even if the risk is negligible”. Why not “…even if the risk is high”? I agree that the essay has many good parts, and stands head-and-shoulders above much of the drivel that comprises the current discourse 😛
(…and then downthread there’s more elaboration on (2).)