The idea that most people who can’t do technical AI alignment are therefore able to do effective work in public policy or motivating public change seems unsupported by anything you’ve said. And a key problem with “raising awareness” as a method of risk reduction is that it’s rife with infohazard concerns. For example, if we’re really worried about a country seizing a decisive strategic advantage via AGI, that indicates that countries should be much more motivated to pursue AGI.
And within the realm of international agreements and the pursuit of AI regulation, I don’t think postponement is neglected, at least relative to its tractability; policy for AI regulation is certainly an area of active research.