This will definitely help. But any kind of dirty tricks could easily deepen the polarization with those opposed. On thinking about it more, I believe this polarization is already in play. Interested intellectuals have already seen years of forceful AI doom arguments, and many now dislike the whole concept on an emotional level. In turn, those dismissals drive AGI x-risk believers (including myself) kind of nuts, we tend to respond even more forcefully, and the cycle continues.
The problem is that if the public perceives AGI as dangerous but most of those actually working in the field do not, policy will tend to follow the experts and ignore the populace. We'll get surface-level rules that sound like meaningful oversight of AGI work without actually doing much. At least that's my take on most public policy that responds to public outcry.