The discussion in the comments is extremely useful, and we've sorely needed much more of it. I think we need a separate place purely for sharing and debating strategies like this, and ideally for working on actual praxis based on them. My ideal solution would be a separate "strategy" section on LessWrong, or at least a tag, with much weaker moderation to encourage out-of-the-box ideas. So as not to pass the buck, I'm building my own forum in the absence of anything better.
Some ideas for praxis, to add to the ones in this post and the comments: gather a database of experiences people have had actually convincing different kinds of people of AI risk, then quantitatively distill the most persuasive arguments for each segment; proofread content expected to be mass-consumed, which could have prevented the Time nukes gaffe; and produce a mass-appeal documentary, which I strongly believe could go a long way toward alignment-pilling a critical mass of the public. These may be terrible ideas, but I lack a useful place to even discuss them.