Matthew Yglesias has written a couple of things about AI risk & existential risk more broadly, and he has also talked a few times about why he doesn’t write more about AI, e.g.:
I don’t write takes about how we should all be more worried about an out-of-control AI situation, but that’s because I know several smart people who do write those takes, and unfortunately they do not have much in the way of smart, tractable policy ideas to actually address it.
This seems different than your 8 possibilities. It sounds like his main issue is that he doesn’t see the path that you think you see where “Rationalist-adjacent writers are a major path for LessWrong ideas to influence elite and mainstream opinion. This can lead to good policies, like avoiding a race with China and discouraging certain types of capabilities research.”
I bet you’re right that a perceived lack of policy options is a key reason people don’t write about this for mainstream audiences.
Still, I think policy options exist.
The easiest one is adding the right types of AI capabilities research to the US Munitions List, so they’re covered under ITAR. Those regulations are mind-bogglingly burdensome to comply with (so this is effectively a tax on capabilities research), and they also make it illegal to share certain parts of your research publicly.
It’s not quite the secrecy regime that Eliezer is looking for, but it’s a big step in that direction.