This community is doing way better than it has any right to for a bunch of contrarian weirdos with below-average social skills. It’s actually astounding.
The US government and the broader military-industrial complex are taking existential AI risk somewhat seriously. The head of the RAND Corporation is an existential risk guy who used to work for FHI.
Apparently the Prime Minister of the UK and various European institutions are concerned as well.
There are x-risk-concerned people at most top universities for AI research and within many of the top commercial labs.
In my experience “normies” are mostly open to simple, robust arguments that AI could be very dangerous if sufficiently capable, so I think the outreach has been good enough on that front.
There is a much more specific set of arguments about advanced AI (exotic decision theories, theories of agency and preferences, computationalism about consciousness) that are harder to explain and defend than the basic AI risk case, so including them would rhetorically weaken it. But the people who like these ideas get very excited about them. Thus I think having a lot more popular materials by LessWrong-ish people would do more harm than good, so avoiding this was a good move, whether intentional or not. (On the other hand, if you think these ideas are absolutely crucial considerations without which sensible discussion is impossible, then it is not good.)