Strong upvoted. I myself still don't know what form public outreach should take; the billionaires we've had so far (Jaan, Dustin, etc.) were the cute, cuddly, friendly billionaires, and there are probably some seriously mean motherfuckers in the greater ecosystem.
However, I was really impressed by the decisions behind WWOTF, and by MIRI's policies before and after its shift. I still have strong sign uncertainty about the scenario where this book succeeds and gets something like 10 million people thinking about AI safety. We really don't know what that world would look like; e.g. it could end up as the same death game as right now, just with more players.
But one way or another, it is probably highly valuable, optimized reference material for getting an intuitive sense of how to explain AI safety to people, similar to Scott Alexander's Superintelligence FAQ, which Raemon endorsed as #1, or the top ~10% of the AI safety arguments competition.