I think you’re conflating some things. I recommend reading Bucket Errors, and rereading Rationalist Taboo and breaking down more clearly what you mean by “rational.” (It’s very underspecified what “be rational with the public” means. It could mean a lot of different things.)
Thanks for the advice! I’ll do that. What I meant by “being rational with the public” was things like trying to explain AI risk through social media. I plan to still try to do this, but I have lower hopes that I (or anyone else) can convince absolutely everyone of the risk. And it only takes one rogue individual to destroy humanity. Hence, escape plans are needed. I have to go offline for family needs for a while, but I’ll reply to any more replies when I can get back. Thanks again for the advice/ideas!
I think this is still sort of the wrong frame. I also plan to explain AI risk through various social media. I will use different phrasings when talking to different target audiences that I expect to have different cruxes. I think what you’re calling “explaining rationally,” I’d describe as “being insufficiently skilled at explaining things.” (To be fair, it’s often quite hard to explain things! But that’s because it’s hard, not because people are irrational or because rational explanation is impossible.)
Good points! I should explain more carefully for my audience. I do worry that, below a certain intelligence level, people might not fully comprehend AI risk, no matter how well I explain it.