Curated. I like this post taking LessWrong back to its roots of trying to get us humans to reason better and believe true things. I think we need that now as much as we did in 2009, and I fear that my own beliefs have become ossified through identity and social commitment, etc. LessWrong now talks a lot about AI, and AI is increasingly a political topic (this post is a little political in a way I don’t want to put front and center, but I’ll curate anyway), which means it’s worth recalling the ways our minds get stuck and exploring how to ask ourselves questions such that the answer could come back different.
Won’t the goal of getting humans to reason better necessarily turn political at a certain point? After all, if one side of an issue is decidedly better from some ethical perspective we have accepted, won’t the rationalist have to advocate for that side? Won’t refraining from taking political action then be unethical? This line of reasoning might need a bit of reinforcement to be properly convincing, but the point is that since political action is action, having a space cover rationality and ethics but not politics would stifle a (very consequential) part of the discussion.
I’m not here very frequently; I just really like political theory, and I’ve seen around the site that you guys try not to discuss it too much. It’s not very common to find a good place to discuss it, as one would expect. But I’d love to find one!
Won’t the goal of getting humans to reason better necessarily turn political at a certain point?
Trivially, yes. Among other things, we would like politicians to reason better, and for everyone to profit thereby.
I’m not here very frequently, I just really like political theory and have seen around the site that you guys try to not discuss it too much.
As it happens, this significantly predates the current political environment. Minimizing talk of politics, in the American political-party horse-race sense, is one of our foundational taboos. It is not so strong anymore (once, even a relevant keyword without appropriate caveats would draw a pile of downvotes and excoriation in the comments), but for your historical interest the relevant essay is Politics Is The Mind-Killer. You can search that phrase, or similar ones like “mind-killed” or “arguments are soldiers”, to get a sense of how it went. The basic idea was that while we are all new at this rationality business, we should try to avoid talking about topics people are especially irrational about.
Of course, at the same time the website was big on atheism, an irony we eventually recognized and corrected. The anti-politics taboo softened enough to allow talking about theory, and mechanisms, and even non-flashpoint policy (see the AI regulation posts). We also added things like arguing about whether or not god exists to the taboo list. There were a bunch of other developments too, but that’s the directional gist.
Happily for you and me both, political theory tackled well as theory finds a good reception here. As an example I submit A voting theory primer for rationalists and the follow-up posts by Jameson Quinn. All of these are on the subject of voting theory, including discussion of some real-life examples of orgs and campaigns working on it, and the whole thing is one of my favorite chunks of writing on the site.
It depends what you mean by political. If you mean something like “people should act on their convictions”, then sure. But you don’t have to actually go into politics to do that, the assumption being that if everyone is sane, they will implement sane policies (with the obvious caveats of Moloch, Goodhart, etc.).
If you mean something like “we should get together and actively work on methods to force (or at least strongly encourage) people to be better”, then very much no. Or rather it gets complicated fast.
Thank you. I don’t think it’s possible to review this book without talking a bit about politics, given that so many of the techniques were forged and refined via political canvassing, but I also don’t think that’s the main takeaway, and I hope this introduced some good ideas to the community.