Part of the issue seems to be that some rationalists strongly reject what has come to be called post-rationality. I’ve certainly gotten plenty of blowback on my exploration of these topics over the last couple of years from rationalists who view it as an anti-rationalist project. It’s hard for me to measure what proportion of the community holds which views, but a significant chunk of the rationality community seems to be solidifying into a new form of the antecedent skeptic/scientific rationality culture, one that is unwilling to make space for boundary-pushing much beyond the existing understanding of the Sequences.
Maybe these folks are just especially vocal, but it does make the environment more difficult to work in. I’m only writing very publicly now because I finally feel confident enough that I can get away with being opposed by vocal community members. Not all are so lucky, and they feel silenced unless they can distance themselves from the existing rationalist community enough to create space for disagreement without intolerable stress.
What is “post-rationality”?
Knowing about rationalism plus feeling superior to rationalists :-).
EDITED to add: I hope my snark doesn’t make gworley feel blown-back-at, silenced, and intolerably stressed. That’s not at all my purpose. I’ll make the point I was making a bit more explicitly.
Reading “post-rationalist” stuff, I genuinely do often get the impression that people become “post-rationalists” when they have been exposed to rationalism but find rationalists a group they don’t want to affiliate with (e.g., because they seem disagreeably nerdy).
As shev said, post-rationalists’ complaints about rationalism do sometimes look rather strawy; that’s one thing that gives me the trying-to-look-different vibe.
The (alleged) differences that aren’t just complaints about strawmen generally seem to me to be simply wrong.
Here’s the first Google hit (for me) for “post-rationalist”: from The Future Primeval, a kinda-neoreactionary site set up by ex-LWers. Its summary of how post-rationalists differ from rationalists seems fairly typical. Let’s see what it has to say.
First of all it complains of “some of the silliness” of modern conceptions of rationalism. (OK, then.)
Then it says that there’s more to thinking than propositional belief (perhaps there are rationalists who deny that, but I don’t think I know any) and says that post-rationalists see truth “as a sometimes-applicable proxy for usefulness rather than an always-applicable end in itself” (the standard rationalist position, in so far as there is one, is that truth is usually useful and that deliberately embracing untruth for pragmatic reasons tends to get you in a mess; rationalists also tend to like truth, to value it terminally).
So here we have one implicit strawman (that rationalists think propositional belief is everything), another implicit strawman (that rationalists don’t recognize that truth and usefulness can in principle diverge), something I think is simply an error if I’ve understood correctly (the suggestion that untruth is often more useful than truth), and what looks like a failure of empathy (obliviousness to the possibility that someone might simply prefer to be right, just as they might prefer to be comfortable).
Then it suggests that values shouldn’t be taken as axiomatic fundamental truths but that they often arise from social phenomena (so far as I can tell, this is also generally understood by rationalists).
Then we are told that “some rationalists have a reductionistic and mechanistic theory of mind” (how true this is depends on how those weaselly words “reductionistic” and “mechanistic” are understood) and think that it’s useful to identify biases and try to patch them; post-rationalists, on the other hand, understand that the mind is too complex for that to work and we should treat it as a black box.
Here we may have an actual point of disagreement, but let’s proceed with caution. First of all, the sort of mechanistic reductionism that LW-style rationalists fairly universally endorse is in fact also endorsed by our post-rationalists, in the same paragraph (“while the mind is ultimately a reducible machine”). But I think it’s fair to say that rationalists are generally somewhat optimistic about the prospects of improving one’s thinking by, er, “overcoming bias”. But it is also widely recognized that this doesn’t always work, that in many cases knowing about a bias just makes you more willing to accuse your opponents of it; I think there’s at least one thing along those lines in the Sequences, so it’s not something we’ve been taught recently by the post-rationalists. So I think the point of disagreement here is this: Are there a substantial number of heuristics implemented in our brains that, in today’s environment, can be bettered by deliberate “system-2” calculation? I do think the answer is yes; it seems like our post-rationalists think it’s no; but if they’ve given reasons for that other than handwaving about evolution, I haven’t seen them.
They elaborate on this to say it’s foolish to try to found our practical reasoning in theory rather than common sense and intuition. (This is more or less the same as the previous complaint, and I think we have a similar disagreement here.)
And then they list a bunch of things post-rationalists apparently have “an appreciation for”: tradition, ritual, modes of experience beyond detached skepticism, etc. (Mostly straw, this; the typical rationalist position seems to be that these things can be helpful or harmful and that many of their common forms are harmful; that isn’t at all the same thing as not “appreciating” them.)
So, a lot of that does indeed seem to consist of strawmanning plus feeling superior. Not, of course, all of it; but enough to (I think) explain some of the negative attitude gworley describes getting from rationalists.
Ah, that’s easy. Can I just go straight to being a super-extra-meta-post-rationalist, then?
This is helpful, thanks.
In the “Rationality is about winning” train of thought, I’d guess that anything materially different in post-rationality (tm) would be eventually subsumed into the ‘rationality’ umbrella if it works, since it would, well, win. The model of it as a social divide seems immediately appealing for making sense of the ecosystem.
The best critique of post-rationalism I’ve seen so far. It matches my thought as well. Please consider making this a post so we can all double-upvote you.
While rationality is nominally that which wins, and is thus complete, in practice people want consistent, systematic ways of achieving rationality, and so the term comes to have a double meaning: both that which wins, and a discovered system for winning based on a combination of traditional rationality, cognitive bias and heuristic research, and rational agent behavior in decision theory, game theory, etc.
I see post-rationality as being the continued exploration of the former project (to win, crudely, though it includes even figuring out what winning means) without constraining oneself to the boundaries of the latter. I think this maybe also better explains the tension that results in feeling a need to carve out post-rationality from rationality when it is nominally still part of the rationalist project.
I don’t think it is.
Rationality is a combination of keeping your map of the world as correct as you can (“epistemic rationality”, also known as “science” outside of LW) and doing things which are optimal in reaching your goals (“instrumental rationality”, also known as “pragmatism” outside of LW).
The “rationalists must win” point was made by EY to, basically, tie rationality to the real world and real success as opposed to declaring oneself extra rational via navel-gazing. It is basically “don’t tell me you’re better, show me you’re better”.
For a trivial example, consider buying for $1 a lottery ticket which has a 1% chance of paying out $1000. It is rational to buy the ticket (the expected value of the payout is $10 against a $1 price), but the most likely single outcome (the mode, in statistics-speak) is that you will lose.
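The arithmetic behind the ticket example can be checked in a few lines. This is a minimal sketch using only the numbers stated above ($1 price, 1% chance, $1000 payout):

```python
# Lottery-ticket example: $1 ticket, 1% chance of a $1000 payout.
price = 1.0
prob_win = 0.01
payout = 1000.0

# Expected net value per ticket: the long-run average if you bought many.
expected_net = prob_win * payout - price
print(expected_net)  # positive (about $9), so buying "wins" in expectation

# The most likely single outcome (the mode) is still a loss,
# since P(lose) = 0.99 far exceeds P(win) = 0.01.
modal_net = -price if (1 - prob_win) > prob_win else payout - price
print(modal_net)  # -1.0
```

This is the whole point of the example: a positive expected value and a most-probable outcome of losing are perfectly compatible, which is why "rationalists must win" is a claim about expectation over many decisions, not about any single draw.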
So, um, how to win using any means necessary…? I am not sure where you want to go outside of the “boundaries of the latter”.
I’m not sure that’s what people usually mean by science. And most of the questions we’re concerned about in our lives (“am I going to be able to pay off the loan on time?”) are not usually considered scientific ones.
Other than that minor nitpick, I agree.