Google suggests nothing helpful for defining Keganism, only that Keganites are humans from the planet Kegan in the Star Wars Expanded Universe. Could you point me to something about the Keganism you’re referring to?
FWIW I view a lot of the tension between/within the rationality community regarding post-rationality as usually rooted in tribal identification more than concrete disagreement. If rationality is winning, then unusual mental tricks and perspectives that help you win are part of instrumental rationality. If some of those mental tricks happen to infringe upon a pristine epistemic rationality, then we just need a more complicated mental model of what rationality is. Or call it post-rationality, I don’t really care, except for the fact that labels like post-rationality connotationally imply that rationality has to be discarded and replaced with some other thing, which isn’t true. Rationality is and always was an evolving project and saying you’re post- something that’s evolving to incorporate new ideas is getting ahead of yourself.
In other words, any valid critique of rationality becomes part of rationality. We are Borg. Resistance is futile.
I heard the phrase “postrationality” for the first time only a few days ago, maybe because I don’t keep up with the rationality-blog-metaverse that well, and I really don’t understand it.
All the descriptions I come across when I look for them seem to describe “rationality, plus being willing to talk about human experience too”, but I thought the LW-sphere was already into talking about human experience and whatnot. So is it just “we’re not comfortable talking about human experience in the rationalist sphere, so we made our own sphere”? That is, a cultural divide?
That first link writes “Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.” Yet I would imagine everyone on LW would be interested in talking about System 1 and how it works and anything interesting we can say about it. So what’s the difference?
I’m not a massive fan of the ‘postrationality’ label but I do like some of the content, so I thought I’d try and explain why I’m attracted to it. I hope this comment is not too long. I’m not deeply involved but I have spent a lot of time recently reading my way through David Chapman’s Meaningness site and commenting there a bit (as ‘lk’).
One of my minor obsessions is thinking and reading about the role of intuition in maths. (Probably the best example of what I’m thinking of is Thurston’s wonderful Proof and Progress in Mathematics.) As Thurston’s essay describes, mathematicians make progress using a range of human faculties including not just logical deduction but also spatial and geometric intuition, language, metaphors and associations, and processes occurring in time. Chapman is good on this, whereas a lot of the original Less Wrong content seems to have rather a narrow focus on logic and probabilistic inference. (I think this is less true now.)
Mathematical intuition is how I normally approach this subject, but I think this is generally applicable to how we reason about all kinds of topics and come to useful conclusions. There should be a really wide variety of literature to raid for insights here. I’d expect useful contributions from fields such as phenomenology and meditation practice (and some of the ‘instrumental rationality’ folk wisdom) where there’s a focus on introspection of private mental phenomena, and also looking at the same thing from the outside and trying to study how people in a specific field think about problems (apparently this is called ‘ethnomethodology’.) There’s probably also a fair bit to extract more widely from continental philosophy and pomo literature, which I know little about (I’m aware there’s also lots of rubbish).
There’s another side to the postrationality thing that seems to involve a strong interest in various ‘social technologies’ and ritual practices, which often shades into what I’ll kind-of-uncharitably call LARPing various religious/traditional beliefs. I think the idea is that you have to be involved pretty deeply in some version of Buddhism/Catholicism/paganism/whatever to gain any kind of visceral understanding of what’s useful there. From the outside, though, it still looks like a lot of rather uncritical acceptance of the usual sort of traditional rubbish humans believe, and getting involved with one particular type of this seems kind of arbitrary to me. (I exclude Chapman from this criticism; he is very forthright about what he thinks is bad/useless in Buddhism and what he thinks is worth preserving.) It’s probably obvious at this point that I don’t at all understand the appeal of this myself, though I’m open to learning more about it.
Obviously different people do things for different reasons, but I infer that a lot of people started identifying as post-rationalist when they felt it was no longer cool to be associated with the rationalist movement. There have been a number of episodes of Internet drama over the last several years, any one of which might be alienating to some subset of people; those people might still like a lot of these ideas, but feel rejected from the “core group” as they perceive it.
The natural Schelling point for people who feel rejected by the rationality movement is to try to find a Rationality 2.0 movement that has all the stuff they liked without the stuff they didn’t like. This Schelling point seems to be stable regardless of whether Rationality 2.0 has any actual content or clear definition.
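How this all feels to me: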
When I look at the Sequences, as the core around which the rationalist community formed, I find many interesting ideas and mental tools. (Randomly listing stuff that comes to my mind: Bayes theorem, Kolmogorov complexity, cognitive biases, planning fallacy, anchoring, politics is the mindkiller, 0 and 1 are not probabilities, cryonics, having your bottom line written first, how an algorithm feels from inside, many-worlds interpretation of quantum physics, etc.)
When I look at “Keganism”, it seems like an affective spiral based on one idea.
I am not saying that it is a wrong or worthless idea, just that comparing “having this ‘one weird trick’ and applying it to everything” with the whole body of knowledge and attitudes is a type error. If this one idea has merit, it can become another useful tool in a large toolset. But it does not surpass the whole toolset or make it obsolete, which the “post-” prefix would suggest.
Essentially, the “post-” prefix is just a status claim; it connotationally means “smarter than”.
To compare, Eliezer never said that using Bayes theorem is “post-mathematics”, or that accepting many-worlds interpretation of quantum physics is “post-physics”. Because that would just be silly. Similarly, the idea of “interpenetration of systems” doesn’t make one “post-rational”.
I am not saying that it is a wrong or worthless idea, just that comparing “having this ‘one weird trick’ and applying it to everything” with the whole body of knowledge and attitudes is a type error.
It seems like you are making that error. I’m not seeing anybody else making it.
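There’s no reason to assume that the word postrational is only about Kegan’s ideas. The most in-depth post that tried to define the term (https://yearlycider.wordpress.com/2014/09/19/postrationality-table-of-contents/) didn’t even speak of Kegan directly.
Calling stage 5 a tool or “weird trick” also misses the point. It’s not an idea in that class.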
In particular, rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”. Postrationality, on the other hand, says “actually, intuitions and feelings are really important, let’s see if we can work with them instead of against them”.
Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.
This would make me a post-rationalist, too.
Postrationalists don’t think that death, suffering, and the forces of nature are cosmic evils that need to be destroyed.
Postrationalists enjoy surrealist art and fiction.
This wouldn’t.
I guess the second part is more important, because the first part is mostly a strawman.
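Not in my experience. It may seem like it now, but that’s because the postrationalists won the argument.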
Congratulations on successfully breaking through an open door, I guess.
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2 — fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren’t always true, and perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality”. Both systems can serve the goal of truth, or defeat it, according to how they are used.
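-- Why truth? And..., 2006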
The last debate I had on the LW open thread, about whether it’s worthwhile to have an internally consistent Bayesian net, would be a practical example of the first conflict.
You have people in this community who think that a Bayesian net can basically model everything that’s important for making predictions, and that if one spends enough effort on the Bayesian net, intuition is not required.
Not sure if I understand it correctly but seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world, each one making good predictions in one specific important area, and then simply use the model corresponding to the area you are currently working in… than trying to develop an internally consistent model for both areas, only to perform poorly in both (because the resources are not sufficient for a consistent model working well in both areas).
While the response seems to… misunderstand your point, and suggest something like a weighted average of the two models, which would lead exactly to the poorly performing model.
As a fictional example, it’s like one person saying: “I don’t have a consistent model of whether a food X is good or bad for my health. My experience says that eating it in summer improves my health, but eating it in winter makes my health worse. I have no idea how something could be like that, but in summer I simply use the heuristic that X is good, while in winter I use the contradictory heuristic that X is bad.” And the other person replying: “You don’t need contradictory heuristics; just use Bayes and conclude that X is good with probability 50% and bad with probability 50%.”
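A toy sketch of that fictional example (all labels and numbers made up): conditioning on the season is right every time, while the flat 50/50 answer can do no better than chance.

```python
# Illustrative only: suppose food X genuinely helps in summer and harms in winter.
true_effect = {"summer": "good", "winter": "bad"}

def seasonal_heuristic(season):
    # the "contradictory heuristics" policy: condition on the season
    return "good" if season == "summer" else "bad"

def averaged_belief(season):
    # the "50% good / 50% bad" policy collapses to a season-blind guess
    return "good"

seasons = ["summer", "winter"] * 50
for policy in (seasonal_heuristic, averaged_belief):
    hits = sum(policy(s) == true_effect[s] for s in seasons)
    print(policy.__name__, hits / len(seasons))
# seasonal_heuristic 1.0
# averaged_belief 0.5
```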
I don’t have a Bayesian model that tells me how much magnesium to consume. Instead, I look at the bottle with the magnesium tablets and feel into my body.
Depending on the feeling my body creates as a response I might take the magnesium tablet at a particular time or not take it.
On the other hand, the way I consume Vitamin D3 is very different. I don’t have a meaningful internal sense of when to take it, but take the dose of Vitamin D3 largely based on an intellectual calculus.
Not sure if I understand it correctly but seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world
I’m not saying anything about limited computing power. It isn’t a lack of computing power that makes me use the felt sense for magnesium dosing. I also can’t simply plug the felt sense into an abstract model, because that might detach the connection to it or decrease the trust it needs in order to work.
Bayesianism is also not a superset of logic (predicate calculus). See the Chapman article. Reasoning in the framework of logic can be useful, and it’s different from Bayesianism.
Weird, this comment thread doesn’t link to our prior discussion, there must be some kind of mistake. =)
A Bayes net can have whatever nodes I think it should have, based on my intuition. Nobody ever suggested that the nodes of a man-made Bayes net come from anywhere except intuition in the first place.
If I am trying to predict the outcome of some specific event, I can factor in as many “conflicting perspectives” as I want, again using my intuition to decide how to incorporate them.
I want to predict whether it will rain tomorrow. I establish one causal network based on a purely statistical model of rainfall frequency in my area. I establish a second causal network which just reflects whatever the Weather Channel predicts. I establish a third causal network that incorporates astrological signs and the reading of entrails to predict whether it will rain. You end up with three nodes: P(rain|statistical-model), P(rain|weather-channel-model), P(rain|entrails-model). You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model. In other words, when you verify whether it actually rains tomorrow, you perform a Bayesian update on P(statistical-model-validity|rain), P(entrails-model-validity|rain), P(weather-channel-model-validity|rain).
You have just used Bayes to adjudicate between conflicting perspectives. There is no law stating that you can’t continue using those conflicting models. Maybe you have some reason to expect that P(rain|all-available-knowledge) actually ends up slightly more accurate when you include knowledge about entrails. Then you should continue to incorporate your knowledge about entrails, but also keep updating on the weight of its contribution to the final result.
(If I made a mistake in the above paragraphs, first consider the likelihood that it’s due to the difficulty of typing this kind of stuff into a text box, and don’t just assume that I’m wrong.)
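As a minimal sketch of the scheme described above (all probabilities and weights are made-up numbers): each submodel reports its P(rain), the final estimate is a confidence-weighted mixture, and the weights get updated once the outcome is observed.

```python
# Each submodel's P(rain tomorrow) -- illustrative numbers only.
submodels = {
    "statistical": 0.30,
    "weather_channel": 0.60,
    "entrails": 0.50,
}

# Prior confidence in each submodel.
weights = {"statistical": 0.45, "weather_channel": 0.45, "entrails": 0.10}

# P(rain | all available knowledge): weighted mixture of the submodels.
p_rain = sum(weights[m] * submodels[m] for m in submodels)
print("P(rain) =", round(p_rain, 3))

def update_weights(weights, submodels, rained):
    # Multiply each model's weight by the likelihood it assigned to what
    # actually happened, then renormalize (Bayesian model averaging).
    posterior = {
        m: weights[m] * (submodels[m] if rained else 1 - submodels[m])
        for m in weights
    }
    total = sum(posterior.values())
    return {m: v / total for m, v in posterior.items()}

print(update_weights(weights, submodels, rained=True))
```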
Part of the reason Chapman’s article doesn’t land for me at all is that he somehow fails to see that interoperating between different systems of meaning and subjectivity is completely amenable to Bayesian thinking. Nobody ever said that intuition is not an important part of coming up with a Bayes network. Both the structure of the network and the priors you put into the network can come from nowhere other than intuition. I’m pretty sure this is mentioned in the Sequences. I feel like Scott defends Bayesianism really well against Chapman’s argument, and if you don’t agree, I suspect it’s because Chapman and Scott are talking past each other in places where you think Chapman is saying something important and Scott doesn’t.
What Scott defends in that post isn’t the notion of a completely consistent belief net. In motte-and-bailey fashion, Scott defends claims that are less strong.
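Chapman also wrote the more mathy followup post: https://meaningness.com/probability-and-logic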
In Chapman’s model of the world, Scott defends Bayesianism as a level-4 framework against other frameworks that are also level 4 or lower (in the Kegan framework). A person who’s at the developmental stage of level 3 can’t simply go to level 5, but profits from learning a framework like Bayesianism that gives certain clear answers. From that perspective, the person likely needs a few years in that stage to be able to later grow out of it.
You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model.
Right. You don’t use your brain’s pattern-matching ability to pick the right model; you use a quite simple probabilistic one. I think that’s likely a mistake. But I don’t know whether I can explain to you why I think so in a way that would convince you. That’s why I didn’t continue the other discussion.
Additionally, even though I think you are wrong, that doesn’t mean that nothing productive can come out of the belief-net experiment.
What can a “level 5 framework” do, operationally, that is different than what can be done with a Bayes net?
Do well at problems that require developing an ontology to represent the problem, like Bongard problems (see Chapman’s post on meta-rationality).
I admit that I don’t understand what you’re actually trying to argue, Christian.
Yes, fully understanding would likely mean that you need to spend time understanding a new conceptual framework. It’s not as easy as simply picking up another mental trick.
But in this thread, my point isn’t to argue that everybody should adopt meta-rationality but to illustrate that it’s actually a different way of looking at the world.
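Yeah, that’s my thought on post rationality too.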
Part of the issue seems to be that some rationalists strongly reject what has come to be called post-rationality. I’ve certainly gotten plenty of blowback on my exploration of these topics over the last couple of years from rationalists who view it as an antirationalist project. It’s hard for me to measure what proportion of the community expresses what views, but a significant chunk of the rationality community seems to be solidifying into a new form of the antecedent skeptic/scientific-rationality culture, one that is unwilling to make space for additional boundary pushing much beyond the existing understanding of the Sequences.
Maybe these folks are just especially vocal, but it does make the environment more difficult to work in. I’m only writing very publicly now because I finally feel confident enough that I can get away with being opposed by vocal community members. Not all are so lucky, and thus feel silenced unless they can distance themselves from the existing rationalist community enough to create space for disagreement without intolerable stress.
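What is “post-rationality”?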
Knowing about rationalism plus feeling superior to rationalists :-).
EDITED to add: I hope my snark doesn’t make gworley feel blown-back-at, silenced, and intolerably stressed. That’s not at all my purpose. I’ll make the point I was making a bit more explicitly.
Reading “post-rationalist” stuff, I genuinely do often get the impression that people become “post-rationalists” when they have been exposed to rationalism but find rationalists a group they don’t want to affiliate with (e.g., because they seem disagreeably nerdy).
As shev said, post-rationalists’ complaints about rationalism do sometimes look rather strawy; that’s one thing that gives me the trying-to-look-different vibe.
The (alleged) differences that aren’t just complaints about strawmen generally seem to me to be simply wrong.
Here’s the first Google hit (for me) for “post-rationalist”: from The Future Primeval, a kinda-neoreactionary site set up by ex-LWers. Its summary of how post-rationalists differ from rationalists seems fairly typical. Let’s see what it has to say.
First of all it complains of “some of the silliness” of modern conceptions of rationalism. (OK, then.)
Then it says that there’s more to thinking than propositional belief (perhaps there are rationalists who deny that, but I don’t think I know any) and says that post-rationalists see truth “as a sometimes-applicable proxy for usefulness rather than an always-applicable end in itself” (the standard rationalist position, in so far as there is one, is that truth is usually useful and that deliberately embracing untruth for pragmatic reasons tends to get you in a mess; rationalists also tend to like truth, to value it terminally).
So here we have one implicit strawman (that rationalists think propositional belief is everything), another implicit strawman (that rationalists don’t recognize that truth and usefulness can in principle diverge), something I think is simply an error if I’ve understood correctly (the suggestion that untruth is often more useful than truth), and what looks like a failure of empathy (obliviousness to the possibility that someone might simply prefer to be right, just as they might prefer to be comfortable).
Then it suggests that values shouldn’t be taken as axiomatic fundamental truths but that they often arise from social phenomena (so far as I can tell, this is also generally understood by rationalists).
Then we are told that “some rationalists have a reductionistic and mechanistic theory of mind” (how true this is depends on how those weaselly words “reductionistic” and “mechanistic” are understood) and think that it’s useful to identify biases and try to patch them; post-rationalists, on the other hand, understand that the mind is too complex for that to work and we should treat it as a black box.
Here we may have an actual point of disagreement, but let’s proceed with caution. First of all, the sort of mechanistic reductionism that LW-style rationalists fairly universally endorse is in fact also endorsed by our post-rationalists, in the same paragraph (“while the mind is ultimately a reducible machine”). But I think it’s fair to say that rationalists are generally somewhat optimistic about the prospects of improving one’s thinking by, er, “overcoming bias”. But it is also widely recognized that this doesn’t always work, that in many cases knowing about a bias just makes you more willing to accuse your opponents of it; I think there’s at least one thing along those lines in the Sequences, so it’s not something we’ve been taught recently by the post-rationalists. So I think the point of disagreement here is this: Are there a substantial number of heuristics implemented in our brains that, in today’s environment, can be bettered by deliberate “system-2” calculation? I do think the answer is yes; it seems like our post-rationalists think it’s no; but if they’ve given reasons for that other than handwaving about evolution, I haven’t seen them.
They elaborate on this to say it’s foolish to try to found our practical reasoning in theory rather than common sense and intuition. (This is more or less the same as the previous complaint, and I think we have a similar disagreement here.)
And then they list a bunch of things post-rationalists apparently have “an appreciation for”: tradition, ritual, modes of experience beyond detached skepticism, etc. (Mostly straw, this; the typical rationalist position seems to be that these things can be helpful or harmful and that many of their common forms are harmful; that isn’t at all the same thing as not “appreciating” them.)
So, a lot of that does indeed seem to consist of strawmanning plus feeling superior. Not, of course, all of it; but enough to (I think) explain some of the negative attitude gworley describes getting from rationalists.
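Ah, that’s easy. Can I just go straight to being a super-extra-meta-post-rationalist, then?

This is helpful, thanks.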
In the “Rationality is about winning” train of thought, I’d guess that anything materially different in post-rationality (tm) would be eventually subsumed into the ‘rationality’ umbrella if it works, since it would, well, win. The model of it as a social divide seems immediately appealing for making sense of the ecosystem.
The best critique of post-rationalism I’ve seen so far. It matches my thought as well. Please consider making this a post so we can all double-upvote you.
While rationality is nominally that which wins, and is thus complete, in practice people want consistent, systematic ways of achieving rationality, and so the term comes to have the double meaning of both that which wins and a discovered system for winning, based around a combination of traditional rationality, research on cognitive biases and heuristics, and rational-agent behavior in decision theory, game theory, etc.
I see post-rationality as being the continued exploration of the former project (to win, crudely, though it includes even figuring out what winning means) without constraining oneself to the boundaries of the latter. I think this maybe also better explains the tension that results in feeling a need to carve out post-rationality from rationality when it is nominally still part of the rationalist project.
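I don’t think it is.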
Rationality is a combination of keeping your map of the world as correct as you can (“epistemic rationality”, also known as “science” outside of LW) and doing things which are optimal in reaching your goals (“instrumental rationality”, also known as “pragmatism” outside of LW).
The “rationalists must win” point was made by EY to, basically, tie rationality to the real world and real success as opposed to declaring oneself extra rational via navel-gazing. It is basically “don’t tell me you’re better, show me you’re better”.
For a trivial example, consider buying for $1 a lottery ticket which has a 1% chance of paying out $1000. It is rational to buy the ticket, but the most likely outcome (the mode, in statistics-speak) is that you will lose.
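For concreteness, the arithmetic behind that example (a quick sketch, using the numbers above):

```python
cost, payout, p_win = 1.0, 1000.0, 0.01

expected_value = p_win * payout - cost  # 0.01 * 1000 - 1 = +9 dollars per ticket
most_likely_outcome = -cost             # 99% of the time you simply lose your $1

print(expected_value, most_likely_outcome)
```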
I see post-rationality as being the continued exploration of the former project (to win, crudely, though it includes even figuring out what winning means) without constraining oneself to the boundaries of the latter.
So, um, how to win using any means necessary..? I am not sure where you want to go outside of the “boundaries of the latter”.
Rationality is a combination of keeping your map of the world as correct as you can (“epistemic rationality”, also known as “science” outside of LW)
I’m not sure that’s what people usually mean by science. And most of the questions we’re concerned about in our lives (“am I going to be able to pay off the loan on time?”) are not usually considered to be scientific ones.
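Other than that minor nitpick, I agree.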