I am not saying that it is a wrong or worthless idea, just that comparing “having this ‘one weird trick’ and applying it to everything” with the whole body of knowledge and attitudes is a type error.
It seems like you are making that error. I'm not seeing anybody else making it.
There's no reason to assume that the word postrational is only about Kegan's ideas. The most in-depth post that tried to define the term (https://yearlycider.wordpress.com/2014/09/19/postrationality-table-of-contents/) didn't even speak of Kegan directly.
Calling stage 5 a tool or "weird trick" also misses the point. It's not an idea in that class.
In particular, rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”. Postrationality, on the other hand, says “actually, intuitions and feelings are really important, let’s see if we can work with them instead of against them”.
Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.
This would make me a post-rationalist, too.
Postrationalists don’t think that death, suffering, and the forces of nature are cosmic evils that need to be destroyed.
Postrationalists enjoy surrealist art and fiction.
This wouldn’t.
I guess the second part is more important, because the first part is mostly a strawman.
Not in my experience. It may seem like it now, but that's because the postrationalists won the argument.
Congratulations on successfully breaking through an open door, I guess.
When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2 — fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.
-- Why truth? And..., 2006
The last debate I had on the LW open thread, about whether it's worthwhile to have an internally consistent Bayesian net, would be a practical example of the first conflict.
You have people in this community who think that a Bayesian net can basically model everything that's important for making predictions, and that if one spends enough effort on the Bayesian net, intuition is not required.
Not sure if I understand it correctly, but it seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world, each one making good predictions in one specific important area, and then simply use the model corresponding to the area you are currently working in… rather than trying to develop an internally consistent model for both areas, only to perform poorly in both (because the resources are not sufficient for a consistent model working well in both areas).
Meanwhile, the response seems to… misunderstand your point and suggests something like a weighted average of the two models, which would lead exactly to the poorly performing model.
As a fictional example, it's like one person saying: "I don't have a consistent model of whether a food X is good or bad for my health. My experience says that eating it in summer improves my health, but eating it in winter makes my health worse. I have no idea how something could be like that, but in summer I simply use the heuristic that X is good, while in winter I use the contradictory heuristic that X is bad." And the other person replying: "You don't need contradictory heuristics; just use Bayes and conclude that X is good with probability 50% and bad with probability 50%."
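To make the contrast concrete, here is a toy sketch (all the seasonal numbers are invented, and "helps 90% of the time" is just a stand-in for "good for my health"): if the effect really is seasonal, the two "contradictory" heuristics score far better than the single 50/50 belief, because the average throws away the conditioning on season.

```python
import math

# Invented numbers: suppose X really does help 90% of the time in summer
# and only 10% of the time in winter.
true_p = {"summer": 0.9, "winter": 0.1}

seasonal_model = {"summer": 0.9, "winter": 0.1}  # two "contradictory" heuristics
averaged_model = {"summer": 0.5, "winter": 0.5}  # the single 50/50 belief

def avg_log_loss(model):
    """Average negative log-likelihood of the model across both seasons."""
    losses = []
    for season, p in true_p.items():
        q = model[season]
        losses.append(-(p * math.log(q) + (1 - p) * math.log(1 - q)))
    return sum(losses) / len(losses)

print("seasonal heuristics:", round(avg_log_loss(seasonal_model), 2))  # ~0.33
print("50/50 average:      ", round(avg_log_loss(averaged_model), 2))  # ~0.69
```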
I don’t have a Bayesian model that tells me how much magnesium to consume. Instead, I look at the bottle with the magnesium tablets and feel into my body.
Depending on the feeling my body creates as a response I might take the magnesium tablet at a particular time or not take it.
On the other hand, the way I consume Vitamin D3 is very different. I don't have a meaningful internal sense of when to take it, but take the dose of Vitamin D3 largely based on an intellectual calculus.
Not sure if I understand it correctly, but it seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world
I'm not saying anything about limited computing power. I don't use the felt sense for magnesium dosing because I'm lacking computing power. I also can't simply plug the felt sense into an abstract model, because that might detach the connection to it or decrease the trust that it needs to work.
Bayesianism is also not a superset of logic (predicate calculus). See the Chapman article. Reasoning in the framework of logic can be useful, and it's different from Bayesianism.
Weird, this comment thread doesn’t link to our prior discussion, there must be some kind of mistake. =)
A Bayes net can have whatever nodes I think it should have, based on my intuition. Nobody ever suggested that the nodes of a man-made Bayes net come from anywhere except intuition in the first place.
If I am trying to predict the outcome of some specific event, I can factor in as many “conflicting perspectives” as I want, again using my intuition to decide how to incorporate them.
I want to predict whether it will rain tomorrow. I establish one causal network based on a purely statistical model of rainfall frequency in my area. I establish a second causal network which just reflects whatever the Weather Channel predicts. I establish a third causal network that incorporates astrological signs and the reading of entrails to predict whether it will rain. You end up with three nodes: P(rain|statistical-model), P(rain|weather-channel-model), P(rain|entrails-model). You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model. In other words, when you verify whether it actually rains tomorrow, you perform a Bayesian update on P(statistical-model-validity|rain), P(entrails-model-validity|rain), P(weather-channel-model-validity|rain).
You have just used Bayes to adjudicate between conflicting perspectives. There is no law stating that you can’t continue using those conflicting models. Maybe you have some reason to expect that P(rain|all-available-knowledge) actually ends up slightly more accurate when you include knowledge about entrails. Then you should continue to incorporate your knowledge about entrails, but also keep updating on the weight of its contribution to the final result.
(If I made a mistake in the above paragraphs, first consider the likelihood that it’s due to the difficulty of typing this kind of stuff into a text box, and don’t just assume that I’m wrong.)
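A minimal sketch of that scheme, with all forecasts and prior weights invented for illustration:

```python
# Each submodel's forecast P(rain | model); all numbers are invented.
models = {
    "statistical":     0.30,
    "weather-channel": 0.70,
    "entrails":        0.50,
}
# Prior confidence in each submodel (sums to 1).
weights = {"statistical": 0.45, "weather-channel": 0.45, "entrails": 0.10}

# Combined forecast: P(rain | all-available-knowledge) as a confidence-weighted mix.
p_rain = sum(weights[m] * p for m, p in models.items())

def update_weights(weights, models, rained):
    """Bayesian update of model validity:
    P(model | outcome) is proportional to P(outcome | model) * P(model)."""
    posterior = {m: w * (models[m] if rained else 1 - models[m])
                 for m, w in weights.items()}
    total = sum(posterior.values())
    return {m: w / total for m, w in posterior.items()}

print(p_rain)                                    # 0.5 with these numbers
print(update_weights(weights, models, rained=True))
```

With these numbers, observing rain shifts weight toward the Weather Channel model and away from the statistical one, while the entrails model (which said 50%) keeps the weight it had.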
Part of the reason Chapman's article doesn't land for me at all is that he somehow fails to see that interoperating between different systems of meaning and subjectivity is completely amenable to Bayesian thinking. Nobody ever said that intuition is not an important part of coming up with a Bayes network. Both the structure of the network and the priors you put into the network can come from nowhere other than intuition. I'm pretty sure this is mentioned in the Sequences. I feel like Scott defends Bayesianism really well from Chapman's argument, and if you don't agree, then I suspect it's because Chapman and Scott are talking past each other in places where you think Chapman is saying something important and Scott doesn't.
What Scott defends in that post isn't the notion of a completely consistent belief net. In motte-and-bailey fashion, Scott defends claims that are less strong.
Chapman also wrote the more mathy followup post: https://meaningness.com/probability-and-logic
In Chapman's model of the world, Scott defends Bayesianism as a level 4 framework against other frameworks that are also at level 4 or lower (in the Kegan framework). A person who's at the developmental stage of level 3 can't simply go to level 5, but profits from learning a framework like Bayesianism that gives certain clear answers. From that perspective, the person likely needs a few years at that stage to be able to later grow out of it.
You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model.
Right. You don't use your brain's pattern-matching ability to pick the right model, but a quite simple probabilistic one. I think that's likely a mistake. But I don't know whether I can explain to you why I think so in a way that would convince you. That's why I didn't continue the other discussion.
Additionally, even though I think you are wrong, that doesn't mean that nothing productive can come out of the belief-net experiment.
What can a “level 5 framework” do, operationally, that is different than what can be done with a Bayes net?
Do well at problems that require developing an ontology to represent the problem, like Bongard problems (see Chapman's post on metarationality).
I admit that I don’t understand what you’re actually trying to argue, Christian.
Yes, fully understanding it would likely mean that you need to spend time learning a new conceptual framework. It's not as easy as simply picking up another mental trick.
But in this thread, my point isn’t to argue that everybody should adopt meta-rationality but to illustrate that it’s actually a different way of looking at the world.