Not sure if I understand it correctly, but it seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world, each one making good predictions in one specific important area, and then simply use the model corresponding to the area you are currently working in… rather than trying to develop an internally consistent model for both areas, only to perform poorly in both (because the resources are not sufficient for a consistent model that works well in both areas).
While the response seems to… misunderstand your point, and suggest something like a weighted average of the two models, which would lead exactly to the poorly performing model.
As a fictional example, it’s like one person saying: “I don’t have a consistent model of whether a food X is good or bad for my health. My experience says that eating it in summer improves my health, but eating it in winter makes my health worse. I have no idea how something could work like that, but in summer I simply use the heuristic that X is good, while in winter I use the contradictory heuristic that X is bad.” And the other person replying: “You don’t need contradictory heuristics; just use Bayes and conclude that X is good with probability 50% and bad with probability 50%.”
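A minimal sketch of that contrast, with made-up numbers (this example is not from the thread itself): the two “contradictory heuristics” are really one model that conditions on season, and the proposed 50/50 average throws away exactly the information that predicts the outcome.

```python
# Illustrative numbers only -- the point is the structure, not the values.
p_good_given_summer = 0.9   # summer heuristic: "X is good"
p_good_given_winter = 0.1   # winter heuristic: "X is bad"
p_summer = 0.5

# The marginal probability the second person proposes:
p_good = p_good_given_summer * p_summer + p_good_given_winter * (1 - p_summer)
print(p_good)  # 0.5 -- true as an average, useless as a decision rule

# The decision-relevant quantities are the conditional ones:
def should_eat_x(season):
    p = p_good_given_summer if season == "summer" else p_good_given_winter
    return p > 0.5

print(should_eat_x("summer"), should_eat_x("winter"))  # True False
```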
I don’t have a Bayesian model that tells me how much magnesium to consume. Instead, I look at the bottle of magnesium tablets and feel into my body. Depending on the feeling my body creates in response, I might take a magnesium tablet at a particular time or not take it.
On the other hand, the way I consume Vitamin D3 is very different. I don’t have a meaningful internal sense of when to take it, but take the dose of Vitamin D3 largely based on an intellectual calculus.
Not sure if I understand it correctly, but it seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world
I’m not saying anything about limited computing power. I don’t use the felt sense for magnesium dosing because I’m lacking computing power. I also can’t simply plug the felt sense into an abstract model, because that might detach the connection to it or decrease the trust that it needs to work.
Bayesianism is also not a superset of logic (predicate calculus); see the Chapman article. Reasoning in the framework of logic can be useful, and it’s different from Bayesianism.
Weird, this comment thread doesn’t link to our prior discussion, there must be some kind of mistake. =)
A Bayes net can have whatever nodes I think it should have, based on my intuition. Nobody ever suggested that the nodes of a man-made Bayes net come from anywhere except intuition in the first place.
If I am trying to predict the outcome of some specific event, I can factor in as many “conflicting perspectives” as I want, again using my intuition to decide how to incorporate them.
I want to predict whether it will rain tomorrow. I establish one causal network based on a purely statistical model of rainfall frequency in my area. I establish a second causal network which just reflects whatever the Weather Channel predicts. I establish a third causal network that incorporates astrological signs and the reading of entrails to predict whether it will rain. You end up with three nodes: P(rain|statistical-model), P(rain|weather-channel-model), P(rain|entrails-model). You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model. In other words, when you verify whether it actually rains tomorrow, you perform a Bayesian update on P(statistical-model-validity|rain), P(entrails-model-validity|rain), P(weather-channel-model-validity|rain).
You have just used Bayes to adjudicate between conflicting perspectives. There is no law stating that you can’t continue using those conflicting models. Maybe you have some reason to expect that P(rain|all-available-knowledge) actually ends up slightly more accurate when you include knowledge about entrails. Then you should continue to incorporate your knowledge about entrails, but also keep updating on the weight of its contribution to the final result.
(If I made a mistake in the above paragraphs, first consider the likelihood that it’s due to the difficulty of typing this kind of stuff into a text box, and don’t just assume that I’m wrong.)
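For concreteness, here is a minimal sketch of that weighting-and-updating scheme. The model names, probabilities, and prior weights are made-up placeholders, not anything from the thread:

```python
# Combine three sub-model predictions for P(rain) into P(rain | all available
# knowledge), weighting each by the prior confidence placed in that model,
# then update the weights by Bayes' rule once the outcome is observed.

def combine(predictions, weights):
    """Mixture of the sub-models' P(rain), weighted by confidence in each model."""
    return sum(w * p for p, w in zip(predictions, weights))

def update_weights(predictions, weights, rained):
    """Bayesian update of P(model validity | observed outcome)."""
    likelihoods = [p if rained else (1.0 - p) for p in predictions]
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [w / total for w in posterior]

# P(rain | statistical-model), P(rain | weather-channel-model), P(rain | entrails-model)
predictions = [0.30, 0.60, 0.50]
# Prior confidence in each sub-model.
weights = [0.45, 0.45, 0.10]

print(combine(predictions, weights))  # P(rain | all-available-knowledge)
weights = update_weights(predictions, weights, rained=True)
print(weights)  # weights shift toward the models that predicted the observed outcome
```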
Part of the reason Chapman’s article doesn’t land for me at all is that he somehow fails to see that interoperating between different systems of meaning and subjectivity is completely amenable to Bayesian thinking. Nobody ever said that intuition is not an important part of coming up with a Bayes network. Both the structure of the network and the priors you put into the network can come from nowhere other than intuition. I’m pretty sure this is mentioned in the Sequences. I feel like Scott defends Bayesianism really well from Chapman’s argument, and if you don’t agree, then I suspect it’s because Chapman and Scott might be talking past each other in some regards, with you thinking Chapman is saying something important where Scott doesn’t.
What Scott defends in that post isn’t the notion of a completely consistent belief net. In motte-and-bailey fashion, Scott defends claims that are less strong.
Chapman also wrote the more mathy followup post: https://meaningness.com/probability-and-logic
From Chapman’s model of the world, Scott defends Bayesianism as a level 4 framework against other frameworks that are also level 4 or lower (in the Kegan framework). A person who’s at developmental level 3 can’t simply go to level 5, but profits from learning a framework like Bayesianism that gives certain clear answers. From that perspective, the person likely needs a few years in that stage to be able to later grow out of it.
You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model.
Right. You don’t use your brain’s pattern-matching ability to pick the right model, but a quite simple probabilistic one. I think that’s likely a mistake. But I don’t know whether I can explain to you why I think so in a way that would convince you. That’s why I didn’t continue the other discussion.
Additionally, even when I think you are wrong, that doesn’t mean that nothing productive can come out of the belief net experiment.
What can a “level 5 framework” do, operationally, that is different than what can be done with a Bayes net?
Do well at problems that require developing an ontology to represent the problem, like Bongard problems (see Chapman’s post on metarationality).
I admit that I don’t understand what you’re actually trying to argue, Christian.
Yes, fully understanding it would likely mean that you need to spend time learning a new conceptual framework. It’s not as easy as simply picking up another mental trick.
But in this thread, my point isn’t to argue that everybody should adopt meta-rationality but to illustrate that it’s actually a different way of looking at the world.