Weird, this comment thread doesn’t link to our prior discussion; there must be some kind of mistake. =)
A Bayes net can have whatever nodes I think it should have, based on my intuition. Nobody ever suggested that the nodes of a man-made Bayes net come from anywhere except intuition in the first place.
If I am trying to predict the outcome of some specific event, I can factor in as many “conflicting perspectives” as I want, again using my intuition to decide how to incorporate them.
I want to predict whether it will rain tomorrow. I establish one causal network based on a purely statistical model of rainfall frequency in my area. I establish a second causal network which just reflects whatever the Weather Channel predicts. I establish a third causal network that incorporates astrological signs and the reading of entrails to predict whether it will rain. You end up with three nodes: P(rain|statistical-model), P(rain|weather-channel-model), and P(rain|entrails-model). You then feed all three into a final P(rain|all-available-knowledge), weighting the influence of each of the three submodels according to your prior confidence in that model. In other words, when you verify whether it actually rains tomorrow, you perform a Bayesian update on P(statistical-model-validity|rain), P(weather-channel-model-validity|rain), and P(entrails-model-validity|rain).
You have just used Bayes to adjudicate between conflicting perspectives. There is no law stating that you can’t continue using those conflicting models. Maybe you have some reason to expect that P(rain|all-available-knowledge) actually ends up slightly more accurate when you include knowledge about entrails. Then you should continue to incorporate your knowledge about entrails, but also keep updating the weight of its contribution to the final result.
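To make the combination rule concrete, here is a minimal sketch in Python of the scheme described above. The specific probabilities, weights, and function names are all illustrative assumptions, not anything specified in the thread.

```python
# Sketch of the three-submodel scheme: mix P(rain|model) by prior
# confidence in each model, then Bayes-update the confidences once
# we observe whether it actually rained. All numbers are made up.

model_predictions = {          # P(rain | model) from each submodel
    "statistical": 0.30,
    "weather_channel": 0.45,
    "entrails": 0.50,
}

weights = {                    # P(model): prior confidence, sums to 1
    "statistical": 0.55,
    "weather_channel": 0.40,
    "entrails": 0.05,
}

def p_rain(predictions, weights):
    """P(rain | all-available-knowledge) = sum over models of
    P(rain | model) * P(model)."""
    return sum(predictions[m] * weights[m] for m in predictions)

def update_weights(predictions, weights, rained):
    """Posterior model weights after observing the outcome:
    P(model | outcome) is proportional to P(outcome | model) * P(model)."""
    likelihood = {m: (p if rained else 1.0 - p)
                  for m, p in predictions.items()}
    unnormalized = {m: likelihood[m] * weights[m] for m in weights}
    total = sum(unnormalized.values())
    return {m: w / total for m, w in unnormalized.items()}

print(p_rain(model_predictions, weights))   # combined forecast, ~0.37
weights = update_weights(model_predictions, weights, rained=True)
print(weights)   # the entrails model's weight moves with each outcome
```

Run over many days, the weights drift toward whichever submodels predict well, which is exactly the “keep updating the weight of its contribution” step.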
(If I made a mistake in the above paragraphs, first consider the likelihood that it’s due to the difficulty of typing this kind of stuff into a text box, and don’t just assume that I’m wrong.)
Part of the reason Chapman’s article doesn’t land for me at all is that he somehow fails to see that interoperating between different systems of meaning and subjectivity is completely amenable to Bayesian thinking. Nobody ever said that intuition is not an important part of coming up with a Bayes network. Both the structure of the network and the priors you put into it can come from nowhere other than intuition; I’m pretty sure this is mentioned in the Sequences. I feel like Scott defends Bayesianism really well against Chapman’s argument, and if you don’t agree, I suspect it’s because Chapman and Scott are talking past each other in some respects, with you thinking Chapman is saying something important that Scott doesn’t address.
What Scott defends in that post isn’t the notion of a completely consistent belief net. In motte-and-bailey fashion, Scott defends weaker claims.
Chapman also wrote the more mathy follow-up post: https://meaningness.com/probability-and-logic
In Chapman’s model of the world, Scott defends Bayesianism as a level 4 framework against other frameworks that are also level 4 or lower (in the Kegan framework). A person at developmental stage 3 can’t simply jump to level 5, but profits from learning a framework like Bayesianism that gives clear answers. From that perspective, the person likely needs a few years in that stage to be able to grow out of it later.
You then feed all three into a final P(rain|all-available-knowledge), weighting the influence of each of the three submodels according to your prior confidence in that model.
Right. You don’t use your brain’s pattern-matching ability to pick the right model; you use a quite simple probabilistic rule instead. I think that’s likely a mistake. But I don’t know whether I can explain why I think so in a way that would convince you. That’s why I didn’t continue the other discussion.
Additionally, even when I think you are wrong, that doesn’t mean that nothing productive can come out of the belief-net experiment.
What can a “level 5 framework” do, operationally, that is different than what can be done with a Bayes net?
Do well at problems that require developing an ontology to represent the problem, like Bongard problems (see Chapman’s post on meta-rationality).
I admit that I don’t understand what you’re actually trying to argue, Christian.
Yes, fully understanding it would likely mean that you need to spend time absorbing a new conceptual framework. It’s not as easy as simply picking up another mental trick.
But in this thread, my point isn’t to argue that everybody should adopt meta-rationality but to illustrate that it’s actually a different way of looking at the world.