Many beliefs are too vague for such a test to exist. It doesn’t make sense to put a probability on “The function of the heart is to pump blood”. That belief doesn’t pick out a specific prediction. You could derive different predictions from the belief, and those predictions would likely have different probabilities.
Words are an imperfect information-transfer system that humans have evolved. To interact with reality we have to use these highly imperfect terms and tie them together with correlated observations. It seems like you are arguing that the human brain is often dealing with too much uncertainty and information loss to tractably apply a probabilistic framework that requires clearer distinctions and classifications.
Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.
Again, this is sort of trivial, because all it’s saying is that ‘past information is probabilistically useful for predicting the future.’ I think the fact that modern machine learning algorithms are able to implement Bayesian learning should lead us to the conclusion that Bayesian reasoning is often intractable in practice, but that in its purest form it is simply the way to interpret reality.
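To make ‘past information is probabilistically useful for predicting the future’ concrete, here is a minimal sketch in Python (a made-up coin-flip example, not anything from Chapman or Jaynes) of a conjugate Bayesian update, where the tractable special case really is just counting:

```python
# Beta-Bernoulli update: past observations shift the probability we
# assign to the next observation.
def predictive_prob_heads(heads, tails, prior_a=1.0, prior_b=1.0):
    """Probability that the next flip lands heads, given past flips.

    With a Beta(prior_a, prior_b) prior, the posterior after the data is
    Beta(prior_a + heads, prior_b + tails); its mean is the predictive
    probability for the next flip.
    """
    a = prior_a + heads
    b = prior_b + tails
    return a / (a + b)

print(predictive_prob_heads(0, 0))  # 0.5  -- no data, prior alone
print(predictive_prob_heads(8, 2))  # 0.75 -- past flips move the prediction
```

The conjugate case is the easy one; the intractability shows up when the hypothesis space is large and the update can no longer be reduced to simple counts.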
David Chapman gives the example of an algorithm he wrote to solve a previously unsolved AI problem, one that worked with logic rather than probability.
In biology, people who build knowledge bases find it useful to be able to store knowledge like “The function of the heart is to pump blood”. If I’m having a discussion on Wikidata with another person about whether X is a subclass or an instance of Y, probability matters little.
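A minimal sketch of why probability barely enters at that level (the entities and property names here are illustrative, not real Wikidata identifiers): knowledge-base statements are discrete triples, so the subclass-versus-instance question is about which statement to record, not about a credence.

```python
# Toy triple store in the style of a knowledge base: facts are stored
# as (subject, property, value) with no probability attached.
statements = {
    ("heart", "has function", "pumping blood"),
    ("heart", "instance of", "organ"),      # is X an instance of Y ...
    ("organ", "subclass of", "body part"),  # ... or a subclass of it?
}

def holds(subject, prop, value):
    """Membership is a yes/no lookup, not a degree of belief."""
    return (subject, prop, value) in statements

print(holds("heart", "instance of", "organ"))   # True
print(holds("heart", "subclass of", "organ"))   # False
```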
I’m still having trouble with this.
A human mind is built out of nonlinear logic gates of various kinds. So even a belief like “the function of the heart is to pump blood” is actually implemented by some network of neural connections that could be construed as interdependent probabilistic classification and reasoning via probabilistic logic. Or, at least, the human brain looks a lot more like “probabilistic classification and probabilistic reasoning” than it looks like “a clean algorithm for some kind of abstract formal logic”. (Assume all the appropriate caveats: we don’t actually compute probabilities; the human mind works correctly to the degree that it accidentally approximates Bayesian reasoning.)
Heck, any human you find actually using predicate calculus is using these neural networks of probabilistic logic to “virtualize” it.
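To make the “virtualize” point concrete, here is a minimal sketch (my own illustration, not something from Chapman or Jaynes): a single sigmoid unit, the usual cartoon of a graded, noisy neuron, reproduces an AND gate almost exactly once its weights are large enough, so crisp logic can run on top of soft, probabilistic-looking machinery.

```python
import math

def soft_and(x1, x2, weight=10.0):
    """A graded 'neuron' whose output approaches logical AND as the
    weights grow: soft machinery virtualizing a crisp logic gate."""
    # The threshold of 1.5 * weight means the unit only fires strongly
    # when both inputs are on.
    activation = weight * x1 + weight * x2 - 1.5 * weight
    return 1.0 / (1.0 + math.exp(-activation))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(soft_and(a, b), 3))
# 0 0 0.0, 0 1 0.007, 1 0 0.007, 1 1 0.993 -- close to the AND truth table
```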
Maybe probability matters little at the object level of your discussion, but that ignores the fact that your brain’s assessment that X has quality Z, and therefore qualifies as a member of category Y, is a probability assessment whether or not you choose to call it that.
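A minimal sketch of what that implicit assessment amounts to (the numbers are made up purely for illustration): Bayes’ rule turning “X has quality Z” into a degree of belief that X belongs to category Y.

```python
def p_member_given_quality(p_z_given_member, p_z_given_nonmember, prior_member):
    """P(X is in category Y | X has quality Z) via Bayes' rule."""
    p_z = (p_z_given_member * prior_member
           + p_z_given_nonmember * (1.0 - prior_member))
    return p_z_given_member * prior_member / p_z

# Made-up numbers: quality Z is common among members of Y, rare otherwise.
print(p_member_given_quality(0.9, 0.05, prior_member=0.3))  # ~0.89
```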
I think Chapman is talking past the position that Jaynes is trying to take. You obviously can build logic out of interlinked probabilistic nodes, because that’s what we are.