Let’s say the Greens advocate dispersing neurotoxins to eradicate all life on earth, and the Blues advocate not doing that. Is it “sophisticated” to say, “Well, there are certainly good arguments on both sides, for example if you assume this specific twisted utilitarian framework, or assume values that I don’t possess, then the Greens have some good points!”? That doesn’t seem sophisticated.
Both of those positions are expressible in a single sentence. Sophisticated positions on topics are generally complex enough that they aren’t expressible in a single sentence.
Saying, “Here’s the 300-page bill laying out what our policy on using neurotoxins on life on earth should look like,” is more sophisticated.
Additionally, I think to the degree that LWers identify as Bayesian, they are mostly just acknowledging the superiority of the Bayesian toolkit, such as maintaining some notion of a probability distribution over beliefs rather than exclusive and inviolable belief-statements, updating beliefs incrementally based on evidence, etc.
There are cases where it’s useful to use probability when faced with uncertainty: namely, when you can define a specific test of what the world looks like when the belief is true and when it isn’t.
Many beliefs are too vague for such a test to exist. It doesn’t make sense to put a probability on “The function of the heart is to pump blood”. That belief doesn’t make a specific prediction. You could create different predictions based on the belief, and those predictions would likely have different probabilities.
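To make that concrete, here is a minimal Python sketch of how a single “function” belief fans out into several distinct, separately testable predictions, each of which would get its own probability; the specific claims and numbers are invented for illustration, not taken from the discussion.

```python
# Toy sketch: one vague "function" belief yields several distinct predictions,
# and each prediction gets its own probability. All numbers are invented.

belief = "The function of the heart is to pump blood"

# Different operationalizations of the same belief make different predictions:
predictions = {
    "If the heart stops, circulation stops within minutes": 0.99,
    "A mechanical pump substituted for the heart can maintain circulation": 0.95,
    "Every anatomical feature of the heart exists in order to pump blood": 0.30,
}

for claim, p in predictions.items():
    print(f"{p:.2f}  {claim}")
```

No single one of these numbers is “the probability” of the original belief, which is the point being made here.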
At the same time it’s useful to have beliefs like “The function of the heart is to pump blood”.
ROFL...
tl;dr: KILL THEM ALL! …but if you want sophistication, here is a 300-page paper about how and why we should KILL THEM ALL!
It’s probably not the best example, but I stayed with the original example.
If “sophisticated” in this usage just means “complex”, I’m not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.
There may be a tendency for more complex positions to end up being better, because complexity is a signal that somebody spent a lot of time and effort on something, but Timecube is a pretty complex theory and I don’t count that as a plus.
Complexity or “sophistication” can cut the other way just as easily, as somebody adds spandrels to a model to cover up its fundamental insufficiency.
At the same time it’s useful to have beliefs like “The function of the heart is to pump blood”.
I don’t know. I try to root out beliefs that follow that general form and replace them, e.g. “the heart pumps blood” is a testable factual statement, and a basic observation, which semantically carries all the same useful information without relying on the word “function” which implies some kind of designed intent.
If “sophisticated” in this usage just means “complex”, I’m not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.
I haven’t argued that A is just better than B.
I try to root out beliefs that follow that general form
Yes, and I see that as a flaw that’s the result of thinking of everything in Bayesian terms.
“the heart pumps blood” is a testable factual statement, and a basic observation, which semantically carries all the same useful information
When the lungs expand, that process also pumps blood. Most processes that change the pressure somewhere in the body automatically pump blood as a result.
The fact that the function of the heart is to pump blood has more meaning than just that it pumps blood.
Many beliefs are too vague for such a test to exist. It doesn’t make sense to put a probability on “The function of the heart is to pump blood”. That belief doesn’t make a specific prediction. You could create different predictions based on the belief, and those predictions would likely have different probabilities.
Words are an imperfect information-transfer system that humans have evolved. To interact with reality, we have to use highly imperfect information-terms and tie them together with correlated observations. It seems like you are arguing that the human brain is often dealing with too much uncertainty and information loss to tractably apply a probabilistic framework that requires clearer distinctions/classifications.
Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.
Again, this is sort of trivial, because all it’s saying is that ‘past information is probabilistically useful to the future.’ I think the fact that modern machine learning algos are able to implement Bayesian learning parameters should lead us to the conclusion that Bayesian reasoning is often intractable, but in its purest form it’s simply the way to interpret reality.
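As a toy illustration of what “updating beliefs incrementally based on evidence” cashes out to, here is a minimal Python sketch of repeated Bayesian updating; the hypothesis is left abstract and the likelihood numbers are made up.

```python
# Minimal sketch of incremental Bayesian updating: keep a probability for a
# hypothesis H rather than a binary belief, and revise it as evidence arrives.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One application of Bayes' rule: returns P(H | E)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

belief = 0.5  # start out undecided about H
observations = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]  # (P(E|H), P(E|~H)) per observation

for p_e_h, p_e_not_h in observations:
    belief = update(belief, p_e_h, p_e_not_h)
    print(f"updated P(H) = {belief:.3f}")
```

Each observation moves the number a little; nothing in the sketch requires the belief to ever become exactly 0 or 1.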
Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.
David Chapman gives the example of an algorithm he wrote to solve a previously unsolved AI problem, one that worked with logic rather than probability.
In biology, people who build knowledge bases find it useful to be able to store knowledge like “The function of the heart is to pump blood”.
If I’m having a discussion on Wikidata with another person about whether X is a subclass or an instance of Y, probability matters little.
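For readers who haven’t run into it, here is a toy Python sketch of the subclass-of versus instance-of distinction as it is used in knowledge bases such as Wikidata; the items and statements are illustrative stand-ins, not actual Wikidata entries.

```python
# Toy knowledge base: "subclass of" relates a class to a broader class,
# while "instance of" relates an individual to a class.

triples = [
    ("house cat", "subclass of", "mammal"),     # class-to-class
    ("Felix",     "instance of", "house cat"),  # individual-to-class
]

def derived_memberships(triples):
    """Individuals also count as members of superclasses of their class (one step)."""
    subclass = [(s, o) for s, p, o in triples if p == "subclass of"]
    instance = [(s, o) for s, p, o in triples if p == "instance of"]
    derived = set(instance)
    for item, cls in instance:
        for sub, sup in subclass:
            if cls == sub:
                derived.add((item, sup))
    return derived

print(derived_memberships(triples))  # Felix is a house cat and therefore a mammal
```

Which of the two relations is the right one for a given X and Y is the kind of discrete modelling choice the comment above has in mind.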
I’m still having trouble with this.
A human mind is built out of nonlinear logic gates of various kinds. So even a belief like “the function of the heart is to pump blood” is actually composed of some network of neural connections that could be construed as interdependent probabilistic classification and reasoning via probabilistic logic. Or, at least, the human brain looks a lot more like “probabilistic classification and probabilistic reasoning” than it looks like “a clean algorithm for some kind of abstract formal logic”. (Assume all the appropriate caveats that we don’t actually compute probabilities; the human mind works correctly to the degree that it accidentally approximates Bayesian reasoning.)
Heck, any human you find actually using predicate calculus is using these neural networks of probabilistic logic to “virtualize” it.
Maybe probability matters little at the object level of your discussion, but that’s completely ignoring the fact that your brain’s assessment that X has quality Z, which makes it qualify as a member of category Y, is a probability assessment whether or not you choose to call it that.
I think Chapman is talking past the position that Jaynes is trying to take. You obviously can build logic out of interlinked probabilistic nodes, because that’s what we are.
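As a toy illustration of “building logic out of interlinked probabilistic nodes”, here is a minimal Python sketch of a single sigmoid unit wired up as an AND gate; the threshold and gain are arbitrary choices, and real neurons are of course far messier.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def noisy_and(a: int, b: int, gain: float = 10.0) -> float:
    """Probability that the unit fires; larger gain makes the gate crisper."""
    return sigmoid(gain * (a + b - 1.5))  # fires only when both inputs are on

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(noisy_and(a, b), 3))
```

With a modest gain the outputs are already close to 0 and 1, which is the sense in which probabilistic nodes can “virtualize” crisp logic.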