I’m not sure what work the word “sophisticated” is doing here.
Let’s say the Greens advocate dispersing neurotoxins to eradicate all life on earth, and the Blues advocate not doing that. Is it “sophisticated” to say, “Well, there are certainly good arguments on both sides, for example if you assume this specific twisted utilitarian framework, or assume values that I don’t possess, then the Greens have some good points!”? That doesn’t seem sophisticated. That just indicates a pathological inability to “join a side” even when one side is the side your own ethics compel you to join, the side you want to join, the side you would benefit from joining, and the side others would benefit from your joining.
Also, what if you arrive at the party partway through, and the Green and the Blue have already spoken, and another Sophisticate has also spoken and indicated that “both sides have some good points, perhaps the answer is in the middle”? Are you allowed to just say, “I agree with the Sophisticate!”, or does that make you a “sophisticate partisan”, meaning you are obligated by the laws of being/appearing “sophisticated” to say, “Well, actually, the answer can’t be in the middle, a 50-50 split just seems improbable; the Greens are probably 25% right and the Blues are probably 75% right”?
What I’m getting at is that I’m not sure what the difference is between your usage of “sophisticated” and just being a contrarian.
You mention the attitudes implicit in certain styles of debate. I’ve written before about the dangers of certain styles of policy debate taught in American schools. I’ve always seen it as damaging that the point of US policy debate is to be able to argue from any position and against any position. It implicitly teaches the young mind that you can “win” an argument through cleverness and rule-lawyering without regard to whether your position is actually superior. The whole framework actively undermines the truthseeking mindset, because in a policy debate you’re not truthseeking: you’re trying to obfuscate your opponents’ inconvenient truths and distort the facts that support your own argument so that they appear more important than they are. In short, I think there’s definitely such a thing as “too much sophistication”, and I blame this type of sophistication for the fact that many of my former high school friends are now effectively insane.
Obviously I agree that it’s dangerous to identify with a school of thought. Political parties in particular are coalitions of disparate interest groups, so the odds that a group of people who are only aligned for historically contingent reasons are going to come up with uniformly correct conclusions are near zero. That doesn’t mean you can never be confident that you’re right about something.
Additionally, I think to the degree that LWers identify as Bayesian, they are mostly just acknowledging the superiority of the Bayesian toolkit, such as maintaining some notion of a probability distribution over beliefs rather than exclusive and inviolable belief-statements, updating beliefs incrementally based on evidence, etc. None of us are really Bayesians anyway, because a thorough and correct Bayesian network for computing something as simple as whether you should buy Hershey’s or Snickers would be computationally intractable.
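To make the “Bayesian toolkit” point concrete, here is a minimal sketch of maintaining a distribution over hypotheses and updating it incrementally. The coin-bias toy problem is my own invention, not anything from this discussion:

```python
# A toy "distribution over beliefs": three hypotheses about a coin's bias,
# updated incrementally on each observation.

def normalize(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

beliefs = {0.3: 1 / 3, 0.5: 1 / 3, 0.8: 1 / 3}  # uniform prior over biases

for flip in ["H", "H", "T", "H"]:  # evidence arrives one piece at a time
    beliefs = normalize({
        bias: p * (bias if flip == "H" else 1 - bias)
        for bias, p in beliefs.items()
    })

print(beliefs)  # no hypothesis is ever pinned to exactly 0 or 1
```

Even this three-hypothesis toy needs a likelihood for every hypothesis on every observation, which is the seed of the intractability complained about above.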
I imagined that the Greens want to neurotoxin the rats and the Blues want to do nothing and live with the rats. The Blues argue that neurotoxining would kill more than just rats, and the Greens argue that rats are uncomfortable to live with.
Complex positions then look like “kill rats with bullets” or “herd rats into zoos”. In thought experiments this would be “fighting the hypothetical”. But because this is about issues rather than about human groups taking sides, I think these kinds of issues are exactly where “fighting the framing” should be done. Greens will argue that “kill rats with bullets” will leave some wounded rats alive or let them escape. Blues will argue that “herd rats into zoos” diminishes rat quality of life. But we went from “rats or no rats” to “wiped-out rats, wounded rats, caged rats, or free-roaming rats”.
Sure, from one point of view the options are just a question of how much killing/oppressing we want to do, and the answer is: how about none? But one could also construct a viewpoint centered on restaurant health safety: the correct number of rats to have on your plate is 0, and any increase (keeping other things handled) is further failure. Answering whose concerns we care about means some people will get less preferential treatment than they would under other arrangements. But positions like “let’s neurotox 30% of the rats” are just compromises with no additional idea behind them. Reframings are probably not as synergistic with the poles, but not all compromises are sensible re-understandings of the field. “Centrism” is not inherently sophisticated.
My extension of the analogy might have been less than elegant, as it easily turns into gruesome territory if you replace the rats with any human group. But maybe it also highlights that it is easier to be sympathetic to health safety than to bigotry.
Let’s say the Greens advocate dispersing neurotoxins to eradicate all life on earth, and the Blues advocate not doing that. Is it “sophisticated” to say, “Well, there are certainly good arguments on both sides, for example if you assume this specific twisted utilitarian framework, or assume values that I don’t possess, then the Greens have some good points!”? That doesn’t seem sophisticated.
Both of those positions are expressible in a single sentence. Sophisticated positions on topics are generally complex enough that they aren’t expressible in a single sentence.
Saying “Here’s the 300-page bill describing what our policy on using neurotoxins on life on earth should look like” is more sophisticated.
Additionally, I think to the degree that LWers identify as Bayesian, they are mostly just acknowledging the superiority of the Bayesian toolkit, such as maintaining some notion of a probability distribution over beliefs rather than exclusive and inviolable belief-statements, updating beliefs incrementally based on evidence, etc.
There are cases where it’s useful to use probability when faced with uncertainty: namely, when you can define a specific test of what the world looks like when the belief is true and when it isn’t.
Many beliefs are too vague for such a test to exist. It doesn’t make sense to put a probability on “The function of the heart is to pump blood”. That belief doesn’t make a specific prediction. You could derive different predictions from the belief, and those predictions would likely have different probabilities.
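To illustrate that last point, here is a tiny sketch of how one vague belief fans out into several distinct predictions, each deserving its own probability. The predictions and credences are invented purely for this example:

```python
# The vague belief gets no single probability; specific predictions derived
# from it do. All statements and numbers below are invented for illustration.

vague_belief = "The function of the heart is to pump blood"

derived_predictions = {
    "If the heart stops, circulation stops within minutes": 0.99,
    "A mechanical pump can temporarily stand in for a heart": 0.95,
    "Nothing other than the heart moves any blood at all": 0.05,
}

for prediction, credence in derived_predictions.items():
    print(f"{credence:.2f}  {prediction}")
```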
At the same time it’s useful to have beliefs like “The function of the heart is to pump blood”.

ROFL…

tl;dr: KILL THEM ALL! …but if you want sophistication, here is a 300-page paper about how and why we should KILL THEM ALL!

It’s probably not the best example, but I stayed with the original example.
If “sophisticated” in this usage just means “complex”, I’m not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.
There may be a tendency for more complex and complicated positions to end up being better, because complexity is a signal that somebody spent a lot of time and effort on something, but Timecube is a pretty complex theory and I don’t count that as being a plus.
Complexity or “sophistication” can cut the other way just as easily, as somebody adds spandrels to a model to cover up its fundamental insufficiency.
At the same time it’s useful to have beliefs like “The function of the heart is to pump blood”.
I don’t know. I try to root out beliefs that follow that general form and replace them, e.g. “the heart pumps blood” is a testable factual statement, and a basic observation, which semantically carries all the same useful information without relying on the word “function”, which implies some kind of designed intent.
If “sophisticated” in this usage just means “complex”, I’m not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.
I haven’t argued that A is just better than B.
I try to root out beliefs that follow that general form
Yes, and I see that as a flaw that’s the result of thinking of everything in Bayesian terms.
“the heart pumps blood” is a testable factual statement, and a basic observation, which semantically carries all the same useful information
When the lungs expand, that process also leads to pumping of blood. Most processes that change the pressure somewhere in the body automatically pump blood as a result.
The fact that the function of the heart is to pump blood has more meaning than just that it pumps blood.
Many beliefs are too vague for such a test to exist. It doesn’t make sense to put a probability on “The function of the heart is to pump blood”. That belief doesn’t make a specific prediction. You could derive different predictions from the belief, and those predictions would likely have different probabilities.
Words are an imperfect information-transfer system that humans have evolved. To interact with reality we have to use highly imperfect information-terms and tie them together with correlated observations. It seems like you are arguing that the human brain is often dealing with too much uncertainty and information loss to tractably apply a probabilistic framework that requires clearer distinctions/classifications.
Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.
Again, this is sort of trivial, because all it’s saying is that ‘past information is probabilistically useful to the future.’ I think the fact that modern machine learning algorithms implement Bayesian learning only approximately should lead us to the conclusion that Bayesian reasoning is often intractable, but that in its purest form it’s simply the way to interpret reality.
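As a rough sketch of why the thorough version is intractable (my own arithmetic, just to show the growth rate): a full joint distribution over n binary factors needs 2^n − 1 parameters before any conditioning can even begin.

```python
# Exponential blowup of an exact joint distribution: 2**n - 1 independent
# probabilities over n binary factors. Factor counts chosen for illustration.

for n in [10, 30, 100]:
    print(f"{n:>3} factors -> {2**n - 1:.3e} parameters")
# 100 binary factors already demand ~1.3e30 parameters, hence approximation.
```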
Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.
David Chapman brings up the example of an algorithm that he wrote to solve a previously unsolved AI problem, one that worked without probability but with logic.
In biology people who build knowledge bases find it useful to allow storing knowledge like “The function of the heart is to pump blood”.
If I’m having a discussion on Wikidata with another person whether X is a subclass or an instance of Y, probability matters little.
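Here is a minimal sketch of the kind of knowledge base described above, using invented triples rather than the real Wikidata data model; note that nothing in the representation carries a probability:

```python
# Invented toy triples, not the actual Wikidata data model. The point is that
# the representation stores discrete relations, with no probability anywhere.

knowledge_base = [
    ("heart", "has_function", "pumping blood"),
    ("heart", "instance_of", "organ"),
    ("organ", "subclass_of", "anatomical structure"),
]

def facts_about(subject):
    return [(rel, obj) for subj, rel, obj in knowledge_base if subj == subject]

print(facts_about("heart"))
```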
I’m still having trouble with this.

A human mind is built out of nonlinear logic gates of various kinds. So even a belief like “the function of the heart is to pump blood” is actually composed of some network of neural connections that could be construed as interdependent probabilistic classification and reasoning via probabilistic logic. Or, at least, the human brain looks a lot more like “probabilistic classification and probabilistic reasoning” than it looks like “a clean algorithm for some kind of abstract formal logic”. (Assume all the appropriate caveats that we don’t actually compute probabilities; the human mind works correctly to the degree that it accidentally approximates Bayesian reasoning.)
Heck, any human you find actually using predicate calculus is using these neural networks of probabilistic logic to “virtualize” it.
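As a toy illustration of that claim (my own construction, not anything from this thread): a single sigmoid unit with hand-picked weights already behaves like a soft AND gate, so crisp-looking logic can be built from graded, probability-like signals.

```python
import math

# One sigmoid "neuron" with hand-picked weights acting as a soft AND gate:
# graded inputs in, a graded output that approximates Boolean AND.

def soft_and(x, y, weight=10.0, bias=-15.0):
    return 1 / (1 + math.exp(-(weight * x + weight * y + bias)))

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, round(soft_and(x, y), 3))
# near 0.0 everywhere except (1, 1), where the output is ~0.993
```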
Maybe probability matters little at the object level of your discussion, but that’s completely ignoring the fact that your brain’s assessment that X has quality Z which makes it qualify as a member of category Y is a probability assessment whether or not you choose to call it that.
I think Chapman is talking past the position that Jaynes is trying to take. You obviously can build logic out of interlinked probabilistic nodes, because that’s what we are.