Crosspost: Developing the middle ground on polarized topics
Crossposted from Otherwise
I was once in a group discussion about whether wild animals might be having net negative lives. One person didn’t want to consider that possibility, essentially because “then people would want to kill all the wild animals.”
Hold on! You can evaluate the question of “What is life like for wild animals” without jumping to “And if they’re having bad lives, we should try to kill them all.” There’s a kind of tunnel vision here, as if having a belief about facts must necessarily channel you to only one action.
If you want people to honestly consider “Is climate change real?”, it matters a lot whether the only options are “No” and “Yes, so you must stop using airplanes and clothes dryers,” or whether there are other possible responses.
I’d like to see more scout mindset here, figuring out what the facts might be before jumping to policy conclusions.
On the other hand, I get why people are alarmed when they realize they’re interfacing with someone who holds a belief that’s associated with policies they find appalling.
Bryan Caplan on the kinds of people who want to discuss IQ:
“I’ve got to admit: My fellow IQ realists are, on average, a scary bunch. People who vocally defend the power of IQ are vastly more likely than normal people to advocate extreme human rights violations.”
And people with beliefs that others find horrifying might not admit to the most unpopular of their beliefs.
So I can see why onlookers who see someone advocating idea X might say: “Sure, they only mentioned X, but people who support X often turn out to support Y and even Z. Read between the lines!” If someone voices “IQ is real and important,” you should have a higher prior that they might support human rights violations on that basis. This is especially true if you don’t know them and don’t have time to evaluate what they’ve said and written in the past.
Another approach is “Let’s not judge people guilty by association. There’s nothing inherently wrong with believing X. They didn’t say anything about Y or Z, or maybe they even argue against Y and Z.”
This can be a more useful approach when someone has an extensive history of public writing and speaking that indicates they’re not into human rights violations, etc.
The more polarized an idea is, the harder it will be to think clearly about it. Often for controversial belief X, the spread looks something like this: a few people loudly arguing “X is false,” a few loudly arguing “X is true, so we must do Y,” and a mostly silent middle in between.
If you want to explore the facts on X, it’s especially hard because neutral people don’t research the topic. Much of the evidence is collected by people with strong feelings in one direction or the other.
But I think there are often a fair number of people in that silent middle zone.
I’d like to have more people saying “X is an important topic, and I want to form a clearer picture of it.” This might allow people to explore a range of other responses (G, W, R, or no action at all) rather than only Y.
Kelsey Piper’s piece “Can we be actually normal about birthrates?” is an example of this:
“There’s something that feels ugly around proclamations about what the population or the birth rate “should” be — especially given the horrific history of mass sterilizations conducted in the name of “fixing” high birth rates for the sake of the world. . . .
What I want is a cultural and policy conversation about how to support families that starts by addressing these problems, beginning with simple premises I think most people agree on: that having children can be awesome and a source of great joy and meaning in life, though it’s far from the only source of joy and meaning in life; that we could do a lot more to build communities in which children are supported, welcomed, and have meaningful independence; that people who don’t want kids shouldn’t have them but that people who do want kids should be supported in making that a priority.”
I also value it when people say “Hey, I believe X and firmly reject Y.”
Caplan’s post on intelligence continues:
“If someone says, ‘I’m more intelligent than other people, so it’s acceptable for me to murder them,’ the sensible response isn’t, ‘Intelligence is a myth.’ The sensible response is, ‘Are you mad? That doesn’t justify murder.’
….here’s what I say to every IQ realist who forgets common decency: You embarrass me. You embarrass yourself.”
Exploring the middle zone, even privately, won’t be a good fit for everyone. It’s reasonable that a lot of people won’t want to spend their energy or their weirdness points on this. Declining to develop an opinion on whether dragons exist is often the option that lets you move ahead with your life and spend less time in internet arguments.
But I’m sad about that. And I appreciate it when people like Kelsey Piper and Bryan Caplan say “Are you mad?” to the people proposing awful things, and explore other ways forward.
Ever since the situation with Blanchardianism, I’ve become extremely bearish on the possibility of this, considering how everyone involved, including rationalists on all sides of the debate, just massively failed at it.
With IQ realism, you also get insane stuff where the supposedly reasonable middle ground tends to have skepticism about the g factor and thinks of IQ as an index that sums together cognitive abilities.
I haven’t thought of this in relation to wild animal welfare or birthrates but I don’t immediately see the argument that we can outperform the abysmal track record seen in these two other cases.
Is this part not technically true? IQ tests tend to have a bunch of subtests intended to measure different cognitive abilities, and you add up your scores on each subtest (or average them, which is just adding up and dividing by a constant).
That’s part of the problem: often the bad middle ground looks superficially plausible, so it’s very sticky and hard to get rid of, because it’s not exactly that people get told the wrong things but rather that they spontaneously develop the wrong ideas.
The three basic issues with this viewpoint are:
IQ test batteries do not measure even close to all cognitive abilities and realistically could never do that.
Many of the abilities that IQ scores weight highly are practically unimportant.
Differential-psychology tests are in practice more like log scales than like linear scales, so “sums” are more like products than like actual sums; even if you are absurdly good at one thing, you’re going to have a hard time competing on IQ with someone who is moderately better at many things.
Well, the original statement was “sums together cognitive abilities” and didn’t use the word “all”, and I, at least, saw no reason to assume it. If you’re going to say something along the lines of “Well, I’ve tried to have reasonable discussions with these people, but they have these insane views”, that seems like a good time to be careful about how you represent those views.
Are you talking about direct measurement, or what they correlate with? Because, certainly, things like anagramming a word have almost no practical application, but I think it’s intended to (and does) correlate with language ability. But in any case, the truth value of the statement that IQ is “an index that sums together cognitive abilities” is unaffected by whether those abilities are useful ones.
Perhaps you have some idea of a holistic view, of which that statement is only a part, and maybe that holistic view contains other statements which are in fact insane, and you’re attacking that view, but… in the spirit of this post, I would recommend confining your attacks to specific statements rather than to other claims that you think correlate with those statements.
I wonder how large a difference this makes in practice. So if we run with your claim here, it seems like your conclusion would be… that IQ tests combine the subtest scores in the wrong way, and are less accurate than they should be for people with very uneven abilities? Is that your position? At any rate, even if the numbers are logarithms, it’s still correct to say that the test is adding them up, and I don’t consider that good grounds for calling it “insane” when people treat it as addition.
The correlations are the important part.
A popular competitor to IQ is the theory of multiple intelligences. It sounds very nice and plausible; the only problem is that the actual data do not support the theory. When you measure them, most of the intelligences correlate strongly with each other, and the ones that correlate less are the ones that stretch the meaning of “intelligence” a bit too far (things like “dancing intelligence”).
Another problem is that no one agrees on the standard list of those multiple intelligences (different lists of various lengths have been proposed), because all those lists are a result of armchair reasoning. The proper way to do that would be to collect lots of data first, and then do factor analysis and see what you get as a result. But when you actually do that, what you get is… IQ.
If we think of the quantified abilities as the logarithms of the true abilities, then taking the log has likely massively increased the correlations by bringing the outliers into the bulk of the distribution.
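A minimal simulation of this point (my own toy numbers, assuming the “true” abilities are roughly lognormal around a shared factor):

```python
import numpy as np

# Toy model: two "true" abilities sharing a common factor, heavy-tailed on the
# raw scale and roughly normal after taking logs. All numbers are made up.
rng = np.random.default_rng(0)
n = 100_000
g = rng.normal(size=n)                           # shared factor
log_a = g + rng.normal(size=n)                   # ability 1 on the log scale
log_b = g + rng.normal(size=n)                   # ability 2 on the log scale
a, b = np.exp(1.5 * log_a), np.exp(1.5 * log_b)  # heavy-tailed "true" abilities

print(np.corrcoef(log_a, log_b)[0, 1])  # correlation of the logged scores (~0.5)
print(np.corrcoef(a, b)[0, 1])          # correlation of the raw abilities (much lower)
```

Taking the log pulls the outliers back toward the bulk of the distribution, and the measured correlation goes up accordingly.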
(Fun tangent, not directly addressing this argument thread.)
There’s a trio of great posts from 2015 by @JonahS: The Truth About Mathematical Ability; Innate Mathematical Ability; Is Scott Alexander bad at math?, which (among other things) argues that you can be “good at math” along the dimension(s) of noticing patterns very quickly, AND/OR you can be “good at math” along the dimension(s) of an “aesthetic” sense for concepts being right and sensible. (My summary, not his.)
The “aesthetics” is sorta a loss function that provides a guidestar for developing good deep novel understanding—but that process may take a very long time. He offers Scott Alexander, and himself, and Alexander Grothendieck as examples of people with lopsided profiles—stronger on “aesthetics” than they are on “fast pattern-recognition”.
I found it a thought-provoking hypothesis. I wish JonahS had written more.
The analogy that I’m objecting to is this: if you look at, e.g., the total for a ledger or a budget, that total is an index that sums together expenses in a much more straightforward way. For instance, if there is a large expense, the total is large.
Meanwhile, IQ scores are more like the geometric mean of the entries in such a ledger. The geometric mean tells you whether the individual items tend to be large or small, which gives you broad-brush information that distinguishes e.g. people who live in high-income countries from people who live in low-income countries, or large organizations from individual people; but it won’t inform you if someone got hit by a giant medical bill or if they managed to hack themselves to an ultra-cheap living space. These pretty much necessarily have to be low-rank mediators (like in the g model) rather than diverse aggregates (like in the sum model).
(Well, a complication in this analogy is that a ledger can vary not just in the magnitude of the transfers but also qualitatively in the kinds of transfers that are made, whereas IQ tests fix the variables, making it more analogous to a standardized budget form (e.g. for tax or loan purposes) broken down by stuff like “living space rent”, “food”, “healthcare”, etc.)
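As a rough illustration of the ledger analogy (my own made-up budget numbers): adding one giant medical bill multiplies the ledger-style sum by roughly an order of magnitude, while a geometric-mean-style index only about doubles.

```python
import numpy as np

# Made-up budgets: the same baseline expenses, with and without one giant bill.
baseline  = np.array([1200, 400, 150, 100, 80])           # rent, food, transport, ...
with_bill = np.array([1200, 400, 150, 100, 80, 20_000])   # plus a giant medical bill

for budget in (baseline, with_bill):
    total = budget.sum()                   # ledger-style sum: reacts strongly
    geo = np.exp(np.log(budget).mean())    # geometric-mean-style index: reacts weakly
    print(f"sum = {total}, geometric mean = {geo:.0f}")
```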
So, the arithmetic and geometric mean agree when the inputs are equal, and, the more unequal they are, the lower the geometric mean is.
I note that the subtests have ceilings, which puts a limit on how much any one can skew the result. Like, if you have 10 subtests, and the max score is something like 150, then presumably each test has a max score of 15 points. If we imagine someone gets five 7s and five 13s (a moderately unbalanced set of abilities), then the geometric mean is 9.54, while the arithmetic mean is 10. So, even if someone were confused about whether the IQ test was using a geometric or an arithmetic mean, does it make a large difference in practice?
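A quick check of that arithmetic, using the same numbers as the example above:

```python
import numpy as np

scores = np.array([7] * 5 + [13] * 5)   # the moderately unbalanced profile above
print(scores.mean())                    # arithmetic mean: 10.0
print(np.exp(np.log(scores).mean()))    # geometric mean: ~9.54 (= sqrt(7 * 13))
```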
The people you’re arguing against, is it actually a crux for them? Do they think IQ tests are totally invalid because they’re using an arithmetic mean, but actually they should realize it’s more like a geometric mean and then they’d agree IQ tests are great?
If you consider the “true ability” to be the exponential of the subtest scores, then the extent to which the problem I mention applies depends on the base of the exponential. In the limiting case where the base goes to infinity, only the highest ability matters, whereas in the limiting case where the base goes to 1, you end up with something basically linear.
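A small sketch of that limiting behavior, with made-up score profiles: for a base near 1 the evenly moderate profile (which has the larger plain sum) comes out ahead, while as the base grows the single outstanding score comes to dominate.

```python
import numpy as np

# Made-up profiles: one spiky (a single outstanding subtest) vs. one even.
spiky = np.array([18, 7, 7, 7, 7])       # plain arithmetic sum: 46
even  = np.array([10, 10, 10, 10, 10])   # plain arithmetic sum: 50

for base in (1.01, 1.05, 1.1, 1.5, 2.0):
    spiky_total = np.sum(base ** spiky)  # "true abilities" = base ** score, summed
    even_total  = np.sum(base ** even)
    winner = "spiky" if spiky_total > even_total else "even"
    print(f"base {base}: {winner} profile has the larger total")
```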
As for whether it’s a crux, approximately nobody has thought about this deeply enough that they would recognize it, but I think it’s pretty foundational for a lot of disagreements about IQ.
This might be a good (if controversial) example of “the reality is more complicated than typical simplifications, and it matters what your oversimplification is leaving out”.
And Blanchard’s account of autogynephilia is more nuanced than most people’s second-hand version of it. Like, e.g., Blanchard doesn’t think trans men have AGP, and doesn’t think trans women who are attracted to men have AGP.
So, we might, say…
Oversimplification 1: Even Blanchard didn’t try to apply his theory to trans men or trans women attracted to men.
Oversimplification 2: Bisexuals exist. Many trans women report their sexual orientation changing when they start taking hormones. The correlation between having AGP and being attracted to women can’t be as close to 100% as Blanchard appears to believe it is.
Oversimplification 3: It looks like Blanchard only identified two subtypes of trans person and completely missed some of the other subtypes.
Oversimplification 4: Do heterosexual cisgender women have AGP? (Cf. comments by Aella, eigenrobot, etc.) If straight cisgender women also like being attractive in the same way as (some) trans women do, it becomes somewhat doubtful that it’s a pathology.
Your post is an excellent example of how the supposedly-reasonable middle ground tends to be so clueless as to be plausibly worse than the extremes.
You mean AAP here, right?
He accepts autohomoeroticism, which is close enough to AAP that the difference doesn’t matter. The real problem here is Michael Bailey, who has a sort of dogmatic denial of AAP.
That’s pretty common in people’s second-hand version; the real issue here is that this is sometimes wrong and some androphiles are AGP.
Blanchard explicitly measured that some trans women identified as bisexual, and argued that they were autogynephilic and not truly bisexual. There are some problems with that assertion, but uncovering those problems really ought to engage with more of the nuances than what you imply here.
According to qualitative studies I’ve done, around 15% of women are at least somewhat AGP (though I think it correlates with being bi/lesbian), but the assertion that this implies it’s not a pathology for males seems like magical thinking. E.g. ~100% of women have breasts, but this does not mean that developing breasts would not be considered a pathology for males.
I will take “actually, it’s even more complicated” as a reasonable response. Yes, it probably is.
What I don’t get is, why do you have this impulse to sanewash the sides in this discussion?
Candidate explanations for some specific person being trans could as easily be that they are sexually averse, rather than that they are turned on by presenting as their preferred gender. Compare anorexia nervosa, which might have some parallel with some cases of gender identity disorder. If the patient is worrying about being gender-nonconforming in the same way that an anorexic worries that they’re fat, then Blanchard is just completely wrong about what the condition even is in that case.
I interpret the main argument as:
You cannot predict the direction of policy that would result from certain discussions/beliefs
The discussions improve the accuracy of our collective world model, which is very valuable
Therefore, we should have the discussions first and worry about policy later.
I agree that in many cases there will be unforeseen positive consequences as a result of the improved world model, but in my view, it is obviously false that we cannot make good directionally-correct predictions of this sort for many X. And the negative will clearly outweigh the positive for some large group in many cases. In that case, the question is how much you are willing to sacrifice for the collective knowledge.
If you want to highlight people who handle this well, the only interesting case is people from group A in favor of discussing X where X is presumed to lead to Y and Y negatively impacts A. Piper’s X has a positive impact on her beliefs (discussing solutions to falling birthrates as one who believes it is a problem), and Caplan’s X has a positive impact on him (he is obviously high IQ), so neither is an interesting sample. There is no reason for either of these to inherently want to avoid discussing these X. Even worse, Caplan’s rejected “Y” is a clear strawman, which begs the question and actually negatively updates me on his beliefs. More realistic Ys are things like IQ-based segregation, resource allocation, reproductive policies, etc.
If I reject these Ys for ideological reasons, and the middle ground looks like what I think it looks like, I do not want to expose the middle ground.