If we take the example of the HBD argument, one issue that arises is that it centers latent variables which the interlocutor might not believe are observable (namely, innate racial differences). If these latents aren’t observable, then either the argument is nonsense, or the real origin for the conclusion is something else (for the case of HBD, an obvious candidate would be a lot of work that has attempted and failed to achieve racial equity, indicating that whatever causes the gaps must be quite robust). Thus from the interlocutors’ perspective, the three obvious options would be to hear how one could even come to observe these latents, to hear what the real origin for the conclusion is, or to just dismiss the conclusion as baseless speculations.
The second option, trying to uncover the real origin of the conclusion, is obviously the best of the three. It is also most in line with canonical works like Is That Your True Rejection?
But it belongs to the older paradigm of rationalist thinking; the one that sought to examine motivated cognition and discover the underlying emotional drives (ideally with delicate sensitivity), whereas the new paradigm merely stigmatizes motivated cognition and inadvertently imposes a cultural standard of performativity, in which we are all supposed to pretend that our thinking is unmotivated. The problems with present rationalist culture would stand out like a glowing neon sign to old-school LessWrongers, but unfortunately there are not many of these left.
I think it also depends. If you are engaging in purely political discourse, then sure, this is correct. But e.g. if you’re doing a good-faith project to measure latent variables, such that the latent variables are of primary interest and the political disputes are of secondary interest, then having people around who are postulating elaborate latent variable models in order to troll their political opponents is distracting. At best, they could indicate that there is a general interest in measuring the sort of latent variables they talk about, and so they could be used as inspiration for what to develop measures on, but at worst they could interfere with the research project by propagating myths.
They are not doing it in order to troll their political opponents. They are doing it out of scientism and loyalty to enlightenment aesthetics of reason and rationality, which just so happens to entail an extremely toxic stigma against informal reasoning about weighty matters.
Sort of true, but it seems polycausal; the drive to troll their political opponents makes them willing to push misleading and bad-faith arguments, whereas the scientism makes them disguise those bad arguments in scientific-sounding terms.
Both causes seem toxic to good-faith attempts at building good measurement though. Like you’re correct that the scientism has to be corrected too, but that can be handled much more straightforwardly if they’re mostly interested in the measurement project than if they are interested in political discourse.
The measuring project is symptomatic of scientism and is part of what needs to be corrected.
That is what I meant when I said that the HBD crowd is reminiscent of utilitarian technocracy and progressive-era eugenics. The correct way of handling race politics is to take an inventory of the current situation by doing case studies and field research, and to develop a no-bullshit commonsense executive-minded attitude for how to go about improving the conditions of racial minorities from where they’re currently at.
Obviously, more policing is needed, so as to finally give black business-owners in black areas a break and let them develop without being pestered by shoplifters, riots, etc. Affirmative action is not working, and nor is the whole paradigm of equity politics. Antidiscrimination legislation was what crushed black business districts that had been flourishing prior to the sixties.
Whether the races are theoretically equal in their genetic potential or not is utterly irrelevant. The plain fact is that they are not equal at present, and that is not something you need statistics in order to notice. If you are a utopian, then your project is to make them achieve their full potential as constrained by genetics in some distant future, and if they are genetically equal, then that means you want equal outcomes at some point. But this is a ridiculous way of thinking, because it extrapolates your policy goals unreasonably far into the future, never mind that genetic inequalities do not constrain long-term outcomes in a world that is rapidly advancing in genetic engineering tech.
The scientistic, statistics-driven approach is clearly the wrong tool for the job, as we can see from just looking at what outcomes it has achieved. Instead it is necessary to have human minds thinking reasonably about the issue, instead of trying to replace human reason with statistics “carried on by steam” as Carlyle put it. These human minds thinking reasonably about the issue should not be evaluating policies by whether they can theoretically be extrapolated to some utopian outcome in the distant future, but simply by whether they actually improve things for racial minorities or not. This is one case where we could all learn something from Keynes’ famous remark that “in the long run, we are all dead”.
In short: scientism is the issue, and statistics by steam are part of it. Your insistence on the measurement project over discussing the real issues is why you do not have much success with these people. You are inadvertently perpetuating the very same stigma on informal reasoning about weighty matters that is the cause of the issue.
This is correct as an analysis of racial politics, but you end up with latent variable measurement projects for multiple reasons. In the particular case of cognitive abilities, there’s also cognitive disability, military service, hiring, giftedness, cognitive decline, genomics and epidemiology, all of which have interest in the measurement of cognitive abilities. Furthermore, the theory and tools for cognitive abilities can be informed by, and are informative for, the measurement of other latent variables, so you also end up with interest from people who study e.g. personality.
People should do serious research to inform their policy, and they should do serious research on latent variables, but they should avoid using disingenuous arguments where they talk like they’ve done serious research when really they’re trying to troll.
No, the reasoning generalises to those fields too. The problem with those areas driving their need to have measurement of cognitive abilities is excessive bureaucratisation and lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee’s mental capacities from a short interview, and can even find out a lot of useful details that are not going to be covered by an IQ test. For example, mental health, maturity, and capacity to handle responsibility.
Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test, they only score 130 or so. What am I supposed to do with this information? Granted, it’s pretty rare — normally the IQ would reflect my estimation of their brilliance, but in such cases, it adds no new information. But if the score does not match the person’s actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but an experienced human judgement is considerably more accurate still.
Of course, individualised judgements of this sort are vulnerable to various failure modes, which is why large corporations and organizations like the military are interested in giving IQ tests instead. But this is often a result of regulatory barriers or other hindrances to simply requiring your job interviewers to avoid those failure modes and holding them accountable for doing so, with the risk of demotion or termination if their department becomes corrupt and/or grossly incompetent.
This issue is not particular to race politics. It is a much more general matter of fractal monarchy vs procedural bureaucracy.
Edit: or, if you want a more libertarian friendly version, it is a general matter of subsidiarity vs totalitarianism.
> No, the reasoning generalises to those fields too. The problem with those areas driving their need to have measurement of cognitive abilities is excessive bureaucratisation and lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee’s mental capacities from a short interview, and can even find out a lot of useful details that are not going to be covered by an IQ test. For example, mental health, maturity, and capacity to handle responsibility.
I’m not convinced about this, both from an efficiency perspective and an accuracy perspective.
Take military service as an example. The way I remember it, we had like 60 people all take an IQ test in parallel, which seems more efficient than having 60 different interviews. (Somewhere between 2x more efficient and 60x more efficient, depending on how highly one weights the testers’ and testees’ time.) Or in the case of genomics, you are often dealing with big databases of people who signed up a long time ago for medical research; it’s not so practical to interview them extensively, and existing studies deal with brief tests that were given with minimal supervision.
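To make the efficiency comparison concrete, here is a rough back-of-the-envelope calculation; all durations and staffing numbers are illustrative assumptions, not data:

```python
# Rough time-cost comparison: one proctored group IQ test vs. individual
# interviews for 60 candidates. All durations here are illustrative guesses.
n = 60
test_hours = 1.0          # one session; everyone is tested in parallel
interview_hours = 0.5     # per candidate, sequential for the interviewer

# Staff time: one proctor for the group test vs. one interviewer who
# has to sit through every interview.
staff_test = test_hours * 1
staff_interviews = interview_hours * n

# Candidate time is roughly comparable either way.
candidate_test = test_hours * n
candidate_interviews = interview_hours * n

print(f"staff hours: test={staff_test}, interviews={staff_interviews}")
print(f"staff-time ratio: {staff_interviews / staff_test:.0f}x")
```

Under these made-up numbers the staff-time ratio is large while candidate time is similar, which is roughly where the "somewhere between 2x and 60x" range comes from depending on whose time you weight.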
From an accuracy perspective, my understanding is that the usual finding is that psychometric tests and structured interviews provide relatively independent information, such that the best accuracy is obtained by combining both. This does definitely imply that there would be value in integrating more structured interviews into genomics (if anyone can afford it...), and more generally integrating information from more different angles into single datasets, but it doesn’t invalidate those tests in the first place.
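The combining-independent-information point can be illustrated with a toy simulation, assuming (purely for illustration) that a test score and an interview rating each equal the latent trait plus independent noise:

```python
# Toy simulation of combining two relatively independent indicators of a
# latent trait g. The noise structure below is an illustrative assumption,
# not an empirical estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
g = rng.standard_normal(n)
test = g + rng.standard_normal(n)       # test score: g plus independent noise
interview = g + rng.standard_normal(n)  # interview rating: g plus independent noise
combined = (test + interview) / 2       # simple average of both indicators

def corr(x):
    """Correlation of an indicator with the latent trait g."""
    return float(np.corrcoef(x, g)[0, 1])

print(f"test alone:  r = {corr(test):.2f}")
print(f"interview:   r = {corr(interview):.2f}")
print(f"combined:    r = {corr(combined):.2f}")
```

Each indicator alone correlates about 0.71 with the latent trait under these assumptions, while the average of the two does noticeably better, since the independent noise partially cancels.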
> Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test, they only score 130 or so. What am I supposed to do with this information? Granted, it’s pretty rare — normally the IQ would reflect my estimation of their brilliance, but in such cases, it adds no new information. But if the score does not match the person’s actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but an experienced human judgement is considerably more accurate still.
I mean there’s a few different obvious angles to this.
IQ tests measure g. If you’ve realized someone has some non-g factor that is very important for your purposes, then by all means conclude that the IQ test missed that.
If you’ve concluded that the IQ test underestimated their g, that’s a different issue. You’re phrasing things as if your own assessment correlates at something like 0.9 with g and the residual is non-normally distributed, which I sort of doubt is true (should be easy enough to test… though maybe you already have experiences you can reference to illustrate it?)
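A minimal sketch of what such a test could look like, assuming one has paired data of informal assessments and test scores; the numbers below are entirely made up for illustration:

```python
# Sketch of the "easy enough to test" suggestion: given paired informal
# ratings and test scores (hypothetical data below), check how strongly
# they agree and whether the residuals from a linear fit look non-normal.
import numpy as np
from scipy import stats

ratings = np.array([3, 5, 4, 7, 6, 8, 9, 5, 6, 7], dtype=float)  # made up
scores = np.array([98, 112, 105, 125, 118, 130, 138, 110, 116, 124], dtype=float)

r, _ = stats.pearsonr(ratings, scores)
slope, intercept = np.polyfit(ratings, scores, 1)
residuals = scores - (slope * ratings + intercept)
w, p = stats.shapiro(residuals)  # a low p would suggest non-normal residuals

print(f"correlation: {r:.2f}")
print(f"Shapiro-Wilk p = {p:.2f}")
```

With real data the interesting questions would be whether the correlation really sits near 0.9 and whether the residuals show the heavy-tailed pattern implied by "rare large misses", which a normality test on a larger sample could pick up.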
> If we take the example of the HBD argument, one issue that arises is that it centers latent variables which the interlocutor might not believe are observable (namely, innate racial differences). If these latents aren’t observable, then either the argument is nonsense, or the real origin for the conclusion is something else (for the case of HBD, an obvious candidate would be a lot of work that has attempted and failed to achieve racial equity, indicating that whatever causes the gaps must be quite robust). Thus from the interlocutors’ perspective, the three obvious options would be to hear how one could even come to observe these latents, to hear what the real origin for the conclusion is, or to just dismiss the conclusion as baseless speculations.
This trichotomy seems pretty common.