This may be paranoid of me, but I’m always worried that when posts like this use the word “you”, people will read it as “you”, when they should be reading it as “the average person, who is similar to you to some extent (e.g. in possessing such brain machinery as is humanly universal), and whose properties inform you about your own properties to a degree that depends on background information”.
It would be pretty neat if we had some happiness research that was disaggregated by such variables as intelligence, cognitive reflectiveness, and introversion. Does anyone know of such research?
E.g., I’ve seen a few studies purporting to show that “high intelligence” (which IIRC meant something like 1.5 SD above average SAT scores) provides substantial protection against common cognitive biases. Yet I’ve never seen anyone take this into account when discussing “de-biasing”.
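For a rough sense of how selective a “1.5 SD above average” cutoff is, here is a quick sketch assuming approximately normal scores (the mean and SD figures are illustrative assumptions, not taken from the studies in question):

```python
from statistics import NormalDist

# Illustrative SAT-like distribution: mean 1000, SD 200 (assumed values,
# not from the studies being discussed)
mean, sd = 1000, 200

threshold = mean + 1.5 * sd  # score 1.5 SD above the mean
fraction_below = NormalDist(mu=mean, sigma=sd).cdf(threshold)

print(threshold)                 # 1300.0
print(round(fraction_below, 3))  # 0.933, i.e. roughly the top 7% of test-takers
```

So whatever protection the studies found would apply to only a small slice of the population, which is part of why disaggregated results would matter here.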
How would you take that into account? Does high intelligence provide more protection against some biases than others? Can we isolate what aspect of high intelligence provides the protection, and amplify that aspect?
I personally wouldn’t take it into account at all, because it’s H&B (heuristics and biases) research, i.e., untrustworthy and irrelevant to rationality, and also I do not condone trying to “de-bias” oneself. (Pretty sure the H&B consensus agrees with me on the dubious nature of “de-biasing”; does anyone know if I’m wrong?) Lukeprog might have answers to your questions; IIRC he’s the one who sent me the papers in question.
Would you condone trying to de-bias oneself if you thought the research was trustworthy and relevant? That is, do you see an extra reason on top of those reasons not to engage in de-biasing?
Yes, the first law of ecology (“you can never merely do one thing”, also known in another aspect as Chesterton’s fence). There are exceptions, but those exceptions apply only to people with abnormally accurate self-models.
Why expect unintended consequences to oppose one’s preferences? The biases weren’t created by processes that cared about your preferences.
Yeah, but our preferences were caused by the same thing as our biases, right? At the very least, shouldn’t we expect our preferences to be highly entangled with our biases because of their common origin?
Disaggregating by those variables is a good idea, considering that many lesswrongians are probably neuro-atypical!