Implicit-association tests are handy for identifying things you might not be willing to admit to yourself.
Is there a specific computer program for doing them that you can recommend for personal usage?
There is a Harvard page full of such tests, ranging from guns to race and all kinds of other topics.
Still, the value of this information should be low as I see no obvious, easy way to remove the association—assuming it is invalid in the first place.
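For the curious: these tests work by comparing reaction times when concepts are paired one way versus the other. As a rough illustration only (Project Implicit’s actual scoring algorithm is more involved, with trial filtering and error penalties), a simplified D-score can be sketched as the mean latency difference divided by the pooled standard deviation, here with made-up reaction times:

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT D-score: difference of mean reaction times (ms)
    between incompatible and compatible blocks, divided by the pooled
    standard deviation of all trials. Positive values suggest slower
    responses in the incompatible pairing, i.e. an implicit association."""
    mean_diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    pooled_sd = statistics.pstdev(compatible_rts + incompatible_rts)
    return mean_diff / pooled_sd

# Hypothetical data: faster responses when pairing category A with "good"
# than with "bad" suggest an implicit preference for A.
compatible = [620, 650, 600, 640]      # e.g. flower + pleasant
incompatible = [780, 820, 750, 800]    # e.g. flower + unpleasant
print(round(iat_d_score(compatible, incompatible), 2))  # → 1.92
```

A D-score near zero would indicate no measurable association; conventionally, scores above roughly 0.65 are described as a “strong” association, though the cutoffs are somewhat arbitrary.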
An interesting question is whether there are areas where we as rationalists don’t know ourselves well, and whether we could use those tests in those areas.
Guns and race are popular topics, but we don’t profit much from understanding our own positions on those questions any better.
Having an implicit-association test that puts people into categories of Aristotelianism, Anton-Wilsonism and Bayesianism would be fun. Having people who think that they are Bayesians score as Aristotelians would be an opportunity for growth.
The obvious thing would be to test negative associations with people who do not explicitly subscribe to LW-style rationality, or associations with low-IQ people. I am certain there would be quite a few surprises for some people around here.
What would be some topics that could, in your opinion, be fruitful?
I added that via an edit to my last question before I saw an answer. But it’s not an easy question.
As far as I understand, CFAR has some techniques that they teach at their bootcamps that are effective and change the mind of the person who attends in a good way.
That change of mindset could be measured with an implicit-association test. Of course, knowing what those changes happen to be means knowing the basics of rationality, and if you have followed what I wrote lately, you know I consider knowing the basics to be hard.
Suddenly having a data-driven tool that tells good rationalists from bad rationalists would also make things uncomfortable for a bunch of people, because it cuts deeper to their core than a test telling them whether they are implicit racists.
It’s a lot more fundamental than the basic LW consensus, where we assume of each other that we are good rationalists. Data has power. Talking about the value of the scientific method as the only true frame for reality is noble, and it makes it easy to signal being a good rationalist. Telling good rationalists from bad ones via data-driven implicit-reasoning tests would be walking the talk instead of just talking it.
I don’t think it’s my role to say what makes a good rationalist, but I think we can agree that the stuff CFAR does makes someone a better rationalist. If someone has better suggestions that could be tested, I’m happy to hear them.
The LW survey has a few rationalist testing questions.
In aggregate I think you can learn something from those questions, but I don’t think they provide a way to judge individual people well.
Answering those questions well also has a lot to do with whether you have been exposed to them beforehand.
We all like to imagine we’re not racist. But knowing how racist one is can help optimize one’s happiness when e.g. choosing where to buy a house.
So the idea is to use your test results to help optimize happiness through overtly racist behavior? This strikes me as a bad idea that could be misapplied widely.
Also, Steven Pinker suggests in Better Angels of Our Nature that rationality/the Enlightenment has led to a decline in racism/nationalism. I buy his argument, and think it would be much better to apply rationality toward that end than to use implicit biases to justify or guide behavior. Maybe I’m missing something, but this idea doesn’t seem to fit with this site, which started out as Overcoming Bias, not “justify behavior caused/influenced by implicit biases.”
I think the idea is more that you will realize that you may be irrationally discounting a house based on its neighbors—not that you will concede extra ground to your prejudice.
you will realize that you may be irrationally discounting a house based on its neighbors

This seems like a good example of a rationalist winning to me.
Perhaps I’ve misinterpreted what Imm said above, but I think he was sort of saying the opposite: “The test shows I have a strong implicit bias against [name minority], so I should move to an all-white [or name your in-group here] neighborhood to be happier.” In this situation, it seems like you are using knowledge of your bias to increase your irrational discounting of a house or neighborhood.
It was kind of ambiguous. But the sort of implicit association that this test measures is the kind that actual exposure would tend to diminish, so there’s not much point in avoiding it.
Rationalists should win, and feeling vaguely unhappy as you go about your life but not knowing why is not winning. The point is to overcome bias, not to pretend it isn’t there (and believing one isn’t biased when one is, is one of the more pernicious and common biases). Sure, if you have a technique for magically making people not racist, then that’s better than a technique for figuring out how racist they are, but in the absence of the former the latter is useful.
if you have a technique for magically making people not racist

Pinker points to some non-magical causes: greater commerce, greater literacy, larger political and military coalitions. Essentially, greater exposure allowing people to view other people as useful (instrumentally in many cases), rational beings. He points to how much progress inter-racial relationships and gay relationships have made in acceptance; it wasn’t due to people moving away from the people they didn’t like. The relevant point is that people don’t feel those biases as strongly any longer.
The point is to overcome bias, not to pretend it isn’t there

I really like the way you put this.
Rationalists should win

I guess I think deliberately acting on implicit racist bias doesn’t seem like a win for a particular rationalist or for rationalism generally.
True! I forgot about them, and they are useful sometimes.