there’s no long-term benefit associated with its removal.
The first step in solving a problem is to recognise it. If I discovered I had cancer I would be immensely demoralised, but I'd prefer that and take a shot at recovering rather than die unknowingly.
Strongly seconded. I speak from experience: when evidence starts mounting for some horrible, nightmarish proposition that you’re scared of, it is tempting to tell yourself that even if it were true, it wouldn’t really matter, that there would be no benefit to acknowledging it, that you can just go on acting as you’ve always done, as if nothing’s changed. But on your honor as an aspiring rationalist, you must face the pain directly. When you get a hint that this world is not what you thought it was, that you are not what you thought you were—look! And update!—no matter how much it hurts, no matter how much your heart may cry for the memory of the world you thought this was. Do it selfishly, in the name of the world you thought you knew: because once you have updated, once you see this hellish wasteland for what it really is, then you can start to try to patch up what few things you can.
Suppose you really don’t like gender roles, and you’re quietly worried about something you read about evolutionary psychology. Brushing it all under the rug won’t help. Investigate, learn all you can, and then do something. Maybe something drastic, maybe something trivial, but something. Experiment with hormones! Donate a twenty to GenderPAC! Use your initials in your byline! But something, anything other than defaulting to ignorance and letting things take their natural course.
As soon as you start talking about your “honor as an aspiring rationalist”, you’re moving from the realm of rationality to ideology.
Like I said, I don’t think this question matters and I’m mostly indifferent to what the answer actually is. I’m just trying to protect the people who do care.
As soon as you start talking about your “honor as an aspiring rationalist”, you’re moving from the realm of rationality to ideology.
Well, sure, but the ideological stance is “You should care about rationality.” I should think that that’s one of the most general and least objectionable ideologies there is.
Like I said, I don’t think this question matters and I’m mostly indifferent to what the answer actually is. I’m just trying to protect the people who do care.
But I do care, and I no longer want to be protected from the actual answer. When I say that I speak from experience, it’s really true. There’s a reason that this issue has me banging out dramatic, gushy, italics-laden paragraphs on the terrible but necessary and righteous burden of relinquishing your cherished beliefs—unlike in the case of, say, theism, in which I’m more inclined to just say, “Yeah, so there’s no God; get over it”—although I should probably be more sympathetic.
So, why does it matter? Why can’t we just treat the issue with benign neglect, think of ourselves strictly as individuals, and treat other people strictly as individuals? It is such a beautiful ideal—that my works and words should be taken to reflect only on myself alone, and that the words and works of other people born to a similar form should not be taken to reflect on me. It’s a beautiful ideal, and it seems like it should be possible to swear our loyalty to the general spirit of this ideal, while still recognizing that—
In this world, it’s not that simple. In a state of incomplete information (and it is not at all clear to me what it would even mean to have complete information), you have to make probabilistic inferences based on what evidence you do have, and to the extent that there are systematic patterns of cognitive sex and race differences, people are going to update their opinions of others based on sex and race. You can profess that you’re not interested in these questions, that you don’t know—but just the same, when you see someone acting against type, you’re probably going to notice this as unusual, even if you don’t explicitly mention it to yourself.
There are those who argue—as I used to argue—that this business about incomplete information, while technically true, is irrelevant for practical purposes, that it’s easy to acquire specific information about an individual, which screens off any prior information based on sex and race. And of course it’s true, and a good point, and an important point to bear in mind, especially for someone who comes to this issue with antiegalitarian biases, rather than the egalitarian biases that I did. But for someone with initial egalitarian biases, it’s important not to use it—as I think I used to use it—as some kind of point scored for the individualist/egalitarian side. Complex empirical questions do not have sides. And to the extent that this is not an empirical issue, to the extent that it’s about morality—then there are no points to score.
It gets worse—you don’t even have anywhere near complete information about yourself. People form egregiously false beliefs about themselves all the time. If you’re not ridiculously careful, it’s easy to spend your entire life believing that you have an immortal soul, or free will, or that the fate of the light cone depends solely on you and your genius AI project. So information about human nature in general can be useful even on a personal level: it can give you information about yourself that you would never have gotten from mere introspection and naive observation. I know from my readings that if I’m male, I’m more likely to have a heart attack and less likely to get breast cancer than would be the case if I were female, whereas this would not at all be obvious if I didn’t read. Why should this be true of physiology, but not psychology? If it turns out that women and men have different brain designs, and I don’t have particularly strong evidence that I’m an extreme genetic or developmental anomaly, then I should update my beliefs about myself based on this information, even if it isn’t at all obvious from the inside, and even though the fact may offend me and make me want to cry. For someone with a lot of scientific literacy but not as much rationality skill, the inside view is seductive. It’s tempting to cry out, “Sure, maybe ordinary men are such-and-this, and normal women are such-and-that, but not me; I’m different, I’m special, I’m an exception; I’m a gloriously androgynous creature of pure information!” But if you actually want to achieve your ideal (like becoming a gloriously androgynous creature of pure information), rather than just having a human’s delusion of it, you need to form accurate beliefs about just how far this world is from the ideal, because only true knowledge can help you actively shape reality.
It could very well be that information about human differences could have all sorts of terrible effects if widely or selectively disseminated. Who knows what the masses will do? I must confess that I am often tempted to say that I have no interest in such political questions—that I don’t know, that it doesn’t matter to me. This attitude probably is not satisfactory for the same sorts of reasons I’ve listed above. (How does the line go? “You might not care about politics, but politics cares about you”?) But for now, on a collective or political or institutional level, I really don’t know: maybe ignorance is bliss. But for the individual aspiring rationalist, the correct course of action is unambiguous: it’s better to know than to not know; it’s better to make decisions explicitly and with reason than to let your subconscious decide for you and let things take their natural course.
that it’s easy to acquire specific information about an individual, which screens off any prior information based on sex and race
I think people may be overapplying the concept of “screening off”, though?
If for example RACE → INTELLIGENCE → TEST SCORE, then knowing someone’s test score does not screen off race for the purpose of predicting intelligence (unless I’m very confused). Knowing test score should still make knowing race less useful, but not because of screening off.
On the other hand, if for example GENDER → PERSONALITY TRAITS → RATIONALIST TENDENCIES, then knowing someone’s personality traits does screen off gender for the purpose of predicting rationalist tendencies.
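To make the distinction concrete, here is a minimal sketch (the chain structure mirrors the examples above, but the variable names and probability tables are made up purely for illustration): in a chain X → Y → Z, conditioning on the downstream node Z does not make X and Y independent, whereas conditioning on the middle node Y does make X and Z independent.

```python
from itertools import product

# Hypothetical chain X -> Y -> Z (think GROUP -> TRAIT -> TEST SCORE);
# all probability tables below are invented for illustration.
p_x = {0: 0.5, 1: 0.5}                   # P(X)
p_y_given_x = {0: {0: 0.7, 1: 0.3},      # P(Y | X)
               1: {0: 0.4, 1: 0.6}}
p_z_given_y = {0: {0: 0.8, 1: 0.2},      # P(Z | Y)
               1: {0: 0.3, 1: 0.7}}

def joint(x, y, z):
    return p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

def conditional(query, value, given):
    """P(query = value | given), computed by brute-force enumeration."""
    numerator = denominator = 0.0
    for x, y, z in product((0, 1), repeat=3):
        assignment = {'x': x, 'y': y, 'z': z}
        if any(assignment[k] != v for k, v in given.items()):
            continue
        p = joint(x, y, z)
        denominator += p
        if assignment[query] == value:
            numerator += p
    return numerator / denominator

# Conditioning on Z does NOT screen X off from Y: these two values differ.
print(conditional('y', 1, {'z': 1, 'x': 0}))  # 0.60
print(conditional('y', 1, {'z': 1, 'x': 1}))  # 0.84

# Conditioning on Y DOES screen X off from Z: these two values are equal.
print(conditional('z', 1, {'y': 1, 'x': 0}))  # 0.70
print(conditional('z', 1, {'y': 1, 'x': 1}))  # 0.70
```

In the first pair, X still shifts the estimate of Y even after seeing Z (it just shifts it less than it would without Z); in the second pair, once Y is known, X adds nothing.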
I agree with the overall thrust of this, but I do have some specific reservations:
It’s tempting to cry out, “Sure, maybe ordinary men are such-and-this, and normal women are such-and-that, but not me; I’m different, I’m special, I’m an exception.”
The temptation is real, and it’s a temptation we should be wary of. But it’s also important to realize that the more variation there is within the groups, the more reasonable it may be to suppose that we are special (of course, we should still have some evidence for this, but the point is that the more within-group variation there is, the weaker that evidence needs to be). It’s difficult to know what to do with information about average differences between groups unless one also knows something about within-group variation.
But for the individual aspiring rationalist, the correct course of action is unambiguous: it’s better to know than to not know; it’s better to make decisions explicitly and with reason than to let your subconscious decide for you and let things take their natural course.
Again, I’m very sympathetic to this view, but I think you’re overselling the case. If (correctly) believing your group performs poorly causes you to perform worse, then there’s a tradeoff between that and the benefits of accurate knowledge. Maybe accurate knowledge is still best, all things considered, but that’s not obvious to me.
Beyond the context of this specific debate, the theory of the second-best shows that incremental movements in the direction of perfect rationality will not always improve decision-making. And there’s a lot of evidence that “subconscious” decision-making outperforms more explicit reasoning in some situations. Jonah Lehrer’s How We Decide, and pretty much all of Gerd Gigerenzer’s books make this argument (in addition to the more anecdotal account in Malcolm Gladwell’s Blink). I tend to think they oversell the case for intuition somewhat, but it’s still pretty clear that blanket claims about the superiority of more information and conscious deliberation are false. The really important question is how we can best make use of the strengths of different decision-strategies.
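As a rough illustration of that last point (assuming, purely for the sake of the sketch, normal distributions with equal variance and made-up effect sizes): even a fairly large difference in group means leaves a large fraction of individual comparisons going the “wrong” way, which is why the average difference alone tells you little about any particular person.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# If two groups differ in mean by d standard deviations (Cohen's d) and have
# equal variance, the difference between one random draw from each group is
# normal with mean d and variance 2, so the higher-mean group's member "wins"
# with probability Phi(d / sqrt(2)).
for d in (0.1, 0.2, 0.5, 0.8, 1.0):
    p_lower_wins = 1.0 - phi(d / sqrt(2.0))
    print(f"mean gap d = {d:.1f}: lower-mean group member scores higher "
          f"{p_lower_wins:.0%} of the time")
```

So without also knowing the spread, an average difference between groups licenses only a weak update about any given individual.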
If it turns out that women and men have different brain designs, and I don’t have particularly strong evidence that I’m an extreme genetic or developmental anomaly, then I should update my beliefs about myself based on this information, even if it isn’t at all obvious from the inside, and even though the fact may offend me and make me want to cry.
It might be just mind projection on my part, but it seems to me that in those accursed cases where various aspects of an “egalitarian” way of thought—one that includes values, moral intuitions, ethical rules, actual beliefs about the world, and beliefs about one’s beliefs—all get conflated, and the entire (silly for an AI, but identity-forming for lots of current humans) system perceives some statement of fact as a challenge to its entire existence… well, the LW crowd at least would pride themselves on not being personally offended.
If tomorrow it were revealed with high certainty that we Slavs are genetically predisposed against some habits that happen to be crucial for civilized living and running a state nicely, I’d most definitely try to take it in stride. But when something like this is said, I tend to feel sick and uncertain; I don’t see a purely consequentialist way out that would leave me ethically unscathed.
Ah, if only it were as easy as identifying the objective state of the world, then trying to act in accordance with (what you’ve settled on as) your terminal values. But this would require both more rationality and more compartmentalization than I’ve seen in humans so far.
“You come to amidst the wreckage of your own making. Do you stay there, eyes squeezed shut, afraid to move, hoping to bleed to death? Or do you crawl out, help your loved ones, make sure the fire doesn’t spread, try to fix it?”—Max Payne 2: The Fall of Max Payne
One day, I started looking at the people around me, and I began to realize how much they looked like apes. I quickly stopped doing this, because I feared it would cause me to treat them with contempt. And you know what? Doublethink worked. I didn’t start treating people with more contempt as a result of purposely avoiding certain knowledge that I knew would cause me to treat people with more contempt.
If you think that my efforts to suppress observations relating the appearance of humans with the appearance of apes were poorly founded, then you have a very instrumentally irrational tendency towards epistemic rationality.
[...] then you have a very instrumentally irrational tendency towards epistemic rationality.
Guilty as charged! I don’t want to win by means of doublethink—it sounds like the sort of thing an ape would do. A gloriously androgynous creature of pure information wins cleanly or not at all. (I think I’m joking somewhat, but my probability distribution on to what extent is very wide.)
At some point you will have limited resources, and you will need to decide how much that preference for winning cleanly is worth. How much risk you’re willing to take of not winning at all, in exchange for whatever victory you might still get being a clean one.
For example, say there are two people you love dangling off a cliff. You could grab one of them and pull them to safety immediately, but in doing so there’s some chance you could destabilize the loose rocks and cause the other to fall to certain death.
The gloriously androgynous creature of pure information (henceforth GACoPI) that you wish you were cannot prioritize one love over the other in a timely fashion, and will simply babble ineffectual encouragement to both until professional rescuers arrive, during which time there is some chance that somebody’s arms will get tired or the cliff will shift due to other factors, killing them both. An ape’s understanding of factional politics, on the other hand, has no particular difficulty judging one potential ally or mate as more valuable, and then rigging up a quick coin-flip or something to save face.
The gloriously androgynous creature of pure information (henceforth GACoPI) that you wish you were cannot prioritize one love over the other in a timely fashion, and will simply babble ineffectual encouragement to both until professional rescuers arrive, during which time there is some chance that somebody’s arms will get tired or the cliff will shift due to other factors, killing them both.
The grandparent uses “winning cleanly” to mean winning without resorting to doublethink. To go from that to assuming that a GACoPI is unable to enact your proposed ape solution or come up with some other courses of action than standing around like an idiot smacks of StrawVulcanism (WARNING: TVTropes).
Wouldn’t your efforts be better directed at clearing up whatever confusion leads you to react with contempt to the similarity to apes?
I can maybe see myself selling out epistemic rationality for an instrumental advantage in some extreme circumstance, but I find the idea of selling it so cheaply abhorrent. It seems to me a rationalist should value their ability to see reality more highly, not give it up at the first sign of inconvenience.
Even on instrumental grounds: just as theoretical mathematics tends to end up having initially unforeseen practical applications, giving up on epistemic rationality carries the potential of unforeseen instrumental disadvantages.
An approach using both instrumental and epistemic rationality would be to research chimps and bonobos and find all the good qualities we share with them. The problem here is your association “apes = contempt,” and trying to suppress the fact that we are apes isn’t going to fix that problem.
I agree that denial usually seems like a bad idea, but the problem with things like stereotype threat is that they suggest (and more importantly provide evidence) that sometimes it might actually be useful (a path to improvement, even if not necessarily the first-best path).
The trick, presumably, is to distinguish the situations when this will hold from those when it doesn’t.
Yes, but there is a long-term benefit associated with the removal of your cancer. On the other hand, if you had a blemish on your shoulder, you’d be better off not noticing it.
Denial is not a path to improvement.
One day, I started looking at the people around me, and I began to realize how much they looked like apes. I quickly stopped doing this, because I feared it would cause me to treat them with contempt. [...]
Maybe the lesson to draw is that you don’t respect apes enough?
Just as theoretical mathematics tends to end up having initially unforeseen practical applications, giving up on epistemic rationality carries the potential of unforeseen instrumental disadvantages.
Good point.
Do you have evidence that the direct good effects of your doublethink outweighed the indirect bad effects?