No scientific conclusions can ever be good or bad, desirable or undesirable, sexist, racist, offensive, reactionary or dangerous; they can only be true or false. No other adjectives apply.
This seems to imply that science is somehow free from motivated cognition — people looking for evidence to support their biases. Since other fields of human reason are not, it would be astonishing if science were.
(Bear in mind, I use “science” mostly as the name of a social institution — the scientific community, replete with journals, grants and funding sources, tenure, and all — and not as a name for an idealized form of pure knowledge-seeking.)
I take the quote to be normative rather than descriptive. Science is not free from motivated cognition, but that’s a bug, not a feature.
Sure, but I often see this sort of argument used against concerns about bias in (claimed) scientific conclusions. I’d rather people didn’t treat science as privileged against bias, and the quote above seems to encourage that.
While I pretty much agree with the quote, it doesn't give anyone who isn't already convinced many good reasons to believe it. Less an unusually rational statement and more an empiricist applause light, in other words.
In any case, a scientific conclusion needn’t be inherently offensive for closer examination to be recommended: if most researchers’ backgrounds are likely to introduce implicit biases toward certain conclusions on certain topics, then taking a close look at the experimental structure to rule out such bias isn’t merely a good political sop but is actually good science in its own right. Of course, dealing with this properly would involve hard work and numbers and wouldn’t involve decrying all but the worst studies as bad science when you’ve read no more than the abstract.
Unfortunately, since the people deciding which papers to take a closer look at tend to have the same biases as most scientists, the papers that actually get examined closely are the ones going against common biases.
I hate to find myself in the position of playing apologist for this mentality, but I believe the party line is that most of the relevant biases are instilled by mass culture and present at some level even in most people trying to combat them, never mind scientists who oppose them in a kind of vague way but mostly have better things to do with their lives.
In light of the Implicit Association Test this doesn’t even seem all that far-fetched to me. The question is to what extent it warrants being paranoid about experimental design, and that’s where I find myself begging to differ.
I’d take issue with “undesirable”, the way I understand it. For example, the conclusion that traveling FTL is impossible without major scientific breakthroughs was quite undesirable to those who want to reach for the stars. Similarly with “dangerous”: the discovery of nuclear energy was quite dangerous.
If travelling faster than light is possible,
I desire to believe that travelling faster than light is possible;
If travelling faster than light is impossible,
I desire to believe that travelling faster than light is impossible;
Let me not become attached to beliefs I may not want.
Something not (currently) possible can still be desirable.
FTL being impossible is undesirable if you want to go to the stars.
The conclusion that “FTL is impossible” is undesirable if and only iff FTL is possible.
The two conditions are very different.
They are indeed. You seem to have added a level of indirection not present in the original statement. One statement is about this world, the other is about possible worlds.
Shouldn’t it read
“FTL is impossible” is undesirable if and only if FTL is possible.”
as it stands it reads “FTL is impossible” is undesirable if and only if and only if (iff) FTL is possible.
Actually, it should be “FTL is impossible” is undesirable if and only if FTL is possible.”
*Facepalms* Okay, this is why I need to proofread everything I write.
Thanks
Shouldn’t it really be “Believing that FTL is impossible is undesirable iff FTL is possible”?
You seemed to be doing something clever with quotes, but mostly that made it hard to read. :P
The author originally added an extra f to the last “if” in the original post, rendering it as “if and only if and only if” instead of “if and only if”.
I think it’s pretty clear that scientific conclusions can be dangerous in the sense that telling everybody about them is dangerous. For example, the possibility of nuclear weapons. On the other hand, there should probably be an ethical injunction against deciding what kind of science other people get to do. (But in return maybe scientists themselves should think more carefully about whether what they’re doing is going to kill the human race or not.)
That’s the thing: the science wasn’t good or bad; it was the decision to give the results to certain people that held the quality of good/bad. And it was very, very bad. But the process of looking at the world, wondering how it works, figuring out how it works, and then making it work the way you desire: that process carries with it no intrinsic moral qualities.
I don’t know what you mean by “intrinsic” moral qualities (is this to be contrasted with “extrinsic” moral qualities, and should I care less about the latter or what?). What I’m saying is just that the decision to pursue some scientific research has bad consequences (whether or not you intend to publicize it: doing it increases the probability that it will get publicized one way or another).
The majority of scientific discoveries (I’m tempted to say all, but I’m 90% certain that there exists at least one counterexample) have very good consequences as well as bad ones. I think the good and the bad usually go hand in hand.
To take the obvious example, nuclear research led to the creation of nuclear weapons, but also to nuclear energy.
At what point could you label research into a scientific field as having too many negative consequences to pursue?
I agree that this is a hard question.
General complaint: sometimes when I say that people should be doing a certain thing, someone responds that doing that thing requires answering hard questions. I don’t know what bringing this point up is supposed to accomplish. Yes, many things worth doing require answering hard questions. That is not a compelling reason not to do them.
I did not ask it because I wanted to stop the discussion with a hard question. I asked it because I aspire to do research in physics and will someday need an answer to it. As such, I have been very curious about different answers to this question. By asking it, I did not mean to suggest that there are things that should not be researched, but simply to ask how to go about identifying them.
Remove any confusions you might have about metaethics, figure out what it is you value, estimate what kind of impact the research you want to do will have with respect to what you value, estimate what kind of impact the other things you could do will have with respect to what you value, pick the thing that is more valuable.
Trying to retroactively judge previous research this way is difficult because the relevant quantity you want to estimate is not the observed net value of a given piece of research (which is hard enough to estimate) but the expected net value at the time the decision was being made to do the research. I think the expected value of research into nuclear physics in the past was highly negative because of how much it increased the probability of nuclear war, but I’m not a domain expert and can’t give hard numbers to back up this assertion.
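The weighing procedure described above can be sketched as a toy expected-value comparison. To be clear, everything in this sketch is invented for illustration: the option names, probabilities, and values are not estimates about any real research program.

```python
# Toy sketch of the expected-value comparison described above.
# All options, probabilities, and values are invented for illustration.

def expected_value(outcomes):
    """Probability-weighted sum over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical research program: even odds of a modest benefit or a large harm.
risky_research = [(0.5, 10.0), (0.5, -30.0)]
# Hypothetical alternative: a certain small benefit.
safe_alternative = [(1.0, 5.0)]

options = {"risky research": risky_research,
           "safe alternative": safe_alternative}
best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # the option with the higher expected value
```

The point of the sketch is only structural: the decision uses the *expected* net value at decision time, over all outcomes you can foresee, not the single outcome that happened to be observed later.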
I’m reading through all of the sequences (slowly; it takes a while to truly understand, and I started in 2012), and by coincidence I happen to be at the beginning of metaethics right now. Until I finish, I won’t argue any further on this subject, due to being confused. Thanks for the help.
I think nuclear weapons have a chance of killing a large number of people but are very unlikely to kill the human race.
At one point, physicists thought detonating even one nuclear bomb might set fire to the atmosphere.
This was taken seriously, and disproven before one in fact was detonated, but it’s not clear that the tests wouldn’t have gone ahead even if the verdict had come back with merely “unlikely”.
These days, biologists, computer scientists, and physicists are all working on devices that could be far more dangerous than nuclear weapons. In this case the danger is well known, but no one high-status enough to succeed is seriously proposing a moratorium on research. To be fair, we’ve still got some time to go.
A scientist can have an inclination towards—for example—racist ideas. You can’t just call this a kind of being wrong, because depending on the truth of what they’re studying, this can make them right more often or less often.
So racist scientists are possible, and racist scientific practice is possible. I think ‘racist’ is an appropriate label for the conclusions drawn with that practice, correct or incorrect.
Though, I think being racist is a property of a whole group of conclusions drawn by scientists with a particular bias. It’s not an inherent property of any of the conclusions; another researcher with completely different biases wouldn’t be racist for independently rediscovering one of them.
It’s a useful descriptor because a body of conclusions drawn by racist scientists, right or wrong, is going to differ in important ways from one drawn by non-racist scientists. It doesn’t reduce to “larger fraction correct” or “larger fraction incorrect”, because it depends on whether they’re working on a problem where racists are more or less likely to be correct.
Is Newton’s theory of gravity true or false? It’s neither. For some problems the theory provides a good model that lets us make good predictions about the world around us. For other problems it produces bad predictions.
The same is true for nearly every scientific model. There are problems where it’s useful to use the model. There are problems where it isn’t.
There are also factual statements in science. Claiming that true and false are the only possible adjectives for describing them is also highly problematic. Instead of true and false, likely and unlikely are much better words. In the hard sciences, most scientific conclusions come with p-values. The author doesn’t declare them true or false; he declares them very likely.
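As a minimal illustration of that graded reading (a sketch not taken from the thread; the coin-flip experiment and its numbers are invented): a p-value reports how surprising the observed data would be under a null hypothesis, which supports “likely/unlikely” talk rather than a binary true/false verdict.

```python
# Minimal sketch: an exact one-sided binomial p-value, computed from scratch.
# The experiment (60 heads in 100 flips of a supposedly fair coin) is invented.
from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p_null)."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

p = binomial_p_value(60, 100)
print(f"p = {p:.4f}")  # a small p makes the fair-coin hypothesis unlikely, not refuted
```

A result like this licenses “the coin is very likely biased”, not “the fair-coin hypothesis is false”, which is the distinction the comment above is drawing.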
It’s also interesting that the person who made this claim doesn’t work in the hard sciences. He seems to be an evolutionary psychologist based at the London School of Economics. In the Wikipedia article that describes him, he’s quoted as suggesting that the US should have retaliated for 9/11 with nuclear bombs. That’s a non-scientific racist position. He has published material in Psychology Today that’s widely considered racist. I don’t see why “racist” isn’t a valid word to describe his conclusions.
What happens if you apply the same epistemological standards to claims that someone is racist that you apply to claims from science?
On the other hand, Kanazawa seems really good at saying controversial things that get attention… which suggests evidence for his views will overspread relative to those of his detractors. So it may make sense to hold people who say controversial stuff to high epistemological standards, or perhaps to scrutinize memes that seem unusually virulent especially carefully.
Huh, what definition of “racist” are you using here? Would you describe von Neumann’s proposal for a pre-emptive nuclear strike on the USSR as “racist”?
I’m not sure what you mean by “racist”; however, is your claim supposed to be that this somehow implies that the conclusion is false/less likely? You may want to practice repeating the Litany of Tarski.
It’s basically about putting a low value on the lives of non-white civilians. In addition, “I would do to foreigners what Ann Coulter would do to them” is also a pretty straightforward way to signal racism.
I haven’t argued that. I’m advocating for having a broad number of words with multidimensional meanings.
I see no reason to treat someone who makes wrong claims about race, and whose personal beliefs cluster with racist beliefs in his non-scientific statements, the same way as someone who just makes wrong statements about the boiling point of some new synthetic chemical.
Rather than using the ambiguous word “racist”, one could say specifically that Kanazawa is an advocate of genocide.
As I said above, did the bombings of civilians during WWII constitute “genocide”?
So would you call the bombings of civilians during WWII “racist”?
So you would agree that there are some statements that are both “racist” and true.
What do you mean by “wrong”? If you mean “wrong” in the sense of “false”, you’ve yet to present any evidence that any of Satoshi Kanazawa’s claims are wrong.