That is a really clever mixup of different argumentation modes. That being said, Mr. Cochran strangling one of his opponents would still be only weak evidence that it is not so difficult for humans to psych themselves up to kill another human.
First of all, he hasn’t actually done it (I presume).
Secondly, we know it’s difficult, not impossible.
Thirdly, we know there are sociopaths and psychopaths who can do this without much thought, as well as perhaps normal people who have become desensitized to killing. Fortunately these are a small percentage of the populace.
There is, in fact, a large amount of research that has gone into studying the minds of people who kill: in wartime, in criminal activity, in law enforcement, and so forth; and there is a strong consensus that for most people intentional killing is hard. For example,
In World War Two, it is a fact that only 15-20 percent of the soldiers fired at the enemy. That is one in five soldiers actually shooting at a Nazi when he sees one. While this rate may have increased in desperate situations, in most combat situations soldiers were reluctant to kill each other. The Civil War was not dramatically different or any previous wars.
In WW2 only one percent of the pilots accounted for thirty to forty percent of enemy fighters shot down in the air. Some pilots didn’t shoot down a single enemy plane.
In Korea, the rate of soldiers unwilling to fire on the enemy decreased and fifty-five percent of the soldiers fired at the enemy. In Vietnam, this rate increased to about ninety-five percent but this doesn’t mean they were trying to hit the target. In fact it usually took around fifty-two thousand bullets to score one kill in regular infantry units! It may be interesting to note that when Special Forces kills are recorded and monitored this often includes kills scored by calling in artillery or close air support. In this way SF type units could score very high kill ratios like fifty to a hundred for every SF trooper killed. This is not to say these elite troops didn’t score a large number of bullet type kills. It is interesting to note that most kills in war are from artillery or other mass destruction type weapons.
If one studies history and is able to cut through the hype, one will find that man is often unwilling to kill his fellow man and the fighter finds it very traumatic when he has to do so. On the battlefield the stress of being killed and injured is not always the main fear.
-- William S. Frisbee, The Psychology of Killing
If you want a more detailed look at this, including lots of references to the original Defense Department research, there are a number of good books by army officers, including On Killing by Lieutenant Colonel Dave Grossman. One of the originals is Men Against Fire by World War I Officer S. L. A. Marshall. Bruce Siddle’s work, more focused on law enforcement, is also worth a look, e.g. Sharpening the Warrior’s Edge.
None of these are perfect or irrefutable evidence. For instance, the research I’m aware of focuses primarily on U.S. and British troops and police officers. It’s certainly possible that this is culturally conditioned and the results might be different elsewhere. However, I’ve yet to see any strong critiques of the general consensus about the difficulty of killing in war. The best evidence we have is that killing is in fact difficult for most people, most of the time, even in war.
In World War Two, it is a fact that only 15-20 percent of the soldiers fired at the enemy.
One of the originals is Men Against Fire by World War I Officer S. L. A. Marshall.
You find this claim all over the place; the problem with it is that comrade “S.L.A.M” is not “one of the originals”, he is the sole and only source for the claim—and he made it up. A cursory Wiki search shows:
[So-and-so demonstrated] that Marshall had not actually conducted the research upon which he based his ratio-of-fire theory. “The ‘systematic collection of data’ appears to have been an invention.”
My emphasis.
The best evidence we have is that killing is in fact difficult for most people, most of the time, even in war.
Ok. So on the one hand we’ve got a single book, later shown to have been an invention, but taken up by a huge number of people so it looks like a consensus, in the best Dark-Arts, “you have to be smart to know this”, counterintuitive-Deep-Wisdom style. And on the other hand we have a huge number of dead people, mysteriously killed by bullets that, somehow, got fired in spite of the noted reluctance of men to do so. I propose that your accolade of “best evidence” is a bit misplaced.
This is an excellent example of the need to apply some skepticism to a counter-intuitive but neat-seeming claim, whose possession will put you inside the tribe of people who Know Neat And Counterintuitive Stuff. Sometimes the simple answer really is the right one; this is one of those times.
And on the other hand we have a huge number of dead people, mysteriously killed by bullets that, somehow, got fired in spite of the noted reluctance of men to do so.
Let’s not overstate your case, shall we? No ‘somehow’ about it, even if 90% of soldiers didn’t want to shoot, the remaining 10% could kill a hell of a lot of people; that is the point of guns and explosives, after all—they make killing people quick and easy compared to nagging them to death.
(Where is the precise model relating known mortality rates to number of soldiers shooting, such that Marshall’s claims could have been rejected on their face solely because they conflicted with mortality rates? There is none. The majority of soldiers survive wars, after all.)
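To make the parenthetical concrete, here is a minimal sketch (Python; the force size, firing fractions, and per-firer lethality figures are entirely hypothetical) of why casualty totals alone cannot settle the firing-ratio question: the total is a product of several unknowns, so very different firing fractions are consistent with the same body count.

```python
# Toy model, not a claim about any actual war: expected casualties inflicted
# depend on BOTH the fraction of soldiers who fire and how lethal each firer
# is, so a body count alone cannot pin down the firing fraction.

def expected_kills(n_soldiers: int, firing_fraction: float, kills_per_firer: float) -> float:
    """Expected casualties under a crude multiplicative model."""
    return n_soldiers * firing_fraction * kills_per_firer

n = 1_000_000  # hypothetical force size

# Two very different firing fractions give the same total if per-firer
# lethality differs:
low_participation = expected_kills(n, firing_fraction=0.15, kills_per_firer=2.0)
high_participation = expected_kills(n, firing_fraction=0.90, kills_per_firer=1 / 3)

print(low_participation)   # 300000.0
print(high_participation)  # ~300000.0
```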
the remaining 10% could kill a hell of a lot of people
With modern automatic weapons, if their targets obligingly massed in a single spot, sure. Bolt-action rifles, less so; Civil-War-era muzzle loaders, still less.
Now, there’s a more subtle version of the argument that could be made: Maybe a lot of people were shooting to miss. That would account for the 10000-to-1 bullets-to-hits ratio, also known as “fire your weight in lead to kill a man”. But again, if people weren’t actually shooting, you’d think their officers would notice that they never needed ammunition refills.
Observe: The more people refuse to fire their rifles, the higher should be the proportion of casualties from artillery. Yet from WWII to Vietnam, we see that reports claim an increasing percentage of soldiers firing rifles, but a decreasing proportion of casualties from small arms. I propose that, instead, the proportion of rifle-firers was constant and the lethality and ubiquity of artillery was growing. Note that, to make up for an increase from 25% to 55% of rifle-firers, as is claimed from WWII to Vietnam, artillery would have to become twice as deadly just to remain on an even footing; this seems to me unlikely, even though there certainly were technical advances.
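As a sanity check on the arithmetic in the previous comment, a minimal sketch (Python; the 25% and 55% figures are the ones claimed above, everything else is arbitrary and illustrative): if small-arms casualties scale with the fraction of riflemen firing, then merely holding the small-arms share of casualties constant while the firing fraction rises from 25% to 55% requires artillery casualties to rise by the same 2.2x factor, and making the small-arms share fall requires artillery to grow by even more.

```python
# Illustrative arithmetic only, not data. Assume small-arms casualties are
# proportional to the fraction of riflemen firing, and treat artillery
# casualties as a separate quantity on the same (arbitrary) scale.

def small_arms_share(firing_fraction: float, artillery: float) -> float:
    small_arms = firing_fraction * 1.0  # per-firer lethality held fixed at 1
    return small_arms / (small_arms + artillery)

# Baseline: 25% firing, artillery chosen (arbitrarily) so small arms
# account for half of all casualties.
print(small_arms_share(0.25, artillery=0.25))        # 0.5

# Raise the firing fraction to 55% with artillery unchanged: the small-arms
# share *rises* instead of falling.
print(small_arms_share(0.55, artillery=0.25))        # ~0.69

# To merely hold the share at 0.5, artillery must rise by 55/25 = 2.2x;
# for the share to fall, as the casualty reports claim, it must rise more.
print(small_arms_share(0.55, artillery=0.25 * 2.2))  # 0.5
```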
With modern automatic weapons, if their targets obligingly massed in a single spot, sure. Bolt-action rifles, less so; Civil-War-era muzzle loaders, still less.
So, do you know offhand exactly how many soldiers were killed by other soldiers in all those conflicts? Do you know how fast and effective those weapons were? Do you know what the distribution and skew of killings per soldier are and how that changes from conflict to conflict? You do not know any of those factors, all of which together determine whether the Marshall estimate is plausible.
‘Marshall made everything up’ is a good argument. ‘Look, there’s lots of dead soldiers!’ is a terrible argument which is pure rhetoric.
Note that, to make up for an increase from 25% to 55% of rifle-firers, as is claimed from WWII to Vietnam, artillery would have to become twice as deadly just to remain on an even footing; this seems to me unlikely, even though there certainly were technical advances.
Ceteris is never paribus. You’re just digging yourself in deeper. Those conflicts were completely different—WWII and Vietnam, seriously? You can’t think of any reasons artillery might have different results in them?
Ok, I sit corrected.
You’re vastly overstating the criticisms of S. L. A. Marshall. He did not just make up his figures. His research was not an invention. He conducted hundreds of interviews with soldiers who had recently been in combat. The U.S. Army found this research quite valuable and uses it to this day. Some people don’t like his conclusions, and attempt to dispute them, but usually without attempting to collect actual data that would weigh against Marshall’s.
The Wikipedia article’s claim that “Professor Roger J. Spiller (Deputy Director of the Combat Studies Institute, US Army Command and General Staff College) demonstrated in his 1988 article, “S.L.A. Marshall and the Ratio of Fire” (RUSI Journal, Winter 1988, pages 63–71), that Marshall had not actually conducted the research upon which he based his ratio-of-fire theory” appears to be false. Spiller’s article criticizes Marshall’s methodology and points out a number of weaknesses in his later accounts. However, it does not claim that the interviews Marshall described did not take place. Rather, it suggests that Marshall intentionally or unintentionally sometimes inflated the number of interviews he had conducted, though it still allows for hundreds to have taken place. The RUSI article doesn’t seem to be online (I’ll try and see if JSTOR has a copy), but some relevant portions are quoted here.
I agree that Marshall’s evidence is not perfect. I’d be interested to see better evidence, and if research using better techniques came to different conclusions than he did, I would update my beliefs accordingly. Until I see such research, though, I am very wary of poorly sourced ad hominem attacks.
Libgen is your friend: https://pdf.yt/d/zueukhIJDa6woF9R / https://www.dropbox.com/s/dwjrpviga6e137z/1988-spiller.pdf / http://sci-hub.org/downloads/d5cf/spiller1988.pdf
Hum! That first article is very interesting; it quotes Marshall as saying the percentage of men who fired their weapons was 15% in an average day’s action. This is very different from 15% firing their rifles at all, which is the claim usually made. So quite apart from being a fabrication, Marshall’s imaginary number is apparently even being misquoted!
Some interesting quotes:
John Westover, usually in attendance during Marshall’s sessions with the troops, does not recall Marshall’s ever asking [who had fired their rifles].
(Emphasis in original).
His surviving field notebooks show no signs of statistical compilations that would have been necessary to deduce a ratio as precise as Marshall reported later in “Men Against Fire”.
Update: JSTOR does not appear to include RUSI Journal. If anyone has access to a library that does have it, please do us a favor and look it up.
Can you please link what you’re quoting from?
Here.
Thanks.
ETA: I followed both the link and the links to several of Wikipedia’s sources, but no further. The stuff I saw all seems to support Rolf’s claims about S. L. A. Marshall being unreliable and the primary source for most of the claims of the “killing is hard” side.
Isegoria claims Grossman’s claims, if not Marshall’s, are better supported by things like fighter pilot studies: http://westhunt.wordpress.com/2014/12/28/shoot-to-kill/#comment-64665
Fighter pilot victories in clear-air combat are rare; it follows that they are Poisson-distributed, and that you would expect to have a few extreme outliers and a great mass of apparent “non-killers” even if every pilot was doing his genuine best to kill. That is even before taking into account pilot skill, which for all we know has a very wide range.
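For illustration of the shape of this argument, and only that (the mean below is invented, and whether a Poisson is even the right model is exactly what is disputed in the replies), a minimal Python sketch of how a rare-event process with a small mean yields many zero-kill pilots and a few apparent aces even when every pilot behaves identically:

```python
# Hypothetical numbers only: if each pilot's kill count were Poisson with a
# small mean, most pilots would record zero kills and a handful would look
# like aces, even though every pilot is statistically identical.
import numpy as np
from scipy import stats

lam = 0.5  # invented mean kills per pilot
poisson = stats.poisson(lam)

print("P(0 kills)    =", poisson.pmf(0))  # ~0.61: a large mass of 'non-killers'
print("P(>= 3 kills) =", poisson.sf(2))   # ~0.014: rare apparent 'aces'

# A simulated squadron of 10,000 identical pilots: the top 1% still end up
# with several times a proportional share of the total kills.
rng = np.random.default_rng(0)
kills = rng.poisson(lam, size=10_000)
top_1pct = np.sort(kills)[-100:].sum()
print("kill share of top 1% of pilots:", top_1pct / kills.sum())
```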
Fighter pilot victories in clear-air combat are rare; it follows that they are Poisson-distributed, and that you would expect to have a few extreme outliers and a great mass of apparent “non-killers”
I don’t see how that follows at all. You don’t know it was a Poisson distribution (there are lots of distributions natural phenomena follow; the negative binomial and lognormal also pop up a lot in human contexts), and even if you did, you don’t know the relevant rate parameter lambda to know how many pilots should be expected to have 1 success, and since you’re making purely a priori arguments here rather than observing that the studies have specific flaws (eg perhaps they included pilots who never saw combat), it’s clear you’re trying to make a fully general counterargument to explain away any result those studies could have reached without knowing anything about them. (‘Oh, only .001% of pilots killed anyone? That darn Poisson!’)
You don’t know it was a Poisson distribution (there are lots of distributions natural phenomena follow; the negative binomial and lognormal also pop up a lot in human contexts),
The Poisson distribution is the distribution that models rare independent events. Given how involved you are with prediction and statistics, I’d expect you to know that.
The Poisson distribution is the distribution that models rare independent events.
Are numbers of fighter pilot victories clearly, a priori, going to be independent events? Is a pilot shooting down one plane entirely independent of whether they go on to shoot down another plane? (Think about the other two distributions I mentioned and why they might be better matches...)
Distributions are model assumptions, to be checked like any other. In fact, often they are the most important and questionable assumption made in a model, which determines the conclusion; a LW example of this is Karnofsky’s statistical argument against funding existential risk, which is driven entirely by the chosen distribution. As the quote goes: ‘they strain at the gnat of the prior who swallow the camel of the likelihood function’.
I personally find choice of distribution to be dangerous, which is why (when not too much more work) in my own analyses I try to use nonparametric methods: Mann-Whitney U-tests rather than t-tests, bootstraps, and at least looking at graphs of histograms or residuals while I’m doing my main analysis. Distributions are not always as one expects. To give an example involving the Poisson: I was doing a little Hacker News voting experiment. One might think that a Poisson would be a perfect fit for the distribution of scores—lots of voters, each one only votes on a few links out of the thousands submitted each day, they’re different voters, and votes are positive count data. One would be wrong: while a Poisson fits better than, say, a normal, it’s grossly wrong about the outliers; what actually fits much better is a mixture distribution of at least 3 sub-distributions of Poissons and possibly normals or others. (My best guess is that this mixture distribution is caused by HN’s segmented site design leading to odd dynamics in voting: the first distribution corresponds to low-scoring submissions which spend all their time on /newest, and the rest to various subpopulations of submissions which make it to the main page—although I’m not sure why there is more than one of those.) A toy version of this kind of tail check, on synthetic data, is sketched below.
So no, I hope it is because of, rather than despite, my involvement with stats that I object to Rolf’s casual assumption of a particular distribution to create a fully general counterargument to explain away data he has not seen but dislikes.
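Here is that toy version of the distribution check described above, on synthetic data rather than the actual HN scores (the mixture weights, means, and threshold are all invented): fit a single Poisson to overdispersed counts by matching the mean, and it misses the tail by orders of magnitude.

```python
# Synthetic demonstration only (not the HN data): a single Poisson fitted by
# its mean to counts that actually come from a mixture badly underestimates
# the frequency of high scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fake "scores": a mixture of a low-mean bulk and a high-mean minority.
scores = np.concatenate([
    rng.poisson(1.5, size=9_000),   # most submissions
    rng.poisson(30.0, size=1_000),  # a minority that takes off
])

lam_hat = scores.mean()  # the MLE of a single Poisson is just the sample mean
threshold = 20
observed_tail = (scores >= threshold).mean()
predicted_tail = stats.poisson(lam_hat).sf(threshold - 1)

print(f"fitted lambda             = {lam_hat:.2f}")
print(f"observed  P(score >= {threshold}) = {observed_tail:.4f}")
print(f"predicted P(score >= {threshold}) = {predicted_tail:.2e}")  # far too small
```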
Rolf addressed that point:
In particular notice that any deviations from Poisson are going to be in the direction that makes Rolf’s argument even stronger.
No, they’re not, not without even more baseless assumptions. The Poisson is not well-justified, and it’s not even conservative for Rolf’s argument. If there were a selection process in which the best pilots get to combat the most (a shocking proposition, I realize), then many more would cross the threshold of at least 1 kill than would be predicted if one incorrectly modeled kill rates as Poissons with averages. This is the sort of thing (multiple consecutive factors) which would generate other possible distributions like the lognormal, which appear all the time in human performance data like scientific publications. (‘...who swallow the camel of the likelihood function’.)
And this still doesn’t address my point that you cannot write off data you have not seen with a fully general counterargument—without very good reasons which Rolf has not done anything remotely like showing. You do not know whether that extremely low quoted rate is exactly what one would expect from pilots doing their level best to kill others without doing a lot more work to verify that a Poisson fits, what the rate parameter is, and what the distribution of pilot differences looks like; the final kill rate of pilots, just like soldiers, is the joint result of many things.