Of course. My point is that you focused a bit too much on the misciting instead of going for the quick kill and saying that they measure something underspecified.
Also, if you think that their main transgression is citing things wrong, the exact labels from the graphs you show seem a natural thing to include. I don’t expect you to tell us what they measured; I expect you to quote them precisely on that.
The main issue is that people just aren’t paying attention. My focus on citation stems from observing that a pair of parentheses, a name and a year seem to function, for a large number of people in my field, as a powerful narcotic suspending their critical reason.
I expect you to quote them precisely on that.
If this is a tu quoque argument, it is spectacularly mis-aimed.
as a powerful narcotic suspending their critical reason.
The distinction I made is about the level of suspension. It looks like people suspend their reasoning about whether statements have a well-defined meaning at all, not just their reasoning about the mere truth of the facts presented. I find the former far worse than the latter.
I expect you to quote them precisely on that.
If this is a tu quoque argument, it is spectacularly mis-aimed.
It is not about you; sorry for stating it slightly wrong. I thought about the unfortunate implications but found no good way to avoid them. I needed to contrast “copy” and “explain”.
I had no intention of saying you were being hypocritical, but the discussion started to depend on a short piece of data that you had but did not include, one that was (from my point of view) highly relevant. I actually was wrong about one of my assumptions about the original labels...
No offence taken.
As to your other question: I suspect that the first author to mis-cite Grady was Karl Wiegers, in his requirements book (from 2003 or 2004); he is also the author of the Serena paper listed above. A very nice person, by the way: he kindly sent me an electronic copy of the Grady presentation. At least he’s read it. I’m pretty damn sure that the secondary citations afterwards are from people who haven’t.
Well, if he has read the Grady paper and still cited it wrong, he most likely got his nice graph from somewhere… I wonder who first published this graph, and why.
About references: well, what discipline is not diseased like that? We are talking about something that people (rightly or wrongly) equate with common sense in the field. People want to cite some widely accepted statement that agrees with their perceived experience. And the deadline is nigh. If they find an article with such a result, they are happy. If they find a couple of articles referencing this result, they steal the citation. After all, who cares what to cite, everybody knows this, right?
I am not sure that the situation is significantly better even in maths. There are recent results where you know how to find a paper to reference, there are older results that can be found in university textbooks, and there is a middle ground where you either find something that looks like a good-enough reference or have to include a sketch of the proof. (I have done the latter for a relatively simple result in a maths article.)