There is a decent bit in Dugatkin & Reeve 1998 on this (emphasis mine):
I will define cooperation as follows: Cooperation is an outcome that—despite potential costs to individuals—is “good” (measured by some appropriate fitness measure) for the members of a group of two or more individuals and whose achievement requires some sort of collective action. But to cooperate can mean either to achieve that cooperation (something manifest at the group level) or to behave cooperatively—that is, to behave in a manner making cooperation possible (something the individual does), despite the fact that the cooperation will not actually be realized unless other group members have also behaved cooperatively. Here, to cooperate will always mean to behave cooperatively (as in Mesterton-Gibbons & Dugatkin 1992; Dugatkin et al. 1992; also see Stephens and Clements, this volume).
If someone’s definition of cooperation includes a proxy for the definition of the word “good,” then let’s start with a concrete definition of “good” and move forward from there.
To that end, I agree the term isn’t ontologically basal. Instead, I’d work on qualifying the state of the interaction and use cooperation as an inverse to an alternative. Is commensalism a type of cooperation?
These are all context-relative, though, so you could layer on complexity until you’re trying to dig at the genetic basis of ecological phenotypes (something you can arguably prove), but the model is only as useful as what it predicts.
Therefore, I don’t think implication (1) or (2) follows from the premise, even if it is correct.
To clarify: what do you mean by the premise and implications (1) and (2) here? (I am guessing that premise = text under the heading “Conjecture: …” and implications (1) or (2) = text under the heading “Implications”.)
😆 And just for fun, in relation to your footnote 6: I don’t know much about Dugatkin’s associations, but to the best of my knowledge Reeve is connected to the Santa Fe Institute through his collaboration with Bert Hölldobler, who is part of the SIRG at ASU.
Correct: I am suggesting that fuzzy concepts can and should be strictly defined mathematically, and that, within the limits of that mathematical definition, the concept should hold true generally within the scope it was constructed for.
To use a loose mathematical analogy: we can use the definition of a limit to arrive at precise constraints, then generate iff theorems like L’Hôpital’s rule to make it easier to digest. Cooperation in this case would be an iff theorem, with more basal concepts as the fallback. But for the model to be useful, the hypothesis of the theorem needs to genuinely imply the conclusion.
Edit: What I asserted in my last sentence isn’t strictly true. You could find utility in a faulty model if it is used for hypothesis generation and is very good at it.
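To spell the analogy out a bit (my sketch, not part of the original argument): the ε-δ definition is the basal concept, and L’Hôpital’s rule is the convenience theorem derived on top of it.

```latex
% Basal concept: the epsilon-delta definition of a limit
\lim_{x \to a} f(x) = L
\iff
\forall \varepsilon > 0 \;\, \exists \delta > 0 :
\; 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

% Derived convenience theorem (L'Hopital, 0/0 form): if f and g are
% differentiable near a, with \lim_{x \to a} f(x) = \lim_{x \to a} g(x) = 0,
% g'(x) \neq 0 near a, and the right-hand limit below exists, then
\lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f'(x)}{g'(x)}
```

On this reading, “cooperation” would play the role of the convenience theorem, with fitness-level definitions playing the role of the ε-δ machinery underneath it.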