Also it’s unclear to me what the connection is between this part and the second.
My bad, I did a poor job explaining that. The first part is about the problems of using generic words (evolution) with fuzzy decompositions (mates, predators, etc) to come to conclusions, which can often be incorrect. The second part is about decomposing those generic words into their implied structure, and matching that structure to problems in order to get a more reliable fit.
I don’t believe that “I don’t know” is a good answer, even if it’s often the correct one. People have vague intuitions regarding phenomena, and wouldn’t it be nice if they could apply those intuitions reliably? That requires a mapping from the intuition (evolution is responsible) to the problem, and the mapping can only be made reliable once the intuition has been properly decomposed into its implied structure, and even then, only if the mapping is based on the decomposition.
I started off by trying to explain all of that, but realized that there is far too much when starting from scratch. Maybe someday I’ll be able to write that post...
The cell example shows evolution being used to justify contradictory phenomena: the exact same justification is used for two opposing conclusions. If you thought there was nothing wrong with those two examples being used as they were, then there is something wrong with your model.
The second set of explanations has fewer, more reliably determinable dependencies, and its reasoning is more generally applicable.
That is correct: they have zero prediction and compression power. I would argue that the same can be said of many cases where people misuse evolution as an explanation.
When people falsely pretend to have knowledge of some underlying structure or correlate, they are (1) lying and (2) increasing noise, which by various definitions is negative information. When people use evolution as an explanation in cases where it does not align with the implications of evolution, they are doing so under a false pretense. My suggested approach (1) is honest and (2) conveys information about the lack of known underlying structure or correlate.
I don’t know what you mean by “sensible definition”. I have a model for that phrase, and yours doesn’t seem to align with mine.
Such as...?
Not for any sensible definition of the word “simpler”. They just overfit everything.
Yes, but zero prediction or compression power.
Again, not informative for any sensible definition of the word.
Seconded.