I think that, because culture is eventually very useful for fitness, you can think of the problem either as evolution not optimising for culture, or as evolution optimising for fitness badly. And these are roughly equivalent ways of thinking about it, just different framings. Paul notes this duality in his original post:
If we step back from skills and instead look at outcomes we could say: “Evolution is always optimizing for fitness, and humans have now taken over the world.” On this perspective, I’m making a claim about the limits of evolution. First, evolution is theoretically optimizing for fitness, but it isn’t able to look ahead and identify which skills will be most important for your children’s children’s children’s fitness. Second, human intelligence is incredibly good for the fitness of groups of humans, but evolution acts on individual humans for whom the effect size is much smaller (who barely benefit at all from passing knowledge on to the next generation).
It seems like most of your response is an objection to this framing. I may need to think more about the relative advantages and disadvantages of each framing, but I don’t think either is outright wrong.
What does “useful” mean here? If by “useful” you mean “improves an individual’s reproductive fitness”, then I disagree with the claim and I think that’s where the major disagreement is.
Yes, I meant useful for reproductive fitness. Sorry for the ambiguity.
I may need to think more about the relative advantages and disadvantages of each framing, but I don’t think either is outright wrong.
I agree it’s not wrong. I’m claiming it’s not a useful framing. If we must use this framing, I think humans and evolution are not remotely comparable on how good they are at long-term optimization, and I can’t understand why you think they are. (Humans may not be good at long-term optimization on some absolute scale, but they’re a hell of a lot better than evolution.)
I think in my example you could make a similar argument: looking at outcomes, you could say “Rohin is always optimizing for learning abstract algebra, and he has now become very good at abstract algebra.” It’s not wrong, it’s just not useful for predicting my future behavior, and doesn’t seem to carve reality at its joints.
(Tbc, I think this example is overstating the case, “evolution is always optimizing for fitness” is definitely more reasonable and more predictive than “Rohin is always optimizing for learning abstract algebra”.)
I really do think that the best thing is to just strip away agency, and talk about selection:
the argument is that evolution was not selecting for proto-culture / intelligence, whereas humans will select for proto-culture / intelligence
Re: usefulness:
Yes, I meant useful for reproductive fitness.
Suppose a specific monkey has some mutation and gets a little bit of proto-culture. Are you claiming that this will increase the number of children that monkey has?
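(For concreteness, here is a minimal sketch, not something either commenter wrote, of why the individual fitness effect is the crux under the selection framing. It uses a toy Wright-Fisher model; the population size, selection coefficients, and trial counts are arbitrary illustrative assumptions. The point it shows is standard population genetics: a new mutation reliably spreads only in proportion to the reproductive advantage it gives its carrier, so "a little bit of proto-culture" with a negligible effect on offspring count is barely selected for at all.)

```python
# Illustrative sketch only: a toy Wright-Fisher model, not anything from
# the original discussion. All parameter values below are assumptions.
import numpy as np

def fixation_rate(pop_size=1000, s=0.0, trials=2000, seed=0):
    """Fraction of trials in which a single new mutant with relative
    fitness 1 + s eventually takes over the whole population."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < pop_size:
            # Each offspring independently descends from a mutant parent
            # with probability proportional to the mutants' fitness.
            p = mutants * (1 + s) / (mutants * (1 + s) + (pop_size - mutants))
            mutants = rng.binomial(pop_size, p)
        fixed += int(mutants == pop_size)
    return fixed / trials

# A variant with essentially zero individual fitness benefit almost always
# drifts to extinction, while even a 5% benefit fixes far more often.
print(fixation_rate(s=0.0))   # ~ 1/pop_size ~ 0.001 (pure drift)
print(fixation_rate(s=0.05))  # ~ 2*s ~ 0.1 (Haldane's approximation)
```

In this toy model, selection only "sees" the individual reproductive effect of the trait, which is why the question of whether the mutation increases that specific monkey's number of children matters so much for the selection framing.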