I think it would be less misleading to say that many of our complex characteristics were instrumental goals for the evolutionary process as it hill-climbed the inclusive genetic fitness metric.
It’s hard to put it in a non-misleading way. If you simulate evolution as-is, you are wasting almost all of your time on bacteria. Evolution didn’t so much hill-climb as flood the entire valley. Edit: or rather, it predominantly wasn’t heading towards humans. If you want to optimize, you look at how evolution got to humans, and think about how to avoid doing the rest of it.
To clarify: are you actually suggesting that simulating just that subset of the evolutionary process that evolved humans and not the subset that evolved bacteria is a worthwhile strategy to explore towards achieving some goal? (If so, what goal?) Or do you mean this just as an illustration of a more general point?
As illustration, with a remark on practical approach. Seriously, the thing about evolution is that it doesn’t “reward fitness” either.
The agents compete, some are eliminated, and some are added after modification. It’s a lousy form of hill climbing, with a really lousy comparator (and no actual metric like ‘fitness’—just a comparator, which isn’t even a proper ordering: A may beat B, B may beat C, and C may beat A). But it produces variety, where the most complexly behaving agent behaves in more and more complex ways, all the way until it starts inventing puzzles and solving them. When one has a goal in mind, one can tweak the comparator to get there more efficiently. The goal can be as vague as “complex behaviour” if you know what sort of “complex” you want, or have an example. Problem solving doesn’t require defining everything precisely first.
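The compete/eliminate/modify loop with a non-transitive comparator can be sketched as follows. This is a toy illustration, not a model of real evolution: the rock-paper-scissors “strategies” and every name here are hypothetical, chosen only because they give the simplest comparator where A beats B, B beats C, and C beats A.

```python
import random

STRATEGIES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def compare(a, b):
    """Return 1 if a beats b, -1 if b beats a, 0 on a tie.

    Deliberately non-transitive: there is no underlying 'fitness'
    number this comparator is reading off.
    """
    if BEATS[a] == b:
        return 1
    if BEATS[b] == a:
        return -1
    return 0

def mutate(strategy, rng):
    # Modification step: occasionally switch to a random strategy.
    return rng.choice(STRATEGIES) if rng.random() < 0.1 else strategy

def evolve(population, generations, rng):
    for _ in range(generations):
        # Two agents compete; the loser is eliminated and replaced
        # by a (possibly mutated) copy of the winner.
        a, b = rng.sample(range(len(population)), 2)
        result = compare(population[a], population[b])
        if result == 0:
            continue
        winner, loser = (a, b) if result == 1 else (b, a)
        population[loser] = mutate(population[winner], rng)
    return population

rng = random.Random(0)
pop = evolve([rng.choice(STRATEGIES) for _ in range(20)], 500, rng)
```

Because the comparator cycles, no strategy ever dominates for good; the population keeps churning rather than converging on a single “fittest” agent, which is the point being made above.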
A few things:

Agreed that given a process for achieving a goal that involves a comparator with that goal as a target, one can often start with a very fuzzy comparator (for example, “complex behavior”) and keep refining it as one goes. That’s especially true in cases where the costs of getting it not-quite-right the first time are low relative to the benefits of subsequently getting it righter… e.g., this strategy works a lot better for finding a good place to have dinner than it does for landing a plane. (Though given a bad enough initial comparator for the former, it can also be pretty catastrophic.)
I infer that you have a referent for ‘fitness’ other than whatever it is that gets selected for by evolution. I have no idea what that referent is.
I think it’s misleading to refer to evolution having a comparator at all. At best it’s true only metaphorically. As you say, all evolution acts on is the result of various competitions.
You seem to be implying that evolution necessarily results in extremely complex puzzle-inventing systems. If I’ve understood that correctly, I disagree.