The solution (I posted it elsewhere also):
To solve a Rubik’s cube, you can just do hill climbing, with a breadth-first-ish search for a higher point on the hill (i.e. you find a higher point even if it is several moves away). This discovers the useful move sequences. Cache the sequences.
It’s a very general problem-solving method: hill climbing with N-move look-ahead. You try maximizing various metrics that are maximal in the final state, and find one that works for you without getting you stuck in a local maximum for too long. You also try various orders of iterating over the moves (e.g. one could opt for repetitive sequences).
This works for chess as well, and for pretty much all puzzles. This is how I solve puzzles when I get a puzzle for the first time, except of course I have terabytes’ worth of tricks that I can try, and 10^15-ish operations per second; parallel, of course, but parallel works. Pre-generating sequences is not necessary: you arrive at them when hill climbing with breadth-first search, and cache them. You also tell them to other people whom you want to turn into Rubik’s-cube solvers. The thing that can’t be stressed enough: try to figure out a good metric to climb. Some sides of the hill are smoother than others.
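The method described above can be sketched in a few lines. This is a minimal illustration on a toy puzzle (sorting a tuple by adjacent swaps), not an actual cube solver; the metric, the move set, and all names here are made up for the example:

```python
from collections import deque

def neighbors(state):
    """All states reachable by one move: here, swapping adjacent elements."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield tuple(s), i  # new state plus the move that produced it

def score(state):
    """The metric to climb: count of elements already in their final position."""
    return sum(1 for i, v in enumerate(state) if v == i)

def lookahead_climb(state, max_depth=3):
    """Hill climbing with breadth-first N-move look-ahead.

    From the current state, search breadth-first up to max_depth moves
    for ANY strictly higher-scoring state, jump there, and cache the
    move sequence that got us there. Repeat until solved or stuck."""
    cache = []  # the discovered useful sequences
    while score(state) < len(state):
        frontier = deque([(state, [])])
        seen = {state}
        found = None
        while frontier and found is None:
            s, path = frontier.popleft()
            if len(path) >= max_depth:
                continue
            for nxt, move in neighbors(s):
                if nxt in seen:
                    continue
                seen.add(nxt)
                if score(nxt) > score(state):
                    found = (nxt, path + [move])
                    break
                frontier.append((nxt, path + [move]))
        if found is None:
            return state, cache  # stuck on a local maximum
        state, seq = found
        cache.append(seq)
    return state, cache

solved, sequences = lookahead_climb((2, 0, 1, 4, 3))
```

The look-ahead is what distinguishes this from plain hill climbing: the climber can take several individually non-improving moves as long as the sequence as a whole goes uphill, which is exactly where the cacheable "tricks" come from.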
One could hill climb some sort of complexity metric; evolution did that to arrive at humans, even though bacteria are a better solution to ‘reproduction’. You only need a comparator for climbing, and comparators are easy: you can make agents fight (or you can make agents cooperate). You don’t need a mapping to real numbers. You can do evolutionary hill climbing with N-move look-ahead. edit: note that you do NOT need a good ordering for hill climbing either. If sometimes a>b and b>c and c>a, that is okay as long as you remember where you have already been and avoid looping. That may still get you to the top of the hill.
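The point about comparators can be made concrete. Below is a minimal sketch of climbing driven only by a pairwise comparator, with a visited set to avoid looping; the cyclic comparator in the usage example is deliberately intransitive (1 beats 0, 2 beats 1, 0 beats 2), and all names are invented for the illustration:

```python
def comparator_climb(start, neighbors, beats, max_steps=1000):
    """Hill climbing driven only by a pairwise comparator.

    `beats(a, b)` need not be a consistent ordering (a may beat b,
    b beat c, and c beat a); remembering visited states prevents
    looping even when the comparator has cycles."""
    state = start
    visited = {start}
    for _ in range(max_steps):
        for cand in neighbors(state):
            if cand not in visited and beats(cand, state):
                visited.add(cand)
                state = cand
                break
        else:
            return state  # no unvisited candidate beats the current state
    return state

# A deliberately cyclic comparator on states 0..2: 1 beats 0, 2 beats 1, 0 beats 2.
cycle_beats = lambda a, b: (a - b) % 3 == 1
ring_neighbors = lambda s: [(s + 1) % 3, (s - 1) % 3]
result = comparator_climb(0, ring_neighbors, cycle_beats)
```

Without the `visited` set this would cycle 0 → 1 → 2 → 0 forever; with it, the climb simply stops once every beating candidate has already been tried.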
I can’t understand what you mean. Surely you don’t mean that natural selection rewarded something besides inclusive genetic fitness.
It of course didn’t reward anything other than fitness. And the universe is not made of anything other than quarks etc. (or smaller things yet). Hello, fake-reductionist nihilism.
It just so happened, however, that rewarding fitness resulted in growing complexity of behaviour in the most complex organisms. You can hill climb by pouring liquid into a valley, if all you care about is getting some liquid onto the top of the hill: liquid behaves in a very complicated way, minimizing a very complicated metric, such that some of it ends up on the tops of hills by surface tension, even though most of it sits in the valleys and a single molecule would seek the valleys. Evolution doesn’t just lead to mankind. Evolution, for the most part, leads to better bacteria; mankind is a side effect of niche-filling. Remove all bacteria and single-celled organisms, and they would re-evolve from a human (the canine infectious cancer was once a dog).
I think it would be less misleading to say that many of our complex characteristics were instrumental goals for the evolutionary process as it hill-climbed the inclusive genetic fitness metric.
It’s hard to put it in a non-misleading way. If you simulate evolution as-is, you waste almost all of your time on bacteria. Evolution didn’t so much hill climb as flood the entire valley. edit: or rather, it predominantly wasn’t heading towards humans. If you want to optimize, you look at how it got to humans, and think about how to avoid doing the rest of it.
To clarify: are you actually suggesting that simulating just that subset of the evolutionary process that evolved humans and not the subset that evolved bacteria is a worthwhile strategy to explore towards achieving some goal? (If so, what goal?) Or do you mean this just as an illustration of a more general point?
As an illustration, with a remark on the practical approach. Seriously, the thing about evolution is that it doesn’t “reward fitness” either.
The agents compete, some are eliminated, some are added after modification; it’s a lousy form of hill climbing, with a really lousy comparator (and no actual metric like ‘fitness’, just a comparator that isn’t even a proper ordering, where A may beat B, B beat C, and C beat A), but it produces variety, in which the most complexly behaving agent behaves in more and more complex ways, all the way until it starts inventing puzzles and solving them. When one has a goal in mind, one can tweak the comparator to get there more efficiently. The goal can be as vague as “complex behaviour” if you know what sort of “complex” you want, or have an example. Problem solving doesn’t require defining things very precisely first.
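The compete-eliminate-modify loop described here can be sketched as a toy selection scheme that never computes a numeric fitness, only runs pairwise competitions. This is an illustrative sketch only; the agents (bit tuples), the competition rule, and every name below are assumptions made for the example:

```python
import random

def evolve(population, compete, mutate, generations=100, rng=None):
    """Lousy-but-workable evolutionary hill climbing: pair agents up,
    let them compete, eliminate the losers, and refill the population
    with mutated winners. `compete(a, b)` returns the winner; no
    numeric fitness is ever computed."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    for _ in range(generations):
        rng.shuffle(population)
        winners = [compete(a, b) for a, b in zip(population[::2], population[1::2])]
        population = winners + [mutate(w, rng) for w in winners]
    return population

# Toy agents: bit tuples; the "competition" favours agents with more 1s.
compete = lambda a, b: a if sum(a) >= sum(b) else b

def mutate(agent, rng):
    i = rng.randrange(len(agent))  # flip one random bit
    return agent[:i] + (1 - agent[i],) + agent[i + 1:]

final = evolve([(0,) * 6 for _ in range(8)], compete, mutate)
```

Because winners survive unmutated, the best agent is never lost, so the scheme climbs even though it only ever asks "who beats whom", and it would keep working with an intransitive competition rule as well.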
A few things:
Agreed that given a process for achieving a goal that involves a comparator with that goal as a target, one can often start with a very fuzzy comparator (for example, “complex behavior”) and keep refining it as one goes. That’s especially true in cases where the costs of getting it not-quite-right the first time are low relative to the benefits of subsequently getting it righter… e.g., this strategy works a lot better for finding a good place to have dinner than it does for landing a plane. (Though given a bad enough initial comparator for the former, it can also be pretty catastrophic.)
I infer that you have a referent for ‘fitness’ other than whatever it is that gets selected for by evolution. I have no idea what that referent is.
I think it’s misleading to refer to evolution having a comparator at all. At best it’s true only metaphorically. As you say, all evolution acts on is the result of various competitions.
You seem to be implying that evolution necessarily results in extremely complex puzzle-inventing systems. If I’ve understood that correctly, I disagree.