Would it be fair to say that your philosophy is similar to davidad’s? Both of you seem to ultimately value some hard-to-define measure of complexity. He thinks the best way to maximize complexity is to develop technology, whereas you think the best way is to preserve evolution.
I think that evolution will lead to a local maximum of complexity, which we can’t “help” it avoid. The reason is that the universe contains many environmental niches that are essentially duplicates of each other, leading to convergent evolution. For example Earth contains lots of species that are similar to each other, and within each species there’s huge amounts of redundancy. Evolution creates complexity, but not remotely close to maximum complexity. Imagine if each individual plant/animal had a radically different design, which would be possible if they weren’t constrained by “survival of the fittest”.
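The local-maximum claim can be made concrete with a toy hill-climbing sketch (my illustration, not anyone's actual model of evolution): greedy ascent on a landscape of many identical small peaks, the analogue of duplicated niches, converges to the nearest small peak from wherever it starts and never finds the single tall peak elsewhere.

```python
import math

# Toy sketch (a stand-in for selection, NOT a model of real evolution):
# a landscape of many identical small peaks ("duplicated niches") plus one
# tall global peak that greedy ascent never finds.

def fitness(x: float) -> float:
    local_bumps = math.sin(x)  # repeated, interchangeable peaks
    global_peak = 10.0 if abs(x - 50.0) < 0.5 else 0.0
    return local_bumps + global_peak

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy ascent: move toward the better neighbor until stuck."""
    for _ in range(iters):
        if fitness(x + step) > fitness(x):
            x += step
        elif fitness(x - step) > fitness(x):
            x -= step
        else:
            break  # stuck on a local maximum
    return x

# Runs started all over the landscape each settle on the nearest small peak,
# the analogue of convergent evolution; none reaches the peak at x = 50.
results = [hill_climb(x0) for x0 in (0.0, 6.0, 12.0, 19.0)]
```

Every starting point ends on a sine bump of height about 1, so the "population" converges on near-duplicate solutions while the global optimum goes unvisited.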
Whereas the entire purpose of FAI is to trap the universe in a local maximum.
Huh? The purpose of FAI is to achieve the global maximum of whatever utility function we give it. If that utility function contains a term for “complexity”, which seems plausible given people like you and davidad (and even I’d probably prefer greater complexity to less, all else being equal), then it ought to at least get somewhat close to the global complexity maximum (since the constraint of simultaneously trying to maximize other values doesn’t seem too burdensome, unless there are people who actively disvalue complexity).
The reason is that the universe contains many environmental niches that are essentially duplicates of each other, leading to convergent evolution. For example Earth contains lots of species that are similar to each other, and within each species there’s huge amounts of redundancy.
There’s often a deceptive amount of difference, some of it very fundamental, hiding inside those convergent similarities, and that’s because “convergent evolution” is in the eye of the beholder, and mostly restricted to surface-level analogies between some basic functions.
Consider pangolins and echidnas. Pretty much the same, right? Oh sure, one’s built on a placental framework and the other a monotreme one, but they’ve developed the same basic tools: long tongues, powerful digging claws, keratinous spines/sharp plates… not much scope for variance there, at least not of a sort that’d interest a lay person, surely.
Well, actually they’re quite different. It’s not just that echidnas lay eggs and pangolins birth live young, or that pangolins tend to climb trees and echidnas tend to burrow. Echidnas have more going on upstairs, so to speak—their brains are about 50% neocortex (compared with about 30% for a human) and they are notoriously clever. Among people who work with wild populations they’re known for being basically impossible to trap, even when appropriate bait is available. In at least one case a researcher who’d captured several (you essentially have to grab them when you find them) left them in a cage they couldn’t dig out of, only to find in the morning that they’d stacked up their water dishes and climbed out the top. There is evidence that they communicate infrasonically in a manner similar to elephants, and they are known to be able to sense electric fields.
My point here isn’t “Echidnas are awesome!”; it’s that the richness of behavior and intelligence they display is not mirrored in pangolins, which share the same niche and many convergent adaptations. To a person with no more than a passing familiarity, the two would be hard to distinguish on a functional level, since their most obvious, surface-visible traits are very similar and the differences seem minor. Look at them in depth, though, and they’re quite different, and the significance of those “convergent” traits diminishes in the face of much more salient differences between the two groups of animals.
Short version: superficial similarities are very often only that, especially in the world of biology. Often they do have some inferential value, but there are limits on that.
Evolution creates complexity, but not remotely close to maximum complexity. Imagine if each individual plant/animal had a radically different design, which would be possible if they weren’t constrained by “survival of the fittest”.
This is true, but I favor systems that can evolve, because they are evolutionarily stable. Systems that aren’t are likely to be unstable and vulnerable to collapse, and typically have the ethically undesirable property of punishing “virtuous behavior” within that system.
Huh? The purpose of FAI is to achieve the global maximum of whatever utility function we give it.
True. I spoke imprecisely. Life is increasing in complexity, in a meaningful way that is not the same as the negative of entropy, and which I feel comfortable calling “progress” despite Stephen Jay Gould’s strident imposition of his sociological agenda onto biology. This is the thing I’m talking about maximizing. Whatever utility function an FAI is given will only involve concepts that we already have, which represent a small fraction of possible concepts; and so a universe optimized by that FAI won’t keep increasing in complexity in that way.
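The claim that this kind of complexity is not simply the negative of entropy can be made a bit more concrete (my own illustration, not the commenter's, using compressed length as a crude proxy for algorithmic complexity): a maximum-entropy random string scores highest on that proxy while being, intuitively, the least interesting of the bunch, so neither entropy nor its negative captures what is supposedly being maximized.

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    # Compressed length as a crude proxy for algorithmic (Kolmogorov) complexity.
    return len(zlib.compress(s, level=9))

random.seed(0)
ordered = b"A" * 10_000                                      # minimal entropy, no structure
patterned = b"ATCG" * 2_500                                  # low entropy, simple structure
noise = bytes(random.randrange(256) for _ in range(10_000))  # near-maximal entropy

sizes = {name: compressed_size(s)
         for name, s in [("ordered", ordered), ("patterned", patterned), ("noise", noise)]}
# The random string compresses worst, i.e. it scores highest on this proxy,
# yet it is intuitively the *least* interesting of the three. The kind of
# complexity under discussion lives somewhere between pure order and pure noise.
```

The ordered and patterned strings collapse to a few hundred bytes while the noise stays near its original 10,000, which is why "complexity" in the interesting sense can't be read off from entropy alone.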