I look at the past, and see that the dominant life forms have grown more complex and more interesting, and I expect this trend to continue. The best guide I have to what future life-forms will be like compared to me, if allowed to evolve naturally, is to consider what I am like compared to a fruit fly, or to bacteria.
If you object that of course I will value myself more highly than I value a bacterium, and that I fail to adequately respect bacterial values, I can compare an algae to an oak tree. The algae is more closely-related to me; yet I still consider the oak tree a grander life form, and would rather see a world with algae and oak trees than one with only algae.
(It’s also possible that life does not naturally progress indefinitely, but that developing intelligence and societies inevitably leads to collapse and extinction. That would be an argument in favor of FAI, but it’s a little farther down the road from where our thoughts are so far, I think.)
If you like, I can say that I value complexity, and then build an FAI that maximizes some complexity measure. That’s what I meant when I said that I object less to FAI if you go meta. I know that some people in SIAI give this response, that I am not going meta enough if I’m not happy with FAI; but in their writings and discussions other than when dealing with that particular argument, they don’t usually go that meta. Seriously adopting that view would result in discussions of what our high-level values really are, which I have not seen.
My attitude is, The universe was doing amazingly well before I got here; instead of trusting myself to do some incredibly complex philosophical work error-free, I should try to help it keep on doing what it’s been doing, and just help it avoid getting trapped in a local maximum. Whereas the entire purpose of FAI is to trap the universe in a local maximum.
Would it be fair to say that your philosophy is similar to davidad’s? Both of you seem to ultimately value some hard-to-define measure of complexity. He thinks the best way to maximize complexity is to develop technology, whereas you think the best way is to preserve evolution.
I think that evolution will lead to a local maximum of complexity, which we can’t “help” it avoid. The reason is that the universe contains many environmental niches that are essentially duplicates of each other, leading to convergent evolution. For example Earth contains lots of species that are similar to each other, and within each species there’s huge amounts of redundancy. Evolution creates complexity, but not remotely close to maximum complexity. Imagine if each individual plant/animal had a radically different design, which would be possible if they weren’t constrained by “survival of the fittest”.
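To make the redundancy point concrete, here is a rough sketch using compressed size as a crude stand-in for descriptive complexity (random byte strings playing the role of hypothetical designs, so this is only an illustration): a biosphere made of near-copies carries far less total information than one made of genuinely distinct designs.

```python
import os
import zlib

# Crude illustration only: compressed length as a rough proxy for descriptive
# complexity; random byte strings as stand-ins for hypothetical "designs".
design = os.urandom(1000)                                   # one hypothetical design
copies = design * 100                                       # 100 near-identical organisms
distinct = b"".join(os.urandom(1000) for _ in range(100))   # 100 unrelated designs

print(len(zlib.compress(copies)))    # small: the redundancy across copies compresses away
print(len(zlib.compress(distinct)))  # ~100,000 bytes: each distinct design adds new information
```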
Whereas the entire purpose of FAI is to trap the universe in a local maximum.
Huh? The purpose of FAI is to achieve the global maximum of whatever utility function we give it. If that utility function contains a term for “complexity”, which seems plausible given people like you and davidad (and even I’d probably prefer greater complexity to less, all else being equal), then it ought to at least get somewhat close to the global complexity maximum (since the constraint of simultaneously trying to maximize other values doesn’t seem too burdensome, unless there are people who actively disvalue complexity).
The reason is that the universe contains many environmental niches that are essentially duplicates of each other, leading to convergent evolution. For example Earth contains lots of species that are similar to each other, and within each species there’s huge amounts of redundancy.
There’s often a deceptive amount of difference, some of it very fundamental, hiding inside those convergent similarities, and that’s because “convergent evolution” is in the eye of the beholder, and mostly restricted to surface-level analogies between some basic functions.
Consider pangolins and echidnas. Pretty much the same, right? Oh sure, one’s built on a placental framework and the other a monotreme one, but they’ve developed the same basic tools: long tongues, powerful digging claws, keratinous spines/sharp plates… not much scope for variance there, at least not of a sort that’d interest a lay person, surely.
Well, actually they’re quite different. It’s not just that echidnas lay eggs and pangolins birth live young, or that pangolins tend to climb trees and echidnas tend to burrow. Echidnas have more going on upstairs, so to speak—their brains are about 50% neocortex (compare 30% for a human) and they are notoriously clever. Among people who work with wild populations they’re known for being basically impossible to trap, even when appropriate bait can be set up. In at least one case a researcher who’d captured several (you essentially have to grab them when you find them) left them in a cage they couldn’t dig out of, only to find in the morning they’d stacked up their water dishes and climbed out the top. There is evidence that they communicate infrasonically in a manner similar to elephants, and they are known to be sensitive to electricity.
My point here isn’t “Echidnas are awesome!”, my point is that the richness of behavior and intelligence that they display is not mirrored in pangolins, who share the same niche and many convergent adaptations. To a person with no more than a passing familiarity, they’d be hard to distinguish on a functional level since their most obvious, surface-visible traits are very similar and the differences seem minor. If you get an in-depth look at them, they’re quite different, and the significance of those “convergent” traits diminishes in the face of much more salient differences between the two groups of animals.
Short version: superficial similarities are very often only that, especially in the world of biology. Often they do have some inferential value, but there are limits on that.
Evolution creates complexity, but not remotely close to maximum complexity. Imagine if each individual plant/animal had a radically different design, which would be possible if they weren’t constrained by “survival of the fittest”.
This is true; but I favor systems that can evolve, because they are evolutionarily stable. Systems that can’t are likely to be unstable and vulnerable to collapse, and typically have the ethically undesirable property of punishing “virtuous behavior” within that system.
Huh? The purpose of FAI is to achieve the global maximum of whatever utility function we give it.
True. I spoke imprecisely. Life is increasing in complexity, in a meaningful way that is not the same as the negative of entropy, and which I feel comfortable calling “progress” despite Stephen Jay Gould’s strident imposition of his sociological agenda onto biology. This is the thing I’m talking about maximizing. Whatever utility function an FAI is given, it’s only going to involve concepts that we already have, which represent a small fraction of possible concepts; and so it’s not going to keep increasing as much in that way.
The best guide I have to what future life-forms will be like compared to me, if allowed to evolve naturally, is to consider what I am like compared to a fruit fly, or to bacteria.
This is true but not relevant. It suggests that future life forms will be much more complex, intelligent, powerful in changing the physical universe on many scales, good at out-competing (or predating on) other species to the point of driving them to extinction. You might also add differences between yourself and flies (and bacteria) like “future life forms will be a lot bigger and longer-lived”, or you might consider those incidental because you don’t value them as much.
But none of that implies anything about the future life-forms’ values, except that they will be selfish to the exclusion of other species which are not useful or beautiful to them, so that old-style humans will be endangered. It doesn’t imply anything that would cause me to expect to value these future species more than I value today’s nonhuman species, let alone today’s humans.
If you object that of course I will value myself more highly than I value a bacterium, and that I fail to adequately respect bacterial values, I can compare an algae to an oak tree. The algae is more closely-related to me; yet I still consider the oak tree a grander life form, and would rather see a world with algae and oak trees than one with only algae.
So you value other life-forms proportionally to how similar they are to you, and an important component of that is some measure of complexity, plus your sense of aesthetics (grandeur). You don’t value evolutionary relatedness highly. I feel the same way (I value a cat much more than a bat (edit: or rat)), but so what? I don’t see how this logically implies that new lifeforms that will exist in the future, and their new values, are more likely than not to be valued by us (if we live long enough to see them).
It’s also possible that life does not naturally progress indefinitely
Life may keep changing indefinitely, barring a total extinction. But that constant change isn’t “progress” by any fixed set of values because evolution has no long term goal.
Apart from the nonexistence of humans, who are unique in their intelligence/self-consciousness/tool-use/etc., life on Earth was apparently just as diverse and grand and beautiful hundreds of millions of years ago as it is today. There’s been a lot of change, but no progress in terms of complexity before the very quick evolution of humans. If I were to choose between this world, and a world with humans but otherwise the species of 10, 100, or 300 million years ago, I don’t feel that today’s biosphere is somehow better. So I don’t feel a hypothetical biosphere of 300 million years in the future would likely be better than today’s on my existing values. And I don’t understand why you do.
If you like, I can say that I value complexity
Do you really value complexity for its own sake? Or do you value it for the sake of the outcomes (such as intelligence) which it helps produce?
If you are offered prosthetic arms that look and feel just like human ones but work much better in many respects, you might accept them or not, but I doubt the ground for your objection would be that the biological version is much more complex.
build an FAI that maximizes some complexity measure.
Could you explain what kind of complexity measure you have in mind? For instance, info-theoretical complexity (~ entropy) is maximized by a black hole, and is greatly increased just by a good random number generator. Surely that’s not what you mean.
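To see the difficulty concretely, here is a minimal sketch using plain Shannon entropy per byte (nothing more sophisticated than that): the output of a good random number generator sits at the ceiling of this measure, while structured English text scores much lower, even though the text is the more interesting object.

```python
import math
import os
from collections import Counter

def entropy_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (max 8.0)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

noise = os.urandom(100_000)                                    # output of a good RNG
text = b"the quick brown fox jumps over the lazy dog " * 2300  # structured, repetitive text

print(entropy_per_byte(noise))  # ~8.0: "maximally complex" by this measure
print(entropy_per_byte(text))   # ~4: much lower, despite being far more organized
```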
Bats are no longer thought to be that closely related to us. In particular, cats and bats are both Laurasiatheria, whereas we are Euarchontoglires. On the other hand, mice are Euarchontoglires too.
Apart from the nonexistence of humans, who are unique in their intelligence/self-consciousness/tool-use/etc., life on Earth was apparently just as diverse and grand and beautiful hundreds of millions of years ago as it is today.
Bats are no longer thought to be that closely related to us.
Thanks! I appreciate this updating of my trivial knowledge.
Will change to: I value a cat much more than a rat.
You might want to reduce that number by an order of magnitude.
I meant times as old as, say, 200-300 Mya. The End-Permian extinction sits rather unfortunately right in the middle of that, but I think both before it and after sufficient recovery (say 200 Mya) there was plenty of diversity and beauty around.
Is there a blog or other net news source you’d recommend for learning about changes like “we’re no longer closely related to bats, we’re really something-something-glires”? They seem to be coming more and more frequently lately.
I just browse aimlessly around Wikipedia when I’m bored, and a couple months ago I ended up reading about the taxonomy of pretty much every major vertebrate group. (I’ve also stumbled upon http://3lbmonkeybrain.blogspot.it/, but it doesn’t seem to be updated terribly often these days.)
I don’t think you’re getting what I’m saying. Let me state it in FAI-type terms:
I have already figured out my values precisely enough to implement my own preferred FAI: I want evolution to continue. If we put that value into an FAI, then, okay.
But instead, people always try to think along the lines of enumerating values like “happiness”, “love”, “physical pleasure”, and so forth.
Building an FAI to maximize values defined at that level of abstraction would be a disaster. Building an FAI to maximize values at the higher level of abstraction would be kind of pointless, since the universe is already doing that anyway, and our FAI is more likely to screw it up than to save it.
Could you explain what kind of complexity measure you have in mind? For instance, info-theoretical complexity (~ entropy) is maximized by a black hole, and is greatly increased just by a good random number generator. Surely that’s not what you mean.
People have dealt with this enough that I don’t think you’re really objecting that what I’m saying is unclear; you’re objecting that I don’t have a mathematical definition of it. True. But pointing to evolution as an example suffices to show that I’m talking about something sensible and real. Evolution increases some measure of complexity, and not randomness.
I have already figured out my values precisely enough to implement my own preferred FAI: I want evolution to continue. If we put that value into an FAI, then, okay.
So, I kind of infer from what you’ve said elsewhere that you don’t endorse all possible evolutions equally. That is, when you say “evolution continues” you mean something rather more specific than that… continuing in a particular direction, leading to greater and greater amounts of whatever-it-is-that-evolution-currently-optimizes-for (this “complexity measure” cited above), rather than greater and greater amounts of anything else.
And I kind of infer that the reason you prefer that is because it has historically done better at producing results you endorse than any human-engineered process has or could reasonably be expected to have, and you see no reason to expect that state to change; therefore you expect that for the foreseeable future the process of evolution will continue to produce results that you endorse, or at least that you would endorse, or at the very least that you ought to endorse.
Did I get that right?
Are you actually saying that simpler systems don’t ever evolve from more complex ones? Or merely that when that happens, the evolutionary process that led to it isn’t the kind of evolutionary process you’re endorsing here? Or something else?
So, I kind of infer from what you’ve said elsewhere that you don’t endorse all possible evolutions equally. That is, when you say “evolution continues” you mean something rather more specific than that… continuing in a particular direction, leading to greater and greater amounts of whatever-it-is-that-evolution-currently-optimizes-for (this “complexity measure” cited above), rather than greater and greater amounts of anything else.
I don’t understand your distinction between “all possible evolutions” and “whatever-it-is-that-evolution-currently-optimizes-for”. There are possible courses of evolution that I don’t think I would like, such as universes in which intelligence is eliminated. When thinking about how to optimize the future, I think of probability distributions.
And I kind of infer that the reason you prefer that is because it has historically done better at producing results you endorse than any human-engineered process has or could reasonably be expected to have, and you see no reason to expect that state to change; therefore you expect that for the foreseeable future the process of evolution will continue to produce results that you endorse, or at least that you would endorse, or at the very least that you ought to endorse.
Yes! Though I would say, “it has historically done better at producing results I endorse, starting from point X, than any process engineered by organisms existing at point X could reasonably be expected to have.”
Are you actually saying that simpler systems don’t ever evolve from more complex ones?
No. It happens all the time. The simplest systems, viruses and mycoplasmas, can exist only when embedded in more complex systems—although maybe they don’t count as systems for that reason. OTOH, there must have been life forms even simpler at one time, and we see no evidence of them now. For some reason the lower bound on possible life complexity has increased over time—possibly just once, a long time ago.
Or merely that when that happens, the evolutionary process that led to it isn’t the kind of evolutionary process you’re endorsing here? Or something else?
Two “something else” options are (A) merely widening the distribution, without increasing average complexity, would be more interesting to me, and (B) simple organisms appear to be necessary parts of a complex ecosystem, perhaps like simple components are necessary parts of a complex machine.
I think I see… so it’s not the complexity of individual organisms that you value, necessarily, but rather the overall complexity of the biosphere? That is, if system A grows simpler over time and system B grows more complex, it’s not that you value the process that leads to B but not the process that leads to A, but rather that you value the process that leads to (A and B). Yes?