If we ignore the possibility of future life arising again after human extinction, a paperclipper seems (maybe, a bit) better than extinction, because of the possibility of acausal trade between the paperclipper and human values (see this comment and the preceding discussion).
The value of possible future life arising by chance is probably discounted by the fragility of value (alien values might not be much better than a paperclipper’s), the risk of it not arising at all or being squashed by its own existential risks (the Fermi paradox), the risk of it also losing its values (e.g. to a UFAI), the astronomical waste of not optimizing the universe in the meantime, and possibly time discounting of the (very distant) future.
(All of this discounting might still be smaller than what it takes for acausal trade to work out, so it’s not clear which choice is better. A cleaner question would compare a paperclipper with a sterile universe.)
I really wanted to ask that question, but I’m not actually very confident in my estimate of how sterile our own universe is, over the long term, so I’m afraid that I waffled a bit.
Some people reasonably think that value is simple and robust. Alien life will likely tend to share many of the more universal of our values, for example the “epistemic” values underlying development of science. ETA: Wow downvotes, gotta love them :-)
The default assumption around here is that value is complex and fragile. If you think you have a strong argument to the contrary, have you considered posting on it? Even if you don’t want to endorse the position, you could still do a decent devil’s-advocate steelman of it.
EDIT: having read the linked article, it doesn’t say what you seem to think it does. It’s arguing Friendliness is simpler than we think, not that arbitrary minds will converge on it.
In my opinion [i.e. it is my guess that], the value structures and considerations developed by alien evolved civilizations are likely to be similar to, and partially inter-translatable with, our own, in a manner akin to how their scientific theories and even the languages of their social life are likely to be inter-translatable (perhaps less similar than for scientific theories, more similar than for social languages).
Well, I guess it comes down to the evolutionary niches that produce intelligence and morality, doesn’t it? There doesn’t seem to be any single widely-accepted answer for either of them, although there are plenty of theories, some of which overlap and some of which don’t.
Then again, we don’t even know how different they would be biologically, so I’m unwilling to make any confident pronouncement myself, other than professing skepticism toward the particularly extreme ends of the scale. (Aliens would be humanoid because only humans evolved intelligence!)
Anyway, do you think the arguments for your position are, well, strong? Referring to it as an “opinion” suggests not, but also suggests the arguments for the other side must be similarly weak, right? So maybe you could write about that.
I appeal to (1) the consideration of whether the inter-translatability of science, and the valuing of certain theories over others, depends on the initial conditions of the civilization that develops it; (2) the universality of decision-theoretic and game-theoretic situations; (3) the evolutionary value of versatility hinting at an evolved value of diversity.
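To illustrate point (2) — this toy example is my own, not the commenter’s — the payoff structure of a Prisoner’s Dilemma is defined purely game-theoretically, so any evolved agents that face repeated interactions with these incentives confront the same strategic situation, whatever their biology:

```python
# Toy sketch: the Prisoner's Dilemma payoff matrix depends only on the
# incentive structure, not on who the players are. C = cooperate, D = defect.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over repeated rounds."""
    score_a = score_b = 0
    last_a, last_b = "C", "C"  # both treated as having cooperated before round 1
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

tit_for_tat = lambda opponents_last: opponents_last
always_defect = lambda opponents_last: "D"

print(play(tit_for_tat, tit_for_tat))      # mutual cooperation: (300, 300)
print(play(always_defect, always_defect))  # mutual defection: (100, 100)
```

Nothing here refers to the agents’ substrate, which is the sense in which such situations are “universal”.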
Not sure what 1 and 3 refer to, but 2 is conditional on a specific theory of origin for morality, right? A plausible one, to be sure, but by no means settled or demonstrated.
My point is that the origin of values, the initial conditions, is not the sole criterion for determining whether a culture appreciates given values. There can be convergence or “discovery” of values.
Oh, do you mean that even quite alien beings might want to deal with us?
No, I mean that we might give a shit even about quite alien beings.
For some value of “similar”, I agree. Aliens as ‘alien’ as the Babyeaters or the Superhappies don’t sound terribly implausible to me, but it’d be extremely hard for me to imagine anything like the Pebblesorters actually existing.
Do you think that CEV-generating mechanisms are negotiable across species? I.e., would other species have a concept of CEV, and would they agree to at least some of the mechanisms that generate one? That would let us determine which differences are reconcilable and where we have to agree to disagree.
Is babyeating necessarily in the Babyeaters’ CEV? Which of our developments (dropping slavery, no longer admiring Sparta, etc.) were in our CEV “from the beginning”? Perhaps the dynamics has some degree of convergence, even if with more than one basin of attraction.
People disagree about that, and given that it has political implications (google for “moral progress”) I dare no longer even speculate about that.
I agree with your premise; I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for a notion of progress; there can be multiple “basins of attraction”. Some of the dynamics consists of decreasing inconsistencies and increasing robustness.
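The “multiple basins of attraction” picture can be sketched with a toy dynamical system (my illustration, not the commenter’s): every trajectory makes monotone “progress” by descending a double-well landscape, yet where it ends up depends on where it started.

```python
# Toy sketch: gradient descent on f(x) = (x**2 - 1)**2, a double-well landscape.
# f decreases along every trajectory (monotone "progress"), but there are two
# basins of attraction: starts with x < 0 settle near -1, starts with x > 0
# settle near +1. Progress need not imply a single shared endpoint.

def settle(x, step=0.01, iters=5000):
    """Follow the negative gradient of f; f'(x) = 4*x*(x**2 - 1)."""
    for _ in range(iters):
        x -= step * 4 * x * (x * x - 1)
    return x

print(round(settle(-0.3), 3))  # settles near -1.0
print(round(settle(0.7), 3))   # settles near +1.0
```

The analogy is loose, of course: it only shows that “decreasing inconsistencies” is compatible with more than one stable endpoint.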
Another point is that value (actually, a structure of values) shouldn’t be confused with a way of life. Values are abstractions: various notions of beauty, curiosity, elegance, so-called warmheartedness… The exact meaning of any particular such term is not a metaphysical entity, so it is difficult to claim that an identical term is instantiated across different cultures / ways of life. But there can be very good translations that map such terms onto a different way of life (and back). ETA: there are multiple ways of life within our own cultures; a person can change her way of life by pursuing a different profession or a different hobby.
Values ultimately have to map to the real world, though, even if it’s in a complicated way. If something wants the same world as me to exist, I’m not fussed as to what it calls the reason. But how likely is it that they will converge? That’s what matters.
I presume by “the same world” you mean a sufficiently overlapping class of worlds; I don’t think “the same world” is well defined. I also think that determining, in particular cases, which “world” you want affects who you are.
Well, I suppose in practice it’s a question of short-term instrumental goals overlapping, yeah.
Today’s relevant SMBC comic
I swear, that guy is spying on LW. He’s watching us right now. Make a comic about THAT! shakes fist