It is not irrelevant. You said, “With those two conditions, the negative parts of human values are entirely eliminated.” That certainly meant that things like ISIS opinions would be eliminated. I agree in that particular case, but there are many other things that you would consider negative which will not be eliminated. I can probably guess some of them, although I won’t do that here.
See my other comment for more clarification on how CEV would eliminate negative values.

I read that. You say there, “Your stated example was ISIS. ISIS is so bad because they incorrectly believe… If they knew all the arguments for and against religion, then their values would be more like ours.” As I said, I agree with you in that case. But you are indeed saying, “it is because I am right and when they know better they will know I was right.” And that will not always be true, even if it is true in that case.
I never claimed I am right about everything. I don’t need to be right about everything. I would love to have an AI show me what I am wrong about and show me the perfect set of values.
And most importantly, I’m saying that this process would result in the optimal set of values for everyone. Do you disagree?
Yes, I disagree. I think that “babyeater values are different from human values” differs only in degree from “my values are different from your values.” I do not think there is a reasonable chance that I will turn out to be wrong about this, just like there is no reasonable chance that if we measure our heights with sufficient accuracy, we will turn out to have exactly the same height. This is still another reason why we should speak of “babyeater morality” and “human morality,” namely because if morality is inconsistent with variety, then morality does not exist.
That said, I already said that I would not be willing to wipe out non-human values from the cosmos, and likewise I have no interest in imposing my personal values on everything else. I think these are really the same thing, and in that sense wanting to impose a CEV on the universe is being a “racist” in relation to human beings vs other intelligent beings.
People may have different values (although I think deep down we are very similar, humans sharing all the same brains and not having that much diversity.) Regardless, CEV should find the best possible compromise between our different values. That’s literally the whole point.
If there is a difference in our values, the AI will find the compromise that satisfies us the most (or dissatisfies us the least.) There is no alternative, besides not compromising at all and just taking the values of a single random person. From behind the veil of ignorance, the first is definitely preferable.
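To make the veil-of-ignorance comparison concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: people and “world options” are just points on a line, and the one-dimensional “satisfaction” function is made up; none of this comes from any actual CEV specification. It compares a “best compromise” option, chosen so that the most-dissatisfied person is dissatisfied the least, with the expected outcome of adopting one randomly chosen person’s ideal. The point is only to illustrate the comparison, not to prove anything about real values.

```python
# Toy illustration only: people and "world options" are points on a line,
# and a person's satisfaction with an option falls off with distance from
# their ideal point. All numbers and functions here are made up.
import random
import statistics

random.seed(0)

N_PEOPLE = 5
N_OPTIONS = 50  # candidate "world settings" to choose among

ideals = [random.uniform(0, 1) for _ in range(N_PEOPLE)]
options = [random.uniform(0, 1) for _ in range(N_OPTIONS)]

def satisfaction(ideal, option):
    """1.0 means the option matches this person's ideal exactly."""
    return 1.0 - abs(ideal - option)

# "Best compromise": the option whose least satisfied person is as satisfied
# as possible (dissatisfy the most-dissatisfied person the least).
compromise = max(options, key=lambda o: min(satisfaction(p, o) for p in ideals))

# "Random dictator": impose one randomly chosen person's ideal on everyone.
# Behind the veil of ignorance you don't know whose ideal gets picked, so we
# average over all possible dictators.
random_dictator_expected = statistics.mean(
    statistics.mean(satisfaction(p, dictator) for p in ideals)
    for dictator in ideals
)

print("compromise: mean satisfaction =",
      round(statistics.mean(satisfaction(p, compromise) for p in ideals), 3))
print("compromise: worst-off satisfaction =",
      round(min(satisfaction(p, compromise) for p in ideals), 3))
print("random dictator: expected satisfaction behind the veil =",
      round(random_dictator_expected, 3))
```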
I don’t think this will be so bad. Because I don’t think our values diverge so much, or that decent compromises are impossible between most values. I imagine that in the worst case, the compromise will be that two groups with different values will have to go their separate ways. Live on opposite sides of the world, never interact, and do their own thing. That’s not so bad, and a post-singularity future will have more than enough resources to support it.
“That said, I already said that I would not be willing to wipe out non-human values from the cosmos.”
No one is suggesting we wipe out non-human values. But we have yet to meet any intelligent aliens with different values. Once we do so, we may very well just apply CEV to them and get the best compromise of our values again. Or we may keep our own values, but still allow them to live separately and do their own thing, because we value their existence.
This reminds me a lot of the post “Value is Fragile.” It’s ok to want a future that has different beings in it, that are totally different than humans. That doesn’t violate my values at all. But I don’t want a future that has beings die or suffer involuntarily. I don’t think it’s “value racist” to want to stop beings that do value that.
“Once we do so, we may very well just apply CEV to them and get the best compromise of our values again. Or we may keep our own values, but still allow them to live separately and do their own thing, because we value their existence.”
The problem I have with what you are saying is that these are two different things. And if they are two different things in the case of the aliens, they are two different things in the case of the humans.
The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person’s fundamental values. Eliezer agrees this is true in the case of the aliens, but he does not seem to notice that it would also be true in the case of the humans.
In any case, I choose in advance to keep my own values, not to participate in changing my fundamental values. But I am also not going to impose those on anyone else. If you define CEV to mean “the best possible way to keep your values completely intact and still not impose them on anyone else,” then I would agree with it, but only because we will be stipulating the desired conclusion.
That does not necessarily mean “living separately”. Even now I live with people who, in every noticeable way, have values that are fundamentally different from mine. That does not mean that we have to live separately.
In regard to the last point, you are saying that you don’t want to eliminate all potential aliens, but you want to eliminate ones with values that you really dislike. I think that is basically racist.
There is some truth in it, however, insofar as in reality, for reasons I have been saying, beings that have fundamental desires for others to suffer and die are very unlikely indeed, and any such desires are likely to be radically qualified. To that degree you are somewhat right: desires like that are in fact evil. But because they are evil, they cannot exist.
“The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person’s fundamental values.”
The world we live in is “immoral” in that it’s not optimized towards anyone’s values. Taking a single person’s values would be “immoral” to everyone else. CEV, finding the best possible compromise of values, would be the least immoral option, on average. Optimize the world in a way that dissatisfies the fewest people by the least amount.
“That does not necessarily mean ‘living separately’.”
Right. I said that’s the realistic worst case, when no compromise is possible. I think most people have similar enough values that this would be rare.
“you want to eliminate ones with values that you really dislike. I think that is basically racist.”
I don’t necessarily want to kill them, but I would definitely stop them from hurting other beings. Imagine you came upon a race of aliens that practiced a very cruel form of slavery. Say 90% of their population was slaves, and the slave-owning class regularly tortured and overworked them. Would you stop them, if you could? Is that racist? What about the values of the slaves?
I think optimizing anything is always immoral, exactly because it means imposing things that you should not be imposing. It is also the behavior of a fanatic, not a normal human being; that is the whole reason for the belief that AIs would destroy the world, namely because of the belief that they would behave like fanatics instead of like intelligent beings.
In the case of the slave owning race, I am quite sure that slavery is not consistent with their fundamental values, even if they are practicing it for a certain time. I don’t admit that values are arbitrary, and consequently you cannot assume (at least without first proving me wrong about this) that any arbitrary value could be a fundamental value for something.
Well now I see we disagree at a much more fundamental level.
There is nothing inherently sinister about “optimization”. Humans are optimizers in a sense, manipulating the world to be more like how we want it to be. We build sophisticated technology and industries that are many steps removed from our various end goals. We dam rivers, and build roads, and convert deserts into sprawling cities. We convert the resources of the world into the things we want. That’s just what humans do, that’s probably what most intelligent beings do.
The definition of FAI, to me, is something that continues that process, but improves it. Takes over from us, and continues to run the world for human ends. Makes our technologies better and our industries more efficient, and solves our various conflicts. The best FAI is one that constructs a utopia for humans.
I don’t know why you believe a slave owning race is impossible. Humans of course practiced slavery in many different cultures. It’s very easy for even humans to not care about the suffering of other groups. And even if you do believe most humans could be convinced it’s wrong (I’m not so sure), there are actual sociopaths that don’t experience empathy at all.
Humans also have plenty of sinister values, and I can easily believe aliens could exist that are far worse. Evolution tended to evolve humans that cooperate and have empathy. But under different conditions, we could have evolved completely differently. There is no law of the universe that says beings have to have values like us.
“Well now I see we disagree at a much more fundamental level.” Yes. I’ve been saying that since the beginning of this conversation.
If humans are optimizers, they must be optimizing for something. Now suppose someone comes to you and says, “Do you agree to turn on this CEV machine?” When you respond, are you optimizing for that thing or not? If you say yes, and you are optimizing for the original thing, then the CEV cannot (as far as you know) be compromising the thing you were optimizing for. If you say yes and are not optimizing for it, then you are not an optimizer. So you must agree with me on at least one point: either 1) you are not an optimizer, or 2) you should not agree with CEV if it compromises your personal values in any way. I maintain both of those, but you must maintain at least one of them.
In earlier posts I have explained why it is not possible that you are really an optimizer (not during this particular discussion.) People here tend to neglect the fact that an intelligent thing has a body. So e.g. Eliezer believes that an AI is an algorithm, and nothing else. But in fact an AI has a body just as much as we do. And those bodies have various tendencies, and they do not collectively add up to optimizing for anything, except in an abstract sense in which everything is an optimizer, like a rock is an optimizer, and so on.
“We convert the resources of the world into the things we want.” To some extent, but not infinitely, in a fanatical way. Again, that is the whole worry about AI—that it might do that fanatically. We don’t.
I understand you think that some creatures could have fundamental values that are perverse from your point of view. This is because you, like Eliezer, think that values are intrinsically arbitrary. I don’t, and I have said so from the beginning. It might be true that slave-owning values could be fundamental in some extraterrestrial race, but if they were, slavery in that race would be very, very different from slavery in the human race, and there would be no reason to oppose it in that race. In fact, you could say that slavery exists in a fundamental way in the human race, and there is no reason to oppose it: parents can tell their kids to stay out of the road, and they have to obey them, whether they want to or not. Note that this is very, very different from the kind of slavery you are concerned about, and there is no reason to oppose the real kind.
I can still think the CEV machine is better than whatever the alternative is (for instance, no AI at all.) But yes, in theory, you should prefer to make AIs that have your own values and not bother with CEV.
Having a body is irrelevant. Bodies are just one way to manipulate the world to optimize your goals.
“‘We convert the resources of the world into the things we want.’ To some extent, but not infinitely, in a fanatical way. Again, that is the whole worry about AI—that it might do that fanatically. We don’t.”
What do you mean by “fanatically”? This is a pretty vague word. Humans would sure seem fanatical to other animals. We’ve cut down entire continent sized forests, drained massive lakes, and built billions of complex structures.
The only reason we haven’t “optimized” the Earth further is physical and economic limits. If we could, we probably would.
Whether you call that “optimization” or not is mostly irrelevant. If superintelligent AIs acted similarly, humans would be screwed.
I’m deeply concerned that you are theoretically ok with slave owning aliens. If the slaves are ok with it, then perhaps it could be justified. But if they strongly object to it, and suffer from it, and don’t get any benefit from it, then it’s just obviously wrong.
“Having a body is irrelevant. Bodies are just one way to manipulate the world to optimize your goals.”
This is not true. Bodies are physical objects that follow the laws of physics, and the laws of physics are not “just one way to manipulate the world to optimize your goals,” because the laws have nothing to do with your goals. For example, we often stop doing something because we are tired, not because we have a goal of not continuing. AIs will be quite capable of doing the same thing, for example if thinking too hard about something begins to weaken their circuits.
What I mean by fanatically is trying to optimize for a single goal as though it were the only thing that mattered. We do not do that, nor does anything else with a body, nor is it even possible, for the above reason.
Yes you should be concerned about what I said about slaves and aliens, as it suggests that the CEV machine might result in things that you consider utterly wicked. I said that from the beginning, when you claimed that it would eliminate all negative results, obviously intending that to mean from your subjective point of view.
“The CEV process might well be immoral for everyone concerned, since by definition it is compromising a person’s fundamental values.”
If they find it immoral in the sense of crossing a line that should never be crossed, then they are not going to play.
I don’t think the morals=values theory can tell you where the bright lines are, and that is why I think rules and a few other things are involved in ethics.
“There is some truth in it, however, insofar as in reality, for reasons I have been saying, beings that have fundamental desires for others to suffer and die are very unlikely indeed, and any such desires are likely to be radically qualified. To that degree you are somewhat right: desires like that are in fact evil. But because they are evil, they cannot exist.”
Consider a harder case: a society that is ruthless in crushing any society that offers any rivalry or opposition to it, but otherwise leaves people alone. Since that is a survival-promoting strategy, you can’t argue that it would just be selected out. But it doesn’t seem as ethical as more conciliatory approaches.
“It doesn’t seem as ethical as more conciliatory approaches.” I agree. That is because it is not the best strategy. It may not be the worst possible strategy, but it is not the best. And since the people engaging in that strategy are able to think about it, over time that will lead them to adopt better strategies, namely more conciliatory approaches.
I don’t say that the good is achieved by selection alone. It is also achieved by the use of reason, by things that use reason.
Are you sure? On the face of it, doing things like attending peace negotiations exposes you to risks (they take the opportunity to assassinate you, they renege on the agreement, etc.) that simply nuking them doesn’t.
“It is also achieved by the use of reason, by things that use reason.”
If people who reason well don’t get selected, where does the prevalence of good come from?
Yes I am sure. Of course negotiating has risks, but it doesn’t automatically make permanent enemies, and it is better not to have permanent enemies.
People who reason well do get selected. I am just saying once they are selected they can start thinking about what is good as well.
If the alternative to negotiation is completely exterminating your enemies, you don’t have to worry about permanent enemies!
You can try to permanently exterminate them and fail. Additionally, even if you succeed in one case, you will ensure that no one else will be willing to negotiate with you even when it would be beneficial for you because they are stronger. So overall you will be decreasing your options, which makes your situation worse.