We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
Yes. So, for “our values”, read “our extrapolated volition”.
It’s not clear to me how much you and Nesov actually disagree about “changing” values, vs. you meaning by “change” the sort of reflective refinement that CEV is supposed to incorporate, while Nesov uses it to mean non-reflectively-guided (random, evolutionary, or whatever) change.
I do not mean “reflective refinement” if that refinement is expected to take place during a FOOM that happens within the next century or two. I expect values to change after the first superhuman AI comes into existence. They will inevitably change by some small epsilon each time a new physical human is born or an uploaded human is cloned. I want them to change. The “values of mankind” are something like the musical tastes of mankind or the genome of mankind. It is a collage of divergent things, and the set of participants in that collage continues to change.
VN and I are in real disagreement, as far as I can tell.
This is not a disagreement, but a failure of communication. There is no one relevant sentence in this dispute which we both agree that we understand in the same sense, and whose truth value we assign differently.
It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values—different aspirations for the future.
It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values—different aspirations for the future.
Any adequate disagreement must be about different assignment of truth values to the same meaning. For example, I disagree with the truth of the statement that we don’t converge on agreement because of differences in our values, given both your and my preferred interpretations of “values”. But explaining why that is not the source of our disagreement requires me to explain to you my sense of “values”, the normative and not the factual one, which I have so far failed to accomplish.
Any adequate disagreement must be about different assignment of truth values to the same meaning.
I think we are probably in agreement that we ought to mean the same thing by the words we use before our disagreement has any substance. But your mention of “truth values” here may be driving us into a diversion from the main issue. Because I maintain that simple “ought” sentences do not have truth values. Only “is” sentences can be analyzed as true or false in Tarskian semantics.
But that is a diversion. I look forward to your explanation of your sense of the word “value”—a sense which has the curious property (as I understand it) that it would be a tragedy if mankind does not (with AI assistance) soon choose one point (out of a “value space” of rather high dimensionality) and then fix that point for all time as the one true goal of mankind and its creations.
But your mention of “truth values” here may be driving us into a diversion from the main issue.
I gave up on the main issue, and so described my understanding of the reasons that justify giving up.
Because I maintain that simple “ought” sentences do not have truth values. Only “is” sentences can be analyzed as true or false in Tarskian semantics.
Yes, and this is the core of our disagreement. Since your position is that something is meaningless, and mine is that there is a sense behind that, this is a failure of communication and not a true disagreement, as I didn’t manage to communicate to you the sense I see. At this point, I can only refer you to the “metaethics sequence”, which I know is not very helpful.
One last attempt, using an intuition/analogy dump not carefully explained.
Where do the objective conclusions about “is” statements come from? Roughly, you encounter new evidence, including logical evidence, and then you look back and decide that your previous understanding could be improved upon. This is the cognitive origin of anything normative: you have a sense of improvement, and an expectation of potential improvement. Looking at the same situation from the past, you know that there is a future process that can suggest improvements; you just haven’t experienced this process yet. And so you can reason about the truth without having it immediately available.
If you understand the way the previous paragraph explains the truth of “is” questions, you can apply exactly the same explanation to “ought” questions. You can decide in the moment what you prefer, what you choose, which action you perform. But in the future, when you learn more and experience more, you can look back and see that you should’ve chosen differently, that your decision could’ve been improved. This anticipation of possible improvement generates a semantics of preference over decisions that is not logically transparent. You don’t know what you ought to choose, but you know that there is a sense in which some action is preferable to some other action, even though you don’t know which is which.
I gave up on the main issue, and so described my understanding of the reasons that justify giving up.
Sorry. I missed that subtext. Giving up may well be the best course.
your position is that something is meaningless, and mine is that there is a sense behind that, this is a failure of communication.
But my position is not that something (specifically an ‘ought’ statement) is meaningless. I only maintain that the meaning is not attained by assigning “truth value conditions”.
One last attempt …
Your attempt was a step in the right direction, but IMO it still leaves a large gap in understanding. You seem to think that anyone who thinks carefully enough will agree with you that there is some set of core meta-ethical principles that acts as an attractor in a dynamic process of reflective updating.
I disagree with this. There is no core attractor, and the dynamic process is not one of better and better thinking as time goes on. Instead, the dynamic I am talking about is the biological evolutionary process which results in a change over time in the typical human brain, plus the technological change over time which is likely to bring uploaded humans, AIs, aliens, and “uplifted” non-human animals into our collective social contract.
You seem to think that anyone who thinks carefully enough will agree with you that there is some set of core meta-ethical principles that acts as an attractor in a dynamic process of reflective updating. I disagree with this. There is no core attractor, and the dynamic process is not one of better and better thinking as time goes on.
How can we know whether that is true or not? If we had access to multiple mature alien races, and could examine their moral systems, that might be a reasonable conclusion—if they were all very different. However, until then, the moral systems we can see are primitive—and any such conclusions would seem to be premature.
I’m sorry. I don’t know which statement you mean to designate by “that”.
Nor do I know which conclusions you worry might be premature.
To the best of my knowledge, I did not draw any conclusions.