We really do care more about the short-term future than the distant future.
How do you know this? It feels this way, but there is no way to be certain.
We have better control over the short-term future than the distant future.
That we probably can’t have something doesn’t imply we shouldn’t have it.
We expect our values to change. Change can be good.
That we expect something to happen doesn’t imply it’s desirable that it happens. It’s very difficult to arrange things so that a change in values is good. I expect you’d need oversight from a singleton for that to become possible (and in that case, “changing values” won’t adequately describe what happens, as there are probably better things to make than different-valued agents).
As mentioned, an increasing immortal population means that our “rights” over the distant future must be fairly dilute.
Preference is not about “rights”. It’s merely game theory for coordinating the satisfaction of preferences.
If we don’t discount the future, we run into mathematical difficulties. The first rule of utilitarianism ought to be KIFS—Keep It Finite, Stupid.
God does not care about our mathematical difficulties. --Einstein.
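(A minimal sketch, in standard textbook terms rather than anything either commenter spelled out, of the difficulty the “Keep It Finite” rule points at: with a constant per-period utility u > 0, the undiscounted infinite-horizon total diverges, while an exponential discount factor 0 < γ < 1 keeps it finite.)

\[
\sum_{t=0}^{\infty} u = \infty,
\qquad
\sum_{t=0}^{\infty} \gamma^{t} u = \frac{u}{1-\gamma} < \infty
\quad \text{for } 0 < \gamma < 1.
\]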
We really do care more about the short-term future than the distant future.
How do you know this? It feels this way, but there is no way to be certain.
Alright. I shouldn’t have said “we”. I care more about the short term. And I am quite certain. WAY!
We have better control over the short-term future than the distant future.
That we probably can’t have something doesn’t imply we shouldn’t have it.
Huh? What is it that you are not convinced we shouldn’t have? Control over the distant future? Well, if that is what you mean, then I have to disagree. We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
God does not care about our mathematical difficulties.
Then let’s make sure not to hire the guy as an FAI programmer.
We really do care more about the short-term future than the distant future.
How do you know this? It feels this way, but there is no way to be certain.
Alright. I shouldn’t have said “we”. I care more about the short term. And I am quite certain. WAY!
I believe you know my answer to that. You are not licensed to have absolute knowledge about yourself. There are no human or property rights on truth. How do you know that you care more about the short term? You can have beliefs or emotions that suggest this, but you can’t know what all the stuff you believe and all the moral arguments you respond to cash out into on reflection. We only ever know approximate answers, and given the complexity of the human decision problem and the sheer inadequacy of human brains, any approximate answers we do presume to know are highly suspect.
Huh? What is it that you are not convinced we shouldn’t have? Control over the distant future? Well, if that is what you mean, then I have to disagree. We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
That we aren’t qualified doesn’t mean that we shouldn’t have that control. Exercising this control through decisions made with human brains is probably not the way to do it, of course; we’d have to use finer tools, such as FAI or upload bureaucracies.
God does not care about our mathematical difficulties.
Then let’s make sure not to hire the guy as an FAI programmer.
Don’t joke, it’s serious business. What do you believe on the matter?
God does not care about our mathematical difficulties.
Then let’s make sure not to hire the guy as an FAI programmer.
Don’t joke, it’s serious business. What do you believe on the matter?
I am not the person who initiated this joke. Why did you mention God? If you don’t care for discounting, what is your solution to the very standard puzzles regarding unbounded utilities and infinitely remote planning horizons?
I am not the person who initiated this joke. Why did you mention God?
Einstein mentioned God, as a stand-in for Nature.
If you don’t care for discounting, what is your solution to the very standard puzzles regarding unbounded utilities and infinitely remote planning horizons?
I didn’t say I don’t care for discounting. I said that I believe that we must be uncertain about this question. That I don’t have solutions doesn’t mean I must discard the questions as answered negatively.
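(For reference, a minimal sketch of one standard form of the “unbounded utilities” puzzle mentioned above, not a claim either commenter made: a St. Petersburg-style gamble that pays 2^n utils with probability 2^{-n} has a divergent expected utility, so an agent with an unbounded utility function cannot coherently price it or rank it against its variants.)

\[
\mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty.
\]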
We are completely unqualified to exercise that kind of control. We don’t know enough. But there is reason to think that our descendants and/or future selves will be better informed.
Yes. So, for “our values”, read “our extrapolated volition”.
It’s not clear to me how much you and Nesov actually disagree about “changing” values, vs. you meaning by “change” the sort of reflective refinement that CEV is supposed to incorporate, while Nesov uses it to mean non-reflectively-guided (random, evolutionary, or whatever) change.
I do not mean “reflective refinement” if that refinement is expected to take place during a FOOM that happens within the next century or two. I expect values to change after the first superhuman AI comes into existence. They will inevitably change by some small epsilon each time a new physical human is born or an uploaded human is cloned. I want them to change. The “values of mankind” are something like the musical tastes of mankind or the genome of mankind. It is a collage of divergent things, and the set of participants in that collage continues to change.
VN and I are in real disagreement, as far as I can tell.
This is not a disagreement, but a failure of communication. There is no one relevant sentence in this dispute which we both agree that we understand in the same sense, and whose truth value we assign differently.
It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values—different aspirations for the future.
It is a complete failure of communication if you are under the impression that the dispute has anything to do with the truth values of sentences. I am under the impression that we are in dispute because we have different values—different aspirations for the future.
Any adequate disagreement must be about different assignment of truth values to the same meaning. For example, I disagree with the truth of the statement that we don’t converge on agreement because of differences in our values, given both your and my preferred interpretations of “values”. But explaining why this condition is not the source of our disagreement requires me to explain to you my sense of “values”, the normative and not the factual one, which I have failed to accomplish.
Any adequate disagreement must be about different assignment of truth values to the same meaning.
I think we are probably in agreement that we ought to mean the same thing by the words we use before our disagreement has any substance. But your mention of “truth values” here may be driving us into a diversion from the main issue. Because I maintain that simple “ought” sentences do not have truth values. Only “is” sentences can be analyzed as true or false in Tarskian semantics.
But that is a diversion. I look forward to your explanation of your sense of the word “value”—a sense which has the curious property (as I understand it) that it would be a tragedy if mankind does not (with AI assistance) soon choose one point (out of a “value space” of rather high dimensionality) and then fix that point for all time as the one true goal of mankind and its creations.
But your mention of “truth values” here may be driving us into a diversion from the main issue.
I gave up on the main issue, and so described my understanding of the reasons that justify giving up.
Because I maintain that simple “ought” sentences do not have truth values. Only “is” sentences can be analyzed as true or false in Tarskian semantics.
Yes, and this is the core of our disagreement. Since your position is that something is meaningless, and mine is that there is a sense behind that, this is a failure of communication and not a true disagreement, as I didn’t manage to communicate to you the sense I see. At this point, I can only refer you to the metaethics sequence, which I know is not very helpful.
One last attempt, using an intuition/analogy dump not carefully explained.
Where do the objective conclusions about “is” statements come from? Roughly, you encounter new evidence, including logical evidence, and then you look back and decide that your previous understanding could be improved upon. This is the cognitive origin of anything normative: you have a sense of improvement, and an expectation of potential improvement. Looking at the same situation from the past, you know that there is a future process that can suggest improvements; you just haven’t experienced this process yet. And so you can reason about the truth without having it immediately available.
If you understand the way the previous paragraph explains the truth of “is” questions, you can apply exactly the same explanation to “ought” questions. You can decide in the moment what you prefer, what you choose, which action you perform. But in the future, when you learn more and experience more, you can look back and see that you should’ve chosen differently, that your decision could’ve been improved. This anticipation of possible improvement generates a semantics of preference over decisions that is not logically transparent. You don’t know what you ought to choose, but you know that there is a sense in which some action is preferable to some other action, and you don’t know which is which.
I gave up on the main issue, and so described my understanding of the reasons that justify giving up.
Sorry. I missed that subtext. Giving up may well be the best course.
your position is that something is meaningless, and mine is that there is a sense behind that, this is a failure of communication.
But my position is not that something (specifically an ‘ought’ statement) is meaningless. I only maintain that the meaning is not attained by assigning “truth value conditions”.
One last attempt …
Your attempt was a step in the right direction, but IMO it still leaves a large gap in understanding. You seem to think that anyone who thinks carefully enough will agree with you that there is some set of core meta-ethical principles that acts as an attractor in a dynamic process of reflective updating.
I disagree with this. There is no core attractor, and the dynamic process is not one of better and better thinking as time goes on. Instead, the dynamic I am talking about is the biological evolutionary process which results in a change over time in the typical human brain. That, plus the technological change over time that is likely to bring uploaded humans, AIs, aliens, and “uplifted” non-human animals into our collective social contract.
You seem to think that anyone who thinks carefully enough will agree with you that there is some set of core meta-ethical principles that acts as an attractor in a dynamic process of reflective updating. I disagree with this. There is no core attractor, and the dynamic process is not one of better and better thinking as time goes on.
How can we know whether that is true or not? If we had access to multiple mature alien races, and could examine their moral systems, that might be a reasonable conclusion—if they were all very different. However, until then, the moral systems we can see are primitive—and any such conclusions would seem to be premature.
It’s very difficult to arrange things so that a change in values is good. I expect you’d need oversight from a singleton for that to become possible (and in that case, “changing values” won’t adequately describe what happens, as there are probably better things to make than different-valued agents).
We do seem to have an example of systematic positive change in values—the history of the last thousand years. No doubt some will argue that our values only look “good” because they are closest to our current values—but I don’t think that is true. Another possible explanation is that material wealth lets us show off our more positive values more frequently. That’s a harder charge to defend against, but wealth-driven value changes are surely still value changes.
Systematic, positive changes in values tend to suggest a bright future. Go, cultural evolution!
How can we know whether that is true or not? If we had access to multiple mature alien races, and could examine their moral systems, that might be a reasonable conclusion—if they were all very different. However, until then, the moral systems we can see are primitive—and any such conclusions would seem to be premature.
I’m sorry. I don’t know which statement you mean to designate by “that”.
Nor do I know which conclusions you worry might be premature.
To the best of my knowledge, I did not draw any conclusions.