The very thing that distinguishes terminal goals is that you don’t “pick” them, you start out with them.
That’s descriptive, not normative.
They are the thing that gives the concept of “should” a meaning.
No, they are not the ultimate definition of what you should do. If there is any kind of objective morality, you should do that, not turn things into paperclips. And following on from that, you should investigate whether there is an objective morality. Arbitrary subjective morality is the fallback position when there is no possibility of objective ethics.
You could object that you would not change your terminal values, but that’s normative, not descriptive. A perfect rational agent would not change its values, but humans aren’t perfect rational agents, and do change their values.
What’s the issue with a descriptive statement here? It doesn’t feel wrong to me, so it would be nice if you could elaborate slightly.
Also, I never found objective morality to be a reasonable possibility (<1%). Are you suggesting that it is quite possible (>5%) that objective morality exists, or just playing devil’s advocate here?
Any definition of what you should do has to be normative, because of the meaning of “should”. So you can’t adequately explain “should” using only a descriptive account.
In particular, accounts in terms of personal utility functions aren’t adequate to solve the traditional problems of ethics, because personal UFs are subjective, arbitrary and so on: objective morality can’t even be described within that framework.
Also, I never found objective morality to be a reasonable possibility (<1%),
What kind of reasoning are you using? If your reasoning is broken, the results you are getting are pretty meaningless.
or just playing devil’s advocate here?
I can say that your reasons for rejecting X are flawed without a belief in X. Isn’t the point of rationality to improve reasoning?
So you can’t adequately explain “should” using only a descriptive account.
I don’t think I am ready to argue about “should”/descriptive/normative, so this is my view stated without trying to justify it super rigorously. I already think there is no objective morality and no “should” (in its common usage), and that both are in reality socially constructed things that will shift over time (relativism, I think? not sure). Any sentence like “You should do X” really just has the consequentialist meaning “If you have terminal value V (which the speaker assumes you have), then it is higher utility to do X.”
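Stated a little more formally (a rough sketch of the same claim, not something I would defend rigorously): “You should do X” ≈ EU(X | V) > EU(Y | V) for every alternative action Y available to you, where V is the terminal value the speaker assumes you hold and EU is expected utility.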
Also, you probably missed that I was reading between the lines and got the impression that you believe in objective morality, so I was trying to get a quick check on how different we are on the probabilities, to estimate inferential distance and so on; I wasn’t really putting any argument on the table in the previous comment (this should explain why I said the things you quoted). You can totally reject the reasoning while believing the same result; I just wanted a check.
I have not tried to put the objective morality argument into words and it is half personal experiences and half pure internal thinking. It boils down to basically what you said about “objective morality can’t even be described within that framework” though; I have never found any model of objective morality consistent with my observations of different cultures around the world and with the physics I have learned.
I already think there is no objective morality and no “should” (in its common usage), and that both are in reality socially constructed things that will shift over time (relativism, I think? not sure). Any sentence like “You should do X” really just has the consequentialist meaning “If you have terminal value V (which the speaker assumes you have), then it is higher utility to do X.”
So which do you believe in? If morality is socially constructed, then what you should do is determined by society, not by your own terminal values. But according to subjectivism you “should” do whatever your terminal values say, which could easily be something antisocial.
The two are both non-realist positions, but that does not make them the same.
You have hinted at an objection to universal morality: but that isn’t the same thing as realism or objectivism. Minimally, an objective truth is not a subjective truth, that is to say, it is not mind-dependent. Lack of mind dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. Truths that are objective but not universal would be truths that vary with objective circumstances: that does not entail subjectivity, because subjectivity is mind dependence.
I like to use the analogy of big G and little g in physics. Big G is a universal constant, little g is the local acceleration due to gravity, and will vary from planet to planet (and, in a fine-grained way, at different points on the earth’s surface). But little g is perfectly objective, for all its lack of universality.
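To make that concrete, here is a rough back-of-the-envelope using standard textbook values: g = GM/r^2, with G ≈ 6.674×10^-11 m^3 kg^-1 s^-2. Plugging in Earth’s mass and radius gives g ≈ 9.8 m/s^2; the same G with Mars’s mass and radius gives g ≈ 3.7 m/s^2. One universal constant, different local values, and nothing subjective about any of them.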
To give some examples that are actually about morality and how it is contextual:
A food-scarce society will develop rules about who can eat how much of which kind of food.
A society without birth control and close to Malthusian limits will develop restrictions on sexual behaviour, in order to prevent people being born who are doomed to starve, whereas a society with birth control can afford to be more liberal.
Using this three-level framework, universal versus objective-but-local versus subjective, lack of universality does not imply subjectivity.
I have not tried to put the objective morality argument into words and it is half personal experiences and half pure internal thinking.
Anything could be justified that way, if anything can.
It boils down to basically what you said about “objective morality can’t even be described within that framework”
So how sure can you be that the framework (presumably meaning von Neumann rationality) is correct and relevant? Remember, vN didn’t say vNR could solve ethical issues.
People round here like to use vNR for anything and everything, but that’s just a subculture, not a proof of anything.
You probably gave me too much credit for how deeply I have thought about morality. Still, I appreciate your effort in leading me to a higher-resolution model. (A longer reply will come when I have thought more about it.)