Morality. To me it seems like rationality can tell you how to achieve your goals but not what (terminal) goals to pick. Arguments that try to tell you what terminal goals to pick have just never made sense to me. Maybe there’s something I’m missing though.
Okay, I’ll bite on this one.
The very thing that distinguishes terminal goals is that you don’t “pick” them, you start out with them. They are the thing that gives the concept of “should” a meaning.
A key thing the orthogonality thesis is saying is that it is perfectly possible to have any terminal goals, and that there’s no such thing as a “rational” set of terminal goals to have.
If you have terminal goals, then you may still need to spend a lot of time introspecting to figure out what they are. If you don’t have terminal goals, then the concept of “should”, and morality in general, cannot be made meaningful for you. People often consider themselves to be “somewhere in between”, where they’re not a perfect encoding of some unchangeable terminal values, but there is still a strong sense in which they want stuff for its own sake. I would consider nailing down exactly how these in-between states work to be part of agent foundations.
A key thing the orthogonality thesis is saying is that it is perfectly possible to have any terminal goals, and that there’s no such thing as a “rational” set of terminal goals to have.
Oh that’s cool, thanks for sharing that. I didn’t realize there was some sort of (semi?)formal thesis about this.
If you have terminal goals, then you may still need to spend a lot of time introspecting to figure out what they are.
So was this the high-level goal of Mere Goodness?
Hm, I’m not sure about Mere Goodness, I read the sequences soon after they were finished, so I don’t much remember which concepts were where. There is a sequence post titled Terminal Values and Instrumental Values, though it mostly seems to be emphasizing that both things exist and are different, saving the rest of the content for other posts.
Gotcha that makes sense. I read them like 10 years ago and also don’t remember but am going to skim through Mere Goodness again.
You don’t remember Eliezer arguing against the idea that terminal goals are arbitrary though right? My memory is that he pushes you to introspect and makes arguments like “Are you really sure that is what your terminal value actually is?” but never goes as far as saying that they can be correct or incorrect.
I do remember a bunch of content around that, yeah. And I would agree that terminal goals are arbitrary in the sense that they could be anything. But, for any given agent/organism/“thing that wants stuff”, there will be a fact-of-the-matter of what terminal goals got instantiated inside that thing.
There are also a few separate but related and possibly confusing facts:
The process of evolution will tend to produce organisms that have certain kinds of terminal goals instantiated inside them.
Empirically, humans happen to have a huge overlap in their terminal goals (including the terminal goal that other beings have their terminal goals satisfied).
If there are a bunch of roughly equally-capable agents around, then it maximizes your own utility (= terminal goals) to do a lot of game-theoretic cooperation with them.
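To make that last point concrete, here is a minimal sketch (my own toy example, not anything from the thread): an iterated prisoner’s dilemma with standard textbook payoffs, where mutual cooperation earns each agent far more utility than mutual defection.

```python
# Toy iterated prisoner's dilemma; payoffs and strategies are standard
# textbook choices, used only to illustrate the cooperation point above.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the other agent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```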
I do remember a bunch of content around that, yeah. And I would agree that terminal goals are arbitrary in the sense that they could be anything.
I see. Thanks for clarifying.
But, for any given agent/organism/“thing that wants stuff”, there will be a fact-of-the-matter of what terminal goals got instantiated inside that thing.
Yeah I agree and think that those are important points.
The very thing that distinguishes terminal goals is that you don’t “pick” them, you start out with them.
That’s descriptive, not normative.
They are the thing that gives the concept of “should” a meaning.
No, they are not the ultimate definition of what you should do. If there is any kind of objective morality, you should do that, not turn things into paperclips. And following on from that, you should investigate whether there is an objective morality. Arbitrary subjective morality is the fallback position when there is no possibility of objective ethics.
You could object that you would not change your terminal values, but that’s normative, not descriptive. A perfect rational agent would not change its values, but humans aren’t perfect rational agents, and do change their values.
What’s the issue with a descriptive statement here? It doesn’t feel wrong to me, so it would be nice if you could elaborate slightly.
Also, I never found objective morality to be a reasonable possibility (<1%), are you suggesting that it is quite possible (>5%) that objective morality exists, or just playing devil’s advocate here?
Any definition of what you should do has to be normative, because of the meaning of “should”. So you can’t adequately explain “should” using only a descriptive account.
In particular, accounts in terms of personal utility functions aren’t adequate to solve the traditional problems of ethics, because personal UFs are subjective, arbitrary and so on: objective morality can’t even be described within that framework.
Also, I never found objective morality to be a reasonable possibility (<1%),
What kind of reasoning are you using? If your reasoning is broken, the results you are getting are pretty meaningless.
or just playing devil’s advocate here?
I can say that your reasons for rejecting X are flawed without a belief in X. Isn’t the point of rationality to improve reasoning?
So you can’t adequately explain “should” using only a descriptive account.
I don’t think I am ready to argue about “should”/descriptive/normative, so this is my view stated without intent to justify it super rigorously. I already think there is no objective morality, no “should” (in its common usage), and both are in reality a socially constructed thing that will shift over time (relativism I think?, not sure). Any sentence like “You should do X” really just has the consequentialist meaning of “If you have terminal value V (which the speaker assumes you have), then it is higher utility to do X.” (A toy sketch of this reading follows at the end of this comment.)
Also, you probably missed that I was reading between the lines and felt like you believe in objective morality, so I was trying to get a quick check on how different we are on the probabilities, for some estimation of inferential distance and other stuff, not really putting any argument on the table in the previous comment (this should explain why I said the things you quoted). You can totally reject the reasoning while believing the same result; I just wanted to have a check.
I have not tried to put the objective morality argument into words and it is half personal experiences and half pure internal thinking. It boils down to basically what you said about “objective morality can’t even be described within that framework” though; I never found any model of objective morality consistent with my observations of different cultures in the world, and with physics as I have learned it.
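A minimal sketch of the “should” reading described above, assuming a made-up value function standing in for the terminal value V (the names and numbers here are invented for illustration):

```python
# Toy rendering of: "You should do X" = "given terminal value V,
# X is at least as high-utility as the alternatives."
# `utility_given_V` is a hypothetical stand-in for the listener's terminal value V.

def should(x, alternatives, utility_given_V):
    # True iff doing x yields at least as much utility, under V, as any alternative.
    return all(utility_given_V(x) >= utility_given_V(a) for a in alternatives)

# Example with an invented value function:
utility_given_V = {"keep the promise": 10, "break the promise": 2}.get
print(should("keep the promise",
             ["keep the promise", "break the promise"],
             utility_given_V))  # True
```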
I already think there is no objective morality, no “should” (in its common usage), and both are in reality a socially constructed thing that will shift over time (relativism I think?, not sure). Any sentence like “You should do X” really just has the consequentialist meaning of “If you have terminal value V (which the speaker assumes you have), then it is higher utility to do X.”
So which do you believe in? If morality is socially constructed, then what you should do is determined by society, not by your own terminal values. But according to subjectivism you “should” do whatever your terminal values say, which could easily be something anti-social.
The two are both not-realism, but that does not make them the same.
You have hinted at an objection to universal morality: but that isn’t the same thing as realism or objectivism. Minimally, an objective truth is not a subjective truth, that is to say, it is not mind-dependent. Lack of mind dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. Truths that are objective but not universal would be truths that vary with objective circumstances: that does not entail subjectivity, because subjectivity is mind dependence.
I like to use the analogy of big G and little g in physics. Big G is a universal constant, little g is the local acceleration due to gravity, and will vary from planet to planet (and, in a fine-grained way, at different points on the earth’s surface). But little g is perfectly objective, for all its lack of universality.
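For concreteness, a worked instance of the analogy using standard Newtonian gravity (not part of the original comment):

```latex
\[
  g = \frac{GM}{r^{2}}, \qquad
  g_{\mathrm{Earth}} \approx
  \frac{(6.674\times 10^{-11})\,(5.972\times 10^{24})}{(6.371\times 10^{6})^{2}}
  \ \mathrm{m/s^{2}} \approx 9.8\ \mathrm{m/s^{2}}
\]
% Little g is fixed by objective local facts (the mass M and radius r of the
% planet you are standing on), so it differs from place to place, while big G
% in the same formula never changes.
```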
To give some examples that are actually about morality and how it is contextual:
A food-scarce society will develop rules about who can eat how much of which kind of food.
A society without birth control and close to Malthusian limits will develop restrictions on sexual behaviour, in order to prevent people being born who are doomed to starve, whereas a society with birth control can afford to be more liberal.
Using this three-level framework, universal versus objective-but-local versus subjective, lack of universality does not imply subjectivity.
I have not tried to put the objective morality argument into words and it is half personal experiences and half pure internal thinking.
Anything could be justified that way, if anything can.
It boils down to basically what you said about “objective morality can’t even be described within that framework”
So how sure can you be that the framework (presumably meaning von Neumann rationality) is correct and relevant? Remember, vN didn’t say vNR could solve ethical issues.
People round here like to use vNR for anything and everything, but that’s just a subculture, not a proof of anything.
You probably gave me too much credit for how deep I have thought about morality. Still, I appreciate your effort in leading me to a higher resolution model. (Long reply will come when I have thought more about it)
The very thing that distinguishes terminal goals is that you don’t “pick” them, you start out with them.
Not really. What terminal goals/values are is basically the top-level goals of a recursive search. In other words, terminal goals are a lot like axioms: they are the first things you choose, and you then recursively generate instrumental goals out of them. (A toy sketch of this picture appears after this comment.)
Terminal goals are still changeable, but changing the terminal goals changes all other goals.
And yes, this quote is accurate re terminal goals/values:
Morality. To me it seems like rationality can tell you how to achieve your goals but not what (terminal) goals to pick. Arguments that try to tell you what terminal goals to pick have just never made sense to me.
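A toy sketch of the “recursive search” picture described above; the decomposition table and goal names are invented for illustration:

```python
# Terminal goal at the top; instrumental goals generated recursively beneath it.
# Swapping the terminal goal regenerates the entire instrumental tree.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    subgoals: list["Goal"] = field(default_factory=list)

def plan(goal: Goal, decompose, depth: int = 2) -> Goal:
    # `decompose` is a hypothetical domain-specific function mapping a goal
    # to instrumental goals that would help achieve it.
    if depth > 0:
        goal.subgoals = [plan(g, decompose, depth - 1) for g in decompose(goal)]
    return goal

def decompose(goal: Goal) -> list[Goal]:
    table = {
        "stay healthy": ["exercise regularly", "eat well"],
        "exercise regularly": ["join a gym"],
        "eat well": ["buy groceries"],
    }
    return [Goal(name) for name in table.get(goal.name, [])]

# Changing the top-level (terminal) goal changes every goal below it.
tree = plan(Goal("stay healthy"), decompose)
```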