That could be part of the problem, but the main one is that I see no point in reducing suffering, and it looks like nobody can explain it to me.
It’s an intrinsic value. Reducing suffering is the point.
I don’t like to suffer. It’s bad for me to suffer. Other people are like me. Therefore, it’s also bad for them to suffer.
When you say that “reducing suffering is the point”, I suppose that there is a reason to reduce it. How does it follow from “It’s bad” to “needs to be reduced”?
No. It’s a terminal value. When you ask what the point of doing X is, the answer is that it reduces suffering, or increases happiness, or does something else that’s terminally valuable.
I don’t see justification for dividing values in these two categories in that post.
Do I understand you right: you think that although there is no reason why we should reduce suffering, and no reason what for we should reduce suffering, we should do it anyway only because somebody called it a “terminal value”?
Let me try this from the beginning.
Consider an optimization process. If placed in a universe, it will tend to direct that universe towards the states favored by a certain utility function. The end result it moves the universe towards is called its terminal values.
Optimization processes do not necessarily have instrumental values. AIXI is the most powerful possible optimization process, but it only considers the effect of each action on its terminal values.
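A minimal sketch of such an agent, assuming a made-up toy world (this is an illustration, not real AIXI): every candidate action is scored only by the terminal utility of the state it is predicted to produce, with no intermediate notion of instrumental value anywhere in the code.

```python
def terminal_utility(state):
    # Assumed toy terminal value: how many paperclips exist in the state.
    return state.get("paperclips", 0)

def choose_action(predicted_outcomes):
    # Rank actions directly by the terminal utility of their predicted
    # outcomes; nothing else about the outcomes is ever considered.
    return max(predicted_outcomes,
               key=lambda a: terminal_utility(predicted_outcomes[a]))

predicted_outcomes = {
    "make_paperclip": {"paperclips": 1},
    "do_nothing": {"paperclips": 0},
}
print(choose_action(predicted_outcomes))  # make_paperclip
```
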
Evolution is another example. Species are optimized solely on the basis of inclusive genetic fitness. Evolution does not understand, for example, that if it got rid of humans’ blind spots, they’d do better in the long run, so it might be a good idea to select for humans who are closer to having eyes with no blind spots. Since you can’t change gradually from “blind spot” to “no blind spot” without getting “completely blind” for quite a few generations in between, evolution is not going to get rid of our blind spots.
Humans are not like this. Humans can keep track of sub-goals to their goals. If a human wants chocolate as a terminal value, and there is chocolate at the store, a human can make getting to the store an instrumental value, and start considering actions based on how they help get him/her to the store. These sub-goals are known as instrumental values.
Perhaps you don’t have helping people as a terminal value. However, you have terminal values. I know this because you managed to type grammatically correct English. Very few strings are grammatically correct English, and very few patterns of movement would result in any string being sent as a comment to LessWrong.
Perhaps typing grammatically correct English is a terminal value. Perhaps you’re optimizing something else, such as your own understanding of meta-ethics, and it just so happens that grammatically correct English is a good way to get this result. In this case, it’s an instrumental value (unless you just have so much computing power that you didn’t even consider what helps you write and you just directly figured out that twitching those muscles would improve your understanding of meta-ethics, but I doubt that).
Accidental comment.
Thanks for this wall of text, but you didn’t even try to answer my question. I asked for justification of this division of values; you just explained the division to me.
If you are able to get the analogy, my argument sounds like this:
“The author has tried hard to tie various components of personal development into three universal principles that can be applied to any situation. Unfortunately human personality is a much more nuanced thing that defies such neat categorizations. The attempt to force-fit the ‘fundamental principles of personal development(!)’ into neat categories can only result in such inanities as love + truth = oneness; truth + power = courage; etc. There is no explanation of why only these categories are considered universal; why not others? After all, we have a long list of desirable qualities: virtue, honor, commitment, persistence, discipline, etc. On what basis do you pick 3 of them and declare them to be ‘fundamental principles’? If truth, love and power are the fundamental principles of personality, then what about the others?
...
The point is that there is no scientific basis for claiming that truth, power and love are the basic three principles and the others are just combinations of them. There are no hypotheses, no tests, no analysis and no proofs. No reference to any studies in any university of repute. No double-blind tests on a sample population. Just results. Whatever the author says is a revelation that does not require any external validation. His assertion is enough since it is based on his personal experience. Believe it and you will see the results.”
Btw, it’s still extremely interesting to me how exactly the “terminality” of a value gives sense to an action that has no reason to be done.
Why do anything? It’s not enough to have an infinite or circular chain of reasoning. You can construct an infinite or circular chain of reasoning that supports any conclusion. You have to have an ending to it. That is what we call a terminal value.
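The regress-ending can be pictured as a lookup that bottoms out. This is a hedged sketch with an invented reasons table, not a claim about anyone’s actual values: we follow “why?” answers until we reach something that has no further reason.

```python
# Made-up chain of reasons; "be happy" has no entry, so it is terminal.
reasons = {
    "go to the store": "get chocolate",
    "get chocolate": "be happy",
}

def why_chain(action):
    # Follow the chain of reasons until it bottoms out.
    chain = [action]
    while chain[-1] in reasons:
        chain.append(reasons[chain[-1]])
    return chain  # the last element is the terminal value

print(why_chain("go to the store"))
# ['go to the store', 'get chocolate', 'be happy']
```

Note that if the table were circular, this loop would never terminate, which mirrors the point above: without a terminal value the chain of reasons never bottoms out.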
Nobody said it has to be simple. Our values are complicated. Love, truth, oneness, power, courage, etc. are all terminal values. Some of them are also instrumental values. Power is very useful in fulfilling other values, and you will put forth more effort to achieve power than you would if it were just a terminal value. There are also instrumental values that are not terminal values, such as going to the store (assuming you don’t particularly like the store, although even then you could argue that it’s the happiness you like).
I don’t know why. The most plausible answer I know of: because you like doing it.
Okay. Even though there are only assertions and no justifications, let’s assume that your first paragraph is right. Anyway, how does the “terminality” of a value give sense to an otherwise senseless action?
I ask you why these two categories, and it looks like you even cite the right piece of my review-argument and… bam! “Nobody said it has to be simple”.
But, why? Why these two categories of values? Where is justification? Or is it just “too basic to be explained”? If you think so, write it, please.
What gives value to an otherwise senseless action is a meta-ethical question. “Terminality” is just what you call it when you value something for reasons other than it causing something else that you value.
Let me try making an example:
Suppose you’re a paperclip-maximizer. You value paperclips. Paperclip factories help build paperclips, so factories are good too. Given a choice between building a factory immediately and a paperclip immediately, you’d probably pick the former. It’s like you value factories more than paperclips.
But if you’re given the opportunity to build a factory-maximizer, you’d turn it down. Those factories potentially could make a lot of paperclips, but they won’t, because the factory-maximizer would need that metal to make more factories. You don’t really value factories. They’re just useful. You value paperclips.
You could come up with an exception like this for any instrumental value. No matter how much the instrumental value is maximized, you won’t care unless it helps with the terminal value. There is no such exception for your terminal values. If there are more paperclips, it’s better. End of story.
The actual utility function can be quite complicated. Perhaps you prefer paperclips in a certain size range. Perhaps you want them to be easily bent, and hard to break. In that case, your terminal value is more sophisticated than “paperclips”, but it’s something.
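The factory example above can be sketched in a few lines (all numbers are invented for illustration): factories matter only through the paperclips they are expected to yield, so the utility function mentions paperclips and never factories.

```python
def utility(outcome):
    # Terminal value only: factories do not appear here at all.
    return outcome["paperclips"]

# Predicted long-run outcomes of three choices (made-up numbers).
outcomes = {
    "build_paperclip":         {"paperclips": 1,    "factories": 0},
    "build_factory":           {"paperclips": 1000, "factories": 1},
    "build_factory_maximizer": {"paperclips": 0,    "factories": 10**6},
}

best = max(outcomes, key=lambda a: utility(outcomes[a]))
print(best)  # build_factory
```

A million factories lose to one ordinary factory, because the factory-maximizer’s factories never make any paperclips; the instrumental value evaporates the moment it stops serving the terminal one.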
Sorry for the pause. Have been thinking.
If there is a reason ‘what for’ to do something (What for did you buy this car? To drive to work), then it’s an instrumental value. If there is only a reason ‘why’ (Why did you buy this car? Because I like it), then it’s a terminal value. Right?
I don’t know the difference between “what for” and “why”.
If you bought the car to drive to work, it’s instrumental. If you bought it because having nice cars makes you happy, it’s instrumental. If you bought it because you just prefer for future you to have a car, whether or not he’s happy about it or even wants a car, then it’s terminal.
As for why: you can answer “why” with either “because” or “to”, but you can only answer “what for” with “to”. To avoid confusion, I prefer to use “why” when I want to get “because” and “what for” when I want to get “to”, e.g.: Why did you buy this car? Because I like it. What for did you buy this car? To drive to work.
I’m not sure, are we talking about subjective or objective values?
What’s an objective value?
“Existing freely or independently from a mind.”
How are you defining value then?
It sounds to me like objective value is a contradiction in terms.
Value is just another way to say that something is liked or disliked by someone.
I’m sorry if all this time you were talking about subjective values. I have nothing against them.