I understand wanting to help people. I have empathy and I feel all the things you’ve mentioned. What I’m trying to say is: if you suffer when you think about the suffering of others, why not try to stop thinking (caring) about it and donate to science, instead of spending your time and money on reducing suffering?
Do you think people should donate to science because that will reduce MORE suffering in the long term?
Nope. I just like science.
Update: I understand why my other comments were downvoted. But this one?
And some other people just like other people not suffering. Why should your like count more than theirs?
Could you show me where I wrote that my like should count more than theirs?
You didn’t say that explicitly, but if yours doesn’t count more than theirs, why should we spend money on yours but not theirs?
Because they can (apparently not, it seems) deal with the suffering they feel at others’ suffering without spending money on it, while enjoying spending money on science?
I’m not sure; I didn’t vote on it. But my theory would be that you seem to be making fun of people who like to reduce suffering, for no better reason than that you like a different thing (“I don’t understand why you do X” is often code for “X is dumb or silly”).
I don’t think it’s silly. I think it’s silly to spend governmental money and encourage others to spend money on it, since it makes no sense. But if you personally enjoy it, well, that’s great.
What do you mean by “makes no sense”? Do you mean it in the nihilistic sense that nothing really matters? You keep using the phrase as if it’s a knockdown argument against reducing suffering, so it might be useful to clarify what you mean.
Yes, in the nihilistic sense. If we follow the “what for?” question long enough, we will inevitably get to a point where there is no explanation, and we may therefore conclude that there is no sense in anything.
In that case, your question is already answered by the people who tell you that they want to. If nothing really matters, then the only reasons to do things are internal to minds. In which case reducing suffering is simply a very common thing for minds in this area to want to do. Why? Evolutionary advantage, perhaps. If you buy nihilism, there is no reason to reduce suffering, but there’s also no reason not to, and no reason to do anything else.
And this is exactly what I think, and exactly why I said that “I think it’s silly to spend governmental money and encourage others to spend money on it, since it makes no sense” and “if you personally enjoy it, well, that’s great.”
But why? Why is it silly? What makes it silly? Literally nothing. You act as if government money should be reserved for things that “make sense” or have a reason, but nothing does. Spending government money, or encouraging others to spend money, on reducing suffering is exactly as meaningful as anything else you could spend it on.
Senselessness makes it silly. I don’t just act that way; I also think that doing anything is silly. What I’m doing right now is silly.
I shouldn’t have included “encouraging others”; what makes governmental money different is that the government acquired its money by force, without any reason to use force. And for a government to be ethical, your ethical system has to allow the use of force without reason.
What’s wrong with the use of force? It’s not like there’s a reason not to.
I didn’t say that there is anything inherently wrong with the use of force. It’s wrong in my ethical system because I don’t like it and don’t want it to be used on me.
If your ethical system is different and allows the use of force without reason, that’s okay. But please, only use it on other people who think like you.
I don’t think you have given any argument in favor of that demand. If you really think that nothing has any meaning, why should anyone follow the golden rule and only use force on other people who think like they do?
It’s more of a request than a demand, and I understand that a person who likes the use of force most likely will not listen to it, especially when I have no arguments. They shouldn’t feel obliged to follow this request; its only intention is to show what I would like them to do.
In my experience, trying to choose what I care about does not work well, and has only resulted in increasing my own suffering.
Is the problem that thinking about the amount of suffering in the world makes you feel powerless to fix it? If so, then you can probably make yourself feel better if you focus on what you can do to have some positive impact, even if it is small. If you think “donating to science” is the best way to have a positive impact on the future, then by all means do that, and think about how the research you are helping to fund will one day reduce the suffering that all future generations will have to endure.
It could be part of the problem, but the main one, actually, is that I see no point in reducing suffering, and it looks like nobody can explain it to me.
It’s an intrinsic value. Reducing suffering is the point.
I don’t like to suffer. It’s bad for me to suffer. Other people are like me. Therefore, it’s also bad for them to suffer.
When you say that “reducing suffering is the point”, I suppose that there is a reason to reduce it. How does “it needs to be reduced” follow from “it’s bad”?
No. It’s a terminal value. When you ask what the point of doing X is, the answer is that it reduces suffering, or increases happiness, or does something else that’s terminally valuable.
I don’t see a justification for dividing values into these two categories in that post.
Do I understand you right: you think that although there is no reason why we should reduce suffering, and no reason what for, we should nevertheless do it only because somebody called it a “terminal value”?
Let me try this from the beginning.
Consider an optimization process. If placed in a universe, it will tend to direct that universe towards states that rank high in a certain utility function. The end results it moves the universe towards are called its terminal values.
Optimization processes do not necessarily have instrumental values. AIXI is the most powerful possible optimization process, but it only considers the effect of each action on its terminal values.
Evolution is another example. Species are optimized solely on the basis of their inclusive genetic fitness. Evolution does not understand, for example, that if it got rid of humans’ blind spots, they’d do better in the long run, so it might be a good idea to select for humans who are closer to having eyes with no blind spots. Since you can’t change gradually from “blind spot” to “no blind spot” without getting “completely blind” for quite a few generations in between, evolution is not going to get rid of our blind spots.
Humans are not like this. Humans can keep track of sub-goals to their goals. If a human wants chocolate as a terminal value, and there is chocolate at the store, a human can make getting to the store an instrumental value, and start considering actions based on how they help get him/her to the store. These sub-goals are known as instrumental values.
Perhaps you don’t have helping people as a terminal value. However, you have terminal values. I know this because you managed to type grammatically correct English. Very few strings are grammatically correct English, and very few patterns of movement would result in any string being sent as a comment to LessWrong.
Perhaps typing grammatically correct English is a terminal value. Perhaps you’re optimizing something else, such as your own understanding of meta-ethics, and it just so happens that grammatically correct English is a good way to get this result. In this case, it’s an instrumental value (unless you just have so much computing power that you didn’t even consider what helps you write and you just directly figured out that twitching those muscles would improve your understanding of meta-ethics, but I doubt that).
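To make the distinction in the comment above concrete, here is a minimal sketch in Python (my own illustration, not anything from the thread; all names and numbers are invented): the agent’s terminal utility is fixed, and a sub-goal’s “instrumental value” is derived entirely from how much it is expected to advance that terminal utility.

```python
# Toy illustration of terminal vs. instrumental values (all values invented).
# The agent terminally values chocolate; sub-goals have no worth of their own.

TERMINAL_UTILITY = {"chocolate": 1.0}  # valued for its own sake

# Hypothetical world model: expected terminal outcomes of each sub-goal.
EXPECTED_OUTCOME = {
    "go_to_store": {"chocolate": 0.9},  # the store probably has chocolate
    "go_to_park": {"chocolate": 0.0},   # pleasant, but chocolate-free
}

def instrumental_value(subgoal: str) -> float:
    """A sub-goal's value is entirely derived from the terminal utility."""
    outcome = EXPECTED_OUTCOME[subgoal]
    return sum(TERMINAL_UTILITY.get(k, 0.0) * p for k, p in outcome.items())

best = max(EXPECTED_OUTCOME, key=instrumental_value)
print(best)  # go_to_store -- valued only because it is expected to lead to chocolate
```

Nothing in the sketch assigns utility to the store itself; delete the chocolate entry from its expected outcome and its instrumental value drops to zero, which is the whole point of the distinction.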
Accidental comment.
Thanks for this wall of text, but you didn’t even try to answer my question. I asked for a justification of this division of values; you just explained the division to me.
If you can see the analogy, my argument sounds like this:
“The author has tried hard to tie the various components of personal development into three universal principles that can be applied to any situation. Unfortunately, human personality is a much more nuanced thing that defies such neat categorizations. The attempt to force-fit the ‘fundamental principles of personal development(!)’ into neat categories can only result in such inanities as love + truth = oneness; truth + power = courage; etc. There is no explanation of why only these categories are considered universal—why not others? After all, we have a long list of desirable qualities: say, virtue, honor, commitment, persistence, discipline, etc. On what basis do you pick three of them and declare them to be ‘fundamental principles’? If truth, love and power are the fundamental principles of personality, then what about the others?
...
The point is that there is no scientific basis for claiming that truth, power and love are the three basic principles and the others are just combinations of them. There are no hypotheses, no tests, no analysis and no proofs. No reference to any studies at any university of repute. No double-blind tests on a sample population. Just results. Whatever the author says is a revelation that does not require any external validation. His assertion is enough, since it is based on his personal experience. Believe it and you will see the results.”
Btw, it’s still extremely interesting to me how exactly the “terminality” of a value gives sense to an action that has no reason to be done.
Why do anything? It’s not enough to have an infinite or circular chain of reasoning—you can construct an infinite or circular chain of reasoning that supports any conclusion. The chain has to have an ending. That ending is what we call a terminal value.
As for why only these categories are considered universal: nobody said it has to be simple. Our values are complicated. Love, truth, oneness, power, courage, etc. are all terminal values. Some of them are also instrumental values. Power is very useful for fulfilling other values, so you will put forth more effort to achieve power than you would if it were just a terminal value. There are also instrumental values that are not terminal values, such as going to the store (assuming you don’t particularly like the store—although even then you could argue that it’s the happiness you like).
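A small sketch of the “chains of reasoning need an ending” point above (my own framing, with made-up action names): each “why?” is answered by another reason, and a justification only succeeds if the chain bottoms out at a terminal value rather than looping forever.

```python
# Toy justification chains. Each entry answers "why X?" with another reason;
# None marks a terminal value (no further reason is given or needed).
reasons = {
    "go_to_store": "get_chocolate",
    "get_chocolate": None,          # terminal value: wanted for its own sake
    "earn_money": "buy_things",
    "buy_things": "earn_money",     # circular: never bottoms out
}

def bottoms_out(action: str) -> bool:
    """Follow the chain of 'why' answers; succeed only if it hits a terminal value."""
    seen = set()
    while action is not None:
        if action in seen:
            return False  # circular chain of reasoning
        seen.add(action)
        action = reasons[action]
    return True

print(bottoms_out("go_to_store"))  # True: the chain ends at a terminal value
print(bottoms_out("earn_money"))   # False: circular, supports nothing
```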
I don’t know why. The most plausible answer I know of: because you like doing it.
Okay. Even though there are only assertions and no justifications, let’s assume that your first paragraph is right. Anyway, how does the “terminality” of a value give sense to an otherwise senseless action?
I ask you why these two categories; it looks like you even cite the right piece of my review-argument, and… bam! “Nobody said it has to be simple.”
But why? Why these two categories of values? Where is the justification? Or is it just “too basic to be explained”? If you think so, please say so.
What gives value to an otherwise senseless action is a meta-ethical question. “Terminality” is just what you call it when you value something for reasons other than it causing something else that you value.
As for why these two categories, let me try an example:
Suppose you’re a paperclip-maximizer. You value paperclips. Paperclip factories help build paperclips, so factories are good too. Given a choice between building a factory immediately and a paperclip immediately, you’d probably pick the former. It’s like you value factories more than paperclips.
But if you’re given the opportunity to build a factory-maximizer, you’d turn it down. Those factories potentially could make a lot of paperclips, but they won’t, because the factory-maximizer would need that metal to make more factories. You don’t really value factories. They’re just useful. You value paperclips.
You could come up with an exception like this for any instrumental value. No matter how much the instrumental value is maximized, you won’t care unless it helps with the terminal value. There is no such exception for your terminal values. If there are more paperclips, it’s better. End of story.
The actual utility function can be quite complicated. Perhaps you prefer paperclips in a certain size range. Perhaps you want them to be easily bent, and hard to break. In that case, your terminal value is more sophisticated than “paperclips”, but it’s something.
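The paperclip example above can be written out as a short sketch (numbers invented purely for illustration): every option is scored only by the paperclips it is ultimately expected to yield, so the factory-maximizer scores zero despite producing enormous numbers of factories.

```python
# Toy paperclip-maximizer. Options are judged solely by expected paperclips;
# factories carry no weight of their own. All numbers are made up.
options = {
    "build_paperclip_now": {"factories": 0, "paperclips": 1},
    "build_factory_now": {"factories": 1, "paperclips": 100},  # factory later yields clips
    "build_factory_maximizer": {"factories": 10_000, "paperclips": 0},  # metal goes to factories
}

def clippy_utility(outcome: dict) -> int:
    # Terminal value: paperclips. Factories matter only via the paperclip
    # count, which is already folded into the expected outcome.
    return outcome["paperclips"]

best = max(options, key=lambda o: clippy_utility(options[o]))
print(best)  # build_factory_now; the factory-maximizer loses despite 10,000 factories
```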
Sorry for the pause. I’ve been thinking.
If there is a reason “what for” to do something (“What for did you buy this car?” “To drive to work”), then it’s an instrumental value. If there is only a reason “why” (“Why did you buy this car?” “Because I like it”), then it’s a terminal value. Right?
I don’t know the difference between “what for” and “why”.
If you bought the car to drive to work, it’s instrumental. If you bought it because having nice cars makes you happy, it’s instrumental. If you bought it because you just prefer for future-you to have a car, whether or not he’s happy about it or even wants a car, then it’s terminal.
As for why: you can answer “why” with either “because” or “to”, but you can only answer “what for” with “to”. To avoid confusion, I prefer to use “why” when I want to get “because”, and “what for” when I want to get “to”, e.g. “Why did you buy this car?” “Because I like it.” “What for did you buy this car?” “To drive to work.”
I’m not sure: are we talking about subjective or objective values?
What’s an objective value?
“existing freely or independently from a mind”
How are you defining value then?
It sounds to me like objective value is a contradiction in terms.
Value is just another way to say that something is liked or disliked by someone.
I’m sorry if all this time you were talking about subjective values. I have nothing against them.