Most of what we call values seem to respond to arguments, so they’re not really the kind of fixed values that a utility maximizer would have. I would be wary of calling some cognitive feature “values that came from the EEA (the environment of evolutionary adaptedness) and are not easily changed”. Given the right argument or insight, they probably can be changed.
So, granted that it’s human to want friendship, community, etc., I’m still curious whether it’s also human to care less about these things after realizing that they boil down to status and alliance games, and that the outcomes of these games don’t count for much in the larger scheme of things.
> So, granted that it’s human to want friendship, community, etc., I’m still curious whether it’s also human to care less about these things after realizing that they boil down to status and alliance games, and that the outcomes of these games don’t count for much in the larger scheme of things.
Well, is it also human to stop desiring tasty food once you realize that it boils down to super-stimulation of hardware that evolved as a device for impromptu chemical analysis to sort out nutritionally adequate stuff from the rest?
As for the “larger scheme of things,” that’s one of those emotionally-appealing sweeping arguments that can be applied to literally anything to make it seem pointless and unworthy of effort. Selectively applying it is a common human bias. (In fact, I’d say it’s a powerful general technique for producing biased argumentation.)
> Well, is it also human to stop desiring tasty food once you realize that it boils down to super-stimulation of hardware that evolved as a device for impromptu chemical analysis to sort out nutritionally adequate stuff from the rest?
Not to stop desiring it entirely, but to care less about it than if I didn’t realize, yes. (I only have a sample size of one here, namely myself, so I’m curious if others have the same experience.)
> As for the “larger scheme of things,” that’s one of those emotionally-appealing sweeping arguments that can be applied to literally anything to make it seem pointless and unworthy of effort. Selectively applying it is a common human bias. (In fact, I’d say it’s a powerful general technique for producing biased argumentation.)
I don’t think I’m applying it selectively… we’re human and we can only talk about one thing at a time, but other than that I think I do realize that this is a general argument that can be applied to all of our values. It doesn’t seem to affect all of them equally though. Some values, such as wanting to be immortal, and wanting to understand the nature of reality, consciousness, etc., seem to survive the argument much better than others. :)
> I think I do realize that this is a general argument that can be applied to all of our values. It doesn’t seem to affect all of them equally though. Some values, such as wanting to be immortal, and wanting to understand the nature of reality, consciousness, etc., seem to survive the argument much better than others. :)
Honestly, I don’t see what you’re basing that conclusion on. What, according to you, determines which human values survive that argument and which not?
> Honestly, I don’t see what you’re basing that conclusion on.
I’m surprised that you find the conclusion surprising or controversial. (The conclusion being that some values survive the “larger scheme of things” argument much better than others.) I know that you wrote earlier:
> As for the “larger scheme of things,” that’s one of those emotionally-appealing sweeping arguments that can be applied to literally anything to make it seem pointless and unworthy of effort.
but I didn’t think those words reflected your actual beliefs (I thought you just weren’t paying enough attention to what you were writing). Do you really think that people like me, who do not think that literally everything is pointless and unworthy of effort, have just avoided applying the argument to some of our values?
> What, according to you, determines which human values survive that argument and which not?
It seems obvious to me that some values (e.g., avoiding great pain) survive the argument by being hardwired not to respond to any arguments, while others (saving humanity so we can develop an intergalactic civilization, or being the first person in an eventually intergalactic civilization to really understand how decisions are supposed to be made) are grand enough that the “larger scheme of things” argument just doesn’t apply. (I’m not totally sure I’m interpreting your question correctly, so let me know if that doesn’t answer it.)
> Do you really think that people like me, who do not think that literally everything is pointless and unworthy of effort, have just avoided applying the argument to some of our values?
Logically, it’s either that, or you have thought about it and concluded that the argument doesn’t apply to some values. I don’t find the reasons for that conclusion obvious, and in practice I often see this argument applied selectively as a common bias, which is why I asked.
> It seems obvious to me that some values (e.g., avoiding great pain) survive the argument by being hardwired not to respond to any arguments, while others (saving humanity so we can develop an intergalactic civilization, or being the first person in an eventually intergalactic civilization to really understand how decisions are supposed to be made) are grand enough that the “larger scheme of things” argument just doesn’t apply. (I’m not totally sure I’m interpreting your question correctly, so let me know if that doesn’t answer it.)
Yes, that answers my question, thanks. I do have disagreements with your conclusion, but I grant that you are not committing the above-mentioned fallacy outright.
In particular, my objections are that: (1) for many people, social isolation and lack of status are in fact hardwired sources of great pain (though this may not apply to you, so there is no disagreement here if you’re not making claims about other people); (2) I find the future large-scale developments you speculate about highly unlikely, even assuming technology won’t be the limiting factor; and finally (3) even an intergalactic civilization will count for nothing in the “larger scheme of things,” given the eventual heat death of the universe. But each of these, except perhaps (1), would be a complex topic for a whole other discussion, so I think we can let our disagreements rest now that we’ve clarified them.