I don’t personally endorse it as a terminal value, but it’s everyone’s own decision whether to endorse it or not.
I don’t believe it is. At the very least, it’s relatively easy to decide incorrectly, so the fact of having (provisionally) decided doesn’t answer the question of what the correct decision is. “It’s everyone’s own decision” or “everyone is entitled to their own beliefs” sounds like very bad epistemology.
I cited what seems to me like a strong theoretical argument for antipredicting terminal indifference to personal well-being. That your current conclusion is contrary to what this argument endorses doesn’t seem to address the argument itself.
I thought that your previous comment was simply saying that
1) in deciding whether or not we should value the survival of a “me”, the evolutionary background of this value is irrelevant
2) the reason why people value the survival of a “me” is unrelated to the instrumental benefits of the goal
I agree with those claims, but I don’t see them as being contrary to my decision not to personally endorse such a value. You seem to be saying that the question of whether or not a “me” should be valued is in some sense an epistemological question, while I see it as a choice of personal terminal values. The choice of terminal values is unaffected by epistemological considerations; otherwise they wouldn’t be terminal values.
You seem to be saying that the question of whether or not a “me” should be valued is in some sense an epistemological question, while I see it as a choice of personal terminal values. The choice of terminal values is unaffected by epistemological considerations; otherwise they wouldn’t be terminal values.
Wait—what? Are you partly defining terminal values via their being unaffected by epistemic considerations? This makes me want to ask a lot of questions for which I would otherwise take answers for granted. Like: are there any terminal values? Can a person choose terminal values? Do choices express values that were antecedent to the choice? Can a person have “knowledge” or some closely related goal as a personal terminal value?