Well, this locking does not really seem to work well for me. I know that ideal terminal values are supposed to be along the lines of wanting other people to be happy, but I really struggle to get from the fact that some signals in some brains are labelled happiness to the value that these signals matter. Since I have a rather depressive personality, not really caring about my own happiness, I cannot really care about others being happy either, and so no terminal values are found. The struggle is largely this: if certain brain signals like happiness do not come inherently marked with little XML tags saying "yes, you should care about this", where does the should, the value, come from?
The closest thing I can get to is something like nationalism extended over all humankind: we are all 22nd cousins or something, so let's be allies and face this cold, cruel, lifeless universe together, or something similarly sentimental. But that isn't a terminal value; it is more like a vague feeling of affection. A true utilitarian would even care about a sentient computer being happy, or suffering, or dying, and I just cannot figure out why.
Since I have a rather depressive personality, not really caring about my own happiness, I cannot really care about others being happy either, and so no terminal values are found.
Well. Thinking about it, I realize that for your kind of personality, falling back into caring and following goals indeed doesn't seem necessary. On the other hand, the arbitrariness of nihilism isn't that different from the passivity of depression, so in a way maybe you already did lock back into the same pattern anyway?