I don’t know. Is it useful for you to be unhappy when people die? For how long? How will you know when you’ve been sufficiently unhappy? What bad thing will happen if you’re not unhappy when people die? What good thing happens if you are unhappy?
And I mean these questions specifically: not “what’s good about being unhappy in general?” or “what’s good about being unhappy when people die, from an evolutionary perspective?”, but why do YOU, specifically, think it’s a good thing for YOU to be unhappy when one specific person dies?
My hypothesis: your examination will find that the idea of not being unhappy in this situation is itself provoking unhappiness. That is, you think you should be unhappy when someone dies, because the idea of not being unhappy will make you unhappy also.
The next question to ask will then be what, specifically, you expect to happen in response to that lack of unhappiness, that will cause you to be unhappy.
And at that point, you will discover something interesting: an assumption that you weren’t aware of before.
So, if you believe that your unhappiness should match the facts, it would be a good idea to find out what facts your map is based on, because “death ⇒ unhappiness” is not labeled on the territory.
Pjeby, I’m unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it. To say that it is encoded directly into my utility function (not just that certain things are bad, but that I should be a person who feels bad about them) might be oversimplifying in this case, since we are dealing with a structurally complicated aspect of morality. But just as I don’t think music is valuable without someone to listen to it, I don’t think I’m as valuable if I don’t feel bad about people dying.
If I knew a few other things, I think, I could build an AI that would simply act to prevent the death of sentient beings, without feeling the tiniest bit bad about it; but that AI wouldn’t be what I think a sentient citizen should be, and so I would try not to make that AI sentient.
It is not my future self who would be unhappy if all his unhappiness were eliminated; it is my current self who would be unhappy on learning that my nature and goals would thus be altered.
Did you read the Fun Theory sequence and the other posts I referred you to? I’m not sure if I’m repeating myself here.
Possibly relevant: A General Theory of Love suggests that love (imprinting?) includes needing the loved one to help regulate basic body systems. It starts with the observation that humans are the only species whose babies die from isolation.
I’ve read a moderate number of books by Buddhists, and as far as I can tell, while a practice of meditation makes ordinary problems less distressing, it doesn’t take the edge off of grief at all. It may even make grief sharper.
I’m unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it.
Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?
I ask because my experience tells me that there are only a handful of “terminal” negative values, and they are human universals; as far as I can tell, it isn’t possible for a human being to create their own terminal (negative) values. Instead, they derive intermediate negative values, and then forget how they did the derivation… following which they invent rationalizations that sound a lot like the ones they use to explain why death is a good thing.
Don’t you find it interesting that you should defend this “terminal” value so strongly, without actually asking yourself the question, “What really would happen if I were not unhappy in situation X?” (Where situation X is actually specified to a level allowing sensory detail—not some generic abstraction.)
It’s clear from what you’ve written throughout this thread that the answer to that question is something like, “I would be a bad person.” And in my experience, when you then ask something like, “And how did I learn that that would make me bad?”, you’ll discover specific, emotional memories that provide the only real justification you had for thinking this thought in the first place… and that it has little or no connection to the rationalizations you’ve attached to it.
Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?
You could actually tell me what I fear, and I’d recognize it when I heard it?
What would it take for me to convince you that I’m repulsed by the thing-as-it-is and not its future consequence?
I ask because my experience tells me that there are only a handful of “terminal” negative values
I strongly suspect, then, that you are too good at finding psychological explanations! Conditioned dislike is not the same as conditional dislike. We can train our terminal values, and we can be moved by arguments about them. Now, there may be a humanly universal collection of negative reinforcers, although there is not any reason to expect the collection to be small; but that is not the same thing as a humanly universal collection of terminal values.
I can tell you just exactly what would happen if I weren’t unhappy: I would live happily ever afterward. I just don’t find that to be the most appealing prospect I can imagine, though one could certainly do worse.
What would it take for me to convince you that I’m repulsed by the thing-as-it-is and not its future consequence?
A source listing for the relevant code and data structures in your brain. At the moment, the closest thing I know to that is examining formative experiences, because recontextualizing those experiences is the most rapid way to produce testable change in a human being.
We can train our terminal values, and we can be moved by arguments about them.
Then we mean different things by “terminal” in this context, since I’m referring here to what comes built-in to a human, versus what is learned by a human. How did you learn that you should have that particular terminal value?
I can tell you just exactly what would happen if I weren’t unhappy: I would live happily ever afterward.
As far as I can tell, that’s a “far” answer to a “near” question—it sounds like the result of processing symbols in response to an abstraction, rather than one that comes from observing the raw output of your brain in response to a concrete question.
In effect, my question is, what reinforcer shapes/shaped you to believe that it would be bad to live happily ever after?
(Btw, I don’t claim that happily-ever-after is possible—I just claim that it’s possible and practical to reduce one’s unhappiness by pruning one’s negative values to those actually required to deal with urgent threats, rather than allowing them to be triggered by chronic conditions. I don’t even expect that I won’t grieve people important to me… but I also expect to get over it, as quickly as is practical for me to do so.)