I have a strong contrarian hunch that human terminal goals converge as long as you go far enough up the goal chain.

What you see in the wild is people having vastly different tastes for how to live life. One likes freedom, the next likes community, and the next is just trying to gain as much power as possible.

But I call those subterminal goals, and I think what generated them is the same algorithm with different inputs (different perceived possibilities?). That algorithm, which I think optimizes for some proxies of genetic survival like sameness and self-preservation, is the terminal goal.

And no, I’m not trying to enforce any values. This isn’t about things-in-the-world that ought to make us happy. This is about inner game.
Wait. Terminal goals are, by definition, the head of the goal chain. You can’t go any further.
My current thesis about human goals is that they contain loops and have no heads—not that terminal goals diverge, but that there is no such thing as a terminal human goal.
My terminal goals, as far as I can tell, involve the state of the world and don’t involve happiness at all. How does that fit into your framework?
I think you misunderstand your goals. What kind of unhappy life do you see as satisfying your goals?
My own existence; that existence being subject to certain liberties and freedoms (NOT the same as happiness, despite what Thomas Jefferson says); understanding the structure, underlying rules, limits, and complexities of the universe at its varying levels; and tiling the universe with a multitude of diverse forms of sentient life.
Edit: Maybe I should have stopped at the first one, though, since that’s the most universal and illustrates the point quite nicely. In a game of “would you rather...”, I would rather take any outcome that leaves me alive, no matter how hellish, over one where I am dead. No qualification. I don’t see how that could be true if happiness were a terminal goal.
Edit2: If happiness were my terminal goal, why not put myself on a perpetual heroin drip? I think the answer is that happiness is just an instrumental goal, like hunger and thirst satisfaction, that lets us focus on the next layer of Maslow’s hierarchy. Asking about terminal goals is asking about the top of the hierarchy, which is not happiness.
I don’t consider goals to be what people say they would do, but what they would actually do. So I don’t accept your idea of your terminal goal unless it is true that if you were in a hellish scenario indefinitely, with a button that would cause you to cease to exist, you would not press the button.
I think we have a factual disagreement here: I think you would press the button, and you think you would not. I think you are mistaken, but there does not seem any way to resolve the disagreement, since we cannot run the test.
This is the same username2 as the sibling.
After spending some time thinking about it, I think there is a constructive response I can make. I believe that brains, and the goals they encode, are fully malleable given time and pressure. Everyone breaks under torture, and brainwashing can be used to rewire people to do or want anything at all. If I were actually in a hellish, eternal-suffering outcome, I’m sure that I would eventually break; I am absolutely certain of that. But that is because the person who breaks is no longer the same as the person who exists now. The person who exists now, typing this response, would still rather roll the dice on a hellish outcome than accept certain oblivion. Give me the option of a painless death or _, with literally anything in that blank, and I’ll take the latter.
Does that make sense?
It makes sense as a description of possible future behavior. That is, if you are allowed to press a button now which will commit you to a hellish existence rather than non-existence, you might actually press it. But in this case I say you have a false belief, namely that a hellish existence is better than non-existence. What you call “breaking” would simply be accepting the truth of the matter.
I take inspiration from the movie Touching the Void. Do you?
Beyond that I don’t know what to say. I’ve stated my preferences and you’ve said “I don’t believe you.” I have no desire to respond to that.