● Humans do not have one terminal value (unless they are mentally ill).
Why though?
I don’t see any other way to (ultimate) alignment/harmony/unification between (or within) minds than to use a single terminal-value-grounded currency for resolving all conflicts.
For as soon as we weigh two terminal values against each other, we are evaluating them through a shared dimension (e.g., force or mass in the case of a literal scale as the comparator), and are thus logically forced to accept either that one of the terminal values (or its motivating power) could be translated into the other, or that there is some third terminal {value/motivation/tension} for which the others are tools.
Do you suggest getting rid of the idea of terminal value(s) altogether, or could you explain how we can resolve conflicts between two terminal values, if terminal means irreducible?
(To the extent that I think in terminal and instrumental values, I claim to care terminally only about suffering. I also claim to not be mentally ill. A lot of Buddhists etc. might make similar claims, and I feel like the statement above quoted from the Conclusion without more context would label a lot of people either mentally ill or not human, while to me the process of healthy unification feels like precisely the process of becoming a terminal value monist. :-))
could you explain how we can resolve conflicts between two terminal values, if terminal means irreducible?
Suppose the following mind architecture:
When in a normal state, the mind desires games.
When the body reports low blood sugar levels, the mind desires food.
When in danger, the mind desires running away.
When in danger AND with low blood sugar levels, the mind desires freezing up.
Something like this has a system for resolving conflicts between terminal values: different terminal values are swapped in as the situation warrants. But although there is an evolutionary logic to them (their relative weights are drawn from the kind of distribution which was useful for survival on average), the conflict-resolution system is not explicitly optimizing for any common currency, not even survival. There just happens to be a hodgepodge of situational variables and processes which end up resolving different conflicts in different ways.
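To make the arbitration structure concrete, here is a minimal sketch (in Python, with state flags and desire names of my own choosing) of the architecture above: the situation selects which desire is active, and at no point is any common quantity computed or maximized.

```python
# A minimal sketch of the mind architecture described above. The state
# flags and desire names are illustrative; nothing here computes a
# shared utility.

def active_desire(in_danger: bool, low_blood_sugar: bool) -> str:
    """Return whichever desire the current situation swaps in."""
    if in_danger and low_blood_sugar:
        return "freeze up"   # the combined state has its own hard-coded rule
    if in_danger:
        return "run away"
    if low_blood_sugar:
        return "food"
    return "games"           # the default, "normal" state

# There is no scoring step: which desire wins is wired directly to the
# situation, the way evolution might happen to have wired it.
print(active_desire(in_danger=True, low_blood_sugar=True))  # -> "freeze up"
```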
I presented a more complex model of something like this in “Subagents, akrasia and coherence in humans”. There I did say that the subagents are optimizing for an implicit utility function, but the values for that utility function come from cultural and evolution-historical weights, so it still doesn’t have any consistent “common currency”.
Often minds seem to end up in states where a particular set of goals or subagents dominates, because those are the ones which have managed to accumulate the most power within the mind-system. It does not look like they became the most powerful through something like an appeal to shared values, but rather through the details of how that person’s life-history, their personal neurobiological makeup, etc. happen to be set up, and which kinds of neurological processes those details have happened to favor.
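As an illustration of this kind of contingency (a toy model of my own, not anything from the thread), consider a rich-get-richer dynamic: whichever subagent happens to win early on wins more often later, so which one ends up dominating depends on the accidents of history rather than on any appeal to shared values.

```python
import random

# A toy rich-get-richer sketch: the subagent that happens to win early
# keeps winning, so the eventual "dominant" subagent is a product of
# contingent history, not of a shared currency.

def run_life(seed: int, steps: int = 10_000) -> str:
    rng = random.Random(seed)
    power = {"work": 1.0, "comfort": 1.0, "status": 1.0}
    for _ in range(steps):
        # The chance of winning this moment is proportional to current power,
        winner = rng.choices(list(power), weights=list(power.values()))[0]
        # and each win confers still more power within the mind-system.
        power[winner] += 1.0
    return max(power, key=power.get)

# Different "life histories" (seeds) entrench different dominant subagents,
# even though all three start out exactly equal.
print([run_life(seed) for seed in range(5)])
```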
Similarly, governments repeat the same pattern at the interpersonal level: value conflicts are not resolved by being weighed in terms of some higher-level value. Rather, they are determined through a complex process in which a lot of contingent details, such as a country’s parliamentary procedures, cultural traditions, voting systems, etc., have a big influence on shaping which way the chips happen to fall WRT any given decision.
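A small worked example of that procedure-dependence (with made-up ballots; this is a standard social-choice illustration, not anything specific to the thread): the same set of voter rankings elects different candidates depending on whether the country happens to use plurality voting or a Borda count.

```python
# Made-up ballots: 4 voters rank A>B>C, 3 rank B>C>A, 2 rank C>B>A.
ballots = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "B", "A")] * 2

def plurality(ballots):
    """Count only first choices."""
    tally = {}
    for b in ballots:
        tally[b[0]] = tally.get(b[0], 0) + 1
    return max(tally, key=tally.get)

def borda(ballots):
    """Award len-1, len-2, ... points down each voter's full ranking."""
    scores = {}
    for b in ballots:
        for rank, cand in enumerate(b):
            scores[cand] = scores.get(cand, 0) + (len(b) - 1 - rank)
    return max(scores, key=scores.get)

print(plurality(ballots))  # A wins on first-place votes alone (4 of 9)
print(borda(ballots))      # B wins once the full rankings are counted
```

Neither outcome appeals to any higher-level shared value; which candidate “wins” is an artifact of whichever aggregation rule happens to be in place.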
It occurred to me that, for a human being, there is no way not to make a choice between different preferences: at any given moment I am doing something, even if it is just continuing to think or indulging in procrastination. I either eat or run, so the conflict is always resolved.
However, an interesting thing is that sometimes a person tries to do two things simultaneously, for example when the content of their speech and its tone do not match. This has happened to me, and I had to explain that only the content mattered and the tone should be ignored.
It occurred to me that, for a human being, there is no way not to make a choice between different preferences: at any given moment I am doing something, even if it is just continuing to think or indulging in procrastination. I either eat or run, so the conflict is always resolved.
This matches an exercise you may be asked to do as part of Buddhist training towards enlightenment. During meditation, get your attention focused on itself, then try to do something other than what you would do. If you have enough introspective access, you’ll get an experience of being unable to do anything other than exactly what you do: first-hand experience with determinism at a level that bypasses the process that creates the illusion of free will. So not only can you only ever do the one thing you actually do (for some reasonable definition of what “one thing” is here), you can’t ever do anything other than the one thing you end up doing, viz. there was no way any counterfactual was ever going to be realized.
A good description of why any one value may not be good is given in https://www.academia.edu/173502/A_plurality_of_values
I am sure you have more than one value. For example, the best way to prevent even the slightest possibility of suffering would be suicide, but since you are alive, you evidently also care about being alive. Moreover, I think that claims about values are not values; they are just good claims.
The real cases of a “one-value person” are maniacs: a maniac is the human version of a paperclipper. Typical examples of such maniacs are people obsessed with sex, money, or collecting some random thing, as well as drug addicts. Some of them are psychopaths: they look normal and are very effective, but do everything for just one goal.
Thanks for your comment. I will update the conclusion so that the bullet points are linked to the parts of the text which explain them.
Terminal value monism is possible with impersonal compassion as the common motivation for resolving all conflicts. This means that every thus-aligned small self lives primarily to prevent hellish states wherever they may arise, and that personal euthanasia is never a primary option, especially considering that survivors of suffering may later be in a good position to understand suffering and help with it in others (as well as contributing themselves, as examples, to our collective wisdom of life narratives that do or don’t get stuck in hellish ways).
Terminal value monism may be possible as a pure philosophical model, but real biological humans have more complex motivational systems.
Are you speaking from personal experience here, Teo? This seems like a plausible interpretation of self-experience under certain conditions, given your mention of “impersonal compassion” (I’m being vague to avoid biasing your response), but it’s also contradictory to what we theorize to be possible based on the biological constructs on which the mind is manifested. I’m curious because it might point to a way to better understand the different viewpoints in this thread.