Much discussion about “minimization of suffering” etc. ensued from my first response to this comment, but I thought I should reiterate the point I was trying to make:
I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.
(Tentative definition: “suffering” is any kind of discomfort over which the subject has no control.)
All other values (from any part of the political continuum) -- “human rights”, “justice”, “fairness”, “morality”, “faith”, “loyalty”, “honor”, “patriotism”, etc. -- are not rational terminal values.
This isn’t to say that they are useless. They serve as a kind of ethical shorthand, guidelines, rules-of-thumb, “philosophical first-aid”: somewhat-reliable predictors of which actions are likely to cause harm (and which are not) -- memes which are effective at reducing harm when people are infected by them. (Hence society often works hard to “sugar coat” them with simplistic, easily-comprehended—but essentially irrelevant—justifications, and otherwise encourage their spread.)
Nonetheless, they are not rational terminal values; they are stand-ins.
They also have a price:
they do not adapt well to changes in our evolving rational understanding of what causes harm/suffering, so that rules which we now know cause more suffering than benefit are still happily propagating out in the memetic wilderness...
any rigid rule (like any tool) can be abused.
...
I seem to have taken this line of thought a bit further than I meant to originally—so to summarize: I’d really like to hear if anyone believes there are rational terminal values other than (or which cannot ultimately be reduced to) “minimizing suffering”.
I disagree. I’ll take suffering rather than death any day, thank-you-very-much.
Furthermore, I have reason to believe that, if I were offered the opportunity to instantaneously and painlessly wipe out all life in the universe, many compassionate humans would support my decision not to do so, despite all the suffering which is thereby allowed to continue.
You think you’re disagreeing with me, but you’re not; I would say that for you, death would be a kind of suffering—the very worst kind, even.
I would also count the “wipe out all life” scenario as an extreme form of suffering. Anyone with any compassion would suffer in the mere knowledge that it was going to happen.
If you’re going to define suffering as ‘whatever we don’t like,’ including the possibility that it’s different for everyone, then I agree with your assertion but question its usefulness.
It’s not what “we”—the people making the decision or taking the action—don’t like; it’s what those affected by the action don’t like.
Learning is a terminal value for me, which I hold irreducible to its instrumental advantages in contributing to my well-being.
That seems related to what I was trying to get at with the placeholder-word “freedom”—I was thinking of things like “freedom to explore” and “freedom to create new things”—both of which seem highly related to “learning”.
It looks like we’re talking about two subtly different types of “terminal value”, though: for society and for one’s self. (Shall we call them “external” and “internal” TVs?)
I’m inclined to agree with your internal TV for “learning”, but that doesn’t mean that I would insist that a decision which prevented others from learning was necessarily wrong—perhaps some people have no interest in learning (though I’m not going to be inviting them to my birthday party).
If a decision prevented learnophiles from learning, though, I would count that as “harm” or “suffering”—and thus it would be against my external TVs.
Taking the thought a little further: I would be inclined to argue that unless an individual is clearly learnophobic, or it can be shown that too much learning could somehow damage them, then preventing learning in even neutral cases would also be harm—because learning is part of what makes us human. I realize, though, that this argument is on rather thinner rational ground than my main argument, and I’m mainly presenting it as a means of establishing common emotional ground. Please ignore it if this bothers you.
Take-away point: My proposed universal external TV (prevention of suffering) defines {involuntary violation of internal TVs} as harm/suffering.
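To make that take-away concrete, here is a minimal sketch of the accounting it implies; the class names, the fields, and the use of explicit consent as a stand-in for “voluntary” are all illustrative guesses rather than part of the proposal itself.

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    name: str
    internal_tvs: set = field(default_factory=set)   # what this subject terminally values
    consented_to: set = field(default_factory=set)   # violations the subject has knowingly accepted

def harm(subject, violated_values):
    """External-TV accounting: only involuntary violations of a subject's own
    internal TVs are counted as harm/suffering."""
    return len((set(violated_values) & subject.internal_tvs) - subject.consented_to)

learnophile = Subject("alice", internal_tvs={"learning"})
indifferent = Subject("bob")  # no interest in learning

print(harm(learnophile, {"learning"}))  # 1: preventing learning harms someone who values it
print(harm(indifferent, {"learning"}))  # 0: no internal TV violated, so no harm is counted
```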
Hope that makes sense.
I think you are wrong but I don’t think you’ve even defined the goal clearly enough to point to exactly where. Some questions:
1. How do we weight individual contributions to suffering? Are all humans weighted equally? Do we consider animal suffering?
2. How do we measure suffering? Should we prefer to transfer suffering from those with a lower pain threshold to those with a greater tolerance?
3. How do you avoid the classic unfriendly AI problem of deciding to wipe out humanity to eliminate suffering?
4. Do you think that people actually generally act in accordance with this principle, or only that they should? If the latter, to what extent do you think people currently do act in accordance with this value?
There are plenty of other problems with the idea of minimizing suffering as the one true terminal value but I’d like to know your answers to these questions first.
Points 1 and 2:
I don’t know. I admitted that this was an area where there might be individual disagreement; I don’t know the exact nature of the fa() and fb() functions—just that we want to minimize [my definition of] suffering and maximize freedom.
Actually, on thinking about it, I’m thinking “freedom” is another one of those “shorthand” values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would a Republican. [rimshot]
The point is not that we can all agree on a quantitative assessment of which actions are better than others, but that we can all agree that the goal of all these supposedly-terminal values (which are not in fact terminal) is to minimize suffering*.
(*Should I call it “subjective suffering”? “woozalian suffering”?)
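For concreteness, here is a minimal sketch of the kind of bookkeeping I have in mind, on the assumption (and it is only an assumption) that fa() is the per-subject weighting asked about in question 1 and fb() is the measurement function asked about in question 2; nothing here pins down what either function should actually look like.

```python
def fa(subject):
    # Placeholder weighting (question 1): treat every subject equally.
    return 1.0

def fb(discomfort):
    # Placeholder measurement (question 2): take the subject's own rating at face value.
    return discomfort

def aggregate_suffering(outcome):
    # outcome: list of (subject, involuntary discomfort as rated by that subject)
    return sum(fa(subject) * fb(discomfort) for subject, discomfort in outcome)

# Toy comparison: the "better" action is simply the one whose outcome minimizes
# the aggregate, whatever fa and fb eventually turn out to be.
outcome_a = [("alice", 3.0), ("bob", 1.0)]
outcome_b = [("alice", 0.5), ("bob", 0.5)]
print(min([outcome_a, outcome_b], key=aggregate_suffering))  # outcome_b
```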
Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.
Point 4: Yes, with some major caveats...
First, I think this principle is at the heart of human wiring. Some people may not have it (about 5% of the population lacks any empathy), but we’re not inviting those folks to the discussion table at this level.
Second… many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values—but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one’s self. …and (b) only supersedes (a) for people whose self-interest outweighs their integrity.
Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.
So, how much suffering would you say an unoccupied volume of space is subject to? A lump of nonliving matter? A self-consistent but non-instantiated hypothetical person?
It’s true that there would be no further suffering once the destruction was complete.
This is a bit of an abstract point to argue over, but I’ll give it a go...
I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle—but perhaps it, or something like it, needs to be included in order to avoid the “destroy everything instantly and painlessly” solution.
That said, I think it’s more of a glitch in the algorithm than a serious exception to the principle. Can you think of any real-world examples, or class of problems, where anyone would seriously argue for such a solution?
The classic one is euthanasia.
Your example exposes the flaw in the “destroy everything instantly and painlessly” pseudo-solution: the latter assumes that life is more suffering than pleasure. (Euthanasia is only performed—or argued for, anyway—when the gain from continuing to live is believed to be outweighed by the suffering.)
I think this shows that there needs to be a term for pleasure/enjoyment in the formula...
...or perhaps a concept or word which equates to either suffering or pleasure depending on its sign (+/-), and then we can simply say that we’re trying to maximize that term—where the exact aggregation function has yet to be determined, but we know it has a positive slope.
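As a rough sketch of that reformulation (the identity function and the example numbers are placeholders; the actual aggregation is deliberately left open): each affected subject contributes one signed term, negative for suffering and positive for enjoyment, and the goal is to maximize any aggregation that rises with each term.

```python
def g(term):
    # Any increasing ("positive slope") aggregation of a signed well-being term;
    # the identity function is just the simplest placeholder.
    return term

def aggregate_wellbeing(terms):
    # terms: one signed value per affected subject (negative = suffering, positive = enjoyment)
    return sum(g(t) for t in terms)

# Why "destroy everything painlessly" loses under this framing: an empty universe
# scores zero, which only wins if the existing terms sum to a negative total --
# i.e. only if life really is more suffering than pleasure.
status_quo = [2.0, -1.0, 3.0]   # hypothetical signed terms for three subjects
oblivion = []                    # no subjects left, no terms
print(max([status_quo, oblivion], key=aggregate_wellbeing))  # status_quo wins
```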
I don’t know. I admitted that this was an area where there might be individual disagreement; I don’t know the exact nature of the fa() and fb() functions—just that we want to minimize [my definition of] suffering and maximize freedom.
So you want to modify your original statement:
I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.
To something like: “I propose that the ultimate terminal value of every rational, compassionate human is to minimize [woozle’s definition of] suffering (which woozle can’t actually define but knows it when he sees it)”?
Your proposal seems to be phrased as a descriptive rather than normative statement (‘the ultimate terminal value of every rational, compassionate human is’ rather than ‘should be’). As a descriptive statement this seems factually false unless you define ‘rational, compassionate human’ as ‘human who aims to minimize woozle’s definition of suffering’. As a normative statement it is merely an opinion and one which I disagree with.
So I don’t agree that minimizing suffering by any reasonable definition I can think of (I’m having to guess since you can’t provide one) is or should be the terminal value of human beings in general or this human being in particular. Perhaps that means I am not rational or compassionate by your definition but I am not entirely lacking in empathy—I’ve been known to shed a tear when watching a movie and to feel compassion for other human beings.
Well, you need to make some effort to clarify your definition, then. If killing someone to save them from an eternity of torture is an increase in suffering by your definition, what about preventing a potential someone from ever coming into existence? Death represents the cessation of suffering and the cessation of life, and is extreme suffering by your definition. Is abortion or contraception also a cause of great suffering due to the denial of a potential life? If not, why not?
Second… many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values—but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one’s self. …and (b) only supersedes (a) for people whose self-interest outweighs their integrity.
So everyone shares your self-declared terminal value of minimizing suffering, but many of them don’t know it because they are confused, brainwashed or evil? Is there any point in me debating with you since you appear to have defined my disagreement to be confusion or a form of psychopathy?
Are you saying that I have to be able to provide you an equation which produces a numeric value as an answer before I can argue that ethical decisions should be based on it?
But ok, a rephrase and expansion:
I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can reasonably be declared to be “wrong” unless it can at least be shown to cause significant amounts of such discomfort. (Can we at least acknowledge that it’s ok to use qualitative words like “significant” without defining them exactly?)
I intend it as a descriptive statement (“is”), and I have been asking for counterexamples: show me a situation in which the “right” decision increases the overall harm/suffering/discomfort of those affected.
I am confident that I can show that any supposed counterexamples in fact depend implicitly on the rationale I am proposing, i.e. minimizing involuntary discomfort.