I think you are wrong, but I don’t think you’ve even defined the goal clearly enough for me to point to exactly where. Some questions:
1. How do we weight individual contributions to suffering? Are all humans weighted equally? Do we consider animal suffering?
2. How do we measure suffering? Should we prefer to transfer suffering from those with a lower pain threshold to those with a greater tolerance?
3. How do you avoid the classic unfriendly AI problem of deciding to wipe out humanity in order to eliminate suffering?
4. Do you think that people generally do act in accordance with this principle, or only that they should? If the latter, to what extent do you think people currently do act in accordance with this value?
There are plenty of other problems with the idea of minimizing suffering as the one true terminal value, but I’d like to know your answers to these questions first.
Points 1 and 2:
I don’t know. I admitted that this was an area where there might be individual disagreement; I don’t know the exact nature of the fa() and fb() functions—just that we want to minimize [my definition of] suffering and maximize freedom.
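Purely to illustrate the shape I have in mind, here is a toy sketch; the plain sums standing in for fa() and fb(), the equal weighting of subjects, and the example numbers are all assumptions made up for illustration, not part of the claim:

```python
# Toy sketch only: the real fa()/fb() are unspecified; plain sums and equal
# per-subject weights are assumptions made purely for illustration.

def f_a(suffering_levels):
    """Aggregate suffering across the affected subjects (assumed: plain sum)."""
    return sum(suffering_levels)

def f_b(freedom_levels):
    """Aggregate freedom across the affected subjects (assumed: plain sum)."""
    return sum(freedom_levels)

def score(suffering_levels, freedom_levels):
    """Higher is better: less aggregate suffering, more aggregate freedom."""
    return -f_a(suffering_levels) + f_b(freedom_levels)

# Compare two hypothetical actions by their projected outcomes.
print(score([2, 5, 1], [3, 3, 3]))  # -8 + 9 = 1
print(score([4, 4, 4], [5, 5, 5]))  # -12 + 15 = 3 (preferred under this toy scoring)
```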
Actually, on reflection, I think “freedom” is another one of those “shorthand” values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would a Republican. [rimshot]
The point is not that we can all agree on a quantitative assessment of which actions are better than others, but that we can all agree that the goal of all these supposedly-terminal values (which are not in fact terminal) is to minimize suffering*.
(*Should I call it “subjective suffering”? “woozalian suffering”?)
Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.
Point 4: Yes, with some major caveats...
First, I think this principle is at the heart of human wiring. Some people may not have it (about 5% of the population lacks any empathy), but we’re not inviting those folks to the discussion table at this level.
Second, many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values—but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one’s self. And (b) only supersedes (a) for people whose self-interest outweighs their integrity.
Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.
So, how much suffering would you say an unoccupied volume of space is subject to? A lump of nonliving matter? A self-consistent but non-instantiated hypothetical person?
It’s true that there would be no further suffering once the destruction was complete.
This is a bit of an abstract point to argue over, but I’ll give it a go...
I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle—but perhaps it, or something like it, needs to be included in order to avoid the “destroy everything instantly and painlessly” solution.
That said, I think it’s more of a glitch in the algorithm than a serious exception to the principle. Can you think of any real-world examples, or classes of problems, where anyone would seriously argue for such a solution?
The classic one is euthanasia.
Your example exposes the flaw in the “destroy everything instantly and painlessly” pseudo-solution: it assumes that life is more suffering than pleasure. (Euthanasia is only performed—or argued for, anyway—when the gain from continuing to live is believed to be outweighed by the suffering.)
I think this shows that there needs to be a term for pleasure/enjoyment in the formula...
...or perhaps a concept or word which equates to either suffering or pleasure depending on its sign (+/-), and then we can simply say that we’re trying to maximize that term; the exact aggregation function has yet to be determined, but we know it has a positive slope.
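To sketch what that signed term would look like (all specifics below, including the self-reported numbers and the plain sum standing in for the aggregation function, are assumptions for illustration only):

```python
# Illustrative sketch: each subject contributes one signed welfare number
# (positive = pleasure/enjoyment, negative = suffering). The plain sum is an
# assumed stand-in; the only property relied on is that the aggregation is
# strictly increasing in each subject's term (the "positive slope").

def aggregate(welfare_terms):
    """Assumed aggregation: a plain sum, monotonically increasing in every term."""
    return sum(welfare_terms)

def prefer(outcome_a, outcome_b):
    """Pick the outcome with the larger aggregate signed welfare."""
    return "A" if aggregate(outcome_a) >= aggregate(outcome_b) else "B"

# A mixed but net-positive existence vs. painless annihilation (everyone at zero):
print(prefer([+4, -1, +2], [0, 0, 0]))  # "A": net +5 beats net 0
print(prefer([-4, -1, +2], [0, 0, 0]))  # "B": net -3 loses to 0 (the euthanasia case)
```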
I don’t know. I admitted that this was an area where there might be individual disagreement; I don’t know the exact nature of the fa() and fb() functions—just that we want to minimize [my definition of] suffering and maximize freedom.
So you want to modify your original statement:
I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.
To something like: “I propose that the ultimate terminal value of every rational, compassionate human is to minimize [woozle’s definition of] suffering (which woozle can’t actually define but knows it when he sees it)”?
Your proposal seems to be phrased as a descriptive rather than a normative statement (‘the ultimate terminal value of every rational, compassionate human is’ rather than ‘should be’). As a descriptive statement this seems factually false unless you define ‘rational, compassionate human’ as ‘human who aims to minimize woozle’s definition of suffering’. As a normative statement it is merely an opinion, and one which I disagree with.
So I don’t agree that minimizing suffering, by any reasonable definition I can think of (I’m having to guess, since you can’t provide one), is or should be the terminal value of human beings in general or of this human being in particular. Perhaps that means I am not rational or compassionate by your definition, but I am not entirely lacking in empathy—I’ve been known to shed a tear when watching a movie and to feel compassion for other human beings.
Point 3 again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.
Well, you need to make some effort to clarify your definition then. If killing someone to save them from an eternity of torture is an increase in suffering by your definition, what about preventing a potential someone from ever coming into existence? Death represents the cessation of suffering as well as the cessation of life, yet by your definition it counts as extreme suffering. Is abortion or contraception also a cause of great suffering due to the denial of a potential life? If not, why not?
Second, many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values—but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one’s self. And (b) only supersedes (a) for people whose self-interest outweighs their integrity.
So everyone shares your self-declared terminal value of minimizing suffering, but many of them don’t know it because they are confused, brainwashed, or evil? Is there any point in me debating with you, since you appear to have defined my disagreement as either confusion or a form of psychopathy?
Are you saying that I have to be able to provide you with an equation that produces a numeric value as an answer before I can argue that ethical decisions should be based on it?
But OK, here’s a rephrase and expansion:
I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can reasonably be declared “wrong” unless it can at least be shown to cause significant amounts of such discomfort. (Can we at least acknowledge that it’s OK to use qualitative words like “significant” without defining them exactly?)
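To illustrate part (a) only (the self-reported discomfort scores and the unweighted sum below are placeholders assumed for the example, not part of the proposal):

```python
# Illustrative only: each affected subject self-reports the involuntary
# discomfort a candidate action would cause them; the unweighted sum is an
# assumed stand-in for whatever aggregation turns out to be appropriate.

def aggregate_involuntary_discomfort(self_reports):
    """Total involuntary discomfort, as defined by the subjects themselves."""
    return sum(self_reports)

def preferred_action(reports_a, reports_b):
    """Part (a): prefer whichever action minimizes aggregate involuntary discomfort."""
    if aggregate_involuntary_discomfort(reports_a) <= aggregate_involuntary_discomfort(reports_b):
        return "A"
    return "B"

print(preferred_action([1, 0, 2], [3, 3, 3]))  # "A": 3 < 9
```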
I intend the proposal as a descriptive statement (“is”), and I have been asking for counterexamples: show me a situation in which the “right” decision increases the overall harm/suffering/discomfort of those affected.
I am confident that I can show that any supposed counterexamples in fact depend implicitly on the rationale I am proposing, i.e., minimizing involuntary discomfort.