I don’t know. I admitted that this was an area where there might be individual disagreement; I don’t know the exact nature of the fa() and fb() functions—just that we want to minimize [my definition of] suffering and maximize freedom.
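For concreteness only, here is a minimal sketch of how an objective built from those two goals might look. The thread never defines fa() and fb(); the stub implementations, the dictionary keys, and the trade-off weight w below are all assumptions for illustration, not anything either party has specified.

```python
# Toy formalization of "minimize suffering and maximize freedom".
# fa() and fb() are the thread's undefined functions, stubbed here as
# placeholders; the weight w stands in for the acknowledged "individual
# disagreement" about how the two goals trade off.

def fa(outcome):
    """Placeholder: some numeric measure of suffering in `outcome` (assumed key)."""
    return outcome.get("suffering", 0.0)

def fb(outcome):
    """Placeholder: some numeric measure of freedom in `outcome` (assumed key)."""
    return outcome.get("freedom", 0.0)

def objective(outcome, w=0.5):
    # Lower is better: penalize suffering, reward freedom.
    return fa(outcome) - w * fb(outcome)

def best_action(outcomes, w=0.5):
    # Pick whichever candidate outcome scores lowest under the combined objective.
    return min(outcomes, key=lambda o: objective(o, w))

# Example: between two hypothetical outcomes, the one with less suffering
# and more freedom wins.
print(best_action([{"suffering": 3.0, "freedom": 1.0},
                   {"suffering": 1.0, "freedom": 2.0}]))
```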
So you want to modify your original statement:
I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.
To something like: “I propose that the ultimate terminal value of every rational, compassionate human is to minimize [woozle’s definition of] suffering (which woozle can’t actually define but knows it when he sees it)”?
Your proposal seems to be phrased as a descriptive rather than normative statement (‘the ultimate terminal value of every rational, compassionate human is’ rather than ‘should be’). As a descriptive statement it seems factually false, unless you define ‘rational, compassionate human’ as ‘human who aims to minimize woozle’s definition of suffering’. As a normative statement it is merely an opinion, and one with which I disagree.
So I don’t agree that minimizing suffering, by any reasonable definition I can think of (I’m having to guess, since you can’t provide one), is or should be the terminal value of human beings in general or of this human being in particular. Perhaps that means I am not rational or compassionate by your definition, but I am not entirely lacking in empathy—I’ve been known to shed a tear when watching a movie and to feel compassion for other human beings.
again arises from a misunderstanding of my definition of suffering; such an action would hugely amplify subjective suffering, not eliminate it.
Well, you need to make some effort to clarify your definition then. If killing someone to save them from an eternity of torture is an increase in suffering by your definition, what about preventing a potential someone from ever coming into existence? Death represents the cessation of suffering as well as the cessation of life, yet by your definition it counts as extreme suffering. Is abortion or contraception also a cause of great suffering due to the denial of a potential life? If not, why not?
Second… many people have been socialized into believing that certain intermediate values (faith, honor, patriotism, fairness, justice, honesty...) are themselves terminal values—but when anyone tries to justify those values as being good and right, the justifications inevitably come down to either (a) preventing harm to others, or (b) preventing harm to one’s self. …and (b) only supersedes (a) for people whose self-interest outweighs their integrity.
So everyone shares your self-declared terminal value of minimizing suffering, but many of them don’t know it because they are confused, brainwashed, or evil? Is there any point in me debating with you, since you appear to have defined my disagreement to be confusion or a form of psychopathy?
Are you saying that I have to be able to provide you with an equation that produces a numeric value as an answer before I can argue that ethical decisions should be based on it?
But ok, a rephrase and expansion:
I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can be reasonably declared to be “wrong” unless it can at least be shown to cause significant amounts of such discomfort. (Can we at least acknowledge that it’s ok to use qualitative words like “significant” without defining them exactly?)
I intend it as a descriptive statement (“is”), and I have been asking for counterexamples: show me a situation in which the “right” decision increases the overall harm/suffering/discomfort of those affected.
I am confident that I can show how any supposed counterexamples in fact depend implicitly on the rationale I am proposing, i.e. minimizing involuntary discomfort.
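To make clauses (a) and (b) of the rephrased proposal concrete, here is a minimal sketch under loud assumptions: that each affected subject can self-report a numeric involuntary-discomfort level (nothing in the thread says this is measurable), and that “significant” can be stubbed as a threshold. None of this is claimed as anyone’s actual position.

```python
# Toy formalization of the rephrased proposal, not a definitive reading of it.

def aggregate_discomfort(subjects):
    # Clause (a): discomfort is defined by the subjects themselves, so we
    # simply sum whatever each subject reports as involuntary discomfort
    # (the dictionary key is an assumption for this sketch).
    return sum(s["involuntary_discomfort"] for s in subjects)

def can_be_called_wrong(subjects_before, subjects_after, significant=1.0):
    # Clause (b): an action can only be declared "wrong" if it causes a
    # significant increase in aggregate involuntary discomfort. The
    # threshold `significant` is deliberately left qualitative in the
    # proposal; a fixed number here is purely illustrative.
    delta = (aggregate_discomfort(subjects_after)
             - aggregate_discomfort(subjects_before))
    return delta >= significant

# Example: an action that raises total self-reported discomfort from 1.0 to
# 3.0 crosses the (assumed) significance threshold, so it could be called wrong.
before = [{"involuntary_discomfort": 0.5}, {"involuntary_discomfort": 0.5}]
after = [{"involuntary_discomfort": 1.5}, {"involuntary_discomfort": 1.5}]
print(can_be_called_wrong(before, after))  # True
```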