I’d be betting on whether the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories, under which, as a practical matter, I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms, based on how confident I am that they will actually serve my overarching moral goals.
That is well and good, except that “making the world a better place” seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. “Whether a proposition would follow from a moral theory” is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-o’-premise, and see how much premise you end up with; if it’s a lot of premise, the conclusion magically appears. The claim that it doesn’t even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
Alright then. To the object level!
Engel claims that you hold the following beliefs:
Let’s see...
(p1) Other things being equal, a world with less pain and suffering is better than a world with more pain and suffering.
Depends on how “pain” and “suffering” are defined. If you define “suffering” to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and “pain” likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in “suffering”, then first of all, I disagree with your use of the word “suffering” to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.
(p2) A world with less unnecessary suffering is better than a world with more unnecessary suffering.
See (p1).
(p3) Unnecessary cruelty is wrong and prima facie should not be supported or encouraged.
If by “cruelty” you mean … etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.
(p4) We ought to take steps to make the world a better place.
Depends on the steps. If by this you mean “any steps”, then no. If by this you mean “this is a worthy goal, and we should find appropriate steps to achieve it, and take said steps”, then sure. We’ll count this one as a “yes”. (Of course we might differ on what constitutes a “better” world, but let’s assume away such disputes for now.)
(p4’) We ought to do what we reasonably can to avoid making the world a worse place.
Agreed.
(p5) A morally good person will take steps to make this world a better place and even stronger steps to avoid making the world a worse place.
First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don’t think that “morally good person” is a terribly useful concept except as shorthand. We’ll count this one as a “no”.
(p6) Even a minimally decent person would take steps to reduce the amount of unnecessary pain and suffering in the world, if s/he could do so with very little effort.
Pursuant to the caveats outlined in my responses to all of the above propositions… sure. Said caveats partially neuter the statement for Engel’s purposes, but for generosity’s sake let’s call this a “yes”.
(p7) I am a morally good person.
See response to (p5); this is not very meaningful. So, no.
(p8) I am at least a minimally decent person.
Yep.
(p9) I am the sort of person who certainly would take steps to help reduce the amount of pain and suffering in the world, if I could do so with very little effort.
I try not to think of myself in terms of “what sort of person” I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4′). But let’s call this a “yes”.
(p10) Many nonhuman animals (certainly all vertebrates) are capable of feeling pain.
This seems relatively uncontroversial.
(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
Nope. (And see (p1) re: “suffering”.)
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
Nope.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.
Whether we “ought to” do this depends on circumstances, but this is certainly not inherently true in a moral sense.
(p14) Other things being equal, it is worse to kill a conscious sentient animal than it is to kill a plant.
Nope.
(p15) We have a duty to help preserve the environment for future generations (at least for future human generations).
I’ll agree with this to a reasonable extent.
(p16) One ought to minimize one’s contribution toward environmental degradation, especially in those ways requiring minimal effort on one’s part.
Sure.
So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience… it seems I agree with 8 of the 17 propositions listed (yes: p4, p4′, p6, p8, p9, p10, p15, p16; no: the other nine). Engel then says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat.”
So according to this, it seems that I should have a… moderate commitment to the immorality of eating meat? But here’s the problem:
How does the proposition “eating meat is immoral” actually follow from the propositions I assented to? Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There’s nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.
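To make the contrast concrete, here is a minimal sketch (entirely my own construction, with a made-up encoding of premises) of the least a demonstration would have to involve: an explicit inference rule applied, step by step, to ordered premises, so that the conclusion is derived rather than simply declared:
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy deduction engine: premises are atomic facts plus conditionals
// (antecedent, consequent). Modus ponens is applied repeatedly until
// nothing new follows; the conclusion either gets derived or it does
// not. Which premises you hold, and how they chain, actually matters.
bool follows(const std::vector<std::pair<std::string, std::string>>& conditionals,
             std::map<std::string, bool> facts, const std::string& conclusion)
{
    bool changed = true;
    while (changed) {
        changed = false;
        for (const auto& rule : conditionals) {
            if (facts[rule.first] && !facts[rule.second]) {
                facts[rule.second] = true;  // one explicit application of modus ponens
                changed = true;
            }
        }
    }
    return facts[conclusion];
}
Hand this function a different set of accepted premises and it can return a different answer, which is precisely the property the bucket-o’-premise lacks.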
Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it.
My usual reply to a claim that a philosophical statement is “proven formally” is to ask for a computer program calculating the conclusion from the premises, in the claimant’s language of choice, be it C or Coq.
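For instance, a trivially small example of the sort of thing I have in mind (in Lean, say, with placeholder premises):
-- The conclusion Q is computed from the premises p1 and p2 by an
-- explicit inference step (modus ponens), not merely asserted.
example (P Q : Prop) (p1 : P) (p2 : P → Q) : Q := p2 p1
The final step is an inference the machine checks, not a flourish the author asserts.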
Oh, really? ;)
#include <string>
#include <vector>
// Takes the premises as a parameter... and never looks at them.
std::string calculate_the_conclusion(const std::vector<std::string>& the_premises)
{
    (void)the_premises;  // the premises play no role in the "conclusion"
    return "The conclusion. Q.E.D.";
}
This function takes the premises as a parameter, and returns the conclusion. Criterion satisfied?
Yes, it explicates the lack of logic, which is the whole point.
I confess to being confused about your intended point. I thought you were more or less agreeing with me, but now I am not so sure?
Yes I was. My point was that if one writes a program that purports to prove that
“eating meat is immoral” actually follow from the propositions...
then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote, the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the “arguments” most in need of such treatment would be highly unlikely to receive it. “Argument by handwaving” or “argument by intimidation” is all too common among professional philosophers.
The worst part is how awkward it feels to challenge such faux-arguments. “Uh… this… what does this… say? This… doesn’t say anything. This… this is actually just a bunch of nonsense. And the parts that aren’t nonsense are just… just false. Is this… is this really supposed to be the argument?”
Hence my insistence on writing it up in a way a computer would understand.
That doesn’t even pass a quick inspection test for “can do something different when handed different parameters”.
The original post looks at least as good as:
#include <string>
#include <vector>

// Engel's "derivation", on this reading: count how many premises the
// reader accepts; the more premise, the more conclusion.
int calculate_the_conclusion(const std::vector<std::string>& premises_accepted_by_reader)
{
    int result = 0;
    for (const auto& premise : premises_accepted_by_reader)
        ++result;  // each accepted premise adds weight; its content is never inspected
    return result;
}
Note the “at least”.
OK, since you are rejecting formal logic, I’ll agree we’ve reached a point where no further agreement is likely.
Uh, with all respect, claiming that I am the one rejecting formal logic here is outlandishly absurd.
I have to ask: did you, in fact, read the entirety of my post? Honest question; I’m not being snarky here.
If you did (or do) read it, and still come to the conclusion that what’s going on here is that I am rejecting formal logic, then I guess we have exhausted the fruitfulness of the discussion.