Based on your description here of your reaction, I get the impression that you mistook the structure of the argument.
That’s possible, but I don’t think that’s the case. Let me address the argument in a bit more detail, and perhaps we’ll see whether I am indeed misunderstanding something.
First of all, this notion that the disjunction of the premises leads to accepting the conclusion is silly. No single premise, on its own, leads to accepting the conclusion. You have to conjoin at least some of them to get anywhere. It’s not like they’re independent, leading by entirely separate lines of reasoning to the same outcome; some clearly depend on others to be relevant to the argument.
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?) Also, some of them are actually nonsensical or incoherent, not just “probably wrong” or anything so prosaic.
The quoted paragraph:
“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
Now, presumably, you already think your belief system is for the most part reasonable, or you would have already made significant changes in it. So, you will want to reject as few beliefs as possible.
??? You will want to reject those and only those beliefs that are false. If you think your belief system is reasonable, then you don’t think any of your beliefs are false, or else you’d reject them. If you find that some of your beliefs are false, you will want to reject them, because if you’re interested in truth then you want to hold zero false beliefs.
Since (p1) – (p16) are rife with implications, rejecting several of these propositions would force you to reject countless other beliefs on pain of incoherence, whereas accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” (883).
I think that accepting many of (p1) – (p16) causes incoherence, actually. In any case, Engel seems to be describing a truly bizarre approach to epistemology where you care less about holding true beliefs than about not modifying your existing belief system too much, which seems like a perfect example of caring more about consistency than truth, despite him describing his view in the exact opposite manner, and… I just… I don’t know what to say.
And when I read your commentary on the above, I get the same “… what the heck? Is he… is he serious?” reaction.
I don’t understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn’t simply reject whichever premises get in the way of the conclusions you value.
What does this mean? Should I take this as a warning against motivated cognition / confirmation bias? But what on earth does that have to do with my objections? We reject premises that are false. We accept premises that are true. We accept conclusions that we think are true, which are presumably those that are supported by premises we think are true.
p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian.
… and? Again, we should hold beliefs we think are true and reject those we think are false. How on earth is picking which beliefs to accept and which to reject on the basis of what will require less updating… anything but absurd? Isn’t that one of the Great Epistemological Sins that Less Wrong warns us about?
As for the duck comment… professional philosophers troll people all the time. Having never encountered Engel’s writing before now, I of course did not know that this was his most famous argument, nor did I have any basis for being sure of serious intent in that paragraph.
Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn’t make that assumption.
Engel apparently claims that his reader already holds these beliefs, among others:
(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.
(And without that, the argument falls down.)
(Hi, sorry for the delayed response. I’ve been gone.)
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?)
Just the standard stuff you’d get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability of P(Si). Then you have the probability of the disjunction P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you’re moderately disposed to reject every statement, you’re weakly disposed to accept the disjunction, since P(disjunction)=0.65. This is closely related to the preface paradox.
You’re right, of course, that Engel’s premises are not all independent. The general effect on the probability of a disjunction is always in the same direction, though, since P(A+B) ≥ P(A) for any A and B.
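(If it helps to see the arithmetic spelled out, here is a minimal sketch in Python; it is just an illustration of the calculation above for independent statements, nothing more.)

# Probability that at least one of several independent statements is true,
# given a subjective probability for each one.
def p_disjunction(probs):
    p_all_false = 1.0
    for p in probs:
        p_all_false *= (1.0 - p)   # every statement is false
    return 1.0 - p_all_false       # at least one statement is true

print(p_disjunction([0.10] * 10))   # ~0.65, the figure cited above
print(p_disjunction([0.10] * 11))   # adding another disjunct only pushes it higher (~0.69)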
“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
OK, yes, you’ve expressed yourself well and it’s clear that you’re interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
If you’re interested in reconsidering Engel’s argument given his intended interpretation of it, I’d like to hear your updated reasons for/against it.
Welcome back.
Just the standard stuff you’d get in high school or undergrad college. [...]
Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude… something. What, exactly? It’s not clear).
And yet, I can’t help but notice that Engel takes an approach that’s not exactly either of the above. He says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
(In other words, my response to the Engel quote above is: “Uh, really? Why...?”)
As for your restatement of Engel’s argument… First of all, I’ve reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he’s saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true.
But, ok. Taking your formulation for granted, it still seems to be… rather off. To wit:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality.
Well, here’s the thing. It is certainly true that holding nothing but true beliefs will necessarily imply that your beliefs are consistent with each other. (Although it is possible for there to be apparent inconsistencies, which would be resolved by the acquisition of additional true beliefs.) However, it’s possible to find yourself in a situation where you gain a new belief, find it to be inconsistent with one or more old beliefs, and yet find that, the inconsistency aside, the new belief and the old ones are each sufficiently well-supported by the available evidence to be treated as true.
At this point, you’re aware that something is wrong with your epistemic state, but you have no real way to determine what that is. The rational thing to do here is of course to go looking for more information, more evidence, and see which of your beliefs are confirmed and which are disconfirmed. Until then, rearranging your entire belief system is premature at best.
“Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
Why do you characterize the quoted belief as “motivated”? We are assuming, I thought, that I’ve arrived at said belief by the same process as I arrive at any other belief. If that one’s motivated, well, it’s presumably no more motivated than any of my other beliefs.
And, in any case, why are we singling out this particular belief for consistency-checking? Engel’s claim that “accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” seems the height of silliness. Frankly, I’m not sure what could make someone say that but a case of writing one’s bottom line first.
Again I say: the correct thing to do is to hold (that is, to treat as true) those beliefs which you think are more likely true than false, and not any beliefs which you think are more likely false than true. Breaking that rule of thumb for consistency’s sake is exactly the epistemic sin which we are supposedly trying to avoid.
But you know what — all of this is a lot of elaborate round-the-bush-dancing. I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that. That is to say, rather than analyzing whether the structure of Engel’s argument works in theory, let’s put it to the test on his actual claims, yes?
What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Why do you characterize the quoted belief as “motivated”?
Meat tastes good and is a great source of calories and nutrients. That’s powerful motivation for bodies like us. But you can strike that word if you prefer.
And, in any case, why are we singling out this particular belief for consistency-checking?
We aren’t. We’re requiring only and exactly that it not be singled out for immunity to consistency-checking.
I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
That is well and good, except that “making the world a better place” seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. “Whether a proposition would follow from a moral theory” is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-o’-premise, and see how much premise you end up with; if it’s a lot of premise, the conclusion magically appears. The claim that it doesn’t even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
Alright then. To the object level!
Engel claims that you hold the following beliefs:
Let’s see...
(p1) Other things being equal, a world with less pain and suffering is better than a world with more pain and suffering.
Depends on how “pain” and “suffering” are defined. If you define “suffering” to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and “pain” likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in “suffering”, then first of all, I disagree with your use of the word “suffering” to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.
(p2) A world with less unnecessary suffering is better than a world with more unnecessary suffering.
See (p1).
(p3) Unnecessary cruelty is wrong and prima facie should not be supported or encouraged.
If by “cruelty” you mean … etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.
(p4) We ought to take steps to make the world a better place.
Depends on the steps. If by this you mean “any steps”, then no. If by this you mean “this is a worthy goal, and we should find appropriate steps to achieve and take said steps”, then sure. We’ll count this one as a “yes”. (Of course we might differ on what constitutes a “better” world, but let’s assume away such disputes for now.)
(p4’) We ought to do what we reasonably can to avoid making the world a worse place.
Agreed.
(p5) A morally good person will take steps to make this world a better place and even stronger steps to avoid making the world a worse place.
First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don’t think that “morally good person” is a terribly useful concept except as shorthand. We’ll count this one as a “no”.
(p6) Even a minimally decent person would take steps to reduce the amount of unnecessary pain and suffering in the world, if s/he could do so with very little effort.
Pursuant to the caveats outlined in my responses to all of the above propositions… sure. Said caveats partially neuter the statement for Engel’s purposes, but for generosity’s sake let’s call this a “yes”.
(p7) I am a morally good person.
See response to (p5); this is not very meaningful. So, no.
(p8) I am at least a minimally decent person.
Yep.
(p9) I am the sort of person who certainly would take steps to help reduce the amount of pain and suffering in the world, if I could do so with very little effort.
I try not to think of myself in terms of “what sort of person” I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4′). But let’s call this a “yes”.
(p10) Many nonhuman animals (certainly all vertebrates) are capable of feeling pain.
This seems relatively uncontroversial.
(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
Nope. (And see (p1) re: “suffering”.)
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
Nope.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.
Whether we “ought to” do this depends on circumstances, but this is certainly not inherently true in a moral sense.
(p14) Other things being equal, it is worse to kill a conscious sentient animal than it is to kill a plant.
Nope.
(p15) We have a duty to help preserve the environment for future generations (at least for future human generations).
I’ll agree with this to a reasonable extent.
(p16) One ought to minimize one’s contribution toward environmental degradation, especially in those ways requiring minimal effort on one’s part.
Sure.
So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience… it seems I agree with 7 of the 17 propositions listed. Engel then says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”
So according to this, it seems that I should have a… moderate commitment to the immorality of eating meat? But here’s the problem:
How does the proposition “eating meat is immoral” actually follow from the propositions I assented to? Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There’s nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.
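(For concreteness, here is a minimal sketch in Python of the kind of demonstration I mean. The bridging rules in it are purely hypothetical, invented for illustration; Engel supplies nothing of the sort. The point is only that a derivation is a sequence of explicit inference steps from specific premises, not a tally of how many premises one accepts.)

# Toy forward chaining over propositional rules. Each rule says: if every
# statement on the left is accepted, the statement on the right may be added.
# These rules are hypothetical, invented only to show the shape of a derivation.
rules = [
    ({"p1", "p10"}, "animal pain makes the world worse"),
    ({"animal pain makes the world worse", "p4'"},
     "one ought to avoid causing unnecessary animal pain"),
    ({"one ought to avoid causing unnecessary animal pain",
      "meat production causes unnecessary animal pain"},
     "eating meat is immoral"),
]

def derivable(accepted, goal):
    accepted = set(accepted)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= accepted and conclusion not in accepted:
                accepted.add(conclusion)
                changed = True
    return goal in accepted

# Which premises you accept matters, not how many:
print(derivable({"p10", "p4'", "p15", "p16"}, "eating meat is immoral"))   # False
print(derivable({"p1", "p10", "p4'",
                 "meat production causes unnecessary animal pain"},
                "eating meat is immoral"))                                  # True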
Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it.
My usual reply to a claim that a philosophical statement is “proven formally” is to ask for a computer program calculating the conclusion from the premises, in the claimant’s language of choice, be it C or Coq.
Oh, really? ;)
string calculate_the_conclusion(string the_premises[])
{
return "The conclusion. Q.E.D.";
}
This function takes the premises as a parameter, and returns the conclusion. Criterion satisfied?
Yes, it explicates the lack of logic, which is the whole point.
I confess to being confused about your intended point. I thought you were more or less agreeing with me, but now I am not so sure?
Yes I was. My point was that if one writes a program that purports to prove that
“eating meat is immoral” actually follow from the propositions...
then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote, the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the “arguments” most in need of such treatment would be highly unlikely to receive it. “Argument by handwaving” or “argument by intimidation” is all too common among professional philosophers.
The worst part is how awkward it feels to challenge such faux-arguments. “Uh… this… what does this… say? This… doesn’t say anything. This… this is actually just a bunch of nonsense. And the parts that aren’t nonsense are just… just false. Is this… is this really supposed to be the argument?”
Hence my insistence on writing it up in a way a computer would understand.
That doesn’t even pass a quick inspection test for “can do something different when handed different parameters”.
The original post looks at least as good as:
int calculate_the_conclusion(string[] premises_accepted_by_reader)
{
    int result = 0;
    // just counts how many premises the reader accepts; never looks at what any of them say
    foreach (var premise in premises_accepted_by_reader) { result++; }
    return result;
}
- note the “at least”.
OK, since you are rejecting formal logic I’ll agree we’ve reached a point where no further agreement is likely.
Uh, with all respect, claiming that I am the one rejecting formal logic here is outlandishly absurd.
I have to ask: did you, in fact, read the entirety of my post? Honest question; I’m not being snarky here.
If you did (or do) read it, and still come to the conclusion that what’s going on here is that I am rejecting formal logic, then I guess we have exhausted the fruitfulness of the discussion.