(Hi, sorry for the delayed response. I’ve been gone.)
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?)
Just the standard stuff you’d get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability P(Si). Then the probability of the disjunction is P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you’re strongly disposed to reject each individual statement, you’re weakly disposed to accept the disjunction, since P(disjunction) = 1-0.9^10 ≈ 0.65. This is closely related to the preface paradox.
You’re right, of course, that Engel’s premises are not all independent. Dependence changes the numbers, but the effect always runs in the same direction: adding a disjunct can never lower the probability, since P(A+B) ≥ P(A) for all A and B.
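If it helps to see that arithmetic spelled out, here is a small throwaway program (my own illustration, nothing from Engel; the class and method names and the example numbers are arbitrary) that computes the disjunction probability under the independence assumption:

using System;

class DisjunctionDemo
{
    // P(S1 or S2 or ... or Sn) = 1 - P(~S1)*P(~S2)*...*P(~Sn), assuming the Si are independent.
    static double DisjunctionProbability(double[] credences)
    {
        double probabilityAllFalse = 1.0;
        foreach (double p in credences) { probabilityAllFalse *= (1.0 - p); }
        return 1.0 - probabilityAllFalse;
    }

    static void Main()
    {
        // Ten statements, each held with subjective probability 0.10:
        double[] credences = { 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1 };
        Console.WriteLine(DisjunctionProbability(credences)); // prints 0.6513..., i.e. 1 - 0.9^10
    }
}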
“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
OK, yes, you’ve expressed yourself well and it’s clear that you’re interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
If you’re interested in reconsidering Engel’s argument given his intended interpretation of it, I’d like to hear your updated reasons for/against it.
Welcome back.

Just the standard stuff you’d get in high school or undergrad college. [...]
Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude… something. What, exactly? It’s not clear).
And yet, I can’t help but notice that Engel takes an approach that’s not exactly either of the above. He says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
(In other words, my response to the Engel quote above is: “Uh, really? Why...?”)
As for your restatement of Engel’s argument… First of all, I’ve reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he’s saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true.
But, ok. Taking your formulation for granted, it still seems to be… rather off. To wit:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality.
Well, here’s the thing. It is certainly true that holding nothing but true beliefs will necessarily imply that your beliefs are consistent with each other. (Although it is possible for there to be apparent inconsistencies, which would be resolved by the acquisition of additional true beliefs.) However, it’s possible to find yourself in a situation where you gain a new belief, find it to be inconsistent with one or more old beliefs, and yet find that, inconsistency aside, the new belief and the old ones are each sufficiently well-supported by the available evidence to be treated as true.
At this point, you’re aware that something is wrong with your epistemic state, but you have no real way to determine what that is. The rational thing to do here is of course to go looking for more information, more evidence, and see which of your beliefs are confirmed and which are disconfirmed. Until then, rearranging your entire belief system is premature at best.
“Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
Why do you characterize the quoted belief as “motivated”? We are assuming, I thought, that I’ve arrived at said belief by the same process as I arrive at any other beliefs. If that one’s motivated, well, it’s presumably no more motivated than any of my other beliefs.
And, in any case, why are we singling out this particular belief for consistency-checking? Engel’s claim that “accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” seems the height of silliness. Frankly, I’m not sure what could make someone say that but a case of writing one’s bottom line first.
Again I say: the correct thing to do is to hold (that is, to treat as true) those beliefs which you think are more likely true than false, and not any beliefs which you think are more likely false than true. Breaking that rule of thumb for consistency’s sake is exactly the epistemic sin which we are supposedly trying to avoid.
But you know what — all of this is a lot of elaborate round-the-bush-dancing. I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that. That is to say, rather than analyzing whether the structure of Engel’s argument works in theory, let’s put it to the test on his actual claims, yes?
What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Why do you characterize the quoted belief as “motivated”?
Meat tastes good and is a great source of calories and nutrients. That’s powerful motivation for bodies like us. But you can strike that word if you prefer.
And, in any case, why are we singling out this particular belief for consistency-checking?
We aren’t. We’re requiring only and exactly that it not be singled out for immunity to consistency-checking.
I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
That is well and good, except that “making the world a better place” seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. “Whether a proposition would follow from a moral theory” is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-o’-premise, and see how much premise you end up with; if it’s a lot of premise, the conclusion magically appears. The claim that it doesn’t even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
Alright then. To the object level!
Engel claims that you hold the following beliefs:
Let’s see...
(p1) Other things being equal, a world with less pain and suffering is better than a world with more pain and suffering.
Depends on how “pain” and “suffering” are defined. If you define “suffering” to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and “pain” likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in “suffering”, then first of all, I disagree with your use of the word “suffering” to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.
(p2) A world with less unnecessary suffering is better than a world with more unnecessary suffering.
See (p1).
(p3) Unnecessary cruelty is wrong and prima facie should not be supported or encouraged.
If by “cruelty” you mean … etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.
(p4) We ought to take steps to make the world a better place.
Depends on the steps. If by this you mean “any steps”, then no. If by this you mean “this is a worthy goal, and we should find appropriate steps to achieve it and take said steps”, then sure. We’ll count this one as a “yes”. (Of course we might differ on what constitutes a “better” world, but let’s assume away such disputes for now.)
(p4’) We ought to do what we reasonably can to avoid making the world a worse place.
Agreed.
(p5) A morally good person will take steps to make this world a better place and even stronger steps to avoid making the world a worse place.
First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don’t think that “morally good person” is a terribly useful concept except as shorthand. We’ll count this one as a “no”.
(p6) Even a minimally decent person would take steps to reduce the amount of unnecessary pain and suffering in the world, if s/he could do so with very little effort.
Pursuant to the caveats outlined in my responses to all of the above propositions… sure. Said caveats partially neuter the statement for Engel’s purposes, but for generosity’s sake let’s call this a “yes”.
(p7) I am a morally good person.
See response to (p5); this is not very meaningful. So, no.
(p8) I am at least a minimally decent person.
Yep.
(p9) I am the sort of person who certainly would take steps to help reduce the amount of pain and suffering in the world, if I could do so with very little effort.
I try not to think of myself in terms of “what sort of person” I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4′). But let’s call this a “yes”.
(p10) Many nonhuman animals (certainly all vertebrates) are capable of feeling pain.
This seems relatively uncontroversial.
(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
Nope. (And see (p1) re: “suffering”.)
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
Nope.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.
Whether we “ought to” do this depends on circumstances, but this is certainly not inherently true in a moral sense.
(p14) Other things being equal, it is worse to kill a conscious sentient animal than it is to kill a plant.
Nope.
(p15) We have a duty to help preserve the environment for future generations (at least for future human generations).
I’ll agree with this to a reasonable extent.
(p16) One ought to minimize one’s contribution toward environmental degradation, especially in those ways requiring minimal effort on one’s part.
Sure.
So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience… it seems I agree with 8 of the 17 propositions listed. Engel then says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”
So according to this, it seems that I should have a… moderate commitment to the immorality of eating meat? But here’s the problem:
How does the proposition “eating meat is immoral” actually follow from the propositions I assented to? Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There’s nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.
Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it.
My usual reply to a claim that a philosophical statement is “proven formally” is to ask for a computer program calculating the conclusion from the premises, in the claimant’s language of choice, be it C or Coq.
Oh, really? ;)
string calculate_the_conclusion(string[] the_premises)
{
    // Ignores the premises entirely and returns a fixed string.
    return "The conclusion. Q.E.D.";
}
This function takes the premises as a parameter, and returns the conclusion. Criterion satisfied?
Yes, it explicates the lack of logic, which is the whole point.
I confess to being confused about your intended point. I thought you were more or less agreeing with me, but now I am not so sure?
Yes I was. My point was that if one writes a program that purports to prove that

“eating meat is immoral” actually follow from the propositions...

then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote, the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
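To make that concrete, here is the sort of thing I have in mind: a toy sketch of my own (not anything Engel or the original post provides), with an invented set of premise dependencies, where the conclusion is produced only if the specific premises it rests on are actually accepted:

using System;
using System.Collections.Generic;

class ArgumentChecker
{
    // Toy inference step: the conclusion is licensed only if every premise it actually
    // depends on is accepted. Which premises those are (here p2, p4', p6, p10, p11) is
    // an invented placeholder; having to spell it out is the whole point.
    static string CalculateTheConclusion(HashSet<string> premisesAcceptedByReader)
    {
        string[] required = { "p2", "p4'", "p6", "p10", "p11" };
        foreach (string premise in required)
        {
            if (!premisesAcceptedByReader.Contains(premise))
                return "No conclusion follows from the accepted premises.";
        }
        return "Conclusion: eating meat is immoral.";
    }

    static void Main()
    {
        var accepted = new HashSet<string> { "p4", "p4'", "p6", "p10" };
        Console.WriteLine(CalculateTheConclusion(accepted));
        // Prints the "no conclusion" branch, since p2 and p11 are missing from the accepted set.
    }
}

Unlike a premise-counter, this changes its output when you change which premises are accepted, and anyone reading it can see exactly which assumptions are doing the work.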
Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the “arguments” most in need of such treatment would be highly unlikely to receive it. “Argument by handwaving” or “argument by intimidation” is all too common among professional philosophers.
The worst part is how awkward it feels to challenge such faux-arguments. “Uh… this… what does this… say? This… doesn’t say anything. This… this is actually just a bunch of nonsense. And the parts that aren’t nonsense are just… just false. Is this… is this really supposed to be the argument?”
Hence my insistence on writing it up in a way a computer would understand.
That doesn’t even pass a quick inspection test for “can do something different when handed different parameters”.
The original post looks at least as good as:

int calculate_the_conclusion(string[] premises_acceptedbyreader)
{
    // Counts how many premises the reader accepts; the content of each premise is ignored.
    int result = 0;
    foreach (string mypremise in premises_acceptedbyreader) { result++; }
    return result;
}
Note the “at least”.
OK, since you are rejecting formal logic I’ll agree we’ve reached a point where no further agreement is likely.
Uh, with all respect, claiming that I am the one rejecting formal logic here is outlandishly absurd.
I have to ask: did you, in fact, read the entirety of my post? Honest question; I’m not being snarky here.
If you did (or do) read it, and still come to the conclusion that what’s going on here is that I am rejecting formal logic, then I guess we have exhausted the fruitfulness of the discussion.