This is a fantastic set of quotes. I think it is necessary to attach a disclaimer, though. As he points out, there are definitely circumstances when the right and proper response is to ridicule hypocrisy and rain down scathing critique on those who uphold things they know to be unjust. However, such circumstances can be defined fairly narrowly, and don’t apply to most cases. This isn’t a catch-all license to skip debating an argument; opting out of debate would require a similarly well-justified reason.
When there is a consensus that something is morally wrong and has a better alternative, but some who benefit from the practice put up a flimsy defensive argument that most people see straight through, then the thing to do is rouse the populace against what they already know to be unacceptable. But if the point is hotly debated, with many people on both sides of the issue itself (not just arguing that it is a necessary evil, but arguing against any “better” alternative), then one should be wary. After all, political debates should not appear one-sided. These types of political arguments seem to be the most common.
The thing to do in most circumstances is to find where the truth lies. Not just to pick a side, but to objectively examine all arguments, weigh the pros and the cons as they are found, and update beliefs to match reality. Only once the truth has been found with a high degree of certainty should things shift from a purely intellectual investigation into all-out advocacy. The academic approach should slowly transition from discussion into lobbying as evidence builds. At the far end of the spectrum, to be reached only once one has an extremely high degree of certainty in one’s arguments, is “a fiery stream of biting ridicule, blasting reproach, withering sarcasm, and stern rebuke”. I don’t think it’s possible to reach sufficient certainty in one’s own opinions without the peer review of an entire population over a long period, perhaps an entire generation. If, after a generation of debate, a super-majority are sympathetic to the cause but are unwilling to fight the entrenched powers that be (or to give up their own comforts, or to make the massive changes needed to correct the problem, or whatever has prevented resolution so far), THEN it is time to abandon traditional discourse and the usual debate. That is what’s needed, under those circumstances, to actually motivate the populace to do what they already know to be the right thing.
Note that Frederick Douglass was speaking about issues which had been widely debated for almost a century, with one side claiming “necessary evil”, and the other showing by example that it wasn’t, in fact, necessary at all. If one’s own cause doesn’t compare, then perhaps more careful thought or more discussion is what is called for. That’s not to say that the issue appeared clear-cut at the time (it definitely did not), but only that relatively small logical steps (in an objective sense) were needed to arrive at a high-certainty conclusion. Such quotes aren’t justification for anyone to do the same with their own pet issue, at least not without very careful consideration of whether it is really what is needed most.
Only once the truth has been found with a high degree of certainty should things shift from a purely intellectual investigation into all-out advocacy.
You’re basically saying that we shouldn’t reject rational discussion unless our cause is really, really proven. And everyone thinks their cause is really, really proven. It doesn’t matter whether you phrase it as “high degree of certainty” or “showing by example that it wasn’t necessary” or “only has a flimsy defensive argument” or even “there is a moral consensus”; everyone’s pet cause falls into that category, as far as they are concerned.
Just like you need to give criminal suspects trials even if you think they are guilty, you need to treat ideas rationally even if you think their supporters don’t have a case.
Just like you need to give criminal suspects trials even if you think they are guilty, you need to treat ideas rationally even if you think their supporters don’t have a case.
If we had an infinite amount of time to spend treating all cases equally, then I would agree that all opinions should be argued out rather than ignored. Unfortunately, we only have limited time, and have to allocate it where we think it will do the greatest good. I think that a perfect rationalist would encounter ideas that aren’t worth the time to debate fully with their proponents. Of course, we aren’t perfect rationalists, but thankfully we know that we aren’t perfect rationalists, and can try to compensate for our inadequacies. One such inadequacy is that we generally drastically overestimate how likely we are to be correct. In extreme cases, we even assign 100% certainty to things. The previous two sequences explain why this is a very bad idea. I think this is the sort of thing you were pointing out, and I would agree with you on that.
Even so, if I am extremely certain of something, and have good reason to believe that I’m not missing some subtle point (such as the topic having been debated to death for the previous century), and if I apply a correction factor to compensate for the tendency I know I have to overestimate probabilities… if I do all this, and the probability of being correct still turns out to be quite high, with a narrow standard deviation, then I would indeed be inclined to spend little to no time on further discussion, and instead devote all my energies to fixing the problem by any means necessary.
Further, I suspect that you do the same to some degree. What issues do you spend more time arguing over than solving? (Maybe most political issues.) What issues do you spend more time solving than arguing over? (Perhaps you and your spouse spend more time actually doing housework than discussing the details of how to optimally divide labor.) What issues do you spend 99% of your efforts fixing, rather than discussing the best fix? Aren’t there some issues where you sometimes refuse to “feed the trolls”, thus rejecting an opportunity to debate a topic that you are extremely sure of? Or do you make a policy of always replying to all such bait?
I’m not saying it would be a great heuristic to follow, especially for most people. I’m saying that, in an extremely narrow scope, it holds true. If you take the limit as p(correct) goes to its maximum (infinity if you are using decibels (∞ dB) or odds (∞:1), but 1 if you are using fractions (1/1 chance) and 100 if you are using percentages (100%); I think decibels illustrate the point nicely), eventually you have to start acting quite similarly to how you would if you were 100% certain. That’s why Cromwell’s Rule exists: to protect us from rashly assigning such ludicrously high probabilities to anything. That overconfidence is the real problem.
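For what it’s worth, the decibel framing can be sketched in a few lines of Python (the helper name here is mine, not from the thread): the evidence for a proposition in decibels is 10·log₁₀ of the odds, 10·log₁₀(p/(1−p)), which grows without bound as p(correct) approaches 1.

```python
import math

def decibels(p):
    """Evidence in decibels for a probability p: 10 * log10 of the odds."""
    return 10 * math.log10(p / (1 - p))

# p = 0.5 is even odds (1:1), i.e. 0 dB of evidence either way.
# Each factor-of-10 increase in the odds adds 10 dB, so the decibel
# scale diverges to infinity as p approaches 1 -- there is no finite
# amount of evidence that gets you all the way to certainty.
for p in [0.5, 0.9, 0.99, 0.999999]:
    print(p, round(decibels(p), 1))
```

On this scale, Cromwell’s Rule reads as “never assign ∞ dB”: every finite stock of evidence leaves the probability strictly short of 1.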
I’m saying that, in an extremely narrow scope, it holds true.
Yes, it does. I would, for instance, put creationism in that category.
But I suspect the advice would be bad for most people, at least most people of the kind you see in Internet arguments, because people have a habit of saying that all sorts of things are really well established to all reasonable people and are only opposed by the deluded and by those with a stake in the problem. I wouldn’t, for instance, put capital punishment, or vegetarianism, or effective altruism, or immigration, in that category, but I’ve seen people treat all of those that way.