The argument that believing in Cthulhu just because that was how you were raised proves too much is itself an argument that proves too much. If a belief is discredited merely because it traces back to your upbringing (your pre-priors), then everyone can say: you had the misfortune of being born wrong, while I was lucky enough to be born right. If you were transported to an alternate reality where half the population thought 2+2=4 and half thought 2+2=7, would you become uncertain, or would you just think that the 2+2=7 population were wrong?
Regarding example 4: believing something because a really smart person believes it is not a bad heuristic, as long as you aren’t cherry-picking the really smart person. If you have data about many smart people, taking the average is an even better heuristic, as is focusing on the smart people who are experts in a relevant field. The usefulness of this heuristic also depends on your definition of ‘smart’. There are a few people with a high IQ, a powerful brain capable of thinking and remembering well, who nevertheless have very poor epistemology and are creationists or Scientologists. Many definitions of ‘smart’ would rule out these people by requiring rationality skills of some sort, which makes the smart-people heuristic even better.
Thanks for the specific examples of why this is wrong! I’ve updated the post to state the usefulness of this technique. So you don’t have to search for it:
Important Note: the purpose of this frame isn’t to win an argument or prove anything. It’s to differentiate between heuristics that claim a 100% success rate and ones that claim a more accurate estimate. Imagine “I’m 100% confident I’ll roll a 7 with my two dice because of my luck!” vs. “There’s a 6⁄36 chance I’ll roll a 7, because I’m assuming two fair dice.”
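For concreteness, that 6⁄36 figure is easy to verify by brute force; here is a minimal Python sketch (the enumeration is my own illustration, not part of the original note):

```python
from itertools import product

# Enumerate all 36 ordered rolls of two fair dice and count those summing to 7.
rolls = list(product(range(1, 7), repeat=2))
sevens = [r for r in rolls if sum(r) == 7]
print(f"{len(sevens)}/{len(rolls)}")  # prints 6/36
```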
So for example 4: appeal to authority may be a useful heuristic, but if that’s the only reason someone believes in evolution with 100% confidence, then showing that it Proves Too Much is useful. Does this satisfy your critique?
Fair enough, I think that satisfies my critique.
A full consideration of proving too much requires that we have uncertainty both over which arguments are valid and over the real world. The uncertainty about which arguments are valid, along with our inability to consider all possible arguments, is what makes this type of reasoning work. If you see a particular type of argument in favor of conclusion X, and you disagree with conclusion X, then that gives you evidence against that type of argument.
This is used in moral arguments too. Consider the argument that touching someone really gently isn’t wrong, and that if it isn’t wrong to touch someone with force F, then it isn’t wrong to touch them with force F + 0.001 newtons. Therefore, by induction, it isn’t wrong to punch people as hard as you like.
Now consider the argument that 1 grain of sand isn’t a heap. If you put a grain of sand down somewhere that there isn’t already a heap of sand, you don’t get a heap. Therefore by induction, no amount of sand is a heap.
If you were unsure about the morality of punching people, but knew that heaps of sand existed, then seeing the first argument would make you update towards “punching people is OK”. When you then see the second argument, you update to “inductive arguments don’t work in the real world” and reverse the previous update about punching people.
Seeing an argument for a conclusion that you don’t believe can make you reduce your credence on other statements supported by similar arguments.
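To make this concrete, here is a toy Bayesian sketch of that two-stage update. The three propositions, the way “seeing an argument” is modeled as conditioning on an implication, and all the numbers are my own illustration, not the commenter’s:

```python
from itertools import product

# Toy model (all priors invented for illustration):
#   valid : sorites-style inductive arguments work in the real world
#   punch : punching people as hard as you like is morally OK
#   heaps : heaps of sand exist
PRIOR = {"valid": 0.5, "punch": 0.1, "heaps": 0.99}
WORLDS = list(product([True, False], repeat=3))

def prior_weight(valid, punch, heaps):
    """Independent prior probability of one possible world."""
    w = 1.0
    for name, value in (("valid", valid), ("punch", punch), ("heaps", heaps)):
        w *= PRIOR[name] if value else 1.0 - PRIOR[name]
    return w

def prob(event, evidence):
    """P(event | evidence), computed by summing over the 8 worlds."""
    num = sum(prior_weight(*w) for w in WORLDS if evidence(*w) and event(*w))
    den = sum(prior_weight(*w) for w in WORLDS if evidence(*w))
    return num / den

punch_ok = lambda valid, punch, heaps: punch

# Stage 1: you have seen only the punching argument (valid => punch),
# so your credence that punching is OK rises above its 0.1 prior.
seen_arg1 = lambda valid, punch, heaps: (not valid) or punch
print(prob(punch_ok, seen_arg1))        # ~0.18

# Stage 2: you also see the heap argument (valid => not heaps) while
# knowing heaps exist. That forces valid = False and reverses the update.
seen_both = lambda valid, punch, heaps: (
    seen_arg1(valid, punch, heaps) and ((not valid) or (not heaps)) and heaps
)
print(prob(punch_ok, seen_both))        # back to the 0.1 prior
```

What the code makes explicit is that the heap argument says nothing about punching directly; it reverses the earlier update only by destroying your credence that this argument type is valid.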
I really like this! I think my model is now:
If a heuristic claims a 100% success rate in a specific context, one can show it proves too much by exhibiting a counterexample in that context.
Inspired by your induction example: induction is very useful for proofs about the natural numbers, but it breaks down in the context of moral reasoning (or even just in the context of the real numbers).
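To spell out the contrast in symbols (my formalization, not anything stated in the thread):

```latex
% Induction over the naturals: sound, because every n is reached from 0
% by finitely many successor steps.
P(0) \;\land\; \forall n\in\mathbb{N}\,\bigl(P(n)\to P(n+1)\bigr)
  \;\Longrightarrow\; \forall n\in\mathbb{N}\,P(n)

% The sorites schema from the punching example has the same shape:
\mathrm{OK}(0) \;\land\; \forall F\ge 0\,\bigl(\mathrm{OK}(F)\to \mathrm{OK}(F+0.001)\bigr)
  \;\Longrightarrow\; \forall n\in\mathbb{N}\,\mathrm{OK}(0.001\,n)
```

On this reading, the sorites argument is formally an instance of natural-number induction, so when the conclusion is known false (hard punches are wrong, heaps exist), the counterexample forces us to reject a premise, and the natural casualty is the inductive step: for vague real-world predicates like “OK” or “heap”, there is no guarantee the property survives every tiny increment.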
This is a better framing of Proving Too Much than the one I gave in this post. I will need to either edit the post or write a new one and link it at the top of this article. Either way, thanks!
With that said, I don’t think this captures your point about uncertainty over both valid arguments and possible worlds. Would you elaborate on how your point relates to the updated model above?