I wish to conclude this debate somehow, so I will provide something like a summary:
If I understand you correctly, you believe that (1) induction and probabilities are unacceptable for science or “critical rationalism”, and (2) weighing evidence can be replaced by… uhm… collecting verbal arguments and following a flowchart, while drawing a tree of arguments and counter-arguments (hopefully of a finite size).
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen). And you can’t even say anything about the past, because all our conclusions about the past are based on observing what we have now, and expecting that in the past it was exposed to the same laws of physics. Without induction, there is no argument against “last Thursdayism”.
Second, because although you refuse to talk about probabilities, and definitely object to using any numbers, some expressions you use are inherently probabilistic; you just insist on using vague verbal descriptions, which more or less means rounding the probability scale from 0% to 100% into a small number of predefined baskets. There is a basket called “falsified”, a basket called “not falsified, but refuted by a convincing critical argument”, a basket called “open debate; there are unanswered critical arguments for both sides”, and a basket called “not falsified, and supported by a convincing critical argument”. (Well, something like that. The number and labels of the baskets are most likely wrong, but ultimately, you use a small number of baskets, and a flowchart to sort arguments into their respective baskets.) To me, this sounds similar to refusing to talk about integers, and insisting that the only scientifically valid values are “zero”, “one”, “a few”, and “many”. I believe that in real life you can approximately distinguish whether your chance of being wrong is more in the order of magnitude of “one in ten” or “one in a million”. But your vocabulary does not allow you to make this distinction; there is only the unspecific “no conclusion” and the unspecific “I am not saying it’s literally 100% sure, but generally yes”; and at some point on the probability scale you will make the arbitrary jump from the former to the latter, depending on how convincing the critical argument is.
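To make the “baskets” point concrete, here is a minimal sketch; the basket labels are the ones I listed above, while the numeric cut-offs are entirely made up for illustration, because the whole problem is that your vocabulary never states them:

```python
def verbal_basket(p_wrong):
    """Map a numeric chance of being wrong onto one of a few verbal verdicts.

    The cut-off values below are invented purely for illustration; the point
    is only that some such coarse mapping is implied once you sort
    conclusions into a handful of baskets.
    """
    if p_wrong > 0.99:
        return "falsified"
    elif p_wrong > 0.5:
        return "not falsified, but refuted by a convincing critical argument"
    elif p_wrong > 0.2:
        return "open debate; there are unanswered critical arguments for both sides"
    else:
        return "not falsified, and supported by a convincing critical argument"

# A one-in-ten and a one-in-a-million chance of being wrong land in the
# same basket, so the vocabulary cannot express the difference between them.
print(verbal_basket(0.1))       # not falsified, and supported by ...
print(verbal_basket(0.000001))  # not falsified, and supported by ...
```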
On your website, you have a straw-man PowerPoint presentation about how people measure the “goodness of an idea” by adding or removing goodness points, on a scale of 0-100. Let me tell you that I have never seen anyone using or supporting that type of scale, neither on Less Wrong nor anywhere else. Specifically, Bayes’ Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
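For concreteness, here is a minimal sketch of that calculation in Python (it assumes, for simplicity, that the five draws are independent, as if made with replacement, and that each barrel was equally likely to be picked):

```python
from math import comb

# Two barrels, chosen with equal prior probability.
p_white = {"barrel 1": 0.9, "barrel 2": 0.1}
prior = {"barrel 1": 0.5, "barrel 2": 0.5}

def likelihood(p, n_white, n_black):
    """Probability of drawing n_white white and n_black black balls,
    treating the draws as independent (i.e. with replacement)."""
    return comb(n_white + n_black, n_white) * p**n_white * (1 - p)**n_black

# Observed sample: four white balls and one black ball.
lik = {b: likelihood(p_white[b], 4, 1) for b in p_white}

# Bayes' Theorem: posterior is proportional to prior times likelihood.
evidence = sum(prior[b] * lik[b] for b in prior)
posterior = {b: prior[b] * lik[b] / evidence for b in prior}

print(posterior)  # ~ {'barrel 1': 0.9986, 'barrel 2': 0.0014}
```

Four white balls and one black ball make the first barrel roughly 99.9% likely; nowhere does the calculation need a notion of “goodness”.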
My last observation is that your methodology of “let’s keep drawing the argument tree, until we reach the conclusion” allows you to win debates by mere persistence. All you have to do is keep adding more and more arguments, until your opponent says “okay, that’s it, I also have other things to do”. Then, according to your rules, you have won the debate; now all nodes at the bottom of the tree are in favor of your argument. (Which is what I also expect to happen right now.)

And that’s most likely all from my side.
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. Look up, for example, what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen).
Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.
Specifically, Bayes’ Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events—which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.
Why do you think that some argument which crosses your mind hasn’t already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn’t been fully fleshed out?
they’ve never learned or dealt with high-quality ideas before. they don’t think those exist (outside certain very specialized non-philosophy things mostly in science/math/programming) and their methods of dealing with ideas are designed accordingly.
You are grossly ignorant of CR, which you grossly misrepresent, and you want to reject it without understanding it. The reasons you want to throw it out while attacking straw men are unstated and biased. Also, you don’t have a clear understanding of what you mean by “induction”, and it’s a moving target. If you actually had a well-defined, complete position on epistemology, I could tell you what’s logically wrong with it, but you don’t. For epistemology you use a mix of 5 different versions of induction (all of which together still have no answers to many basic epistemology issues), a buggy version of half of CR, as well as intuition, common sense, what everyone knows, bias, etc. What an unscholarly mess.
What you do have is more ability to muddy the waters than patience or interest in thinking. That’s a formula for never knowing you lost a debate, and never learning much. It’s understandable that you’re bad at learning about new ideas, bad at organizing a discussion, bad at keeping track of what was said, etc., but it’s unreasonable that, due to your inability to discuss effectively, you blame CR methodology for the discussion not reaching a conclusion fast enough and quit. The reason you think you’ve found more success when talking with other people is that you find people who already agree with you about more things before the discussion starts.