The sequence idea doesn’t work b/c you can criticize sequences or categories as a whole, criticism doesn’t have to be individualized (and typically shouldn’t be – you want criticisms with some generality).
Most falsifiable hypotheses are rejected for being bad explanations, containing internal contradictions, or other issues – without empirical investigation. This is generally cheaper and is done with critical argument. If someone can generate a sequence of ideas you don’t know of any critical arguments against, then you actually do need some better critical arguments (or else they’re actually good ideas). But your example is trivial to criticize – what kind of science fairy? Why will it appear in that case? If you accelerate a proton past that speed, will that work, or does it have to stay at that speed for a certain amount of time? Does the fairy or sticker have mass or energy and violate a conservation law? It’s just arbitrary, underspecified nonsense.
most ppl who like most things are not so great. that works for Popper, induction, socialism, Objectivism, Less Wrong, Christianity, Islam, whatever. your understanding of Popper is incorrect, and your experiences do not give you an accurate picture of Popper’s work. meanwhile, you don’t know of a serious criticism of CR by someone who does know what they’re talking about, whereas I do know of a serious criticism of induction which y’all don’t want to address.
If you look at the Popper summary you linked, it has someone else’s name on it, and it isn’t on my website. This kind of misattribution is the quality of scholarship I’m dealing with here. anyway here is an excerpt from something i’m currently in the process of writing.
(it says “Comment too long” so i’m going to try putting it in a reply comment, and if that doesn’t work i’ll pastebin it and edit in the link. it’s only 1500 words.)
Critical Rationalism (CR)
CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.
Epistemology offers methods, not answers. It doesn’t tell you which theory of gravity is true, it tells you how to productively think and argue about gravity. It doesn’t give you a fish or tell you how to catch fish, instead it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).
CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.
Fallibility
CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.
There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).
Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.
CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.
Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.
So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.
Justification is the Major Error
The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.
Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.
Fallible Knowledge
CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.
When a new idea is proposed, don’t ask “How do you know it?” or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it’s a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn’t prevent it from being useful and effective (e.g. building spacecraft that successfully reach the moon). You don’t need justification or perfection to reach the moon, you just need to fix errors with your designs until they’re good enough to work. This approach avoids the regress problems and is compatible with fallibility.
The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.
CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”
Guesses and Criticism
Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)
How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you’ll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (an indication to direct more attention there and try to improve something).
CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.
Science and Evidence
CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.
Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.
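To make the “infinitely many compatible hypotheses” point concrete, here is a small illustrative sketch (mine, not from the original text; the data, numbers and function names are invented): given any finite set of observations fit by one hypothesis, you can construct endlessly many rivals that agree with every observation yet contradict each other everywhere else.

```python
# Illustrative sketch only (not from the original text): the data and names
# below are made up. For any finite data set, infinitely many mutually
# contradictory hypotheses are compatible with all of it.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])   # observed inputs
ys = 2.0 * xs                          # observed outputs, as if from y = 2x

def make_rival(c):
    """A rival hypothesis: y = 2x plus a term that vanishes at every observed
    x, so it matches the data exactly but disagrees with y = 2x elsewhere."""
    def rival(x):
        correction = np.prod([x - xi for xi in xs], axis=0)
        return 2.0 * x + c * correction
    return rival

for c in (1.0, -3.5, 100.0):                # any real c gives another rival
    h = make_rival(c)
    assert np.allclose(h(xs), ys)            # compatible with all the evidence
    print(f"c={c}: prediction at x=4 is {h(4.0)}")  # yet the rivals disagree at x = 4
```

No amount of data of this kind singles out y = 2x by itself; on the CR view, ruling out the rivals is the job of critical argument (e.g. they are arbitrary and explain nothing), not of accumulating supporting evidence.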
These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)
CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.
We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.
Seems like these “critical arguments” do a lot of heavy lifting.
Suppose you make a critical argument against my hypothesis, and the argument feels smart to you, but silly to me. I make a counter-argument, which to me feels like it completely demolishes your position, but in your opinion it just shows how stupid I am. Suppose the following rounds of arguments are similarly fruitless.
Now what?
In a situation between a smart scientist who happens to be right, and a crackpot who refuses to admit the smallest mistake, how would you distinguish which is which? The situation seems symmetrical; both sides are yelling at each other, no progress on either side.
Would you decide by which argument seems more plausible to you? Then you are just another person in a three-person ring, and the current balance of power happens to be 2:1. Is this about having a majority?
Or would you decide that “there is no answer” is the right answer? In that case, as long as there remains a single crackpot on this planet, we have a scientific controversy. (You can’t even say that the crackpot is probably wrong, because that would be probabilistic reasoning.)
You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it.
Seems to me you kinda admit that knowledge is ultimately uncertain (i.e. probabilistic), but you refuse to talk about probabilities. (Related LW concept: “Fallacy of gray”.) We are fallible, but it is wrong to make a guess about how much. We resolve experimentally uncertain hypotheses by verbal fights, which we pretend have exactly one of three outcomes: “side A lost”, “side B lost”, “neither side lost”; nothing in between, such as “side A seems 3x more convincing than side B”. I mean, if you start making too many points on a line, it would start to resemble a continuum, and your argument seems to be that there is no quantitative certainty, only qualitative; that only 0, 1, and 0.5 (or perhaps NaN) are valid probabilities of a hypothesis.
Is the crackpot being responsive to the issues and giving arguments – arguments are what matter, not people – or is he saying non-sequiturs and refusing to address questions? If he speaks to the issues we can settle it quickly; if not, he isn’t participating and doesn’t matter. If we disagree about the nature of what’s taking place, it can be clarified, and I can make a judgement which is open to Paths Forward. You seem to wish to avoid the burden of this judgement by hedging with a “probably”.
Fallibility isn’t an amount. Correct arguments are decisive or not; confusion about this is commonly due to vagueness of problem and context (which are not matters of probability and cannot be accurately summed up that way). See https://yesornophilosophy.com
I wish to conclude this debate somehow, so I will provide something like a summary:
If I understand you correctly, you believe that (1) induction and probabilities are unacceptable for science or “critical rationalism”, and (2) weighing evidence can be replaced by… uhm… collecting verbal arguments and following a flowchart, while drawing a tree of arguments and counter-arguments (hopefully of a finite size).
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen). And you can’t even say anything about the past, because all our conclusions about the past are based on observing what we have now, and expecting that in the past it was exposed to the same laws of physics. Without induction, there is no argument against “last Thursdayism”.
Second, because although you refuse to talk about probabilities, and definitely object to using any numbers, some expressions you use are inherently probabilistic; you just insist on using vague verbal descriptions, which more or less means rounding the scale of probability from 0% to 100% into a small number of predefined baskets. There is a basket called “falsified”, a basket called “not falsified, but refuted by a convincing critical argument”, a basket called “open debate; there are unanswered critical arguments for both sides”, and a basket called “not falsified, and supported by a convincing critical argument”. (Well, something like that. The number and labels of the baskets are most likely wrong, but ultimately, you use a small number of baskets, and a flowchart to sort arguments into their respective baskets.) To me, this sounds similar to refusing to talk about integers, and insisting that the only scientifically valid values are “zero”, “one”, “a few”, and “many”. I believe that in real life you can approximately distinguish whether your chance of being wrong is more in the order of magnitude of “one in ten” or “one in a million”. But your vocabulary does not allow you to make this distinction; there is only the unspecific “no conclusion” and the unspecific “I am not saying it’s literally 100% sure, but generally yes”; and at some point on the probability scale you will make the arbitrary jump from the former to the latter, depending on how convincing the critical argument is.
On your website, you have a strawman powerpoint presentation about how people measure “goodness of an idea” by adding or removing goodness points, on a scale 0-100. Let me tell you that I have never seen anyone using or supporting that type of scale; neither on Less Wrong, nor anywhere else. Specifically, Bayes Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
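For readers who want to see the arithmetic behind the barrel example, here is a minimal sketch of the calculation the paragraph above describes. It assumes the five balls are drawn with replacement (the sampling scheme isn’t specified in the text); the binomial coefficient is identical for both barrels, so it cancels out of the posterior.

```python
# Minimal sketch of the barrel example above, assuming draws with replacement
# (the original doesn't specify). Bayes' theorem over two mutually exclusive
# events: "first barrel was chosen" vs "second barrel was chosen".

prior = {"barrel_1": 0.5, "barrel_2": 0.5}             # a barrel is chosen at random
p_white = {"barrel_1": 90 / 100, "barrel_2": 10 / 100}  # white-ball fraction per barrel

def likelihood(barrel, whites=4, blacks=1):
    """P(drawing this many white and black balls | barrel)."""
    return p_white[barrel] ** whites * (1 - p_white[barrel]) ** blacks

evidence = sum(prior[b] * likelihood(b) for b in prior)
posterior = {b: prior[b] * likelihood(b) / evidence for b in prior}

print(posterior)   # barrel_1: ~0.9986, barrel_2: ~0.0014
```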
My last observation is that your methodology of “let’s keep drawing the argument tree, until we reach the conclusion” allows you to win debates by mere persistence. All you have to do is keep adding more and more arguments, until your opponent says “okay, that’s it, I also have other things to do”. Then, according to your rules, you have won the debate; now all nodes at the bottom of the tree are in favor of your argument. (Which is what I also expect to happen right now.)
And that’s most likely all from my side.
I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.
This is the old argument that CR smuggles induction in via the backdoor. Critical Rationalists have given answers to this argument. Search, for example, what Rafe Champion has to say about induction smuggling. Why have you not done research about this before commenting? Your point is not original.
First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don’t, then you can’t predict anything about the future (because under the hypothetical new laws of physics, anything could happen).
Are you familiar with what David Deutsch had to say about this in, for example, The Fabric of Reality? Again, you have not done any research and you are not making any new points which have not already been answered.
Specifically, Bayes Theorem is not about “goodness” of an idea; it is about mathematical probability. Unlike “goodness”, probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with “goodness” of the idea “this is the first barrel” or “this is the second barrel”.
Critical Rationalists have also given answers to this, including Elliot Temple himself. CR has no problem with the probabilities of events—which is what your example is about. But theories are not events and you cannot associate probabilities with theories. You have still not made an original point which has not been discussed previously.
Why do you think that some argument which crosses your mind hasn’t already been discussed in depth? Do you assume that CR is just some mind-burp by Popper that hasn’t been fully fleshed out?
they’ve never learned or dealt with high-quality ideas before. they don’t think those exist (outside certain very specialized non-philosophy things mostly in science/math/programming) and their methods of dealing with ideas are designed accordingly.
You are grossly ignorant of CR, which you grossly misrepresent, and you want to reject it without understanding it. The reasons you want to throw it out while attacking straw men are unstated and biased. Also, you don’t have a clear understanding of what you mean by “induction” and it’s a moving target. If you actually had a well-defined, complete position on epistemology I could tell you what’s logically wrong with it, but you don’t. For epistemology you use a mix of 5 different versions of induction (all of which together still have no answers to many basic epistemology issues), a buggy version of half of CR, as well as intuition, common sense, what everyone knows, bias, etc. What an unscholarly mess.
What you do have is more ability to muddy the waters than patience or interest in thinking. That’s a formula for never knowing you lost a debate, and never learning much. It’s understandable that you’re bad at learning about new ideas, bad at organizing a discussion, bad at keeping track of what was said, etc, but it’s unreasonable that, due to your inability to discuss effectively, you blame CR methodology for the discussion not reaching a conclusion fast enough and quit. The reason you think you’ve found more success when talking with other people is because you find people who already agree with you about more things before the discussion starts.