Popper didn’t change his views significantly, but LScD is harder to understand, and philosophy is more than hard enough to understand in general. Popper is in low repute because he disagreed with people and advocated some things (e.g. that induction is impossible) that they consider ridiculous (which isn’t an answer to him). Plus most people go by secondary sources. My colleague surveyed over 100 textbooks and found that none of them accurately represented Popper’s views – the textbook accounts are broadly similar to the SEP’s.
(a) Simply ignores Popper’s approach to fallibilism and conjectural knowledge. It’s saying CR doesn’t work given infallibilist premises that Popper disputes. (Note the demand for proof.) CR accepts fallibility (for which there are very compelling logical arguments) and then takes it seriously by e.g. developing a fallibilist theory of knowledge rather than demanding certainty of refutation (which is impossible).
(b) Yes, you can repair an idea in response to criticism by modifying it or by criticizing the criticism (which is essentially modifying the idea by adding a footnote to address the criticism, which adds content that wasn’t there previously). How is that a criticism? Also I don’t know why you think fallibilism = probability. Uncertainty frequently isn’t numeric. “We may be mistaken in some way we haven’t thought of” isn’t a probability; the future growth of knowledge is *unpredictable*.
(c) Again there’s no understanding of Popper’s views here. The thing you’re complaining about is something Popper emphasized, explained, and addressed. And you don’t say what about Popper’s position on the matter is weak or subjective (neither of which are part of Popper’s own account, and “weak” sounds suspiciously like “fallible”, while “subjective” sounds suspiciously contrary to Popper’s theory of Objective Knowledge, which FYI is one of his book titles. I didn’t find the weakness or subjectivism when I read Popper, and you haven’t told me where to look with any specificity.)
---
Elevator pitch:
CR solves the fundamental problems of epistemology, like how knowledge can be created, which induction failed to solve. It’s a very hard problem: the only solution ever devised is evolution (literally, not analogously – evolution is about replicators, not just genes). In terms of ideas, evolution takes the form of guesses and criticism. CR develops much better criticisms of induction than came before, which are decisive. CR challenges the conventional, infallibilist conception of knowledge – justified, true belief – and replaces it with a non-skeptical, non-authoritarian conception of knowledge: problem-solving information (information adapted to a purpose). Although we expect to learn better ideas in the future, that doesn’t prevent our knowledge from having value and solving problems in the current context. This epistemology is fully general purpose – it works with e.g. moral philosophy, aesthetics and explanations, not just science/observation/prediction. The underlying reason CR works to create knowledge is the same reason evolution works – it’s a process of error correction. Rather than trying to positively justify ideas, we must accept they are tentative guesses and work to correct errors to improve them.
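To make the replicator-level dynamic concrete, here is a toy sketch (my own illustration, not from Popper or Deutsch – the target string, alphabet, and population size are all invented): blind conjectures are varied, and a “criticism” test eliminates errors, with no step that derives the answer from data.

```python
import random

# Toy illustration (mine, not from the CR literature): variation plus
# error elimination reaching a solution no initial guess contains.
# "Criticism" here is just a test that counts the ways a guess fails.

TARGET = "problem solved"  # stands in for whatever would solve the problem
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def errors(guess):
    """The 'criticism': count the ways the guess fails."""
    return sum(a != b for a, b in zip(guess, TARGET))

def vary(guess, rate=0.05):
    """Imperfect replication: copy the guess with occasional variation."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in guess)

def evolve(seed=0):
    random.seed(seed)
    best = "".join(random.choice(ALPHABET) for _ in TARGET)  # blind conjecture
    while errors(best):
        variants = [best] + [vary(best) for _ in range(50)]
        best = min(variants, key=errors)  # keep what best survives criticism
    return best
```

Nothing in the loop inspects the data for a pattern to extrapolate; the knowledge in the final string comes entirely from which errors got eliminated. (This is a cartoon of the dynamic only, not of CR itself, which uses argument rather than a fixed fitness test.)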
This position should not be judged by how nice or strong it sounds; it logically works OK unlike every rival. Decisive issues for why something can’t work at all, like induction faces, have priority over how intuitive you find something or whether it does everything you’d like it to do (for example, CR is difficult to translate into computer code or math, which you may not like, but that doesn’t matter if no rival epistemology works at all).
I expect someone to bring up Solomonoff Induction so I’ll speak briefly to that. It attempts to answer the “infinite general patterns fit the data set” problem of induction (in other words, which idea should you induce from the many contradictory possibilities?) with a form of Occam’s Razor: favor the ideas with shorter computer code in some language. This doesn’t solve the problem of figuring out which ideas are good, it just gives an arbitrary answer (shorter doesn’t mean truer). Shorter ideas are often worse because you can get shortness by omitting explanation, reasoning, background knowledge, answers to critics, generality that isn’t necessary to the current issue, etc. This approach also, as with induction in general, ignores critical argument. And it’s focused on prediction and doesn’t address explanation. And, perhaps worst of all: how do you know Occam’s Razor is any good? With epistemology we’re trying to start at the beginning and address the foundations of thinking, so you can’t just assume common sense intuitions in our culture. If we learn by induction, then we have to learn and argue for Occam’s Razor itself by induction. But inductivists never argue with me by induction, they always write standard English explanatory arguments on philosophical topics like induction. So they need some prior epistemology to govern the use of the arguments for their epistemology, and then need to very carefully analyze what the prior epistemology is and how much of the work it’s doing. (Perhaps the prior epistemology is CR and is doing 100% of the work? Or perhaps not, but that needs to be specified instead of ignored.) CR, by contrast, is an epistemology suitable for discussing epistemology, and doesn’t need something else to get off the ground.
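The underdetermination point – that the data never singles out one generalization – can be shown with a classic pair of rules (my example, chosen for illustration): both match the observations 1, 2, 4, 8, 16, then diverge.

```python
# Two rival "patterns" (my illustrative pick): doubling, versus Moser's
# circle-division formula. Both fit the first five observations exactly.

def doubling(n):
    return 2 ** (n - 1)

def circle_regions(n):
    # regions formed by chords between n points on a circle
    return (n**4 - 6*n**3 + 23*n**2 - 18*n + 24) // 24

observed = [1, 2, 4, 8, 16]
assert [doubling(n) for n in range(1, 6)] == observed
assert [circle_regions(n) for n in range(1, 6)] == observed

print(doubling(6), circle_regions(6))  # 32 31 -- the data didn't decide
```

And these are just two of infinitely many rules that fit; adding a sixth observation only restarts the same problem one step later.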
(If you’d like more detail, see the reading recommendations linked at the bottom of my post.)
You’ve done a whole lot of telling us how amazing this stuff is, but not much telling what it actually is. So I’m going to guess, in the hopes that you can tell me not just that I’m wrong, but specifically what a better version would be.
According to what you’ve said, it seems the process of people having valuable knowledge of future events (e.g., the sun will rise tomorrow), is that people generate guesses (by some unspecified process that’s definitely not induction), and then over time, other people criticize guesses, and the guesses best able to stand up to criticism are what we should use to predict tomorrow’s sunrise.
And the reason this works, according to you, is that it’s like evolution. Just like how in evolution, mutation and selection leads to creatures that take advantage of patterns in their environment to get a fitness advantage, guessing and criticism leads to ideas that take advantage of some sort of pattern in the environment in order to be more valuable.
But, of course, taking advantage of patterns in the environment in order to make valuable predictions about the future totally isn’t induction, and there’s no way you could formalize it other than the precise way Popper chose to formalize it.
I asked if anyone here has a criticism (including a reference they endorse). No one seems to. Apparently you personally are unfamiliar with the matter and expect me to open by assuming your ignorance and teaching you? Should I want to teach you? What value do you have to offer? If you want to be taught about CR, why don’t you join the FI forum and ask for help there? Will you read books and otherwise put in the work?
You have not specified what you think “induction” is, which makes you difficult to talk with. I know you’ll try to blame me for not already knowing what you think (even though there are dozens of variants of induction and I got heavily flamed recently for suggesting LW should have any canonical ideas and targets for criticism), but e.g. you seem to claim induction is a method of theory generation when SI is a method of theory preference, not generation. Induction in general has never adequately specified which theories to generate (some variations of induction recommend you generate the theories the evidence points to, but evidence doesn’t point, and there are infinitely many theories compatible with the evidence). Inductivists are broadly more interested in saying the evidence supports/justifies theory X over theory Y, not that the evidence led them to generate theory X but not theory Y (which doesn’t get them very far in debates with someone who did generate theory Y, and wants to judge ideas by their content instead of their source or generation method). What I seem to be dealing with, as usual, is that it’s hard to talk to someone who doesn’t understand their own position in much detail and changes it as convenient in the moment.
You also decided to interpret CR as being “like” evolution. I don’t know why. I have a general policy of being clear that it’s literally evolution, and people misinterpret in this way routinely. I certainly specified “literally” above. Perhaps you should quote specific things you’re replying to and then try to engage with them more precisely. That’s what we do at the FI forum and it improves discussion quality dramatically.
You also decided to try to learn CR from a few brief comments, which is not a method you should reasonably expect to succeed. Perhaps you’re used to epistemology that simplistic from your experiences at LW?
Imo chapter 28 of this book gives a good sense why Occam’s Razor is good. I’ll try to explain it here briefly as I understand it.
Suppose we have a class of simple models with three free binary parameters, and a class of more complex models with ten free binary parameters. We also have some data, and we want to know which model we should choose to explain the data. A priori, each parameter set for the simple model has a probability of 1/8 of being the best one, whereas for the complex model each has a probability of only 1/1024. As we observe the data, probability mass moves between the parameter sets. Given an equally good fit between data and model, the best simple model will always have a higher probability than the best complex model. For one, because it started with a higher probability. For another, because there will be several complex models fitting the data about equally well. E.g. there may be 8 complex models which all fit the data better than the second-best simple model, so the probability mass needs to be shared among all of those.
A complex model needs to fit the data better in order to gain enough probability mass to beat out the simpler model.
So even if we do not penalize complex models just for being more complex, we still favour simpler ones.
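That arithmetic can be checked directly. In this sketch the specific counts (one simple fit, eight complex fits, all with equal likelihood, everything else ruled out) are invented for illustration, matching the 3-parameter vs. 10-parameter setup above.

```python
from fractions import Fraction

# Uniform priors over each model class's parameter settings.
simple_prior  = Fraction(1, 2**3)    # 1/8 per simple parameter set
complex_prior = Fraction(1, 2**10)   # 1/1024 per complex parameter set

# Assume (for illustration) 1 simple setting and 8 complex settings fit
# the data equally well (likelihood 1); everything else is ruled out.
evidence = 1 * simple_prior + 8 * complex_prior

best_simple  = simple_prior / evidence    # posterior of the simple fit
each_complex = complex_prior / evidence   # posterior of one complex fit

print(best_simple)        # 16/17
print(8 * each_complex)   # 1/17, shared among the eight complex fits
```

No explicit complexity penalty appears anywhere; the simple fit wins because the complex class had to spread its prior over 1024 settings.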
None of this is relevant to specifying the prior epistemology you are using to make this argument, plus you begin with “simple models” but don’t address evaluating explanations/arguments/criticisms.
*Given some data and multiple competing hypotheses that explain the data equally well, the laws of probability tell us that the simplest hypothesis is the likeliest.* We call this principle of preferring simpler hypotheses Occam’s Razor. Moreover, using this principle works well in practice. For example, in machine learning a simpler model will often generalize better. Therefore I know that Occam’s Razor is “any good”. Occam’s Razor is a tool that can be used for problems as described by the italicized text above. It makes no claims regarding arguments or criticisms.
I don’t really see why I would need a coherent/perfect/complete epistemology to make this kind of argument or come to that conclusion. It seems to me like you are saying that any claims that aren’t attained via the One True Epistemology are useless/invalid/wrong – that you wouldn’t even accept someone saying that the sky is blue if that person didn’t first show you that they are using the right epistemology.
I notice that I don’t know what an argument that you would accept could even look like. You’re a big fan of having discussions written down in public. Could you link to an example where you argued for one position and then changed your mind because of somebody else’s argument(s)?
I don’t really see why I would need a coherent/perfect/complete epistemology to make this kind of argument or come to that conclusion.
Epistemology tells you things like what an argument is and how to evaluate whether ideas are good or bad, correct or incorrect. I’m saying you need to offer any epistemology at all under which the arguments you’re currently making are correct. Supposedly you have an induction-based epistemology (I presume), but you haven’t been using it in your comments, you’re using some other unspecified epistemology to guide what you think is a reasonable argument.
The current topic is epistemology, not the color of the sky, so you don’t get to gloss over epistemology as you might in a conversation about some other topic.
The current topic is epistemology, not the color of the sky, so you don’t get to gloss over epistemology as you might in a conversation about some other topic.
So because the discussion in general is about epistemology, you won’t accept any arguments for which the epistemology isn’t specified, even if the topic of that argument doesn’t pertain directly to epistemology, but if the discussion is about something else, you will just engage with the arguments regardless of the epistemology others are using?
That seems… unlikely to work well (if the topic is epistemology) and inconsistent.
I’d like to reiterate that I would really appreciate a link to an example where somebody convinced you to change your mind. Failing that: you’ve mentioned elsewhere that you often changed your mind in discussions with David Deutsch. If you could reproduce or at least sketch a discussion you’ve had with him, I would be very interested.
I’m literally asking you to specify your epistemology. Offer some rival to CR...? Instead you offer me Occam’s Razor, which is correct according to some unspecified epistemology you don’t want to discuss.
CR is a starting point. Do you even have a rival starting point which addresses basic questions like how to create and evaluate ideas and arguments, in general? Seems like you’re just using common sense assumptions, rather than scholarship, to evaluate a variant of Occam’s Razor (in order to defend induction). CR, as far as I can tell, is competing not with any rival philosophy (inductivist or otherwise) but with non-consumption of philosophy. (But philosophy is unavoidable so non-consumption means using intuition, common sense, cultural defaults, bias, etc., rather than thinking about it much.)
If you want stories about my discussions with DD, ask on the FI forum, not here.
You seem to have at least one typo and also to suggest you disagree without directly saying so. Can you please clarify what you’re saying? Also I don’t know how you expect me to explain all the steps involved with CR to you given your ignorance of CR – should I rewrite multiple books in my reply, or will you read references, or do you want a short summary which omits almost everything? If you want a summary, you need to give more information about where you’re coming from, what you’re thinking, and what your point and perspective are, so I can judge which parts to include. I don’t know what you doubt or why, so I don’t know how to select information for the summary you want. I also don’t know what a “supposedly true” proposition is.
I don’t want you to explain the principle in general but to illustrate it with the example that you brought up. Explaining general principles via concrete examples is a classic way principles are taught. Students learn physics by working through various test problems. Reasoning by example is a classic way to transfer knowledge.
From your post I take it that you believe “you need to offer any epistemology at all under which the arguments you’re currently making are correct” to be true?
If it is, you should be able to explain how you came to believe that claim. Otherwise one could say that you hold beliefs that have nothing to do with how you claim knowledge should be derived.
If CR can’t be used to derive the knowledge in the example, it’s not a general epistemology with practical use.
From your post I take it that you believe “you need to offer any epistemology at all under which the arguments you’re currently making are correct” to be true?
If it is, you should be able to explain how you came to believe that claim.
Epistemology is the field that tells you the methods of thinking, arguing, evaluating ideas, judging good and bad ideas, etc. Whenever you argue, you’re using an epistemological framework, stated or not. I have stated mine. You should state yours. Induction is not a complete epistemological framework.
I assume you have read Myth of the Framework. Doesn’t Popper himself emphasize that it’s not necessary to share an epistemological framework with someone, nor explicitly verbalize exactly how it works (since doing that is difficult-to-impossible), to make intellectual progress?
Verbalizing your entire framework/worldview is too hard, but CR manages to verbalize quite a lot of epistemology. Does LW have verbalized epistemology to rival CR, which is verbalized in a reasonably equivalent kinda way to e.g. Popper’s books? I thought the claim was that it does. If you don’t have an explicit epistemology, may I recommend one to you? It’s way, way better than nothing! If you stick with unverbalized epistemology, it really lets in bias, common sense, intuition, cultural tradition, etc, and makes it hard to make improvements or have discussions.
What is the relationship between CR and other processes that can create knowledge, such as induction and deduction? Are the latter a subset of the former? What does ‘induction is impossible’ mean, that it cannot be used as a starting point or something stronger? Can CR be not only a starting point but also the only process necessary?
CR has arguments refuting induction – it doesn’t work, has never been done, cannot be done. Induction is a myth, a confusion, a misconception that doesn’t even refer to a well-defined physically-possible process of thought. (This is partly old – that induction doesn’t work has been an unsolved problem for ages – but CR offers some improved critical arguments instead of the usual hedges and excuses for believing in induction despite the problems.) Deduction is fine but limited.
Can CR be not only a starting point but also the only process necessary?
Induction, as the prediction of observations without necessarily having an explanation of the regularity, works just fine. The anti-induction argument is purely against induction as a source of hypotheses or explanations. Everyone has given up on that idea, and the pro-induction people don’t even use the word that way. There is a lot of talking-past here.
Could you describe how you know this? Take it as an example of how you derive a supposedly true proposition with your favorite epistemology.
Using that claim, illustrate all the steps of Popper’s way of coming to knowledge that you consider important.
You seem to have at least one typo and also to suggest you disagree without directly saying so. Can you please clarify what you’re saying? Also I don’t know how you expect me to explain all the steps involved with CR to you given your ignorance of CR – should I rewrite multiple books in my reply, or will you read references, or do you want a short summary which omits almost everything? If you want a summary, you need to give more information about where you’re coming from, what you’re thinking, and what your point and perspective are, so I can judge which parts to include. I don’t know what you doubt or why, so I don’t know how to select information for the summary you want. I also don’t know what a “supposedly true” proposition is.
I don’t want you to explain the principle in general but illustrate it on the example that you brought up. Explaining general principles on concrete examples is a classic way principles are taught. Students learn physics by working through various test problems. Reasoning by example is a classic way to transfer knowledge.
From your post I take it that you believe “you need to offer any epistemology at all under which the arguments you’re currently making are correct” to be true?
If it is, you should be able to explain how you came to believe that claim. Otherwise one could say that you hold beliefs that have nothing to do with how you claim knowledge should be derived.
If CR can’t be used to derive the knowledge in the example, it’s not a general epistemology with practical use.
Epistemology is the field that tells you the methods of thinking, arguing, evaluating ideas, judging good and bad ideas, etc. Whenever you argue, you’re using an epistemological framework, stated or not. I have stated mine. You should state yours. Induction is not a complete epistemological framework.
I assume you have read Myth of the Framework. Doesn’t Popper himself emphasize that it’s not necessary to share an epistemological framework with someone, nor explicitly verbalize exactly how it works (since doing that is difficult-to-impossible), to make intellectual progress?
Verbalizing your entire framework/worldview is too hard, but CR manages to verbalize quite a lot of epistemology. Does LW have verbalized epistemology to rival CR, which is verbalized in a reasonably equivalent kinda way to e.g. Popper’s books? I thought the claim was that it does. If you don’t have an explicit epistemology, may I recommend one to you? It’s way, way better than nothing! If you stick with unverbalized epistemology, it really lets in bias, common sense, intuition, cultural tradition, etc, and makes it hard to make improvements or have discussions.
What is the relationship between CR and other processes that can create knowledge, such as induction and deduction? Are the latter a subset of the former? What does ‘induction is impossible’ mean, that it cannot be used as a starting point or something stronger? Can CR be not only a starting point but also the only process necessary?
CR has arguments refuting induction – it doesn’t work, has never been done, cannot be done. Induction is a myth, a confusion, a misconception that doesn’t even refer to a well-defined physically-possible process of thought. (This is partly old – that induction doesn’t work has been an unsolved problem for ages – but CR offers some improved critical arguments instead of the usual hedges and excuses for believing in induction despite the problems.) Deduction is fine but limited.
Yes.
If induction has never been done, what do machine learning algorithms, whose authors think they do induction, actually do?
Those don’t learn. The coders are the knowledge creators and the machine does grunt work.
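To make the claim concrete, here is a toy sketch of my own (not an example from the discussion): a least-squares line fit, the kind of thing often called “learning from data.” Everything substantive – the model class (straight lines), the error measure, the fitting procedure – is chosen by the programmer in advance; the machine only grinds through arithmetic inside that pre-built framework and never invents a new hypothesis class.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b.

    The 'knowledge' here is the programmer's: the choice of a linear
    model, of squared error, and of this solution method. The machine
    just evaluates the formula (the grunt work).
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Data generated from y = 2*x + 1; the fit recovers the coefficients.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
```

The point of the sketch: the output is fully determined by decisions the coder made before any data arrived, which is one way of cashing out “the coders are the knowledge creators.”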
That’s an epicycle. You can patch up your theory, but the fact that you need to doesn’t speak well for it.
What?
Induction, as the prediction of observations without necessarily having an explanation of the regularity, works just fine. The anti-induction argument is purely against induction as a source of hypotheses or explanations. Everyone has given up on that idea, and the pro-induction people don’t even use the word that way. There is a lot of talking-past here.