I believe that your belief in “refutation by criticism” as something that either is or isn’t, but doesn’t have “gradation of certainty”, is so fundamentally wrong that it doesn’t make sense to debate further.
I think there’s something really wrong when your reaction to disagreement is to think there’s no point in further discussion. That leaves me thinking you’re a bad person to discuss with. Am I mistaken?
Making mistakes isn’t random or probabilistic. When you make a judgement, there is no way to know some probability that your judgement is correct. Also, if judgements need probabilities, won’t your judgement of the probability of a mistake have its own probability? And won’t that judgement also have a probability, causing an infinite regress of probability assignments?
Mistakes are unpredictable. At least some of them are. So you can’t predict (even probabilistically) whether you made one of the unpredictable types of mistakes.
What you can do, fallibly and tentatively, is make judgements about whether a critical argument is correct or not. And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn’t solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn’t).
So let me ask you: is Popper’s argument against induction the kind of knowledge that cannot be explained to an intelligent adult using less than one page of text, not even in a simplified form?
That’d work fine if they knew everything or nothing about induction. However, it’s highly problematic when they already have thousands of pages worth of misconceptions about induction (some of which vary from the next guy’s misconceptions). The misconceptions include vague parts they don’t realize are vague, non sequiturs they don’t realize are non sequiturs, confusion about what induction is, and other mistakes plus cover up (rationalizations, dishonesty, irrationality).
Induction would be way easier to explain to a 10-year-old in a page than to anyone at LW, due to the lack of bias and prior misconceptions. I could also do quantum physics in a page for a ten-year-old. QM is easy to explain at a variety of levels of detail, if you don’t have to include anything to preemptively address pre-existing misconceptions, objections, etc. E.g., in a sentence: “Science has discovered there are many things your eyes can’t see, including trillions of other universes with copies of you, me, the Earth, the sun, everything.”
I think there’s something really wrong when your reaction to disagreement is to think there’s no point in further discussion.
It’s like you believe “A” and “A implies B” and “B implies C”, while I believe “non-A” and “non-A implies Q”. The point we should debate is whether “A” or “non-A” is correct; because as long as we disagree on this, of course each of us is going to believe a different chain of things (one starting with “A”, the other starting with “non-A”).
I mean, if I hypothetically believed that absolute certainty is possible and relatively simple to achieve, of course I would consider probabilistic reasoning an interesting but inferior form of reasoning. We wouldn’t have this debate. And if you accepted that certainty is impossible (even certainty of refutation), then probability would probably seem like the next best thing.
When you make a judgement, there is no way to know some probability that your judgement is correct.
Okay, imagine this: I make a judgment that feels completely correct to me, and I am not aware of any possible mistakes. But of course I am a fallible human; maybe I actually made a mistake somewhere, maybe even an embarrassing one.
Scenario A: I made this judgement at 10 AM, after having a good night of sleep.
Scenario B: I made this judgement at 2 AM, tired and sleep deprived.
Does it make sense to say that the probability of making a mistake in judgment B is higher than the probability of making a mistake in judgment A? In both cases I believe at the moment that the judgment is correct. But in the latter case my ability to notice a possible mistake is smaller.
So while I couldn’t make an exact calculation like “the probability of the mistake is exactly 4.25%”, I can still be aware that there is some probability of a mistake, and sometimes even estimate that the probability in one situation is greater than in another situation. Which suggests that there is a number; I just don’t know it. (But if we could somehow repeat the whole situation a million times, and observe that I was wrong in 42,500 cases, that would suggest that the probability of the mistake is about 4.25%. Unlikely in real life, but possible as a hypothesis.)
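(A minimal sketch, in Python, of the million-repetitions hypothetical above; the 4.25% rate is the assumption from the example, not anything measurable in real life.)

```python
import random

TRUE_ERROR_RATE = 0.0425   # the unknown number the example posits; assumed here for illustration
TRIALS = 1_000_000         # the hypothetical "repeat the whole situation a million times"

# Each repetition: the judgment turns out to be mistaken with the (unknown) true probability.
mistakes = sum(random.random() < TRUE_ERROR_RATE for _ in range(TRIALS))

estimate = mistakes / TRIALS
print(f"mistaken in {mistakes} of {TRIALS} repetitions")
print(f"estimated probability of a mistake: {estimate:.4%} (true value: {TRUE_ERROR_RATE:.4%})")
```

The estimate converges on the true rate only because the simulation stipulates that a fixed rate exists, which is exactly the premise under dispute.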
Also, if judgements need probabilities, won’t your judgement of the probability of a mistake have its own probability?
It definitely will. Notice that those are two different things: (a) the probability that I am wrong, and (b) my estimate of the probability that I am wrong.
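(One way to make the (a)/(b) distinction concrete, sketched under assumed numbers: a true error rate exists but is hidden from the agent, who can only form an estimate from a finite sample of checked judgments. The use of Laplace’s rule here is an illustrative choice, not anything from the discussion.)

```python
import random

true_p = 0.05   # (a) the probability that I am wrong: unknown to me (assumed here)

# 200 past judgments, later checked against reality:
errors = sum(random.random() < true_p for _ in range(200))

# (b) my estimate of (a), formed via Laplace's rule of succession:
estimate = (errors + 1) / (200 + 2)
print(f"errors found: {errors}/200, estimated error probability: {estimate:.3f}")
```

(a) and (b) will rarely coincide exactly, and (b) comes with its own error, which is where the regress in the quoted question starts.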
Yes, what you point out is a very real and very difficult problem. Estimating probabilities in a situation where everything (including our knowledge of ourselves, and even our knowledge of math itself) is… complicated. Difficult to do, and even more difficult to justify in a debate.
This may even be a hard limit on human certainty. For example, if at every moment there is a 0.000000000001 probability that you will go insane, then you can never be sure about anything with probability greater than 0.999999999999, because there is always the chance that however logical and reasonable something sounds to you at the moment, it’s merely because you have become insane at this very moment. (The cause of insanity could be e.g. a random tumor or a blood vessel breaking in your brain.) Even if you made a system more reliable than a human, for example a system maintained by a hundred humans, where if anyone goes insane, the remaining ones will notice it and fix the mistake, the system itself could achieve higher certainty, but you, as an individual, reading its output, could not. Because there would always be the chance that you have just gone insane, and what you believe you are reading isn’t actually there.

Relevant LW article: “Confidence levels inside and outside an argument”.
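(The arithmetic behind this bound, sketched in Python with the example’s figure; the per-moment insanity probability is of course an assumption.)

```python
p_insane = 1e-12   # the example's per-moment probability of going insane

# Ceiling on certainty at any single moment:
print(f"maximum certainty at one moment: {1 - p_insane:.12f}")

# Probability of having stayed sane through n independent moments:
for n in (1, 10**6, 10**12):
    print(f"after {n:.0e} moments: P(still sane) = {(1 - p_insane) ** n:.6f}")
```

Even a vanishingly small per-moment risk compounds: after 10^12 moments the probability of never having gone insane falls to roughly 37%.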
And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn’t solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn’t).
Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And saying “p<0.05” is also just an arbitrary number; why not “p<0.001”?)
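(A sketch of that question in Python under an assumed normal error model; the measurement uncertainty sigma is invented for illustration, and the thresholds are the arbitrary part.)

```python
import math

predicted = 0.040
measured = 0.041
sigma = 0.0005   # assumed measurement uncertainty; not given in the example

z = abs(measured - predicted) / sigma
p_value = math.erfc(z / math.sqrt(2))   # two-sided p-value under a normal error model
print(f"z = {z:.1f} sigma, p = {p_value:.4f}")
print("rejected at p<0.05: ", p_value < 0.05)
print("rejected at p<0.001:", p_value < 0.001)
```

With these assumed numbers the same measurement “falsifies” the theory at p<0.05 but not at p<0.001, which is exactly the arbitrariness the paragraph points at.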
You can have a “binary” solution only as long as you remain in the realm of words. (“Socrates is a human. All humans are mortal. Therefore Socrates is mortal. Certainty of argument: 100%.”) Even there, the longer the chain of words you produce, the greater the chance that you made a mistake somewhere. I mean, if you imagine a syllogism going over a thousand pages, ultimately proving something, you would probably want to check the whole book at least two or three times; which means you wouldn’t feel a 100% certainty after the first reading. But the greater problems will appear on the boundary between the words and reality. (Theory: “the energy of the particle X is 0.04 units”; the experimental device displays 0.041. Also, experimental devices sometimes break, and your assistant sometimes records the numbers incorrectly.)
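(And the compounding effect on long chains, sketched with an assumed per-step error rate.)

```python
per_step_error = 0.001   # assumed probability of a mistake in any single inference step

for steps in (10, 100, 1000):
    p_any_error = 1 - (1 - per_step_error) ** steps
    print(f"{steps:4d} steps: P(at least one error) = {p_any_error:.3f}")
```

Even a 0.1% per-step error rate makes a thousand-step proof more likely than not to contain at least one mistake.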
it’s highly problematic when they already have thousands of pages worth of misconceptions
Fair point.
(BTW, I’m going offline for a week now; for reasons unrelated to LW or this debate.)
EDIT:
For the record: Of course there are things where I consider the probability to be so high or so low that I treat them for all practical purposes as 100% or 0%. If you ask me e.g. whether gravity exists, I will simply say “yes”; I am not going to role-play Spock and give you a number with 15 decimal places. I wouldn’t even know exactly how many nines there are after the decimal point. (But again, there is a difference between “believing there is a probability” and “being able to tell the exact number”.)
The most obvious impact of probabilistic reasoning on my behavior is that I generally don’t trust long chains of words. Give me 1000 pages of syllogisms that allegedly prove something, and my reaction will be “the probability that there is an error somewhere in that chain is so high that the conclusion is completely unreliable”. (For example, I am not even trying to understand Hegel. Yeah, there are also other reasons to distrust him specifically, but I would not trust such a long chain of logic without experimental confirmation of intermediate results from any author.)
Does it make sense to say that the probability of making a mistake in judgment B is higher than the probability of making a mistake in judgment A?
It may or may not make sense, depending on terminology and nuances of what you mean, for some types of mistakes. Some categories of error have some level of predictability b/c you’re already familiar with them. However, it does not make sense for all types of mistakes. There are some mistakes which are simply unpredictable, which you know nothing about in advance. Perhaps you can partly, in some way, see some mistakes coming – but that doesn’t work in all cases. So you can’t figure out any overall probability of some judgement being a mistake, because at most you have a probability which addresses some sources of mistakes but others are just unknown (and you can’t combine “unknown” and “90%” to get an overall probability).
I am a fallibilist who thinks we can have neither 100% certainty nor 90% certainty nor 50% certainty. There’s always framework questions too – e.g. you may say according to your framework, given your context, then you’re unlikely (20%) to be mistaken (btw my main objections remain the same if you stop quantifying certainty with numbers). But you wouldn’t know the probability your framework has a mistake, so you can’t get an overall probability this way.
Difficult to do, and even more difficult to justify in a debate.
if you’re already aware that your system doesn’t really work, due to this regress problem, why does no one here study the philosophy which has a solution to this problem? (i had the same kind of issue in discussions with others here – they admitted their viewpoint has known flaws but stuck to it anyway. knowing they’re wrong in some way wasn’t enough to interest them in studying an alternative which claims not to be wrong in any known way – a claim they didn’t care to refute.)
This may even be a hard limit on human certainty.
the hard limit is we don’t have certainty, we’re fallible. that’s it. what we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.
Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And saying “p<0.05” is also just an arbitrary number; why not “p<0.001”?)
you have to make a decision about what standards of evidence you will use for what purpose, and why that’s the right thing to do, and expose that meta decision to criticism.
the epistemology issues we’re talking about are prior to the physics issues, and don’t involve that kind of measurement error issue. we can talk about measurement error after resolving epistemology. (the big picture is that probabilities and statistics have some use in life, but they aren’t probabilities of truth/knowledge/certainty, and their use is governed by non-probabilistic judgements/arguments/epistemology.)

see http://curi.us/2067-empiricism-and-instrumentalism and https://yesornophilosophy.com
You can have a “binary” solution only as long as you remain in the realm of words.
no, a problem can and should specify criteria of what the bar is for a solution to it. lots of the problems ppl have are due to badly formulated (ambiguous) problems.
which means you wouldn’t feel a 100% certainty after the first reading
i do not value certainty as a feeling. i’m after objective knowledge, not feelings.
If you’re already aware that your system doesn’t work, due to this regress problem,
That isn’t what Viliam said, and I suggest that here you’re playing rhetorical games rather than arguing in good faith. It’s as if someone took your fallibilism and your rejection of probability, and said “Since you admit that you could well be wrong and you have no idea how likely it is that you’re wrong, why should we take any notice of what you say?”.
why does no one here study the philosophy which has a solution to this problem?
You mean “the philosophy which claims to have a solution to this problem”. (Perhaps it really does, perhaps not; but all someone can know in advance of studying it is that it claims to have one.)
Anyway, I think the answer depends on what you mean by “study”. If you mean “investigate at all” then the answer is that several people here have considered some version of Popperian “critical rationalism”, so your question has a false premise. If you mean “study in depth” then the answer is that by and large those who’ve considered “critical rationalism” have decided after a quick investigation that its claim to have the One True Answer to the problem of induction is not credible enough for it to be worth much further study.
My own epistemic state on this matter, which I mention not because I have any particular importance but because I know my own mind much better than anyone else’s, is that I’ve read a couple of Deutsch’s books and some of his other writings and given Deutsch’s version of “critical rationalism” hours, but not weeks, of thought, and that since you turned up here I’ve given some further attention to your version; that c.r. seems to me to contain some insights and some outright errors; that I do not find it credible that c.r. “solves” the problem of getting information from observations in any strong sense; that I find the claims made by some c.r. proponents that (e.g.) there is no such thing as induction, or that it is a mistake to assign probabilities to statements that aren’t explicitly about random events, even less credible; and that the “return on investment” of further in-depth investigation of Popper’s or Deutsch’s ideas is likely worse than that of other things I could do with the same resources of time and brainpower, not because they’re all bad ideas but because I think I already grasp them well enough for my purposes.
the epistemology issues [...] are prior to the physics issues, and don’t involve that kind of measurement error issue.
A good epistemology needs to deal with the fact that observations have errors in them, and it makes no sense to try to “resolve epistemology” in a way that ignores such errors. (Perhaps that isn’t what you meant by “we can talk about measurement error after resolving epistemology”, in which case some clarification would be a good idea.)
What we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.
You say that as if you expect it to be a new idea around here, but it isn’t. See e.g. this old LW article. For the avoidance of doubt, I’m not claiming that what that says about knowledge and certainty is the same as you would say—it isn’t—nor that what it says is original to its author—it isn’t. Just that distinguishing knowledge from certainty is something we’re already comfortable with.
I do not value certainty as a feeling.
You would equally not be entitled to 100% certainty of any other sort, including whatever sort you might regard as more objective and less dependent on feelings. (Because in the epistemic situation Viliam describes, it would be very likely that at least one error had been made.)
Of course, in principle you admit exactly this: after all, you call yourself a fallibilist. But, while you admit the possibility of error and no doubt actually change your mind sometimes, you refuse to try to quantify how error-prone any particular judgement is. I think this is “obviously” a mistake (i.e., obviously when you look at things rightly, which may not be an easy thing to do) and I think Viliam probably thinks the same.
(And when you complain above of an infinite regress, it’s precisely about what happens when one tries to quantify these propensities-to-error, and your approach avoids this regress not by actually handling it any better but by simply declaring that you aren’t going to try to quantify. That might be OK if your approach handled such uncertainties just as well by other means, but it doesn’t seem to me that it does.)
you haven’t cared to try to write down, with permalink, any errors in CR that you think could survive critical scrutiny.
by study i mean look at it enough to find something wrong with it – a reason not to look further – or else keep going if you see no errors. and then write down what the problem is, ala Paths Forward.
the claims made by some c.r. proponents
it’s dishonest (or ignorant?) to refer to Popper, Deutsch and myself (as well as Miller, Bartley, and more or less everyone else) as “some c.r. proponents”.
you refuse to try to quantify how error-prone any particular judgement is.
no. i have tried and found it’s impossible, and found out why (arguments u don’t wish to learn).
anyway i don’t see what your comment is supposed to accomplish. you have 1.8 of your feet out the door. you aren’t really looking to have a conversation to resolve the matter. why speak at all?
you aren’t really looking to have a conversation to resolve the matter
Your understanding of “resolve the matter” is very peculiar—as far as I can see it means “go read what I tell you to read so that you will agree with me”.
I notice that you show considerable lack of flexibility: you follow a certain pattern of interaction which, to no great surprise, tends to end up in the same place; you get nowhere and accuse people of bad faith and unwillingness to learn.
You’ve been hanging around the place for a few weeks by now—how about you, did you learn anything? Or is this strictly a bring-civilization-to-the-savages expedition from your point of view?
Correct: I am not interested in jumping through the idiosyncratic set of hoops you choose to set up.
it’s dishonest (or ignorant?) [...]
Why?
arguments you don’t wish to learn
Don’t wish to learn them? True enough. I don’t see your relationship to me as being that of teacher to learner. I’d be interested to hear what they are, though, if you could drop the superior attitude and try having an actual discussion.
I don’t see what your comment is supposed to accomplish.
It is supposed to point out some errors in things you wrote, and to answer some questions you raised.
you have 1.8 of your feet out the door.
Does that actually mean anything? If so, what?
you aren’t really looking to have a conversation to resolve the matter.
I am very willing to have a conversation. I am not interested in straitjacketing that conversation with the arbitrary rules you keep trying to impose (“paths forward”), and I am not interested in replacing the (to me, potentially interesting) conversation about probability and science and reasoning and explanation and knowledge with the (to me, almost certainly boring and fruitless) conversation about “paths forward” that you keep trying to replace it with.
why speak at all?
See above. You said some things that I think are wrong, and you asked some questions I thought I could answer. It’s not my problem that you’re unable or unwilling to address any of the actual content of what I say and only interested in meta-issues.
[EDITED because I noticed I wrote “conservation” where I meant “conversation” :-)]
you have openly stated your unwillingness to
1) do PF
2) discuss PF or other methodology

that’s an impasse, created by you. you won’t use the methodology i think is needed for making progress, and won’t discuss the disagreement. a particular example issue is your hostility to the use of references.

the end.

given your rules, including the impasse above.
Yup. I’m not interested in jumping through the idiosyncratic set of hoops you choose to set up.
that’s an impasse, created by you.
Curiously, I find myself perfectly well able to conduct discussions with pretty much everyone else I encounter, including people who disagree with me at least as much as you do. That would be because they don’t try to lay down a bunch of procedural rules and refuse to engage unless I either follow their rules or get sidetracked onto a discussion of those rules. So … nah, I’m not buying “created by you”. I’m not the one who tried to impose the absurdly over-demanding set of procedural rules on a bunch of other people.
your hostility to the use of references
You just made that up. I am not hostile to the use of references.
(Maybe I objected to something you did that involved the use of references; I don’t remember. But if I did, it wasn’t because I am hostile to the use of references.)