if i were to provide an anti-induction article, what properties should it have?
Regardless of the topic, I would say that the article should be easy to read, and relatively self-contained. For example, instead of “go read this book by Popper to understand how he defines X” you could define X using your own words, preferably giving an example (of course it’s okay to also give a quote from Popper’s book).
one question is whether it should assume the reader has background knowledge of CR.
I don’t even know what the abbreviation is supposed to mean. Seriously.
Generally, I think that the greatest risk is people not even understanding what you are trying to say. If you include links to other pages, I guess most people will not click them. Aim to explain, not to convince, because a failure in explaining is automatically also a failure in convincing.
Maybe it would make sense for you to look at the articles that I believe (with my very unclear understanding of what you are trying to say) may be most relevant to your topic:
1) “Infinite Certainty” (and its mathy sequel “0 And 1 Are Not Probabilities”), and
2) “Scientific Evidence, Legal Evidence, Rational Evidence”.
Because it seems to me that the thing about Popper and induction is approximately this...
Simplicio: “Can science be 100% sure about something?”
Popper: “Nope, that would mean that scientists would never change their minds. But they sometimes do, and that is an accepted part of science. Therefore, scientists are never 100% sure of their theories.”
Simplicio: “Well, if they can’t prove anything with 100% certainty, why don’t we just ignore them completely? It’s just another opinion, right?”
Popper: “Uhm… wait a minute… scientists cannot prove anything, but they can… uhm… disprove things! Yeah, that’s what they do; they make many theories, they disprove most of them, and the one that keeps surviving is the official winner, for the moment. So it’s not like the scientists proved e.g. the theory of relativity, but rather that they disproved all known competing theories, and failed to disprove the theory of relativity (yet).”
To which I would give the following objection:
1) How exactly could it be impossible to prove “X”, and yet possible to disprove “not X”? If scientists are able to falsify e.g. the hypothesis that “two plus two does not equal four”, isn’t it the same as proving the hypothesis that “two plus two equals four”?
I imagine that the typical situation Popper had in mind included a few explicit hypotheses, e.g. A, B, C, and then a remaining option “something else that we did not consider”. So he is essentially saying that scientists can experimentally disprove e.g. B and C, but that’s not the same as proving A. Instead, they proved “either A, or something else that we did not consider, but definitely neither B nor C”. In short: B and C were falsified, but A wasn’t proven. And as long as there remains an unspecified category “things we did not consider”, there is always a chance that A is merely an approximate solution, and the real solution is still unknown.
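To make the structure of that elimination argument concrete, here is a minimal sketch in Python (the hypothesis names are just the placeholders from the paragraph above):

```python
# Falsification eliminates hypotheses; it never promotes the survivor
# to "proven", because the catch-all category cannot be eliminated.
considered = {"A", "B", "C"}
falsified = {"B", "C"}

survivors = considered - falsified
print(survivors)  # {'A'}

# What was actually established is weaker than "A is proven":
conclusion = survivors | {"something we did not consider"}
print(conclusion)  # {'A', 'something we did not consider'}
```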
But it doesn’t always have to be like this. Especially in math. But also in real life. Consider this:
According to Popper, no matter how much scientific evidence we have in favor of e.g. the theory of relativity, all it takes is one experiment that falsifies it, and then all good scientists should stop believing in it. And recently, the theory of relativity was indeed falsified by an experiment. Does that mean we should stop teaching the theory of relativity, because it has now been properly falsified?
With the benefit of hindsight, now we know there was a mistake in the experiment. But… that’s exactly my point. The concepts of “proving” and “falsifying” are actually much closer than Popper probably imagined. You may have a hypothesis “H”, and an experiment “E”, but if you say that you falsified “H”, it means you have a hypothesis “F” = “the experiment E is correct and falsifies the theory H”. To falsify H by E is to prove F; therefore if F cannot be scientifically proven, then H cannot be scientifically falsified. Proof and falsification are not two fundamentally different processes; they are actually two sides of the same coin. To claim that the experiment E falsifies the hypothesis H, is to claim that you have a proof that “the experiment E falsifies the hypothesis H”… and the usual interpretation of Popper is that there are no proofs in science.
The answer generally accepted on LessWrong, I guess, is that what really happens in science is that people believe theories with greater and greater probability. Never 100%. But sometimes with a very high probability instead, and for most practical purposes such high probability works almost like certainty. Popper may insist that science is unable to actually prove that the moon is not made of cheese, but the fact is that most scientists will behave as if they already had such proof; they are not going to keep an open mind about it.
Short version: Popper was right about the inability to prove things with 100% certainty, but then he (or maybe just people who quote him) made the mistake of imagining that disproving things is a process fundamentally different from proving things, so you can at least disprove things with 100% certainty. My answer is that you can’t even disprove things with probability 100%, but that’s okay, because the “100%” part was just a red herring anyway; what actually happens in science is that things are believed with greater and greater probability.
You should probably actually read Popper before putting words in his mouth.
According to Popper, no matter how much scientific evidence we have in favor of e.g. the theory of relativity, all it takes is one experiment that falsifies it, and then all good scientists should stop believing in it.
You found this claim in a book of his? Or did you read some Wikipedia, or what?
For example, this is a quote from the Stanford Encyclopedia of Philosophy:
Popper has always drawn a clear distinction between the logic of falsifiability and its applied methodology. The logic of his theory is utterly simple: if a single ferrous metal is unaffected by a magnetic field it cannot be the case that all ferrous metals are affected by magnetic fields. Logically speaking, a scientific law is conclusively falsifiable although it is not conclusively verifiable. Methodologically, however, the situation is much more complex: no observation is free from the possibility of error—consequently we may question whether our experimental result was what it appeared to be.
Thus, while advocating falsifiability as the criterion of demarcation for science, Popper explicitly allows for the fact that in practice a single conflicting or counter-instance is never sufficient methodologically to falsify a theory, and that scientific theories are often retained even though much of the available evidence conflicts with them, or is anomalous with respect to them.
You guys still do that whole “virtue of scholarship” thing, or what?
You guys still do that whole “virtue of scholarship” thing, or what?
Well, this specific guy has a job and a family, and studying “what Popper believed” is quite low on his list of priorities. If you want to provide a more educated answer to curi, go ahead.
If you have a job and a family, and don’t have time to get into what Popper actually said, maybe don’t offer your opinion on what Popper actually said? That’s just introducing bad stuff into a discussion for no reason.
Whereof one cannot speak, thereof one must be silent.
“The virtue of silence.”
Yeah, good points in both comments. Why don’t you come to my forum where we’ll appreciate them? :)
https://groups.yahoo.com/neo/groups/fallible-ideas/info
I don’t think you and I have much to talk about.
Why?
a. virtue of silence
b. it’s your job to work that out.
What happened to NVC (Non-Violent Communication)? Your comments are purely intended to hurt me.
No. That’s your interpretation. You have agency too to interpret what I say with clarity. You also value bold conjecture. So that’s again your problem to work out what I mean and how to apply it.
Everything you say in your post, about Popper issues, demonstrates huge ignorance.
Do you even know the name of Popper’s philosophy?
It seems that you’re completely out of your depth.
The reason you have trouble applying reason is b/c u understand reason badly.
I have a thought. Since you are a philosopher, would your valuable time not be better spent doing activities philosophers engage in, such as writing papers for philosophy journals?
Rather than arguing with people on the internet?
If you are here because you are fishing for people to go join your forum, may I suggest that this place is an inefficient use of your time? It’s mostly dead now, and will be fully dead soon.
I have a low opinion of academic philosophers and philosophy journals. I was hoping to find a little intelligence somewhere. I have tried a lot of places. If you have better suggestions than philosophy journals or LW, let me know.
The virtue of silence is one of our 12 virtues here. That you don’t know this speaks to ignorance on your part. And perhaps if you had taken your own advice you might not have made this post at all. And maybe you would have learnt something instead.
I don’t even know what the abbreviation is supposed to mean. Seriously.
Do you even know the name of Popper’s philosophy? Did you read the discussions about this that already happened on LW?
It seems that you’re completely out of your depth, can’t answer me, and don’t want to make the effort to learn. You can’t answer Popper, don’t know of anyone or any writing that can, and are content with that. Your fellows here are the same way. So Popper goes unanswered and you guys stay wrong.
FYI Popper has lots of self-contained writing. Many of his book chapters are adapted from lectures, as you would know if you’d looked. I have written recommendations of which specific parts of Popper are best to read with brief comments on what they are about:
http://fallibleideas.com/books#popper
If you include links to other pages, I guess most people will not click them.
Everything you say in your post, about Popper issues, demonstrates huge ignorance, but there are no Paths Forward for you to get better ideas about this. The methodology dispute needs to be settled first, but people (including you) don’t want to do that.
It seems that you’re completely out of your depth, can’t answer me, and don’t want to make the effort to learn.
I generally agree with your judgment (assuming that the “effort to learn” refers strictly to Popper).
But before I leave this debate, I would like to point out that you (and Ilya) were able to make this (correct) judgment only because I put my cards on the table. I wrote, relatively briefly and without obfuscation, what I believe. Which allowed you to read it and conclude (correctly) “he is just an uneducated idiot”. This allowed a quick resolution; and as a side effect I learned something.
This may or may not be ironically related to the idea of falsification, but at this moment I feel unworthy to comment on that.
Now I see two possible futures, and it is more or less your choice which one will happen:
Option 1:
You may try to describe (a) your beliefs about induction, (b) what you believe are LW beliefs about induction, and (c) why exactly the supposed LW beliefs are wrong, preferably with a specific example of a situation where following the LW beliefs would result in an obvious error.
This is the “high risk / high reward” scenario. It will cost you more time and work, and there is a chance that someone will say “oh, I didn’t realize this before, but now I see this guy has a point; I should probably read more of what he says”, but there is also a chance that someone will say “oh, he got Popper or LW completely wrong; I knew it was not worth debating him”. Which is not necessarily a bad thing, but will probably feel so.
Yeah, there is also the chance that people will read your text and ignore it, but speaking for myself, there are two typical reasons why I would do that: either the text is written in a way that makes it difficult for me to decipher what exactly the author was actually trying to say; or the text depends on links to outside sources but my daily time budget for browsing the internet is already spent. (That is why I selfishly urge you to write a self-contained article using your own words.) But other people may have other preferences. Maybe the best approach would be to add footnotes with references to sources, but make them optional for understanding the gist of the article.
Option 2:
You will keep saying: “guys, you are so confused about induction; you should definitely read Popper”, and people at LW will keep thinking: “this guy is so confused about induction or about our beliefs about induction; he should definitely read the Sequences”, and both sides will be frustrated about how the other side is unwilling to spend the energy necessary to resolve the situation. This is the “play it safe, win nothing” scenario. Also the more likely one.
Last note: Any valid argument made by Popper should be possible to explain without using the word “Popper” in the text. Just like the Pythagorean theorem is not about the person called Pythagoras, but about squares on triangles, and would be equally valid if it had instead been discovered or popularized by a completely different person; you could simply call it the “squares-on-triangles theorem” and it would work equally well. (Related in Sequences: “Guessing the teacher’s password”; “Argument Screens Off Authority”.) If something is true about induction, it is true regardless of whether Popper did or didn’t believe it.
(b) what you believe are LW beliefs about induction,
when i asked for references to canonical LW beliefs, i was told that would make it a cult, and LW does not have beliefs about anything. since no pro-LW ppl could/would state or link to LW’s beliefs about induction – and were hostile to the idea – i think it’s unreasonable to ask me to. individual ppl at LW vary in beliefs, so how am i supposed to write a one-size-fits-all criticism? LW ppl offer neither a one-size-fits-all pro-induction explanation nor do any of them offer it individually. e.g. you have not said how you think induction works. it’s your job, not mine, to come up with some version of induction which you think actually works – and to do that while being aware of known issues that make that a difficult project.
again, there are methodology issues. unless LW gives targets for criticism – written beliefs anyone will take responsibility for the correctness of (you can do this individually, but you don’t want to – you’re busy, you don’t care, whatever) – then we’re kinda stuck (given also the unwillingness to address CR).
your refusal to use outside sources is asking me to rewrite material. why? some attempt to save time on your part. is that the right way to save time? no. could we talk about the right ways to save time? if you wanted to. but my comments about the right way to save time are in outside sources, primarily written by me, which you therefore won’t read (e.g. the Paths Forward stuff, and i could do the Popper stuff linking only to my own stuff, which i have tons of, but that’s still an outside source. i could copy/paste my own stuff here, but that’s stupid. it’s also awkward b/c i’ve intentionally not rewritten essays already written by my colleagues, b/c why do that? so i don’t have all the right material written by myself personally, on purpose, b/c i avoid duplication.). so we’re kinda stuck there. i don’t want to repeat myself for literally more than the 50th time, for you personally (who hasn’t offered me anything – not even much sign you’ll pay attention, care, keep replying next week, anything), b/c you won’t read 1) Popper 2) Deutsch 3) my own links to myself 4) my recent discussions with other LW ppl where i already rewrote a bunch of anti-induction arguments and wasn’t answered.
as one example of many links to myself that you categorically don’t want to address:
In the linked article, you seem to treat “refutation by criticism” as something absolute. Either something is refuted by criticism, or it isn’t refuted by criticism; and in either case you have 100% certainty about which one of these two options it is.
There seems to be no space for situations like “I’ve read a quite convincing refutation of something, but I still think there is a small probability there was a mistake in this clever verbal construction”. It either “was refuted” or it “wasn’t refuted”; and as long as you are willing to admit some probability, I guess it by default goes to the “wasn’t refuted” basket.
In other words, if you imagine a variable containing value “X was refuted by criticism”, the value of this variable at some moment switches from 0 to 1, without any intermediate values. I mean, if you reject gradations of certainty, then you are left with a black-and-white situation where either you have the certainty, or you don’t; but nothing in between.
If this is more or less correct, then I am curious about what exactly happens in the moment where the variable actually switches from 0 to 1. Imagine that you are doing some experiments, reading some verbal arguments, and thinking about them. At some moment, the variable is at 0 (the hypothesis was not refuted by criticism yet), and at the very next moment the variable is at 1 (the hypothesis was refuted by criticism). What exactly happened during that last fraction of a second? Some mental action, I guess, like connecting two pieces of a puzzle together, or something like this. But isn’t there some probability that you actually connected those two pieces incorrectly, and maybe you will notice this only a few seconds (or hours, days, years) later? In other words, isn’t the “refutation by criticism” conditional on the probability that you actually understood everything correctly?
If, as I incorrectly said in previous comments, one experiment doesn’t constitute refutation of a hypothesis (because the experiment may be measured or interpreted incorrectly), then what exactly does? Two experiments? Seven experiments? Thirteen experiments and twenty four pages of peer-reviewed scientific articles? Because if you reject “gradations of certainty”, then it must be that at some moment the certainty is not there, and at another moment it is… and I am curious about where and why that moment occurs.
your refusal to use outside sources is asking me to rewrite material. why?
Throwing books at someone is generally known as the “courtier’s reply”. The more text you throw at me, the smaller the probability that I will read it. (Similarly, I could tell you to read Korzybski’s Science and Sanity, and only come back after you mastered it, because I believe—and I truly do—that it is related to some mistakes you are making. Would you?)
There are some situations when things cannot be explained by a short text. For example, if a 10-year-old kid asked me to explain quantum physics to him in less than 1 page of text, I would give up. -- So let me ask you; is Popper’s argument against induction the kind of knowledge that cannot be explained to an intelligent adult person using less than 1 page of text; not even in a simplified form?
Sometimes the original form of the argument is not the best one. For example, Gödel spent hundreds of pages proving something that kids today could express as “any mathematical theorem can be stored on a computer as a text file, which is kinda a big integer in base 256”. (It took him hundreds of pages, because people didn’t have computers back then.) So maybe the book where Popper explained his idea is similarly not the most efficient way to explain it. Also, if an idea cannot be explained without pointing to the original source, that is a bit suspicious. On the other hand, of course, not everyone is skilled at explaining, so sometimes the text written by a skilled author has this advantage.
Summary:
I believe that your belief in “refutation by criticism” as something that either is or isn’t, but doesn’t have “gradation of certainty”, is so fundamentally wrong that it doesn’t make sense to debate further. Because this is the whole point of why probabilistic reasoning, Bayes’ theorem, etc. are so popular on LW. (Because probabilities are what you use when you don’t have absolute certainty, and I find it quite ironic that I am explaining this to someone who has read orders of magnitude more of Popper than I have.)
Throwing books at someone is generally known as the “courtier’s reply”.
The issue here also is Brandolini’s law:
“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”
The problem with the “courtier’s reply” is that you could always appeal to it, even if Scott Aaronson is trying to explain something about quantum mechanics to you, and you need some background (found in references 1, 2, and 3) to understand what he is saying.
There is a type 1 / type 2 error tradeoff here. Ignoring legit expert advice is bad, but being cowed by an idiot throwing references at you is also bad.
As usual with tradeoffs like these, one has to decide on a policy that is willing to tolerate some of one type of error to keep the error you care about to some desired level.
I think a good heuristic for deciding who is an expert and who is an idiot with references is credentialism. But credentialism has a bad brand here, due to a “love affair with amateurism” LW has. One of the consequences of this love affair is that a lot of folks here make the above tradeoff badly (in particular, they ignore legit advice to read way too frequently).
Here’s a tricky example of judging authority (credentials). You say listen to SA about QM. Presumably also listen to David Deutsch (DD), who knows more about QM than SA does. But what about me? I have talked with DD about QM and other issues at great length and I have a very accurate understanding of what things I can say about QM (and other matters) that are what DD would say, and when I don’t know something or disagree with DD. (I have done things like debate physics, with physicists, many times, while being advised by DD and him checking all my statements so I find out when I have his views right or not.) So my claims about QM are about as good as DD’s, when I make them – and are therefore even better than SA’s, even though I’m not a physicist. Sorta, not exactly. Credentials are complicated and such a bad way to judge ideas.
What I find most people do is decide what they want to believe or listen to first, and then find an expert who says it second. So if someone doesn’t want to listen, credentials won’t help, they’ll just find some credentials that go the other way. DD has had the same experience repeatedly – people aren’t persuaded due to his credentials. That’s one of the main reasons I’m here instead of DD – his credentials wouldn’t actually help with getting people here to listen/understand. And, as I’ve been demonstrating and DD and I already knew, arguments aren’t very effective here either (just like elsewhere).
And I, btw, didn’t take things on authority from DD – I asked questions and brought up doubts and counter-arguments. His credentials didn’t matter to me, but his arguments did. Which is why he liked talking with me!
you’re mean and disruptive. at least you’re demonstrating why credentials are a terrible way to address things, which is my point. you just assume the status of various credentials without being willing to think about them, let alone debate them (using more credentials (regress), or perhaps arguments? but if arguments, why not just use those in the first place?). so for you, like most people, using credentials = using bias.
you just assume the status of various credentials without being willing to think about them
Am I? Please demonstrate.
using credentials = using bias
What do you mean by bias? In statistics bias is one of those things you trade off against other things (like variance). Being unbiased is not always optimal.
Yeah, credentials are a poor way of judging things. But that first paragraph doesn’t show remotely what you think it does.
Some of David Deutsch’s credentials that establish him as a credible authority on quantum mechanics: He is a physics professor at a leading university, a Fellow of the Royal Society, is widely recognized as a founder of the field of quantum computation, and has won some big-name prizes awarded to eminent scientists.
Your credentials as a credible authority on quantum mechanics: You assure us that you’ve talked a lot with David Deutsch and learned a lot from him about quantum mechanics.
This is not how credentials work. Leaving aside what useful information (if any) they impart: when it comes to quantum mechanics, David Deutsch has credentials and you don’t.
It’s not clear to me what argument you’re actually making in that first paragraph. But it seems to begin with the claim that you have good credentials when it comes to quantum mechanics for the reasons you recite there, and that’s flatly untrue.
Yeah, credentials are a poor way of judging things.
They are not, though. It’s standard “what LW calls ‘Bayes’ and what I call ‘reasoning under uncertainty’”—you condition on things associated with the outcome, since those things carry information. Outcome (O): having a clue; thing (C): credential. p(O | C) > p(O), so your credence in O should be computed after conditioning on C, on pain of irrationality. Specifically, the type of irrationality where you leave information on the table.
You might say “oh, I heard about how argument screens authority.” This is actually not true though, even by “LW Bayesian” lights, because you can never be certain you got the argument right (or the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn’t true.
It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel crazy thing I am proposing, this is bog standard.
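To make that conditioning step concrete, here is a minimal Bayes-update sketch in Python; every number in it is an illustrative assumption, not a measured value:

```python
# How much does seeing a credential (C) shift the credence that
# someone has a clue (O)? All probabilities are made-up assumptions.
p_O = 0.10              # prior: fraction of commenters with a clue
p_C_given_O = 0.60      # assumed: clueful people often have credentials
p_C_given_not_O = 0.05  # assumed: clueless people rarely do

# total probability of observing the credential
p_C = p_C_given_O * p_O + p_C_given_not_O * (1 - p_O)

# posterior: p(O | C) = p(C | O) * p(O) / p(C)
p_O_given_C = p_C_given_O * p_O / p_C
print(round(p_O_given_C, 3))  # 0.571 > 0.1: the credential carries information
```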
The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of “experts” writ large, except for an explicitly enumerated subset (perhaps ones EY or other “recognized community thought leaders” liked).
This is a part of community DNA, starting with EY’s stuff, and Luke’s “philosophy is a diseased discipline.”
Actually, I somewhat agree, but being an agreeable sort of chap I’m willing to concede things arguendo when there’s no compelling reason to do otherwise :-), which is why I said “Yeah, credentials are a poor way of judging things” rather than hedging more.
More precisely: I think credentials very much can give you useful information, and I agree with you that argument does not perfectly screen off authority. On the other hand, I agree with prevailing LW culture (perhaps with you too) that credentials typically give you very imperfect information and that argument does somewhat screen off authority. And I suggest that how much credentials tell you may vary a great deal by discipline and by type of credentials. Example: the Pope has, by definition, excellent credentials of a certain kind. But I don’t consider him an authority on whether any sort of gods exist because I think the process that gave him the credentials he has isn’t sufficiently responsive to that question. (On the other hand, that process is highly responsive to what Catholic doctrine is and I would consider the Pope a very good authority on that topic even if he didn’t have the ability to control that doctrine as well as report it.)
It seems to me that e.g. physics has norms that tie its credentials pretty well (though not perfectly) to actual understanding and knowledge; that philosophy doesn’t do this so well; that theology does it worse; that homeopathy does it worse still. (This isn’t just about the moral or cognitive excellence of the disciplines in question; it’s also that it’s harder to tell whether someone’s any good or not in some fields than in others.)
I guess the way I would slice disciplines is like this:
(a) Makes empirical claims (credences change with evidence, or falsifiable, or [however you want to define this]), or has universally agreed rules for telling good from bad (mathematics, theoretical parts of fields, etc.)
(b) Does not make empirical claims, and has no universally agreed rules for telling good from bad.
Some philosophy is in (a) and some in (b). Most statistics is in (a), for example.
Re: (a), most folks would need a lot of study to evaluate claims, typically at the graduate level. So the best thing to do is get the lay of the land by asking experts. Experts may disagree, of course, which is valuable information.
“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”
i think this is false, and is an indication of using the wrong methods to refute bullshit – the right methods reuse refutations of categories of bad ideas. do you have some comprehensive argument that it must be true?
i find it disturbing how much people here are in favor of judging ideas by sources instead of content – credentialism. that’s pretty pure irrationality. also debating which credentials are worth how much is a bad way to approach discussions, but it’s totally non-obvious and controversial which credentials are how good even for standard credentials like PhDs from different universities.
(in particular they ignore legit advice to read way too frequently)
The context matters. If you are trying to figure out how X actually works you probably should go read or at least scan the relevant books even if no one is throwing references at you. On the other hand, if you’re just procrastinating by engaging in a Yet Another Internet Argument with zero consequences for your life, going off to read the references is just a bigger waste of time.
I believe that your belief in “refutation by criticism” as something that either is or isn’t, but doesn’t have “gradation of certainty”, is so fundamentally wrong that it doesn’t make sense to debate further.
I think there’s something really wrong when your reaction to disagreement is to think there’s no point in further discussion. That leaves me thinking you’re a bad person to discuss with. Am I mistaken?
Making mistakes isn’t random or probabilistic. When you make a judgement, there is no way to know some probability that your judgement is correct. Also, if judgements need probabilities, won’t your judgement of the probability of a mistake have its own probability? And won’t that judgement also have a probability, causing an infinite regress of probability assignments?
Mistakes are unpredictable. At least some of them are. So you can’t predict (even probabilistically) whether you made one of the unpredictable types of mistakes.
What you can do, fallibly and tentatively, is make judgements about whether a critical argument is correct or not. And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn’t solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn’t).
So let me ask you; is Popper’s argument against induction the kind of knowledge that cannot be explained to an intelligent adult person using less than 1 page of text; not even in a simplified form?
That’d work fine if they knew everything or nothing about induction. However, it’s highly problematic when they already have thousands of pages worth of misconceptions about induction (some of which vary from the next guy’s misconceptions). The misconceptions include vague parts they don’t realize are vague, non sequiturs they don’t realize are non sequiturs, confusion about what induction is, and other mistakes plus cover up (rationalizations, dishonesty, irrationality).
Induction would be way easier to explain to a 10 year old in a page than to anyone at LW, due to lack of bias and prior misconceptions. I could also do quantum physics in a page for a ten year old. QM is easy to explain at a variety of levels of detail, if you don’t have to include anything to preemptively address pre-existing misconceptions, objections, etc. E.g., in a sentence: “Science has discovered there are many things your eyes can’t see, including trillions of other universes with copies of you, me, the Earth, the sun, everything.”
I think there’s something really wrong when your reaction to disagreement is to think there’s no point in further discussion.
It’s like you believe “A” and “A implies B” and “B implies C”, while I believe “non-A” and “non-A implies Q”. The point we should debate is whether “A” or “non-A” is correct; because as long as we disagree on this, of course each of us is going to believe a different chain of things (one starting with “A”, the other starting with “non-A”).
I mean, if I hypothetically believed that absolute certainty is possible and relatively simple to achieve, of course I would consider probabilistic reasoning to be an interesting but inferior form of reasoning. We wouldn’t have this debate. And if you accepted that certainty is impossible (even certainty of refutation), then probability would probably seem like the next best thing.
When you make a judgement, there is no way to know some probability that your judgement is correct.
Okay, imagine this: I make a judgment that feels completely correct to me, and I am not aware of any possible mistakes. But of course I am a fallible human; maybe I actually made a mistake somewhere, maybe even an embarrassing one.
Scenario A: I made this judgment at 10 AM, after having a good night of sleep.
Scenario B: I made this judgment at 2 AM, tired and sleep deprived.
Does it make sense to say that the probability of making a mistake in judgment B is higher than the probability of making a mistake in judgment A? In both cases I believe at the moment that the judgment is correct. But in the latter case my ability to notice the possible mistake is smaller.
So while I couldn’t make an exact calculation like “the probability of the mistake is exactly 4.25%”, I can still be aware that there is some probability of the mistake, and sometimes even estimate that the probability in one situation is greater than in another situation. Which suggests that there is a number, I just don’t know it. (But if we could somehow repeat the whole situation million times, and observe that I was wrong in 42500 cases, that would suggest that the probability of the mistake is about 4.25%. Unlikely in real life, but possible as a hypothesis.)
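A toy simulation of that hypothetical million-fold repetition (the 4.25% true mistake rate is an assumption of the thought experiment; from the inside you never get to observe it):

```python
import random

# Repeat the "same" judgment a million times with an unknown true
# mistake rate, and recover that rate as an observed frequency.
TRUE_MISTAKE_RATE = 0.0425  # the hypothetical's hidden parameter
trials = 1_000_000

mistakes = sum(random.random() < TRUE_MISTAKE_RATE for _ in range(trials))
print(mistakes / trials)  # close to 0.0425: "there is a number, I just don't know it"
```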
Also, if judgements need probabilities, won’t your judgement of the probability of a mistake have its own probability?
It definitely will. Notice that those are two different things: (a) the probability that I am wrong, and (b) my estimate of the probability that I am wrong.
Yes, what you point out is a very real and very difficult problem. Estimating probabilities in a situation where everything (including our knowledge of ourselves, and even our knowledge of math itself) is… complicated. Difficult to do, and even more difficult to justify in a debate.
This may even be a hard limit on human certainty. For example, if at every moment of time there is a 0.000000000001 probability that you will go insane, that would mean you can never be sure about anything with probability greater than 0.999999999999, because there is always the chance that however logical and reasonable something sounds to you at the moment, it’s merely because you have become insane at this very moment. (The cause of insanity could be e.g. a random tumor or a blood vessel breaking in your brain.) Even if you made a system more reliable than a human, for example a system maintained by a hundred humans, where if anyone goes insane, the remaining ones will notice it and fix the mistake, the system itself could achieve higher certainty, but you, as an individual, reading its output, could not. Because there would always be the chance that you just got insane, and what you believe you are reading isn’t actually there.
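The arithmetic behind that hard limit, sketched in Python (the per-moment insanity probability is the illustrative figure from the paragraph above):

```python
# If every moment carries an independent probability p of going insane,
# certainty about the present moment is capped at 1 - p, and confidence
# in a long stretch of reasoning decays multiplicatively.
p = 1e-12           # illustrative per-moment insanity probability
print(1 - p)        # 0.999999999999: the best certainty available right now

moments = 10**9     # a long chain of reasoning spanning many moments
print((1 - p) ** moments)  # ~0.999: even tiny per-step risks accumulate
```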
And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn’t solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn’t).
Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And saying “p<0.05” is also just an arbitrary number; why not “p<0.001”?)
You can have a “binary” solution only as long as you remain in the realm of words. (“Socrates is a human. All humans are mortal. Therefore Socrates is mortal. Certainty of argument: 100%.”) Even there, the longer the chain of words you produce, the greater the chance that you made a mistake somewhere. I mean, if you imagine a syllogism going over a thousand pages, ultimately proving something, you would probably want to check the whole book at least two or three times; which means you wouldn’t feel 100% certainty after the first reading. But the greater problems will appear on the boundary between the words and reality. (Theory: “the energy of the particle X is 0.04 units”; the experimental device displays 0.041. Also, experimental devices sometimes break, and your assistant sometimes records the numbers incorrectly.)
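For concreteness, here is that falsification question as a calculation; the measurement error and the cutoffs are illustrative assumptions:

```python
# Does measuring 0.041 falsify a prediction of 0.04? The binary verdict
# depends on the assumed measurement error and on an arbitrary cutoff.
predicted = 0.040
measured = 0.041
sigma = 0.0005  # assumed standard error of the measurement

z = abs(measured - predicted) / sigma  # distance in standard errors
print(z)  # 2.0

for cutoff in (1.96, 3.0, 5.0):  # p<0.05, "3 sigma", "5 sigma"
    verdict = "falsified" if z > cutoff else "not falsified"
    print(cutoff, verdict)
# The verdict flips with the cutoff -- which is exactly the point:
# the yes/no answer hides a probabilistic judgment.
```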
it’s highly problematic when they already have thousands of pages worth of misconceptions
Fair point.
(BTW, I’m going offline for a week now; for reasons unrelated to LW or this debate.)
EDIT:
For the record: Of course there are things where I consider the probability to be so high or so low that I treat them for all practical purposes as 100% or 0%. If you ask me e.g. whether gravity exists, I will simply say “yes”; I am not going to role-play Spock and give you a number with 15 decimal places. I wouldn’t even know exactly how many nines there are after the decimal point. (But again, there is a difference between “believing there is a probability” and “being able to tell the exact number”.)
The most obvious impact of probabilistic reasoning on my behavior is that I generally don’t trust long chains of words. Give me 1000 pages of syllogisms that allegedly prove something, and my reaction will be “the probability that somewhere in that chain is an error is so high that the conclusion is completely unreliable”. (For example, I am not even trying to understand Hegel. Yeah, there are also other reasons to distrust him specifically, but I would not trust such a long chain of logic without experimental confirmation of intermediate results from any author.)
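That distrust of long chains can itself be put as a back-of-the-envelope calculation (the 99.9% per-step reliability is an assumed number):

```python
# If each step of a derivation is correct with probability 0.999,
# the probability the whole chain is error-free decays geometrically.
per_step_ok = 0.999
for steps in (10, 100, 1000, 10000):
    print(steps, per_step_ok ** steps)
# 10 -> ~0.990, 100 -> ~0.905, 1000 -> ~0.368, 10000 -> ~0.00005:
# a 1000-page proof checked once is barely better than a coin flip.
```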
Does it make sense to say that the probability of making a mistake in judgment B is higher than the probability of making a mistake in judgment A?
It may or may not make sense, depending on terminology and nuances of what you mean, for some types of mistakes. Some categories of error have some level of predictability b/c you’re already familiar with them. However, it does not make sense for all types of mistakes. There are some mistakes which are simply unpredictable, which you know nothing about in advance. Perhaps you can partly, in some way, see some mistakes coming – but that doesn’t work in all cases. So you can’t figure out any overall probability of some judgement being a mistake, because at most you have a probability which addresses some sources of mistakes but others are just unknown (and you can’t combine “unknown” and “90%” to get an overall probability).
I am a fallibilist who thinks we can have neither 100% certainty nor 90% certainty nor 50% certainty. There’s always framework questions too – e.g. you may say according to your framework, given your context, then you’re unlikely (20%) to be mistaken (btw my main objections remain the same if you stop quantifying certainty with numbers). But you wouldn’t know the probability your framework has a mistake, so you can’t get an overall probability this way.
Difficult to do, and even more difficult to justify in a debate.
if you’re already aware that your system doesn’t really work, due to this regress problem, why does no one here study the philosophy which has a solution to this problem? (i had the same kind of issue in discussions with others here – they admitted their viewpoint has known flaws but stuck to it anyway. knowing they’re wrong in some way wasn’t enough to interest them in studying an alternative which claims not to be wrong in any known way – a claim they didn’t care to refute.)
This may even be a hard limit on human certainty.
the hard limit is we don’t have certainty, we’re fallible. that’s it. what we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.
Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And saying “p<0.05” is also just an arbitrary number; why not “p<0.001”?)
you have to make a decision about what standards of evidence you will use for what purpose, and why that’s the right thing to do, and expose that meta decision to criticism.
the epistemology issues we’re talking about are prior to the physics issues, and don’t involve that kind of measurement error issue. we can talk about measurement error after resolving epistemology. (the big picture is that probabilities and statistics have some use in life, but they aren’t probabilities of truth/knowledge/certainty, and their use is governed by non-probabilistic judgements/arguments/epistemology.)
You can have a “binary” solution only as long as you remain in the realm of words.
no, a problem can and should specify criteria of what the bar is for a solution to it. lots of the problems ppl have are due to badly formulated (ambiguous) problems.
which means you wouldn’t feel 100% certainty after the first reading
i do not value certainty as a feeling. i’m after objective knowledge, not feelings.
If you’re already aware that your system doesn’t work, due to this regress problem,
That isn’t what Viliam said, and I suggest that here you’re playing rhetorical games rather than arguing in good faith. It’s as if someone took your fallibilism and your rejection of probability, and said “Since you admit that you could well be wrong and you have no idea how likely it is that you’re wrong, why should we take any notice of what you say?”.
why does no one here study the philosophy which has a solution to this problem?
You mean “the philosophy which claims to have a solution to this problem”. (Perhaps it really does, perhaps not; but all someone can know in advance of studying it is that it claims to have one.)
Anyway, I think the answer depends on what you mean by “study”. If you mean “investigate at all” then the answer is that several people here have considered some version of Popperian “critical rationalism”, so your question has a false premise. If you mean “study in depth” then the answer is that by and large those who’ve considered “critical rationalism” have decided after a quick investigation that its claim to have the One True Answer to the problem of induction is not credible enough for it to be worth much further study.
My own epistemic state on this matter, which I mention not because I have any particular importance but because I know my own mind much better than anyone else’s, is that I’ve read a couple of Deutsch’s books and some of his other writings and given Deutsch’s version of “critical rationalism” hours, but not weeks, of thought, and that since you turned up here I’ve given some further attention to your version; that c.r. seems to me to contain some insights and some outright errors; that I do not find it credible that c.r. “solves” the problem of getting information from observations in any strong sense; that I find the claims made by some c.r. proponents that (e.g.) there is no such thing as induction, or that it is a mistake to assign probabilities to statements that aren’t explicitly about random events, even less credible; and that the “return on investment” of further in-depth investigation of Popper’s or Deutsch’s ideas is likely worse than that of other things I could do with the same resources of time and brainpower, not because they’re all bad ideas but because I think I already grasp them well enough for my purposes.
the epistemology issues [...] are prior to the physics issues, and don’t involve that kind of measurement error issue.
A good epistemology needs to deal with the fact that observations have errors in them, and it makes no sense to try to “resolve epistemology” in a way that ignores such errors. (Perhaps that isn’t what you meant by “we can talk about measurement error after resolving epistemology”, in which case some clarification would be a good idea.)
What we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.
You say that as if you expect it to be a new idea around here, but it isn’t. See e.g. this old LW article. For the avoidance of doubt, I’m not claiming that what that says about knowledge and certainty is the same as you would say—it isn’t—nor that what it says is original to its author—it isn’t. Just that distinguishing knowledge from certainty is something we’re already comfortable with.
I do not value certainty as a feeling.
You would equally not be entitled to 100% certainty, nor would you have any other sort of 100% certainty you might regard as more objective and less dependent on feelings. (Because in the epistemic situation Viliam describes, it would be very likely that at least one error had been made.)
Of course, in principle you admit exactly this: after all, you call yourself a fallibilist. But, while you admit the possibility of error and no doubt actually change your mind sometimes, you refuse to try to quantify how error-prone any particular judgement is. I think this is “obviously” a mistake (i.e., obviously when you look at things rightly, which may not be an easy thing to do) and I think Viliam probably thinks the same.
(And when you complain above of an infinite regress, it’s precisely about what happens when one tries to quantify these propensities-to-error, and your approach avoids this regress not by actually handling it any better but by simply declaring that you aren’t going to try to quantify. That might be OK if your approach handled such uncertainties just as well by other means, but it doesn’t seem to me that it does.)
you haven’t cared to try to write down, with permalink, any errors in CR that you think could survive critical scrutiny.
by study i mean look at it enough to find something wrong with it – a reason not to look further – or else keep going if you see no errors. and then write down what the problem is, ala Paths Forward.
the claims made by some c.r. proponents
it’s dishonest (or ignorant?) to refer to Popper, Deutsch and myself (as well as Miller, Bartley, and more or less everyone else) as “some c.r. proponents”.
you refuse to try to quantify how error-prone any particular judgement is.
no. i have tried and found it’s impossible, and found out why (arguments u don’t wish to learn).
anyway i don’t see what your comment is supposed to accomplish. you have 1.8 of your feet out the door. you aren’t really looking to have a conversation to resolve the matter. why speak at all?
you aren’t really looking to have a conversation to resolve the matter
Your understanding of “resolve the matter” is very peculiar—as far as I can see it means “go read what I tell you to read so that you will agree with me”.
I notice that you show considerable lack of flexibility: you follow a certain pattern of interaction which, to no great surprise, tends to end up in the same place, you get nowhere and accuse people of bad faith and unwillingness to learn.
You’ve been hanging around the place for a few weeks by now—how about you, did you learn anything? Or this is strictly a bring-civilization-to-the-savages expedition from your point of view?
Correct: I am not interested in jumping through the idiosyncratic set of hoops you choose to set up.
it’s dishonest (or ignorant?) [...]
Why?
arguments you don’t wish to learn
Don’t wish to learn them? True enough. I don’t see your relationship to me as being that of teacher to learner. I’d be interested to hear what they are, though, if you could drop the superior attitude and try having an actual discussion.
I don’t see what your comment is supposed to accomplish.
It is supposed to point out some errors in things you wrote, and to answer some questions you raised.
you have 1.8 of your feet out the door.
Does that actually mean anything? If so, what?
you aren’t really looking to have a conversation to resolve the matter.
I am very willing to have a conversation. I am not interested in straitjacketing that conversation with the arbitrary rules you keep trying to impose (“paths forward”), and I am not interested in replacing the (to me, potentially interesting) conversation about probability and science and reasoning and explanation and knowledge with the (to me, almost certainly boring and fruitless) conversation about “paths forward” that you keep trying to replace it with.
why speak at all?
See above. You said some things that I think are wrong, and you asked some questions I thought I could answer. It’s not my problem that you’re unable or unwilling to address any of the actual content of what I say and only interested in meta-issues.
[EDITED because I noticed I wrote “conservation” where I meant “conversation” :-)]
that’s an impasse, created by you. you won’t use the methodology i think is needed for making progress, and won’t discuss the disagreement. a particular example issue is your hostility to the use of references.
Yup. I’m not interested in jumping through the idiosyncratic set of hoops you choose to set up.
that’s an impasse, created by you.
Curiously, I find myself perfectly well able to conduct discussions with pretty much everyone else I encounter, including people who disagree with me at least as much as you do. That would be because they don’t try to lay down a bunch of procedural rules and refuse to engage unless I either follow their rules or get sidetracked onto a discussion of those rules. So … nah, I’m not buying “created by you”. I’m not the one who tried to impose the absurdly over-demanding set of procedural rules on a bunch of other people.
your hostility to the use of references
You just made that up. I am not hostile to the use of references.
(Maybe I objected to something you did that involved the use of references; I don’t remember. But if I did, it wasn’t because I am hostile to the use of references.)
Regardless of the topic, I would say that the article should be easy to read, and relatively self-contained. For example, instead of “go read this book by Popper to understand how he defines X” you could define X using your own words, preferably giving an example (of course it’s okay to also give a quote from Popper’s book).
I don’t even know what the abbreviation is supposed to mean. Seriously.
Generally, I think that the greatest risk is people not even understanding what you are trying to say. If you include links to other pages, I guess most people will not click them. Aim to explain, not to convince, because a failure in explaining is automatically also a failure in convincing.
Maybe it would make sense for you to look at the articles that I believe (with my very unclear understanding of what you are trying to say) may be most relevant to your topic:
1) “Infinite Certainty” (and its mathy sequel “0 And 1 Are Not Probabilities”), and
2) “Scientific Evidence, Legal Evidence, Rational Evidence”.
Because it seems to me that the thing about Popper and induction is approximately this...
Simplicio: “Can science be 100% sure about something?”
Popper: “Nope, that would mean that scientists would never change their minds. But they sometimes do, and that is an accepted part of science. Therefore, scientists are never 100% sure of their theories.”
Simplicio: “Well, if they can’t prove anything with 100% certainty, why don’t we just ignore them completely? It’s just another opinion, right?”
Popper: “Uhm… wait a minute… scientists cannot prove anything, but they can… uhm… disprove things! Yeah, that’s what they do; they make many theories, they disprove most of them, and the one that keeps surviving is the official winner, for the moment. So it’s not like the scientists proved e.g. the theory of relativity, but rather that they disproved all known competing theories, and failed to disprove the theory of relativity (yet).”
To which I would give the following objection:
1) How exactly could it be impossible to prove “X”, and yet possible to disprove “not X”? If scientists are able to falsify e.g. the hypothesis that “two plus two does not equal four”, isn’t it the same as proving the hypothesis that “two plus two equals four”?
I imagine that the typical situation Popper had in mind included a few explicit hypotheses, e.g. A, B, C, and then a remaining option “something else that we did not consider”. So he is essentially saying that scientists can experimentally disprove e.g. B and C, but that’s not the same as proving A. Instead, they proved “either A, or something else that we did not consider, but definitely neither B nor C”. Shortly: B and C were falsified, but A wasn’t proven. And as long as there remains an unspecified category “things we did not consider”, there is always a chance that A is merely an approximate solution, and the real solution is still unknown.
But it doesn’t always have to be like this. Especially in math. But also in real life. Consider this:
According to Popper, not matter how much scientific evidence we have in favor of e.g. theory of relativity, all it needs is one experiment that will falsify it, and then all good scientists should stop believing in it. And recently, theory of relativity was indeed falsified by an experiment. Does it mean we should stop teaching the theory of relativity, because now it was properly falsified?
With the benefit of hindsight, now we know there was a mistake in the experiment. But… that’s exactly my point. The concepts of “proving” and “falsifying” are actually much closer than Popper probably imagined. You may have a hypothesis “H”, and an experiment “E”, but if you say that you falsified “H”, it means you have a hypothesis “F” = “the experiment E is correct and falsifies the theory H”. To falsify H by E is to prove F; therefore if F cannot be scientifically proven, then H cannot be scientifically falsified. Proof and falsification are not two fundamentally different processes; they are actually two sides of the same coin. To claim that the experiment E falsifies the hypothesis H, is to claim that you have a proof that “the experiment E falsifies the hypothesis H”… and the usual interpretation of Popper is that there are no proofs in science.
The answer generally accepted on LessWrong, I guess, is that what really happens in science is that people believe theories with greater and greater probability. Never 100%. But sometimes with a very high probability instead, and for most practical purposes such high probability works almost like certainty. Popper may insist that science is unable to actually prove that moon is not made of cheese, but the fact is that most scientists will behave as if they already had such proof; they are not going to keep an open mind about it.
.
Short version: Popper was right about inability to prove things with 100% certainty, but then he (or maybe just people who quote him) made a mistake of imagining that disproving things is a process fundamentally different from proving things, so you can at least disprove things with 100% certainty. My answer is that you can’t even disprove things with probability 100%, but that’s okay, because the “100%” part was just a red herring anyway; what actually happens in science is that things are believed with greater probability.
You should probably actually read Popper before putting words in his mouth.
You found this claim in a book of his? Or did you read some Wikipedia, or what?
For example, this is a quote from the Stanford Encyclopedia of Philosophy:
You guys still do that whole “virtue of scholarship” thing, or what?
Well, this specific guy has a job and a family, and studying “what Popper believed” is quite low on his list of priorities. If you want to provide a more educated answer to curi, go ahead.
If you have a job and a family, and don’t have time to get into what Popper actually said, maybe don’t offer your opinion on what Popper actually said? That’s just introducing bad stuff into a discussion for no reason.
Wovon man nicht sprechen kann, darüber muss man schweigen.
“The virtue of silence.”
Yeah, good points in both comments. Why don’t you come to my forum where we’ll appreciate them? :)
https://groups.yahoo.com/neo/groups/fallible-ideas/info
I don’t think you and I have much to talk about.
Why?
a. virtue of silence
b. it’s your job to work that out.
What happened to NVC (Non-Violent Communication)? Your comments are purely intended to hurt me.
No. That’s your interpretation. You too have agency to interpret what I say with clarity. You also value bold conjecture. So it’s again your problem to work out what I mean and how to apply it.
I have a thought. Since you are a philosopher, would your valuable time not be better spent doing activities philosophers engage in, such as writing papers for philosophy journals?
Rather than arguing with people on the internet?
If you are here because you are fishing for people to go join your forum, may I suggest that this place is an inefficient use of your time? It’s mostly dead now, and will be fully dead soon.
I have a low opinion of academic philosophers and philosophy journals. I was hoping to find a little intelligence somewhere. I have tried a lot of places. If you have better suggestions than philosophy journals or LW, let me know.
The virtue of silence is one of our 12 virtues here. That you don’t know speaks to ignorance on your part. And perhaps on taking your own advice you might not have made this post at all. And maybe you would have learnt something instead.
Do you even know the name of Popper’s philosophy? Did you read the discussions about this that already happened on LW?
It seems that you’re completely out of your depth, can’t answer me, and don’t want to make the effort to learn. You can’t answer Popper, don’t know of anyone or any writing that can, and are content with that. Your fellows here are the same way. So Popper goes unanswered and you guys stay wrong.
FYI Popper has lots of self-contained writing. Many of his book chapters are adapted from lectures, as you would know if you’d looked. I have written recommendations of which specific parts of Popper are best to read with brief comments on what they are about:
http://fallibleideas.com/books#popper
Everything you say in your post, about Popper issues, demonstrates huge ignorance, but there are no Paths Forward for you to get better ideas about this. The methodology dispute needs to be settled first, but people (including you) don’t want to do that.
I generally agree with your judgment (assuming that the “effort to learn” refers strictly to Popper).
But before I leave this debate, I would like to point out that you (and Ilya) were able to make this (correct) judgment only because I put my cards on the table. I wrote, relatively shortly and without obfuscation, what I believe. Which allowed you to read it and conclude (correctly) “he is just an uneducated idiot”. This allowed a quick resolution; and as a side effect I learned something.
This may or may not be ironically related to the idea of falsification, but at this moment I feel unworthy to comment on that.
Now I see two possible futures, and it is more or less your choice which one will happen:
Option 1:
You may try to describe (a) your beliefs about induction, (b) what you believe are LW beliefs about induction, and (c) why exactly are the supposed LW beliefs wrong, preferably with a specific example of a situation where following the LW beliefs would result in an obvious error.
This is the “high risk / high reward” scenario. It will cost you more time and work, and there is a chance that someone will say “oh, I didn’t realize this before, but now I see this guy has a point; I should probably read more of what he says”, but there is also a chance that someone will say “oh, he got Popper or LW completely wrong; I knew it was not worth debating him”. Which is not necessarily a bad thing, but will probably feel so.
Yeah, there is also the chance that people will read your text and ignore it, but speaking for myself, there are two typical reasons why I would do that: either the text is written in a way that makes it difficult for me to decipher what exactly the author was actually trying to say; or the text depends on links to outside sources but my daily time budget for browsing the internet is already spent. (That is why I selfishly urge you to write a self-contained article using your own words.) But other people may have other preferences. Maybe the best approach would be to add footnotes with references to sources, but make them optional for understanding the gist of the article.
Option 2:
You will keep saying: “guys, you are so confused about induction; you should definitely read Popper”, and people at LW will keep thinking: “this guy is so confused about induction or about our beliefs about induction; he should definitely read the Sequences”, and both sides will be frustrated about how the other side is unwilling to spend the energy necessary to resolve the situation. This is the “play it safe, win nothing” scenario. Also the more likely one.
Last note: Any valid argument made by Popper should be possible to explain without using the word “Popper” in the text. Just like the Pythagorean theorem is not about the person called Pythagoras, but about squares on triangles, and would be equally valid had it instead been discovered or popularized by a completely different person; you could simply call it the “squares-on-triangles theorem” and it would work equally well. (Related in Sequences: “Guessing the teacher’s password”; “Argument Screens Off Authority”.) If something is true about induction, it is true regardless of whether Popper did or didn’t believe it.
when i asked for references to canonical LW beliefs, i was told that would make it a cult, and LW does not have beliefs about anything. since no pro-LW ppl could/would state or link to LW’s beliefs about induction – and were hostile to the idea – i think it’s unreasonable to ask me to. individual ppl at LW vary in beliefs, so how am i supposed to write a one-size-fits-all criticism? LW ppl offer neither a one-size-fits-all pro-induction explanation nor do any of them offer it individually. e.g. you have not said how you think induction works. it’s your job, not mine, to come up with some version of induction which you think actually works – and to do that while being aware of known issues that make that a difficult project.
again, there are methodology issues. unless LW gives targets for criticism – written beliefs anyone will take responsibility for the correctness of (you can do this individually, but you don’t want to – you’re busy, you don’t care, whatever) – then we’re kinda stuck (given also the unwillingness to address CR).
your refusal to use outside sources is asking me to rewrite material. why? some attempt to save time on your part. is that the right way to save time? no. could we talk about the right ways to save time? if you wanted to. but my comments about the right way to save time are in outside sources, primarily written by me, which you therefore won’t read (e.g. the Paths Forward stuff, and i could do the Popper stuff linking only to my own stuff, which i have tons of, but that’s still an outside source. i could copy/paste my own stuff here, but that’s stupid. it’s also awkward b/c i’ve intentionally not rewritten essays already written by my colleagues, b/c why do that? so i don’t have all the right material written by myself personally, on purpose, b/c i avoid duplication.). so we’re kinda stuck there. i don’t want to repeat myself for literally more than the 50th time, for you personally (who hasn’t offered me anything – not even much sign you’ll pay attention, care, keep replying next week, anything), b/c you won’t read 1) Popper 2) Deutsch 3) my own links to myself 4) my recent discussions with other LW ppl where i already rewrote a bunch of anti-induction arguments and wasn’t answered.
as one example of many links to myself that you categorically don’t want to address:
http://curi.us/1917-rejecting-gradations-of-certainty (including the comments)
In the linked article, you seem to treat “refutation by criticism” as something absolute. Either something is refuted by criticism, or it isn’t refuted by criticism; and in either case you have 100% certainty about which one of these two options it is.
There seems to be no space for situations like “I’ve read a quite convincing refutation of something, but I still think there is a small probability there was a mistake in this clever verbal construction”. It either “was refuted” or it “wasn’t refuted”; and as long as you are willing to admit some probability, I guess it by default goes to the “wasn’t refuted” basket.
In other words, if you imagine a variable containing value “X was refuted by criticism”, the value of this variable at some moment switches from 0 to 1, without any intermediate values. I mean, if you reject gradations of certainty, then you are left with a black-and-white situation where either you have the certainty, or you don’t; but nothing in between.
If this is more or less correct, then I am curious about what exactly happens in the moment where the variable actually switches from 0 to 1. Imagine that you are doing some experiments, reading some verbal arguments, and thinking about them. At some moment, the variable is at 0 (the hypothesis was not refuted by criticism yet), and at the very next moment the variable is at 1 (the hypothesis was refuted by criticism). What exactly happened during that last fraction of a second? Some mental action, I guess, like connecting two pieces of a puzzle together, or something like this. But isn’t there some probability that you actually connected those two pieces incorrectly, and maybe you will notice this only a few seconds (or hours, days, years) later? In other words, isn’t the “refutation by criticism” conditional on the probability that you actually understood everything correctly?
If, as I incorrectly said in previous comments, one experiment doesn’t constitute refutation of a hypothesis (because the experiment may be measured or interpreted incorrectly), then what exactly does? Two experiments? Seven experiments? Thirteen experiments and twenty four pages of peer-reviewed scientific articles? Because if you reject “gradations of certainty”, then it must be that at some moment the certainty is not there, and at another moment it is… and I am curious about where and why that moment occurs.
Throwing books at someone is generally known as the “courtier’s reply”. The more text you throw at me, the smaller the probability that I will read it. (Similarly, I could tell you to read Korzybski’s Science and Sanity, and only come back after you mastered it, because I believe—and I truly do—that it is related to some mistakes you are making. Would you?)
There are some situations when things cannot be explained by a short text. For example, if a 10-year-old kid asked me to explain quantum physics to him in less than 1 page of text, I would give up. -- So let me ask you: is Popper’s argument against induction the kind of knowledge that cannot be explained to an intelligent adult using less than 1 page of text, not even in a simplified form?
Sometimes the original form of the argument is not the best one. For example, Gödel spent hundreds of pages proving something that kids today could express as “any mathematical theorem can be stored on computer as a text file, which is kinda a big integer in base 256”. (Took him hundreds of pages, because people didn’t have computers back then.) So maybe the book where Popper explained his idea is similarly not the most efficient way to explain the idea. Also, if an idea cannot be explained without pointing to the original source, that is a bit suspicious. On the other hand, of course, not everyone is skilled at explaining, so sometimes the text written by a skilled author has this advantage.
Summary:
I believe that your belief in “refutation by criticism” as something that either is or isn’t, but doesn’t have “gradations of certainty”, is so fundamentally wrong that it doesn’t make sense to debate further. Because this is the whole point of why probabilistic reasoning, Bayes’ theorem, etc. are so popular on LW. (Probabilities are what you use when you don’t have absolute certainty, and I find it quite ironic that I am explaining this to someone who read orders of magnitude more of Popper than I did.)
The issue here also is Brandolini’s law:
“The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.”
The problem with the “courtier’s reply” is you could always appeal to it, even if Scott Aaronson is trying to explain something about quantum mechanics to you, and you need some background (found in references 1, 2, and 3) to understand what he is saying.
There is a type 1 / type 2 error tradeoff here. Ignoring legit expert advice is bad, but being cowed by an idiot throwing references at you is also bad.
As usual with tradeoffs like these, one has to decide on a policy that is willing to tolerate some of one type of error to keep the error you care about to some desired level.
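A toy sketch of such a policy in Python; the scores, the tolerance, and the whole procedure are made up for illustration, and this is not anyone’s actual method:

```python
# Toy illustration of the type 1 / type 2 tradeoff: cap the rate of
# ignoring legit experts, then accept whatever rate of being cowed by
# idiots that cap implies. All numbers are invented.

def choose_threshold(expert_scores, idiot_scores, max_ignore_rate=0.05):
    """Pick the highest 'listen if score >= t' threshold that ignores
    at most max_ignore_rate of the legit experts."""
    for t in sorted(set(expert_scores + idiot_scores), reverse=True):
        ignored = sum(s < t for s in expert_scores) / len(expert_scores)
        if ignored <= max_ignore_rate:
            cowed = sum(s >= t for s in idiot_scores) / len(idiot_scores)
            return t, ignored, cowed

# hypothetical credibility scores for people who are / aren't worth hearing
experts = [0.9, 0.8, 0.75, 0.6, 0.4]
idiots = [0.7, 0.5, 0.3, 0.2, 0.1]
print(choose_threshold(experts, idiots, max_ignore_rate=0.2))
# -> (0.6, 0.2, 0.2): tolerating 20% ignored experts costs us
#    listening to 20% of the idiots.
```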
I think a good heuristic for deciding who is an expert and who is an idiot with references is credentialism. But credentialism has a bad brand here, due to a “love affair with amateurism” LW has. One of the consequences of this love affair is a lot of folks here make the above trade off badly (in particular they ignore legit advice to read way too frequently).
Here’s a tricky example of judging authority (credentials). You say listen to SA about QM. Presumably also listen to David Deutsch (DD), who knows more about QM than SA does. But what about me? I have talked with DD about QM and other issues at great length and I have a very accurate understanding of what things I can say about QM (and other matters) that are what DD would say, and when I don’t know something or disagree with DD. (I have done things like debate physics, with physicists, many times, while being advised by DD and him checking all my statements so I find out when I have his views right or not.) So my claims about QM are about as good as DD’s, when I make them – and are therefore even better than SA’s, even though I’m not a physicist. Sorta, not exactly. Credentials are complicated and such a bad way to judge ideas.
What I find most people do is decide what they want to believe or listen to first, and then find an expert who says it second. So if someone doesn’t want to listen, credentials won’t help, they’ll just find some credentials that go the other way. DD has had the same experience repeatedly – people aren’t persuaded due to his credentials. That’s one of the main reasons I’m here instead of DD – his credentials wouldn’t actually help with getting people here to listen/understand. And, as I’ve been demonstrating and DD and I already knew, arguments aren’t very effective here either (just like elsewhere).
And I, btw, didn’t take things on authority from DD – I asked questions and brought up doubts and counter-arguments. His credentials didn’t matter to me, but his arguments did. Which is why he liked talking with me!
ROFL
And here I was, completely at loss as to why David Deutsch doesn’t hang out at LW… But now we know.
you’re mean and disruptive. at least you’re demonstrating why credentials are a terrible way to address things, which is my point. you just assume the status of various credentials without being willing to think about them, let alone debate them (using more credentials (regress), or perhaps arguments? but if arguments, why not just use those in the first place?). so for you, like most people, using credentials = using bias.
Woo, kindergarten flashbacks!
Am I? Please demonstrate.
What do you mean by bias? In statistics bias is one of those things you trade off against other things (like variance). Being unbiased is not always optimal.
Yeah, credentials are a poor way of judging things. But that first paragraph doesn’t show remotely what you think it does.
Some of David Deutsch’s credentials that establish him as a credible authority on quantum mechanics: He is a physics professor at a leading university, a Fellow of the Royal Society, is widely recognized as a founder of the field of quantum computation, and has won some big-name prizes awarded to eminent scientists.
Your credentials as a credible authority on quantum mechanics: You assure us that you’ve talked a lot with David Deutsch and learned a lot from him about quantum mechanics.
This is not how credentials work. Leaving aside what useful information (if any) they impart: when it comes to quantum mechanics, David Deutsch has credentials and you don’t.
It’s not clear to me what argument you’re actually making in that first paragraph. But it seems to begin with the claim that you have good credentials when it comes to quantum mechanics for the reasons you recite there, and that’s flatly untrue.
They are not, though. It’s standard “what LW calls ‘Bayes’ and what I call ‘reasoning under uncertainty’”: you condition on things associated with the outcome, since those things carry information. Let the outcome O be “having a clue” and C be “has the credential.” If p(O | C) > p(O), then your credence in O should be computed after conditioning on C, on pain of irrationality. Specifically, the type of irrationality where you leave information on the table.
You might say “oh, I heard about how argument screens authority.” This is actually not true though, even by “LW Bayesian” lights, because you can never be certain you got the argument right (or the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn’t true.
It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel crazy thing I am proposing, this is bog standard.
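A minimal numeric sketch of this conditioning, with every probability invented purely for illustration:

```python
# If p(O | C) > p(O), learning C should raise your credence in O.
# All numbers below are made up for illustration only.

p_O = 0.10                 # prior: this person has a clue
p_C_given_O = 0.60         # a clued person tends to have the credential
p_C_given_notO = 0.05      # a clueless person rarely does

p_C = p_C_given_O * p_O + p_C_given_notO * (1 - p_O)
p_O_given_C = p_C_given_O * p_O / p_C   # Bayes' theorem

print(f"p(O) = {p_O:.3f},  p(O | C) = {p_O_given_C:.3f}")
# p(O | C) ~= 0.571 > 0.10: the credential carries information,
# so ignoring it leaves information on the table.
```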
The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of “experts” writ large, except for an explicitly enumerated subset (perhaps ones EY or other “recognized community thought leaders” liked).
This is a part of community DNA, starting with EY’s stuff, and Luke’s “philosophy is a diseased discipline.”
That is crazy.
Actually, I somewhat agree, but being an agreeable sort of chap I’m willing to concede things arguendo when there’s no compelling reason to do otherwise :-), which is why I said “Yeah, credentials are a poor way of judging things” rather than hedging more.
More precisely: I think credentials very much can give you useful information, and I agree with you that argument does not perfectly screen off authority. On the other hand, I agree with prevailing LW culture (perhaps with you too) that credentials typically give you very imperfect information and that argument does somewhat screen off authority. And I suggest that how much credentials tell you may vary a great deal by discipline and by type of credentials. Example: the Pope has, by definition, excellent credentials of a certain kind. But I don’t consider him an authority on whether any sort of gods exist because I think the process that gave him the credentials he has isn’t sufficiently responsive to that question. (On the other hand, that process is highly responsive to what Catholic doctrine is and I would consider the Pope a very good authority on that topic even if he didn’t have the ability to control that doctrine as well as report it.)
It seems to me that e.g. physics has norms that tie its credentials pretty well (though not perfectly) to actual understanding and knowledge; that philosophy doesn’t do this so well; that theology does it worse; that homeopathy does it worse still. (This isn’t just about the moral or cognitive excellence of the disciplines in question; it’s also that it’s harder to tell whether someone’s any good or not in some fields than in others.)
I guess the way I would slice disciplines is like this:
(a) Makes empirical claims (credences change with evidence, or falsifiable, or [however you want to define this]), or has universally agreed rules for telling good from bad (mathematics, theoretical parts of fields, etc.)
(b) Does not make empirical claims, and has no universally agreed rules for telling good from bad.
Some philosophy is in (a) and some in (b). Most statistics is in (a), for example.
Re: (a), most folks would need a lot of study to evaluate claims, typically at the graduate level. So the best thing to do is get the lay of the land by asking experts. Experts may disagree, of course, which is valuable information.
Re: (b), why are we talking about (b) at all?
i think this is false, and is an indication of using the wrong methods to refute bullshit – the right methods reuse refutations of categories of bad ideas. do you have some comprehensive argument that it must be true?
i find it disturbing how much people here are in favor of judging ideas by sources instead of content – credentialism. that’s pretty pure irrationality. also debating which credentials are worth how much is a bad way to approach discussions, but it’s totally non-obvious and controversial which credentials are how good even for standard credentials like PhDs from different universities.
Is English your first language?
The context matters. If you are trying to figure out how X actually works you probably should go read or at least scan the relevant books even if no one is throwing references at you. On the other hand, if you’re just procrastinating by engaging in a Yet Another Internet Argument with zero consequences for your life, going off to read the references is just a bigger waste of time.
I think there’s something really wrong when your reaction to disagreement is to think there’s no point in further discussion. That leaves me thinking you’re a bad person to discuss with. Am I mistaken?
Making mistakes isn’t random or probabilistic. When you make a judgement, there is no way to know some probability that your judgement is correct. Also, if judgements need probabilities, won’t your judgement of the probability of a mistake have its own probability? And won’t that judgement also have a probability, causing an infinite regress of probability assignments?
Mistakes are unpredictable. At least some of them are. So you can’t predict (even probabilistically) whether you made one of the unpredictable types of mistakes.
What you can do, fallibly and tentatively, is make judgements about whether a critical argument is correct or not. And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn’t solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn’t).
That’d work fine if they knew everything or nothing about induction. However, it’s highly problematic when they already have thousands of pages worth of misconceptions about induction (some of which vary from the next guy’s misconceptions). The misconceptions include vague parts they don’t realize are vague, non sequiturs they don’t realize are non sequiturs, confusion about what induction is, and other mistakes plus cover up (rationalizations, dishonesty, irrationality).
Induction would be way easier to explain to a 10 year old in a page than to anyone at LW, due to lack of bias and prior misconceptions. I could also do quantum physics in a page for a ten year old. QM is easy to explain at a variety of levels of detail, if you don’t have to include anything to preemptively address pre-existing misconceptions, objections, etc. E.g., in a sentence: “Science has discovered there are many things your eyes can’t see, including trillions of other universes with copies of you, me, the Earth, the sun, everything.”
It’s like you believe “A” and “A implies B” and “B implies C”, while I believe “non-A” and “non-A implies Q”. The point we should debate is whether “A” or “non-A” is correct; because as long as we disagree on this, of course each of us is going to believe a different chain of things (one starting with “A”, the other starting with “non-A”).
I mean, if I hypothetically believed that absolute certainty is possible and relatively simple to achieve, of course I would consider probabilistic reasoning an interesting but inferior form of reasoning. We wouldn’t be having this debate. And if you accepted that certainty is impossible (even certainty of refutation), then probability would probably seem like the next best thing.
Okay, imagine this: I make a judgment that feels completely correct to me, and I am not aware of any possible mistakes. But of course I am a fallible human; maybe I actually made a mistake somewhere, maybe even an embarrassing one.
Scenario A: I made this judgement at 10 AM, after having a good night of sleep.
Scenario B: I made this judgement at 2 AM, tired and sleep deprived.
Does it make sense to say that the probability of a mistake in judgment B is higher than the probability of a mistake in judgment A? In both cases I believe at the moment that the judgment is correct. But in the latter case my ability to notice a possible mistake is smaller.
So while I couldn’t make an exact calculation like “the probability of a mistake is exactly 4.25%”, I can still be aware that there is some probability of a mistake, and sometimes even estimate that the probability in one situation is greater than in another situation. Which suggests that there is a number, I just don’t know it. (But if we could somehow repeat the whole situation a million times, and observe that I was wrong in 42,500 cases, that would suggest that the probability of a mistake is about 4.25%. Unlikely in real life, but possible as a hypothesis.)
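A sketch of that hypothetical million-times rerun in Python; the “true” mistake rate is an assumption chosen to match the 4.25% example:

```python
import random

# Hypothetical: rerun the same judgement situation a million times and
# use the observed error frequency to estimate the unknown probability
# of a mistake. The 'true' rate is invented to match the 4.25% example.

random.seed(0)
TRUE_MISTAKE_RATE = 0.0425   # unknowable in real life; assumed here
N = 1_000_000

mistakes = sum(random.random() < TRUE_MISTAKE_RATE for _ in range(N))
print(f"observed mistakes: {mistakes} / {N} = {mistakes / N:.4f}")
# Prints a frequency close to 0.0425 -- 'there is a number,
# I just don't know it' outside the simulation.
```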
It definitely will. Notice that those are two different things: (a) the probability that I am wrong, and (b) my estimate of the probability that I am wrong.
Yes, what you point out is a very real and very difficult problem. Estimating probabilities in a situation where everything (including our knowledge of ourselves, and even our knowledge of math itself) is… complicated. Difficult to do, and even more difficult to justify in a debate.
This may even be a hard limit on human certainty. For example, if at every moment of time there is a 0.000000000001 probability that you will go insane, that would mean you can never be sure about anything with probability greater than 0.999999999999, because there is always the chance that however logical and reasonable something sounds to you at the moment, it’s merely because you became insane at this very moment. (The cause of insanity could be e.g. a random tumor or a blood vessel breaking in your brain.) Even if you made a system more reliable than a human, for example a system maintained by a hundred humans, where if anyone goes insane, the remaining ones will notice it and fix the mistake, the system itself could achieve higher certainty, but you, as an individual reading its output, could not. Because there would always be the chance that you just went insane, and what you believe you are reading isn’t actually there.
Relevant LW article: “Confidence levels inside and outside an argument”.
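The arithmetic of that ceiling, sketched in Python; the per-moment figure comes from the comment above, while the hour of second-by-second reasoning steps is an invented example:

```python
# If each moment carries an independent 1e-12 chance of having just
# gone insane, no conclusion reached at a given moment can be trusted
# beyond 1 - 1e-12, and a chain of n moments compounds the risk.

p_insane = 1e-12            # per-moment probability, as in the comment

ceiling_single = 1 - p_insane
print(f"ceiling for one moment: {ceiling_single:.12f}")

# compounded over, say, an hour of second-by-second reasoning steps:
n = 3600
ceiling_chain = (1 - p_insane) ** n
print(f"ceiling for {n} moments: {ceiling_chain:.12f}")
```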
Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify a confidence interval, it is ultimately a probabilistic answer. (And saying “p < 0.05” is also just an arbitrary number; why not “p < 0.001”?)
You can have a “binary” solution only as long as you remain in the realm of words. (“Socrates is a human. All humans are mortal. Therefore Socrates is mortal. Certainty of argument: 100%.”) Even there, the longer the chain of words you produce, the greater the chance that you made a mistake somewhere. I mean, if you imagine a syllogism going over a thousand pages, ultimately proving something, you would probably want to check the whole book at least two or three times; which means you wouldn’t feel 100% certainty after the first reading. But the greater problems will appear on the boundary between words and reality. (Theory: “the energy of the particle X is 0.04 units”; the experimental device displays 0.041. Also, experimental devices sometimes break, and your assistant sometimes records the numbers incorrectly.)
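A minimal sketch of how per-step confidence compounds over a long chain; the 0.999 per-step figure is invented for illustration:

```python
# Even 99.9% confidence in each inference step decays fast when the
# steps multiply -- which is why a thousand-page syllogism earns
# little trust on a single reading.

per_step_reliability = 0.999      # assumed confidence in each step
steps = 1000                      # roughly one step per page

chain_reliability = per_step_reliability ** steps
print(f"P(whole chain correct) = {chain_reliability:.3f}")   # ~0.368
```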
Fair point.
(BTW, I’m going offline for a week now; for reasons unrelated to LW or this debate.)
EDIT:
For the record: Of course there are things where I consider the probability to be so high or so low that I treat them for all practical purposes as 100% or 0%. If you ask me e.g. whether gravity exists, I will simply say “yes”; I am not going to role-play Spock and give you a number with 15 decimal places. I wouldn’t even know exactly how many nines are there after the decimal dot. (But again, there is a difference between “believing there is a probability” and “being able to tell the exact number”.)
The most obvious impact of probabilistic reasoning on my behavior is that I generally don’t trust long chains of words. Give me 1000 pages of syllogisms that allegedly prove something, and my reaction will be “the probability that somewhere in that chain is an error is so high that the conclusion is completely unreliable”. (For example, I am not even trying to understand Hegel. Yeah, there are also other reasons to distrust him specifically, but I would not trust such a long chain of logic without experimental confirmation of intermediate results from any author.)
It may or may not make sense, depending on terminology and nuances of what you mean, for some types of mistakes. Some categories of error have some level of predictability b/c you’re already familiar with them. However, it does not make sense for all types of mistakes. There are some mistakes which are simply unpredictable, which you know nothing about in advance. Perhaps you can partly, in some way, see some mistakes coming – but that doesn’t work in all cases. So you can’t figure out any overall probability of some judgement being a mistake, because at most you have a probability which addresses some sources of mistakes but others are just unknown (and you can’t combine “unknown” and “90%” to get an overall probability).
I am a fallibilist who thinks we can have neither 100% certainty nor 90% certainty nor 50% certainty. There’s always framework questions too – e.g. you may say according to your framework, given your context, then you’re unlikely (20%) to be mistaken (btw my main objections remain the same if you stop quantifying certainty with numbers). But you wouldn’t know the probability your framework has a mistake, so you can’t get an overall probability this way.
if you’re already aware that your system doesn’t really work, due to this regress problem, why does no one here study the philosophy which has a solution to this problem? (i had the same kind of issue in discussions with others here – they admitted their viewpoint has known flaws but stuck to it anyway. knowing they’re wrong in some way wasn’t enough to interest them in studying an alternative which claims not to be wrong in any known way – a claim they didn’t care to refute.)
the hard limit is we don’t have certainty, we’re fallible. that’s it. what we have, knowledge, is something else which is (contra over 2000 years of philosophical tradition) different than certainty.
you have to make a decision about what standards of evidence you will use for what purpose, and why that’s the right thing to do, and expose that meta decision to criticism.
the epistemology issues we’re talking about are prior to the physics issues, and don’t involve that kind of measurement error issue. we can talk about measurement error after resolving epistemology. (the big picture is that probabilities and statistics have some use in life, but they aren’t probabilities of truth/knowledge/certainty, and their use is governed by non-probabilistic judgements/arguments/epistemology.)
see http://curi.us/2067-empiricism-and-instrumentalism and https://yesornophilosophy.com
no, a problem can and should specify criteria of what the bar is for a solution to it. lots of the problems ppl have are due to badly formulated (ambiguous) problems.
i do not value certainty as a feeling. i’m after objective knowledge, not feelings.
That isn’t what Viliam said, and I suggest that here you’re playing rhetorical games rather than arguing in good faith. It’s as if someone took your fallibilism and your rejection of probability, and said “Since you admit that you could well be wrong and you have no idea how likely it is that you’re wrong, why should we take any notice of what you say?”.
You mean “the philosophy which claims to have a solution to this problem”. (Perhaps it really does, perhaps not; but all someone can know in advance of studying it is that it claims to have one.)
Anyway, I think the answer depends on what you mean by “study”. If you mean “investigate at all” then the answer is that several people here have considered some version of Popperian “critical rationalism”, so your question has a false premise. If you mean “study in depth” then the answer is that by and large those who’ve considered “critical rationalism” have decided after a quick investigation that its claim to have the One True Answer to the problem of induction is not credible enough for it to be worth much further study.
My own epistemic state on this matter, which I mention not because I have any particular importance but because I know my own mind much better than anyone else’s, is that I’ve read a couple of Deutsch’s books and some of his other writings and given Deutsch’s version of “critical rationalism” hours, but not weeks, of thought, and that since you turned up here I’ve given some further attention to your version; that c.r. seems to me to contain some insights and some outright errors; that I do not find it credible that c.r. “solves” the problem of getting information from observations in any strong sense; that I find the claims made by some c.r. proponents that (e.g.) there is no such thing as induction, or that it is a mistake to assign probabilities to statements that aren’t explicitly about random events, even less credible; and that the “return on investment” of further in-depth investigation of Popper’s or Deutsch’s ideas is likely worse than that of other things I could do with the same resources of time and brainpower, not because they’re all bad ideas but because I think I already grasp them well enough for my purposes.
A good epistemology needs to deal with the fact that observations have errors in them, and it makes no sense to try to “resolve epistemology” in a way that ignores such errors. (Perhaps that isn’t what you meant by “we can talk about measurement error after resolving epistemology”, in which case some clarification would be a good idea.)
You say that as if you expect it to be a new idea around here, but it isn’t. See e.g. this old LW article. For the avoidance of doubt, I’m not claiming that what that says about knowledge and certainty is the same as you would say—it isn’t—nor that what it says is original to its author—it isn’t. Just that distinguishing knowledge from certainty is something we’re already comfortable with.
You would equally not be entitled to 100% certainty, nor would you have any other sort of 100% certainty you might regard as more objective and less dependent on feelings. (Because in the epistemic situation Viliam describes, it would be very likely that at least one error had been made.)
Of course, in principle you admit exactly this: after all, you call yourself a fallibilist. But, while you admit the possibility of error and no doubt actually change your mind sometimes, you refuse to try to quantify how error-prone any particular judgement is. I think this is “obviously” a mistake (i.e., obviously when you look at things rightly, which may not be an easy thing to do) and I think Viliam probably thinks the same.
(And when you complain above of an infinite regress, it’s precisely about what happens when one tries to quantify these propensities-to-error, and your approach avoids this regress not by actually handling it any better but by simply declaring that you aren’t going to try to quantify. That might be OK if your approach handled such uncertainties just as well by other means, but it doesn’t seem to me that it does.)
you haven’t cared to try to write down, with permalink, any errors in CR that you think could survive critical scrutiny.
by study i mean look at it enough to find something wrong with it – a reason not to look further – or else keep going if you see no errors. and then write down what the problem is, ala Paths Forward.
it’s dishonest (or ignorant?) to refer to Popper, Deutsch and myself (as well as Miller, Bartley, and more or less everyone else) as “some c.r. proponents”.
no. i have tried and found it’s impossible, and found out why (arguments u don’t wish to learn).
anyway i don’t see what your comment is supposed to accomplish. you have 1.8 of your feet out the door. you aren’t really looking to have a conversation to resolve the matter. why speak at all?
Your understanding of “resolve the matter” is very peculiar—as far as I can see it means “go read what I tell you to read so that you will agree with me”.
I notice that you show considerable lack of flexibility: you follow a certain pattern of interaction which, to no great surprise, tends to end up in the same place, you get nowhere and accuse people of bad faith and unwillingness to learn.
You’ve been hanging around the place for a few weeks by now—how about you, did you learn anything? Or this is strictly a bring-civilization-to-the-savages expedition from your point of view?
Correct: I am not interested in jumping through the idiosyncratic set of hoops you choose to set up.
Why?
Don’t wish to learn them? True enough. I don’t see your relationship to me as being that of teacher to learner. I’d be interested to hear what they are, though, if you could drop the superior attitude and try having an actual discussion.
It is supposed to point out some errors in things you wrote, and to answer some questions you raised.
Does that actually mean anything? If so, what?
I am very willing to have a conversation. I am not interested in straitjacketing that conversation with the arbitrary rules you keep trying to impose (“paths forward”), and I am not interested in replacing the (to me, potentially interesting) conversation about probability and science and reasoning and explanation and knowledge with the (to me, almost certainly boring and fruitless) conversation about “paths forward” that you keep trying to replace it with.
See above. You said some things that I think are wrong, and you asked some questions I thought I could answer. It’s not my problem that you’re unable or unwilling to address any of the actual content of what I say and only interested in meta-issues.
[EDITED because I noticed I wrote “conservation” where I meant “conversation” :-)]
you have openly stated your unwillingness to
1) do PF
2) discuss PF or other methodology
that’s an impasse, created by you. you won’t use the methodology i think is needed for making progress, and won’t discuss the disagreement. a particular example issue is your hostility to the use of references.
the end.
given your rules, including the impasse above.
Yup. I’m not interested in jumping through the idiosyncratic set of hoops you choose to set up.
Curiously, I find myself perfectly well able to conduct discussions with pretty much everyone else I encounter, including people who disagree with me at least as much as you do. That would be because they don’t try to lay down a bunch of procedural rules and refuse to engage unless I either follow their rules or get sidetracked onto a discussion of those rules. So … nah, I’m not buying “created by you”. I’m not the one who tried to impose the absurdly over-demanding set of procedural rules on a bunch of other people.
You just made that up. I am not hostile to the use of references.
(Maybe I objected to something you did that involved the use of references; I don’t remember. But if I did, it wasn’t because I am hostile to the use of references.)