Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.
If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments—it’s that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don’t think this is necessarily a flaw with your website—presumably it was not designed first and foremost as a response to Bayesianism—but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.
To be clear, what I am looking for is a statement of the form: “Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage.” Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been “wrong” where Popper would be clearly “right” at any historical point would be good enough to argue about.
statement of the form: “Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage I would not believe such a thing because of my improved epistemology.
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren’t because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn’t mean we agree or have nothing to discuss.
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn’t solve your problem, whereas my epistemology did solve mine.
It doesn’t take Popperian epistemology to learn social fluency. I’ve learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
That’s a claim that only makes sense in certain epistemological systems...
I don’t have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Hmm? I’m not sure whom you mean by “we”? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable, then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem ok with this sort of thing. That’s not to say I understand how they think it is implied/ok in a Popperian framework).
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
I have considered that. Popperian epistemology helps with these issues more. I don’t want to argue about that now because it is an advanced topic and you don’t know enough about my epistemology to understand it (correct me if I’m wrong), but I thought the example could help make a point to the person I was speaking to.
If I don’t understand your explanation and am interested in it, I’m prepared to do the research in order to understand it, but if you can only assert why your epistemology should result in better social learning and not demonstrate that it does so for people in general, I confess that I will probably not be interested enough to follow up.
I will note though, that stating the assumption that another does not understand, but leaving them free to correct you, strikes me as a markedly worse way to minimize conflict and aggression than asking if they have the familiarity necessary to understand the explanation.
You could begin by reading
http://fallibleideas.com/emotions
And the rest of the site. If you don’t understand any connections between it and Popperian epistemology, feel free to ask.
I’m not asking you to be interested in this, but I do think you should have some interest in rival epistemologies.
I studied philosophy as part of a double major (which I eventually dropped because of the amount of confusion and sophistry I was being expected to humor), and my acquaintance with Popper, although not as deep as yours, I’m sure, precedes my acquaintance with Bayes. Although it may be that others whom I have not read have presented and refined his ideas better, Popper’s philosophy did not particularly impress me, whereas the ideas presented by Bayesianism immediately struck me as deserving of further investigation. It’s possible that I haven’t given Popper a fair shake, but it’s not for lack of interest in other epistemologies that I’ve come to identify as Bayesian.
I wouldn’t describe the link as unhelpful, exactly, but I also wouldn’t say that it’s among the best advice for controlling one’s emotions that I’ve received (this was a process I put quite a bit of effort into learning, and I’ve received a fair amount), so I don’t see how it functions as a demonstration of the superiority of Popperian epistemology.
You say Popper didn’t impress you. Why not? Did you have any criticism of his ideas? Any substantive argument against them?
Do you have any criticism of the linked ideas? You just said it doesn’t seem that good to you, but you didn’t give any kind of substantive argument.
With regards to the link, it’s simply that it’s less in depth than other advice I’ve received. There are techniques that it doesn’t cover in meaningful detail, like manipulation of cognitive dissonance (habitually behaving in certain ways to convince yourself to feel certain ways), or recognition of various cognitive biases which will alter our feelings. It’s not that bad as an introduction, but it could do a better job opening up connections to specific techniques to practice or biases to be aware of.
Popper didn’t impress me because it simply wasn’t apparent to me that he was establishing any meaningful improvements to how we go about reasoning and gaining information. Critical rationalism appeared to me to be a way of looking at how we go about the pursuit of knowledge, but to quote Feynman, “Philosophy of science is about as useful to scientists as ornithology is to birds.” It wasn’t apparent to me that trying to become more Popperian should improve the work of scientists at all; indeed, in practice it is my observation that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence are more likely to make significant blunders.
Attempting to become more Bayesian in one’s epistemology, on the other hand, had immediately apparent benefits with regards to conducting science well (which are discussed extensively on this site).
I had criticisms of Popper’s arguments to offer, and could probably refresh my memory of them by revisiting his writings, but the deciding factor which kept me from bothering to read further was that, like other philosophers of science I had encountered, it simply wasn’t apparent that he had anything useful to offer, whereas it was immediately clear that Bayesianism did.
Feynman meant normal philosophers of science. Including, I think, Bayesians. He didn’t mean Popper, who he read and appreciated. Feynman himself engaged in philosophy of science, and published it. It’s academic philosophers, of the dominant type, that he loathed.
that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence
That’s not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn’t actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what “support” means (like in the phrase “supporting evidence”) and tell me how support differs from consistency.
The biggest thing Popper has to offer is the solution to justificationism, which has plagued almost everyone’s thinking since Aristotle. You won’t know quite what that is because it’s an unconscious bias for most people. In short it is the idea that theories should be supported/justified/verified/proven, or whatever, whether probabilistically or not. A fraction of this is: he solved the problem of induction. Genuinely solved it, rather than simply giving up and accepting regress/foundations/circularity/whatever.
That’s not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn’t actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what “support” means (like in the phrase “supporting evidence”) and tell me how support differs from consistency.
I’ve read his arguments for this; I simply wasn’t convinced that accepting it in any way improved scientific conduct.
“Support” would be data in light of which the subjective likelihood of a hypothesis is increased. If consistency does not meaningfully differ from this with respect to how we respond to data, can you explain why it is more practical to think about data in terms of consistency than support?
I’d also like to add that I do know what justificationism is, and your tendency to openly assume deficiencies in the knowledge of others is rather irritating. I normally wouldn’t bother to remark upon it, but given that you offered a superior grasp of socially effective debate conduct as evidence of the strength of your epistemology, I feel the need to point out that I don’t feel like you’re meeting the standards of etiquette I would expect of most members of Less Wrong.
I’ve read his arguments for this; I simply wasn’t convinced that accepting it in any way improved scientific conduct.
Yet again you disagree with no substantive argument. If you don’t have anything to say, why are you posting?
can you explain why it is more practical to think about data in terms of consistency than support?
Well, consistency is good as far as it goes. If we see 10 white swans, we should reject “all swans are black” (yes, even this much depends on some other stuff). Consistency does the job without anything extraneous or misleading.
The support idea claims that sometimes evidence supports one idea it is consistent with more than another. This isn’t true, except in special cases which aren’t important.
The way Popper improves on this is by noting that there are always many hypotheses consistent with the data. Saying their likelihood increases is pointless. It does not help deal with the problem of differentiating between them. Something else, not support, is needed. This leaves the concept of support with nothing useful to do, except be badly abused in sloppy arguments. (I have in mind arguments I’ve seen elsewhere. Lots of them. What people do is they find some evidence, and some theory it is consistent with, and they say the theory is supported so now they have a strong argument or whatever. And they are totally selective about this. You try to tell them, “Well, this other theory is also consistent with the data, so it’s supported just as much, right?” and they say no, theirs fits the data better, so it’s supported more. But you ask what the difference is, and they can’t tell you, because there is no answer. The idea that a theory can fit the data better than another, when both are consistent with the data, is a mistake (again, there are some special cases that don’t matter in practice).)
The support idea claims that sometimes evidence supports one idea it is consistent with more than another. This isn’t true, except in special cases which aren’t important.
Suppose I ask a woman if she has children. She says no.
This is supporting evidence for the hypothesis that she does not have children; it raises the likelihood from my perspective that she is childless.
It is entirely consistent with the hypothesis that she has children; she would simply have to be lying.
So it appears to me that in this case, whatever arguments you might make regarding induction, viewing the data in terms of consistency does not inform my behavior as well.
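To make that concrete, here is a minimal sketch of the update I have in mind, with made-up numbers for the prior and for how often someone would answer “no” in each case; only the structure of the calculation matters:
```python
# Minimal sketch of the Bayesian update described above.
# All numbers are made up for illustration; only the structure matters.

prior_childless = 0.5             # assumed prior probability that she has no children
p_no_given_childless = 0.99       # assumed: a childless woman almost always answers "no"
p_no_given_has_children = 0.05    # assumed: a mother rarely lies and answers "no"

# Total probability of hearing "no":
# P(E) = P(E|H) * P(H) + P(E|~H) * P(~H), where E = "she says no", H = "she is childless"
p_no = (p_no_given_childless * prior_childless
        + p_no_given_has_children * (1 - prior_childless))

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
posterior_childless = p_no_given_childless * prior_childless / p_no

print(round(posterior_childless, 3))  # ~0.952: her answer supports the hypothesis,
                                      # even though "she has children and lied" stays consistent.
```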
This is the standard story. It is nothing but an appeal to intuition (and/or unstated background knowledge, unstated explanations, unstated assumptions, etc). There is no argument for it and there never has been one.
Refuting this common mistake is something important Popper did.
Try reading your post again. You simply assumed that her not having children is more likely. That is not true from the example presented, without some unstated assumptions being added. There is no argument in your post. That makes it very difficult to argue against because there’s nothing to engage with.
It could go either way. You know it could go either way. You claim one way fits the data better, but you don’t offer any rigorous guidelines (or anything else) for figuring out which way fits better. What are the rules to decide which consistent theories are more supported than others?
Of course it could go either way. But if I behaved in everyday life as if it were equally likely to go either way, I would be subjecting myself to disaster. For practical purposes it has always served me better to accept that certain hypotheses that are consistent with the available data are more probable than others, and while I cannot prove that this makes it more likely that it will continue to do so in the future, I’m willing to bet quite heavily that it will.
If Popper’s epistemology does not lead to superior results to induction, and at best, only reduces to procedures that perform as well, then I do not see why I should regard his refutation of induction as important.
Support is the same thing as more consistent with that hypothesis than with the alternatives (P(E|H) > P(E|~H)).
What is “more consistent”?
Consistent = does not contradict. But you can’t not-contradict more. It’s a boolean issue.
Then you have your answer: Support is non-boolean. I don’t think a boolean concept of consistency of observations with anything makes sense, though (consistent would mean P(E|H) > 0, but observations never have a probability of 0 anyway, so every observation would be consistent with everything, or you’d need an arbitrary cut-off. P(observe black sheep|all sheep are white) > 0, but is very small).
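To spell the distinction out in symbols, here is a small sketch of the standard Bayesian usage I have in mind (whether this notion of support is legitimate is, of course, exactly what is under dispute):
```latex
% A sketch of the standard Bayesian usage (the last equivalence assumes 0 < P(H) < 1):
\begin{align*}
  \text{$E$ is consistent with $H$} &\iff P(E \mid H) > 0 && \text{(boolean)}\\
  \text{$E$ supports $H$} &\iff P(E \mid H) > P(E \mid \lnot H) \iff P(H \mid E) > P(H) && \text{(a matter of degree)}
\end{align*}
```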
Some theories predict that some things won’t happen (0 probability). I consider this kind of theory important.
You say I have my answer, but you have not answered. I don’t think you’ve understood the problem. To try to repeat myself less, check out the discussion here, currently at the bottom:
http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3urr?context=3
Some theories predict that some things won’t happen (0 probability). I consider this kind of theory important.
But they don’t predict that you won’t hallucinate, or misread the experimental data, or whatever. Some things not happening doesn’t mean some things won’t be observed.
You say I have my answer, but you have not answered.
You asked how support differs from consistency. Boolean vs. real number is a difference. Even if you arbitrarily decide that real numbers are not allowed and only booleans are, that doesn’t make it inconsistent on the part of those who use real numbers to distinguish their use of real numbers from your use of booleans.
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference?
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can’t listen to an mp3 right now.
My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren’t the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn’t? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren’t because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn’t mean we agree or have nothing to discuss.
I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don’t yet understand exactly what you mean. Would you never take a bet? Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
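To be concrete about what I mean by weighing two uncertain outcomes, here is a minimal sketch with invented probabilities and payoffs; nothing hinges on the particular numbers:
```python
# Toy illustration of weighing two uncertain outcomes; all numbers are invented.

p_good = 0.7        # assumed probability the action turns out well
payoff_good = 100   # assumed benefit if it turns out well
payoff_bad = -200   # assumed cost if it turns out badly

expected_value_act = p_good * payoff_good + (1 - p_good) * payoff_bad  # = 10
expected_value_do_nothing = 0

# A Bayesian "takes the bet" exactly when the expected value of acting beats not acting.
print("act" if expected_value_act > expected_value_do_nothing else "do nothing")
```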
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians’ willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
Could you explain how a Popperian disputes such an assertion? [(50% probability of humanity surviving the next century)]
e.g. by pointing out that whether we do or don’t survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can’t make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There’s no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. “yes”. Actually, they make many conjectures, e.g. also “no”. Then they criticize the conjectures, and make more conjectures. So for example I would criticize “yes” for not providing enough explanatory detail about why it’s a good idea. Thus “yes” would be rejected, but a variant of it like “yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites” would be better. If I didn’t understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the “no” answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
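If a sketch helps, here is a toy rendering of that procedure; the conjectures and criticisms are stand-ins I made up, and note that nothing in it tallies up support or probabilities:
```python
# Toy sketch of conjectures and refutations; the content is invented.
# Only the structure matters: criticize, discard what is criticized, act on what survives.

conjectures = [
    "build more nuclear plants",
    "build more nuclear plants, because they are safe, clean and efficient",
    "build no nuclear plants, because the pro-nuclear case is a conspiracy",
]

criticisms = {
    "build more nuclear plants":
        "gives no explanatory detail about why it is a good idea",
    "build no nuclear plants, because the pro-nuclear case is a conspiracy":
        "a general-purpose criticism of conspiracy theories applies",
}

# Keep only conjectures with no outstanding criticism (no tallying, no weighing).
surviving = [c for c in conjectures if c not in criticisms]

if len(surviving) == 1:
    print("act on:", surviving[0])
else:
    # Otherwise, conjecture more and criticize more until exactly one survives.
    print("keep conjecturing and criticizing:", surviving)
```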
You seem to be arguing that Bayesianism is wrong, which is a very different thing.
I think it’s wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes’ theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don’t want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
Would you never take a bet?
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (because, e.g., there is no way to make them non-arbitrary). And it’s not something a fallibilist can bet on, because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians’ willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page; no one has answered that yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
So, how do Popperians decide? They conjecture an answer, e.g. “yes”. Actually, they make many conjectures, e.g. also “no”. Then they criticize the conjectures, and make more conjectures. So for example I would criticize “yes” for not providing enough explanatory detail about why it’s a good idea. Thus “yes” would be rejected, but a variant of it like “yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites” would be better. If I didn’t understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the “no” answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you’ve described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don’t actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That’s fine, but then it’s not clear that you’ve done anything different from what a Bayesian would have done—you’ve simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
[option 1] since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other.
But some are criticized and some aren’t.
[option 2] conjecture that best weathered the criticisms you were able to muster
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here—the English language is not well adapted to expressing these ideas. (In particular, the concept “uncriticized” is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)
Or is it something radically different from these two altogether?
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won’t have those mistakes, we learn and improve our knowledge.
If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can’t reasonably guarantee that I will not have later objections as well before we’ve even had the discussion!
So let me see if I’m understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we—I don’t want to say “assume that it’s true,” because that’s probably not correct—we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I’m not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, “in order to make a decision, we need to have a guiding theory which is currently impervious to criticism” (my current understanding of Popper’s idea, roughly illustrated), isn’t this just another way of saying: “the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?”
In short, isn’t imperviousness to criticism a type of justification in itself?
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology.
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
Is that basically right?
That is the general idea (but incomplete).
The reason we behave as if it’s true is that it’s the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn’t want to act on an idea that we (thought we) saw a mistake in, over one we don’t think we see any mistake with—we should use what (fallible) knowledge we have.
A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general).
One reason that not being criticized isn’t a justification is that saying it is gets you a regress problem. So let’s not say that! The other reason is: what would that be adding as compared with not saying it? It’s not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).
Terminology isn’t terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don’t like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it “justified” it doesn’t matter so much.
Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it’s not clear that this discussion warrants an entirely new topic.
Terminology isn’t terribly important . . . If you want to take the Popperian conception of a good theory and label it “justified” it doesn’t matter so much.
Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by “justificationist” systems.
It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper’s system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper’s system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate.
According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question—but as you can see, the regress has begun.
We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question—but as you can see, the regress has begun.
No regress has begun. I already answered why:
The reason we behave as if it’s true is that it’s the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn’t want to act on an idea that we (thought we) saw a mistake in, over one we don’t think we see any mistake with—we should use what (fallible) knowledge we have.
Try to regress me.
It is possible, if you want, to create a regress of some kind which isn’t the same one and isn’t important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won’t regard it as a real regress problem of the same type. You’ll probably wonder how that’s evaluated, but, well, it’s not such a big deal. We’ll quickly get to the point where your attempts to create regress look silly to you. That’s different than the regresses inductivists face where it’s the person trying to defend induction who runs out of stuff to say.
And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge.
You’re equivocating between “knowing exactly the contents of the new knowledge”, which may be impossible for the reason you describe, and “know some things about the effect of the new knowledge”, which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.
If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments—its that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don’t think this is necessarily a flaw with your website—presumably it was not designed first and foremost as a response to Bayesianism—but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.
To be clear, what I am looking for is a statement of the form: “Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage.” Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been “wrong” where Popper would be clearly “right” at any historical point would be good enough to argue about.
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren’t because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn’t mean we agree or have nothing to discuss.
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn’t solve your problem, whereas my epistemology did solve mine.
It doesn’t take Popperian epistemology to learn social fluency. I’ve learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.
If you want to credit a particular skill to your epistemology, you should first see whether it’s more likely to arise among those who share your epistemology than those who don’t.
That’s a claim that only makes sense in certain epistemological systems...
I don’t have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Hmm? I’m not sure who you mean by we? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable than I agree with you (at least empirically, people claiming to support some form of Popperian approach seem ok with this sort of thing. That’s not to say I understand how they think it is implied/ok in a Popperian framework).
I have considered that. Popperian epistemology helps with these issues more. I don’t want to argue about that now because it is an advanced topic and you don’t know enough about my epistemology to understand it (correct me if I’m wrong), but I thought the example could help make a point to the person I was speaking to.
If I don’t understand your explanation and am interested in it, I’m prepared to do the research in order to understand it, but if you can only assert why your epistemology should result in better social learning and not demonstrate that it does so for people in general, I confess that I will probably not be interested enough to follow up.
I will note though, that stating the assumption that another does not understand, but leaving them free to correct you, strikes me as a markedly worse way to minimize conflict and aggression than asking if they have the familiarity necessary to understand the explanation.
You could begin by reading
http://fallibleideas.com/emotions
And the rest of the site. If you don’t understand any connections between it and Popperian epistemology, feel free to ask.
I’m not asking you to be interested in this, but I do think you should have some interest in rival epistemologies.
I studied philosophy as part of a double major (which I eventually dropped because of the amount of confusion and sophistry I was being expected to humor,) and my acquaintance with Popper, although not as deep as yours, I’m sure, precedes my acquaintance with Bayes. Although it may be that others who I have not read better presented and refined his ideas, Popper’s philosophy did not particularly impress me, whereas the ideas presented by Bayesianism immediately struck me as deserving of further investigation. It’s possible that I haven’t given Popper his fair shakes, but it’s not for lack of interest in other epistemologies that I’ve come to identify as Bayesian.
I wouldn’t describe the link as unhelpful, exactly, but I also wouldn’t say that it’s among the best advice for controlling one’s emotions that I’ve received (this was a process I put quite a bit of effort into learning, and I’ve received a fair amount,) so I don’t see how it functions as a demonstration of the superiority of Popperian epistemology.
You say Popper didn’t impress you. Why not? Did you have any criticism of his ideas? Any substantive argument against them?
Do you have any criticism of the linked ideas? You just said it doesn’t seem that good to you, but you didn’t give any kind of substantive argument.
With regards to the link, it’s simply that it’s less in depth than other advice I’ve received. There are techniques that it doesn’t cover in meaningful detail, like manipulation of cognitive dissonance (habitually behaving in certain ways to convince yourself to feel certain ways,) or recognition of various cognitive biases which will alter our feelings. It’s not that bad as an introduction, but it could do a better job opening up connections to specific techniques to practice or biases to be aware of.
Popper didn’t impress me because it simply wasn’t apparent to me that he was establishing any meaningful improvements to how we go about reasoning and gaining information. Critical rationalism appeared to me to be a way of looking at how we go about the pursuit of knowledge, but to quote Feynman, “Philosophy of science is about as useful to scientists as ornithology is to birds.” It wasn’t apparent to me that trying to become more Popperian should improve the work of scientists at all; indeed, in practice it is my observation that those who try to think of theories more in the light of the criticism they have withstood than their probability in light of the available evidence are more likely to make significant blunders.
Attempting to become more Bayesian in one’s epistemology, on the other hand, had immediately apparent benefits with regards to conducting science well (which are are discussed extensively on this site.)
I had criticisms of Popper’s arguments to offer, and could probably refresh my memory of them by revisiting his writings, but the deciding factor which kept me from bothering to read further was that, like other philosophers of science I had encountered, it simply wasn’t apparent that he had anything useful to offer, whereas it was immediately clear that Bayesianism did.
Feynman meant normal philosophers of science. Including, I think, Bayesians. He didn’t mean Popper, who he read and appreciated. Feynman himself engaged in philosophy of science, and published it. It’s academic philosophers, of the dominant type, that he loathed.
That’s not really what Popperian epistemology is about. But also: the concept of evidence for theories is a mistake that doesn’t actually make sense, as Popper explained. If you doubt this, do what no one else on this site has yet managed: tell me what “support” means (like in the phrase “supporting evidence”) and tell me how support differs from consistency.
The biggest thing Popper has to offer is the solution to justificationism which has plagued almost everyone’s thinking since Aristotle. You won’t know quite what that is because it’s an unconscious bias for most people. In short it is the idea that theories should be supported/justified/verified/proven, or whatever, whether probabilistically or not. A fraction of this is: he solved the problem of induction. Genuinely solved it, rather than simply giving up and accepting regress/foundations/circularly/whatever.
I’ve read his arguments for this, I simply wasn’t convinced that accepting it in any way improved scientific conduct.
“Support” would be data in light of which the subjective likelihood of a hypothesis is increased. If consistency does not meaningfully differ from this with respect to how we respond to data, can you explain why it is is more practical to think about data in terms of consistency than support?
I’d also like to add that I do know what justificationism is, and your tendency to openly assume deficiencies in the knowledge of others is rather irritating. I normally wouldn’t bother to remark upon it, but given that you posed a superior grasp of socially effective debate conduct as evidence of the strength of your epistemology, I feel the need to point out that I don’t feel like you’re meeting the standards of etiquette I would expect of most members of Less Wrong.
Yet again you disagree with no substantive argument. If you don’t have anything to say, why are you posting?
Well, consistency is good as far as it goes. If we see 10 white swans, we should reject “all swans are black” (yes, even this much depends on some other stuff). Consistency does the job without anything extraneous or misleading.
The support idea claims that sometimes evidence supports one idea it is consistent with more than another. This isn’t true, except in special cases which aren’t important.
The way Popper improves on this is by noting that there are always many hypotheses consistent with the data. Saying their likelihood increases is pointless. It does not help deal with the problem of differentiating between them. Something else, not support, is needed. This leaves the concept of support with nothing useful to do, except be badly abused in sloppy arguments (I have in mind arguments I’ve seen elsewhere. Lots of them. What people do is they find some evidence, and some theory it is consistent with, and they say the theory is supported so now they have a strong argument or whatever. And they are totally selective about this. You try to tell them, “well, theory is also consistent with the data. so it’s supported just as much. right?” and they say no, theirs fits the data better, so it’s supported more. but you ask what the difference is, and they can’t tell you because there is no answer. the idea that a theory can fit the data better than another, when both are consistent with the data, is a mistake (again there are some special cases that don’t matter in practice).)
Suppose I ask a woman if she has children. She says no.
This is supporting evidence for the hypothesis that she does not have children; it raises the likelihood from my perspective that she is childless.
It is entirely consistent with the hypothesis that she has children; she would simply have to be lying.
So it appears to me that in this case, whatever arguments you might make regarding induction, viewing the data in terms of consistency does not inform my behavior as well.
This is the standard story. It is nothing but an appeal to intuition (and/or unstated background knowledge, unstated explanations, unstated assumptions, etc). There is no argument for it and there never has been one.
Refuting this common mistake is something important Popper did.
Try reading your post again. You simply assumed that her having children is more likely. That is not true from the example presented, without some unstated assumptions being added. There is no argument in your post. That makes it very difficult to argue against because there’s nothing to engage with.
It could go either way. You know it could go either way. You claim one way fits the data better, but you don’t offer any rigorous guidelines (or anything else) for figuring out which way fits better. What are the rules to decide which consistent theories are more supported than others?
Of course it could go either way. But if I behaved in everyday life as if it were equally likely to go either way, I would be subjecting myself to disaster. For practical purposes it has always served me better to accept that certain hypotheses that are consistent with the available data are more probable than others, and while I cannot prove that this makes it more likely that it will continue to do so in the future, I’m willing to bet quite heavily that it will.
If Popper’s epistemology does not lead to superior results to induction, and at best, only reduces to procedures that perform as well, then I do not see why I should regard his refutation of induction as important.
Support is the same thing as more consistent with that hypothesis than with the alternatives (P(E|H) >P(E|~H)).
What is “more consistent”?
Consistent = does not contradict. But you can’t not-contradict more. It’s a boolean issue.
Then you have your answer: Support is non-boolean. I don’t think a boolean concept of consistency of observations with anything makes sense, though (consistent would mean P(E|H)>0, but observations never have a probability of 0 anyway, so every observation would be consistent with everything, or you’d need an arbitrary cut-off. P(observe black sheep|all sheep are white) > 0, but is very small ).
Some theories predict that some things won’t happen (0 probability). I consider this kind of theory important.
You say I have my answer, but you have not answered. I don’t think you’ve understood the problem. To try to repeat myself less, check out the discussion here, currently at the bottom:
http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3urr?context=3
But they don’t predict that you won’t hallucinate, or misread the experimental data, or whatever. Some things not happening doesn’t mean some things won’t be observed.
You asked how support differed form consistent. Boolean vs real number is a difference. Even if you arbitrarily decide that real numbers are not allowed and only booleans are that doesn’t mean that differentiating between their use of real numbers and your use of booleans is inconsistent on part of those who use real numbers.
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.
Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can’t listen to an mp3 right now.
My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren’t the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn’t? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?
I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don’t yet understand exactly what you mean. Would you never take a bet? Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesian’s willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
e.g. by pointing out that whether we do or don’t survive depends on human choices, which in turn depends on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has build in bias towards pessimism: because we can’t make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There’s no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. “yes”. Actually, they make many conjectures, e.g. also “no”. Then they criticize the conjectures, and make more conjectures. So for example I would criticize “yes” for not providing enough explanatory detail about why it’s a good idea. Thus “yes” would be rejected, but a variant of it like “yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites” would be better. If I didn’t understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the “no” answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
I think it’s wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes’ theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don’t want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories is problematic (b/c e.g. there is no way to make them non-arbitrary). And it’s not something a fallibilist can bet on because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
Almost, but you seem to have left out the rather important detail of how actually make the decision. Based on the process of criticizing conjectures you’ve described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don’t actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That’s fine, but then it’s not clear that you’ve done anything different from what a Bayesian would have done—you’ve simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
But some are criticized and some aren’t.
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here; the English language is not well adapted to expressing these ideas. (In particular, the concept “uncriticized” is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won’t have those mistakes, we learn and improve our knowledge.
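To sharpen that contrast, here is a tiny illustrative sketch with made-up theories and numbers. It is my own toy comparison, not something from Popper, Deutsch, or the comments above; the point is only that the justificationist approach compares amounts of a good thing, while the Popperian approach only removes ideas we have a criticism of.

```python
# An illustrative contrast, with made-up names and numbers, between tallying
# up justification and picking the highest score, versus eliminating ideas
# for which we have a criticism.

theories = ["A", "B", "C"]

# Justificationist style: each theory gets some amount of support, and the
# winner is whichever has the most.
support = {"A": 0.7, "B": 0.4, "C": 0.6}   # hypothetical scores
justificationist_choice = max(theories, key=support.get)

# Popperian style: a criticism is an explanation of a mistake. We keep only
# the theories we have no criticism of; there is no score to compare.
criticisms = {
    "A": "explanation of a mistake in A",
    "C": "explanation of a mistake in C",
}
popperian_survivors = [t for t in theories if t not in criticisms]

print(justificationist_choice)   # -> 'A' (highest tally)
print(popperian_survivors)       # -> ['B'] (the only uncriticized theory)
```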
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can’t reasonably guarantee that I will not have later objections as well before we’ve even had the discussion!
So let me see if I’m understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we—I don’t want to say “assume that it’s true,” because that’s probably not correct—we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I’m not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, “in order to make a decision, we need to have a guiding theory which is currently impervious to criticism” (my current understanding of Popper’s idea, roughly illustrated), isn’t this just another way of saying: “the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?”
In short, isn’t imperviousness to criticism a type of justification in itself?
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
That is the general idea (but incomplete).
The reason we behave as if it’s true is that it’s the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn’t want to act on an idea that we (thought we) saw a mistake in, over one we don’t think we see any mistake with—we should use what (fallible) knowledge we have.
A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc.), but they do not justify the theory; they play an advisory role (also note: they pretty much are the theory; they are the content that we care about in general).
One reason that not being criticized isn’t a justification is that calling it one gets you a regress problem. So let’s not say that! The other reason is: what would saying it add, compared with not saying it? It’s not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).
Terminology isn’t terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don’t like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it “justified” it doesn’t matter so much.
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it’s not clear that this discussion warrants an entirely new topic.
Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by “justificationist” systems.
It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper’s system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper’s system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate.
According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question—but as you can see, the regress has begun.
I think it’s a big topic. I began answering your question here:
http://lesswrong.com/r/discussion/lw/551/popperian_decision_making/
No regress has begun. I already answered why:
Try to regress me.
It is possible, if you want, to create a regress of some kind which isn’t the same one and isn’t important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won’t regard it as a real regress problem of the same type. You’ll probably wonder how that’s evaluated, but, well, it’s not such a big deal. We’ll quickly get to the point where your attempts to create a regress look silly to you. That’s different from the regresses inductivists face, where it’s the person trying to defend induction who runs out of stuff to say.
You’re equivocating between “knowing exactly the contents of the new knowledge”, which may be impossible for the reason you describe, and “knowing some things about the effect of the new knowledge”, which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.