It’s fine if most people haven’t read Popper. But they should be able to point to some Bayesian somewhere who did, or they should know at least one good argument against a major Popperian idea. Or they should be interested and ask more about him instead of posting incorrect arguments about why his basic claims are false.
I do know, offhand, several arguments against Bayesian epistemology (e.g. its inability to create moral knowledge, and I know many arguments against induction, each decisive). And anyway I came here to learn more about it. One particular thing I would be interested in is a Bayesian criticism of Popper. Are there any? By contrast (maybe), Popper did criticize Bayesian epistemology in LScD and elsewhere. And I am familiar with those criticisms.
Learning enough Bayesian stuff to sound like a Bayesian so people want to listen to me more sounds to me like more trouble than it’s worth, no offense. I’m perfectly willing to read more things when I make a mistake and there is a specific thing which explains the issue. I have been reading various things people refer me to. If you wanted me to study Bayesian stuff for a month before speaking, well, I’d get bored because I would see flaws and then see them repeated, and then read arguments which depend on them. I did read the whole HP fic if that helps.
One thing that interests me, which I posted about in the initial post, is how unscholarly some Bayesian scholars are. Can anyone correct that? Are there any with higher scholarly standards? I would like there to be. I don’t want to just read stuff until I happen to find something good, I want to be pointed to something considerably better than the unscholarly stuff I criticized. I don’t know where to find that.
It’s fine if most people haven’t read Popper. But they should be able to point to some Bayesian somewhere who did, or they should know at least one good argument against a major Popperian idea. Or they should be interested and ask more about him instead of posting incorrect arguments about why his basic claims are false.
Really? Much of that seems questionable. There are many different ideas out there and practically speaking, there are too many ideas out there for people to have to deal with every single one. Sure, making incorrect arguments is bad. And making arguments against strawmen is very bad. But people don’t have time to actually research every single idea out there or even know which ones to look at. Now, I think that Popper is important enough and has relevant enough points that he should be on the short list of philosophers that people can grapple with at least to some limited extent. But frankly, speaking as someone who is convinced of that point, you are making a very weak case for it.
I do know, offhand, several arguments against Bayesian epistemology (e.g. its inability to create moral knowledge, and I know many arguments against induction, each decisive).
This paragraph seems to reflect a general problem you are having here in making assertions without providing any information other than vague claims of existence. I am for example aware of a large variety of arguments against induction (the consistency of anti-induction frameworks seems to be a major argument) but calling them “decisive” is a very strong claim, and isn’t terribly relevant insofar as Bayesianism is not an inductive system in many senses of the term.
You’ve also referred before to the claim that the Popperian system can lead to moral knowledge, and that’s a claim I’d be very curious to hear expanded with a short summary of how that works. Generally when I see a claim that an epistemological system can create moral knowledge my initial guess is that someone has managed to bury the naturalistic fallacy somewhere or has managed to smuggle in additional moral premises that aren’t really part of the epistemology. I’d be pleasantly surprised to see something that didn’t function that way.
One particular thing I would be interested in is a Bayesian criticism of Popper. Are there any? By contrast (maybe), Popper did criticize Bayesian epistemology in LScD and elsewhere.
I haven’t read it myself but I’ve been told that Earman’s “Bayes or Bust” deals with a lot of the philosophical criticisms of Bayesianism as well as giving a lot of useful references. It should do a decent job in regards to the scholarly concerns.
As to Popper’s criticism of Bayesianism, the discussion of it in LScD is quite small, which is understandable in that Bayesianism was not nearly as developed at that time as it is now. (You may incidentally be engaging in a classical philosophical fallacy here in focusing on a specific philosopher’s personal work rather than the general framework of ideas that followed from it. There’s a lot of criticism of Bayesianism that is not in Popper that is potentially strong. Not everything is about Popper.)
Learning enough Bayesian stuff to sound like a Bayesian so people want to listen to me more sounds to me like more trouble than it’s worth, no offense.
As a non-Bayesian, offense taken. You can’t expect to go to a room full of people with a specific set of viewpoints, offer a contrary view, act like the onus is on them to translate into your notation and terminology, and then be shocked when they don’t listen to you. Moreover, knowing the basics of Cox’s theorem is not asking you to “sound like a Bayesian” anyhow.
If you wanted me to study Bayesian stuff for a month before speaking, well, I’d get bored because I would see flaws and then see them repeated, and then read arguments which depend on them. I did read the whole HP fic if that helps.
What? I don’t know how to respond to that. I’m not sure an exclamation exists in standard English to express my response to that last sentence. I’m thinking of saying “By every deity in the full Tegmark ensemble” but maybe I should wait for a better time to use it. You are repeatedly complaining about people not knowing much about Popper while your baseline for Bayesianism is that you’ve read an incomplete Harry Potter fanfic? This fanfic hasn’t even addressed Bayesianism other than in passing. This seems akin to someone thinking they understand rocketry because they’ve watched “Apollo 13”.
Really? Much of that seems questionable. There are many different ideas out there and practically speaking, there are too many ideas out there for people to have to deal with every single one.
The number of major ideas in epistemology is not very large. After Aristotle, there wasn’t very much innovation for a long time. It’s a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven’t seen anything of decent quality.
You’ve also referred before to the claim that the Popperian system can lead to moral knowledge, and that’s a claim I’d be very curious to hear expanded with a short summary of how that works.
It works exactly identically to how Popperian epistemology creates any other kind of knowledge. There’s nothing special for morality.
Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
You may incidentally be engaging in a classical philosophical fallacy here in focusing on a specific philosopher’s personal work rather than the general framework of ideas that followed from it.
I have a lot of familiarity with the other Popperians. But Popper and Deutsch are by far the best. There isn’t really anything non-Popperian that draws on Popper much. Everyone who has understood Popper is a Popperian, IMO. If you disagree, do tell.
As to Popper’s criticism of Bayesianism, the discussion of it in LScD is quite small
Small is not a criticism; substance matters, not length. Do you have a criticism of his arguments in LScD or not? Also he dealt with it elsewhere, as I stated.
Err, Bayesian probability doesn’t have anything special for morality either. People on LW tend to be moral non-realists, i.e. people who deny that there is objective moral knowledge, if that’s what you’re talking about (not sure, sorry!), but that’s completely orthogonal to this discussion: there’s nothing in Bayesianism that leads inevitably to non-realism. (Also, I’m not convinced that moral realism is right, so saying “Bayesianism leads to moral non-realism” isn’t a very effective argument.)
Bayesian epistemology doesn’t create moral knowledge because it only functions when fed observation data (or assumptions). I get a lot of conflicting statements here, but some people tell me they only care about prediction, they are instrumentalists, and that is what Bayes stuff is for, and they don’t regard it as a bad thing that it doesn’t address morality at all.
Now what you have in mind, I think, is that if you make a ton of assumptions you could then talk about morality using Bayes. Popperism doesn’t require a bunch of arbitrary starting assumptions to create moral knowledge; it can just deal with it directly.
If I’m wrong, explain how you can deal with figuring out, e.g., what are good moral values to have (without assuming a utility function or something).
As I tried to say (and probably explained really poorly- sorry!), the LW consensus is that morality is not objective. Therefore, the idea of figuring out what good moral values would be is, according to moral non-realism, impossible: any decision about what a good moral value is must rely on your pre-existing values, if an objective morality is not out there to be discovered. Using this as a criticism of Bayesianism is sorta like criticizing thermodynamics because it claims it’s impossible to exactly specify the position and velocity of each particle: not only is the criticism unrelated to the subject matter, but satisfying it would require the theory to do something that is to the best of our knowledge incorrect.
Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
I’m inclined to take this formula seriously, but I’d like to start by applying it to innate knowledge, knowledge we are born with, because here we are definitely talking about an evolutionary process involving mutation and natural selection. Some mutations add what amounts to a new innate conjecture (hypothesis, belief) into the cognitive architecture of the creature.
However, what occurs at this point is not that a creature with a false innate conjecture is eliminated. The creature isn’t being tested purely against reality in isolation. It’s being tested against other members of its species. The creature with the least-false, or least-perilously-false conjecture will tend to do better than the competitors. The competition for survival amounts to a competition between rival conjectures. The truest, or most-usefully-true, or least-wrong, or least-dangerously-wrong innate belief will tend to outdo its competitors and ultimately spread through the species. (With the odd usefully-wrong belief surviving.)
The occasional appearance of new innate conjectures resembles the conjecture part of Popperian conjecture and refutation. However, the contest between rival innate conjectures that occurs as the members of the species struggle against each other for survival seems less Popperian than Bayesian.
The relative success of the members of the species who carry the more successful hypothesis vaguely resembles Bayesian updating, because the winners increase their relative numbers and the losers decrease their relative numbers, which resembles the shift in the probabilities assigned to rival hypotheses that occurs in Bayesian updating. Consider the following substitutions applied to Bayes’ formula:
P(H|E) = P(E|H)P(H) / P(E)
P(H|E) is the new proportion (i.e. in the next generation) of the species carrying the hypothesis H, given that event E occurred (E is “everything that happened to the generation”)
P(E|H) is the degree to which H predicts and thus prepares the individual to handle E (measured in expected number of offspring given E)
P(H) is the old proportion (i.e. in the previous generation) of the species carrying H
P(E) is the degree to which the average member of the species predicts and thus is prepared to handle E (measured in expected number of offspring given E)
With these assignments, what the equation means is:
The new proportion of the species with H is equal to the old proportion of the species with H, times the expected number of offspring of members with H, divided by the expected number of offspring of the average member of the species.
One difference between this process and Bayesian updating is that this process allows the occasional introduction of new hypotheses over time, with what amounts to a modest but not vanishing initial prior.
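The substitution described above can be checked mechanically. Here is a minimal Python sketch (the numbers are my own invented toy values, not anything from the thread) in which one generation of selection is computed as a single application of Bayes’ formula, with fitness playing the role of the likelihood:

```python
# P(H): proportion of the species carrying each innate "hypothesis"
priors = {"H1": 0.70, "H2": 0.25, "H3": 0.05}

# P(E|H): expected offspring of a carrier of H, given that event E occurred
fitness = {"H1": 1.0, "H2": 1.4, "H3": 0.6}

# P(E): expected offspring of the average member of the species
mean_fitness = sum(priors[h] * fitness[h] for h in priors)

# P(H|E) = P(E|H) * P(H) / P(E): next generation's proportions
posteriors = {h: fitness[h] * priors[h] / mean_fitness for h in priors}

print(posteriors)
# {'H1': 0.648..., 'H2': 0.324..., 'H3': 0.027...}
# H2's share grows and H3's shrinks, and the proportions still sum to 1,
# exactly like probabilities after a Bayesian update.
```

Running the update repeatedly plays out the spread of the fitter hypothesis through the species, generation by generation.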
I’m not sure if we’re interested in the same stuff. But taking up one topic:
I think you regard innate/genetic ideas as important. I do not. Because people are universal knowledge creators, and can change any idea they start with, it doesn’t matter very much.
The reason people are so biased is not in their genes but their memes.
There are two major replication strategies that memes use.
1) A meme can be useful and rational. It spreads because of its value.
2) A meme can sabotage its holder’s creativity to prevent him from criticizing it, and to take away his choice not to spread it.
The second type dominated all cultures on Earth for a long time. The transition to the first type is incomplete.
More details on memes and universality can be found in The Beginning of Infinity by David Deutsch.
I think you regard innate/genetic ideas as important. I do not. Because people are universal knowledge creators, and can change any idea they start with, it doesn’t matter very much.
You misunderstand. I bring it up as a model of learning, and my choice was based on your own remarks. You said that knowledge is created by an evolutionary process. That way of putting it suggests an analogy with Darwin’s theory of evolution as proceeding by random variation and natural selection. And indeed there is an analogy between Popper’s conjectures and refutations and variation and natural selection, and it is this: a conjecture is something like variation (mutation), and refutation is something like natural selection.
However, what I found was that the closer I looked at knowledge which is actually acquired through natural selection—what we might call innate knowledge or instinctive knowledge—the more the process of acquisition resembled Bayesian updating rather than Popperian conjecture and refutation. I explained why.
In Bayesian updating, there are competing hypotheses, and the one for which actual events are less of a surprise (i.e., the hypothesis H_i for which P(E|H_i) is higher) is strengthened relative to the one for which events are more of a surprise. I find a parallel to this in competition among alleles under natural selection, which I described.
Essential to Bayesian updating is the coexistence of competing hypotheses, and essential to natural selection is the coexistence of competing variants in a species. In contrast, Popper talks about conjecture and refutation, which is a more lonely process that need not involve more than one conjecture and a set of observations which have the potential to falsify it. Popper talks about improving the conjecture in response to refutation, but this process more resembles Lamarckian evolution than Darwinian evolution, because in Lamarckian evolution the individuals improve themselves in response to environmental challenges, much as Popper would have us improve our conjectures in response to observational challenges. Also, in Lamarckian evolution, as in the Popperian process of conjecture and refutation, competing variants (compare: competing hypotheses) do not play an essential role (though I’m sure they could be introduced). Rather, the picture is of a single animal (compare: a single hypothesis) facing existential environmental challenges (compare: facing the potential for falsification) improving itself in response (which improvement is passed to offspring).
The Popperian process of conjecture, refutation, and improvement of the conjecture, can as it happens be understood from a Bayesian standpoint. It does implement Bayesian updating in a certain way. Specifically, when a particular conjecture is refuted and the scientist modifies the conjecture—at that point, there are two competing hypotheses. So at that point, the process of choosing between these two competing hypotheses can be characterized as Bayesian updating. The less successful hypothesis is weakened, and the more successful hypothesis is strengthened.
In short, if you want to take seriously the analogy that does exist between evolution through natural selection and knowledge acquisition of whatever type, then you may want to take a closer look at Bayesian updating as conforming more closely to the Darwinian model.
You said we were discussing an analogy. That was a mistake. How can having made a mistake strengthen your argument? When you make a mistake, and find out, you should be like “uh oh, maybe I made 2, or 3. I better rethink things a bit more carefully. Maybe the mistake is caused by a misunderstanding that could cause multiple mistakes.” I don’t think glossing over mistakes is rational or wise.
Because if there is only an analogy between evolution and knowledge acquisition, there are some aspects of each that are not the same, and it is possible that these differences mean that the specific factor under consideration is not the same; but if the two processes are literally the same, that is not possible.
“How can having made a mistake strengthen your argument?”
Example: During WWII, many American leaders didn’t believe that Germany was actually committing massacres, as they had been disillusioned by similar but inaccurate WWI propaganda; however, they still believed that Nazi aggression was morally wrong. Later, the death camps were discovered. Clearly, given that they were mistaken in disbelieving in the Holocaust, they were mistaken in believing that the Nazis were morally wrong, because how can making a mistake strengthen your argument?
Your defects would be easier to tolerate if you were less arrogant. A bit of humility would go a long way to keeping the conversation going. My guess is that you picked up your approach because it led to your being the last person standing, winning by attrition—when in reality the other participants were simply too disgusted to continue.
The number of major ideas in epistemology is not very large. After Aristotle, there wasn’t very much innovation for a long time. It’s a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven’t seen anything of decent quality.
As to a professional, I already referred you to Earman. Incidentally, you seem to be narrowing the claim somewhat. Note that I didn’t say that the set of major ideas in epistemology isn’t small, I referred to the much larger class of philosophical ideas (although I can see how that might not be clear from my wording). And that set is indeed very large. However, I think that your claim about “after Aristotle” is both wrong and misleading. There’s a lot of thought about epistemological issues in both the Islamic and Christian worlds during the Middle Ages. Now, you might argue that that’s not helpful or relevant since it gets tangled up in theology and involves bad assumptions. But that’s not to say that material doesn’t exist. And that’s before we get to non-Western stuff (which admittedly I don’t know much about at all).
(I agree when you restrict to professionals, and have already recommended Earman to you.)
It works exactly identically to how Popperian epistemology creates any other kind of knowledge. There’s nothing special for morality.
Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
This is a deeply puzzling set of claims. First of all, a major point of his epistemological system is falsifiability based on data (at least as I understand it from LScD). How that would at all interact with moral issues is unclear to me. Indeed, the semi-canonical example of a non-falsifiable claim in the Popperian sense is Marxism, a set of ideas that has a large set of attached moral claims.
I also don’t see how this works given that moral claims can always be criticized by the essential sociopathic argument “I don’t care. Why should you?” Obviously, that line of thinking can be/should be expanded. To use your earlier example, how would you discuss “murder is wrong” in a Popperian framework? I would suggest that this isn’t going to be any different than simply discussing moral ideas based on shared intuitions with particular attention to the edge cases. You’re welcome to expand on these claims, but right now, nothing you’ve said in this regard is remotely convincing or even helpful since it amounts to just saying “well, do the same thing.”
I have a lot of familiarity with the other Popperians. But Popper and Deutsch are by far the best. There isn’t really anything non-Popperian that draws on Popper much. Everyone who has understood Popper is a Popperian, IMO. If you disagree, do tell.
I’m going to be obnoxious and quote a friend of mine: “Everyone who understands Christianity is a Christian.” I don’t have any deep examples of other individuals, although I would tentatively say that I understood Popper’s views in Logic of Scientific Discovery just fine.
Do you have a criticism of his arguments in LScD or not?
Sure. The most obvious one is when he is discussing the law of large numbers and frequentist v. Bayesian interpretations. (Incidentally, to understand those passages it is helpful to note that he uses the term “subjective” rather than “Bayesian” to describe Bayesians, which is consistent with the language of the time; in modern terminology “subjective” has a very different meaning, used to distinguish between subjective and objective Bayesians.) In that section he argues (I don’t have the page number unfortunately since I’m using my Kindle edition; I have a hard copy somewhere but I don’t know where) that “it must be inadmissible to give after the deduction of Bernoulli’s theorem a meaning to p different from the one which was given to it before the deduction.” This is, simply put, wrong. Mathematicians all the time prove something in one framework and then interpret it in another framework. You just need to show that all the properties of the relevant frameworks overlap in sufficiently non-pathological cases. If someone wrote this as a complaint about, say, using the complex exponential to understand the symmetries of the Euclidean plane, we’d immediately see it as a bad claim. There’s an associated issue in this section which also turns up but is more subtle: Popper doesn’t appreciate what you can do with measure theory and L_p spaces and related ideas to move back and forth between different notions of probability and different metrics on spaces. That’s OK; it was a very new idea when he wrote LScD (although the connections were to some extent definitely there). But it does render a lot of what he says simply irrelevant or outright wrong.
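To unpack the complex-exponential aside with a concrete, textbook-style example of proving in one framework and interpreting in another (my own illustration, not from the thread; the numbers are arbitrary):

```python
import cmath
import math

theta = 0.7      # rotation angle (arbitrary)
x, y = 3.0, 4.0  # a point in the plane (arbitrary)

# Complex-number framework: rotation is multiplication by e^(i*theta)
rotated = cmath.exp(1j * theta) * complex(x, y)

# Euclidean framework: rotation is the 2x2 rotation matrix applied to (x, y)
rx = math.cos(theta) * x - math.sin(theta) * y
ry = math.sin(theta) * x + math.cos(theta) * y

# The two frameworks agree, so results proved in one transfer to the other
assert abs(rotated.real - rx) < 1e-12
assert abs(rotated.imag - ry) < 1e-12
```

Once the correspondence is verified, anything proved about complex multiplication can be reinterpreted as a statement about plane rotations, which is the pattern of reasoning the quoted passage would seem to forbid.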
As to a professional, I already referred you to Earman.
Which you stated you had not read. I have rather low standards for recommendations of things to read, but “I never read it myself” isn’t good enough.
I don’t agree with “restrict to professionals”. How is it to be determined who is a professional? I don’t want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
First of all, a major point of his epistemological system is falsifiability based on data (at least as I understand it from LScD).
That is a major point for scientific research where the problem “how do we use evidence?” is important. And the answer is “criticisms can refer to evidence”. Note by “science” here I mean any empirical field. What do you do in non-scientific fields? You simply make criticisms that don’t refer to evidence. Same method, just missing one type of criticism which is rather useful in science but not fundamental to the methodology.
Indeed, the semi-canonical example of a non-falsifiable claim in the Popperian sense is Marxism, a set of ideas that has a large set of attached moral claims.
It is not empirically falsifiable. It is criticizable. For example Popper criticized Marx in The Open Society and Its Enemies.
I also don’t see how this works given that moral claims can always be criticized by the essential sociopathic argument “I don’t care. Why should you?”
Any argument which works against everything fails at the task of differentiating better and worse ideas. So it is a bad argument. So we can reject it and all other things in that category, by this criticism.
To use your earlier example, how would you discuss “murder is wrong” in a Popperian framework?
The short answer is: since we don’t care to have justified foundations, you can discuss it any way you like. You can say it’s bad because it hurts people. You can say it’s good because it prevents overpopulation. You can say it’s bad because it’s mean. These kinds of normal arguments, made by normal people, are not deemed automatically invalid and ignored. Many of them are indeed mistakes. But some make good points.
For more on morality, please join this discussion:
I would tentatively say that I understood Popper’s views in Logic of Scientific Discovery just fine.
He has like 20 books. There’s way more to it. When one reads a lot of them, a whole worldview comes across that is very hard to understand from just a couple books. And I wasn’t trying to argue with that statement, I was just commenting. I mentioned it because of a comment to do with whether I had studied results of non-Popperians using Popperian ideas.
“it must be inadmissible to give after the deduction of Bernoulli’s theorem a meaning to p different from the one which was given to it before the deduction.” This is, simply put, wrong.
Are you really telling me that you can prove something, then take the conclusion, redefine a term, and work with that, and consider it still proven? You could only do that if you created a second proof that the change doesn’t break anything; you can’t just do it. I’m not sure you took what Popper was saying literally enough; I don’t think your examples later actually do what he criticized. Changing the meaning of a term in a conclusion statement, and considering a conclusion from a different perspective, are different.
Popper doesn’t appreciate what you can do with measure theory and L_p spaces
Would you understand if I said this has no relevance at all to 99.99% of Popper’s philosophy? Note that his later books generally have considerably less mention of math or logic.
Which you stated you had not read. I have rather low standards for recommendations of things to read, but “I never read it myself” isn’t good enough.
Earman is a philosopher and the book has gotten positive reviews from other philosophers. I don’t know what else to say in that regard.
I don’t agree with “restrict to professionals”. How is it to be determined who is a professional? I don’t want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
Hrrm? You mentioned professionals first. I’m not sure why you are now objecting to the use of professionals as a relevant category.
That is a major point for scientific research where the problem “how do we use evidence?” is important. And the answer is “criticisms can refer to evidence”. Note by “science” here I mean any empirical field. What do you do in non-scientific fields? You simply make criticisms that don’t refer to evidence. Same method, just missing one type of criticism which is rather useful in science but not fundamental to the methodology.
I’m not at all convinced that this is what Popper intended (but again I’ve only read LScD) but if this is accurate then Popper isn’t just wrong in an interesting way but is just wrong. Does one mean for example to claim that pure mathematics works off of criticism? I’m a mathematician. We don’t do this. Moreover, it isn’t clear what it would even mean for us to try to do this as our primary method of inquiry. Are we supposed to spend all our time going through pre-existing proofs trying to find holes in them?
He has like 20 books. There’s way more to it. When one reads a lot of them, a whole worldview comes across that is very hard to understand from just a couple books.
Yes, and I’m quite sure that I get much more of a worldview if I read all of Hegel rather than just some of it. That doesn’t mean I need to read all of it. Similar remarks would apply to Aquinas or more starkly the New Testament. Do you need to read all of the New Testament to decide that Christianity is bunk? Do you need to read the entire Talmud to decide that Judaism is incorrect? But you get a whole worldview that you don’t obtain from just reading the major texts.
The short answer is: since we don’t care to have justified foundations, you can discuss it any way you like. You can say it’s bad because it hurts people. You can say it’s good because it prevents overpopulation. You can say it’s bad because it’s mean. These kinds of normal arguments, made by normal people, are not deemed automatically invalid and ignored. Many of them are indeed mistakes. But some make good points.
Right, and then we just get the criticism “why bother?” or “and how does that maximize the number of paperclips in the universe?” Or one can say “mean”, “good”, and “bad” are all hideously ill-defined. In any event, does it not bother you that you are essentially claiming that your moral discussion with your great epistemological system looks just like a discussion about morality by a bunch of random individuals? There’s nothing in the above that uses your epistemology in any substantial way.
Are you really telling me that you can prove something, then take the conclusion, redefine a term, and work with that, and consider it still proven? You could only do that if you created a second proof that the change doesn’t break anything; you can’t just do it.
Right! And conveniently in the case Popper cares about you can prove that.
Popper doesn't appreciate what you can do with measure theory and L_p spaces
Would you understand if I said this has no relevance at all to 99.99% of Popper’s philosophy? Note that his later books generally have considerably less mention of math or logic.
Do you mean understand or do you mean care? I don’t understand why you are making this statement given that my remark was addressing the question you asked of whether I had specific problems with Popper’s handling of Bayesianism in LScD. This is a specific problem there.
Does one mean for example to claim that pure mathematics works off of criticism? I’m a mathematician. We don’t do this.
I don’t know what Popper himself would say, but one of his more insightful followers, namely Lakatos, argues for exactly that position.
I read Proofs and Refutations too many years ago to say anything precise about it. I remember finding it interesting but also frustrating. Lakatos seems determined to ignore/deny/downplay the fact of mathematical practice that we only call something a ‘theorem’ when we’ve got a proof, and we only call something a ‘proof’ when it’s logically watertight in such a way that no ‘refutations’ are possible. Still, it’s well-researched (in its use of a historical case-study) and he comes up with some decent ideas along the way (e.g. about “monster barring” and “proof-oriented definitions”).
Yes, Lakatos does argue for that in a certain fashion (and I suppose it is right to bring this up, since I’ve myself repeatedly pointed people here on LW to read Lakatos when they think that math is completely reliable). However, Lakatos took a more nuanced position than the position that curi is apparently taking, that math advances solely through this method of criticism. I also think Lakatos is wrong in so far as the examples he uses are not actually representative samples of what the vast majority of mathematics looks like. Euler’s formula is an extreme example, and it is telling that when one wants to give other similar examples one often gives other topological claims from before 1900 or so.
You are confused about what that means. An appeal to authority is not intrinsically fallacious. An appeal to authority is problematic when the authority is irrelevant (e.g. a celebrity who plays a doctor on TV endorsing a product) or when one is claiming that one has a valid deduction in some logical system. Someone making an observation about what people in their profession actually do is not a bad appeal to authority in the same way. In any event, you ignored the next line of my comment:
Moreover, it isn’t clear what it would even mean for us to try to do this as our primary method of inquiry. Are we supposed to spend all our time going through pre-existing proofs trying to find holes in them?
If you do think that mathematicians use Popperian reasoning then please explain how we do it.
An appeal to authority is not intrinsically fallacious.
It is in Popperian epistemology.
Could you point me to a Bayesian source that says they are OK? I’d love to have a quote of Yudkowsky advocating appeals to authority, for instance. Or could others comment? Do most people here think appeals to authority are good arguments?
An appeal to authority is not logically airtight, and if logic is about mathematical proofs, then it’s going to be a fallacy. But an appeal to an appropriate authority gives Bayesians strong evidence, provided that P(X | the authority believes X) is sufficiently high. In many fields, authorities have sufficient track records that appeals to authority are good arguments. In other fields, not so much.
Of course, the Appeal to Insufficient Force fallacy is a different story from the Appeal to Inappropriate Authority.
Track record of statements/predictions, taking into account a priori likelihood of previous predictions and a priori likelihood of current prediction.
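To spell out how a track record feeds into the update (a minimal sketch with invented numbers; the likelihoods would in practice be estimated from the expert’s past hits and misses):

```python
prior = 0.30                  # P(X): your prior that claim X is true
p_endorse_given_true = 0.90   # P(expert endorses X | X true), from track record
p_endorse_given_false = 0.20  # P(expert endorses X | X false)

# P(expert endorses X), by the law of total probability
p_endorse = (p_endorse_given_true * prior
             + p_endorse_given_false * (1 - prior))

# Bayes' rule: P(X | expert endorses X)
posterior = p_endorse_given_true * prior / p_endorse
print(round(posterior, 3))  # 0.659: a substantial update, far from certainty
```

On this reading the endorsement is just evidence like any other; an expert with no track record, or a bad one, moves the posterior little or not at all.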
Are you asking us to justify appeals to authority by using an appeal to authority?
No lol. I just wanted one to read. Some of my friends will be interested in it too.
Track record of statements/predictions
Since the guy who made the appeal to authority has little track record with me, and little of it good in my view, why would he expect me to concede to his appeal to authority?
This is silly. Whether or not he uses the word authority does not change the fact he is suggesting that we treat the opinions of experts as more accurate than our own opinions.
I had a lot of respect for you before you made this comment, but you have now lost most of it.
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument? I don’t think so, and my friends won’t either. They won’t know if Hanson thinks so.
For the purposes I wanted to use it for, this will not work well.
One thing I know about some of my friends is that they consider the word “authority” to be very nasty, but the word “expert” to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson’s use of the expertise terminology, instead of the authority terminology, will matter to them. Explaining that he meant what they call authority will add complexity—and scope for argument—and be distracting. And people will find it boring and ignore it as a terminological debate.
And I’m not even quite sure what Hanson did mean. I don’t think what he meant is identical to what the commenter I was speaking to meant.
Hanson speaks of, for example, “if you plan to mostly ignore the experts”. That you shouldn’t ignore them is a different claim than that appeals to their authority are good arguments.
He’s stated before, I’m not sure where, that if you believe an expert has more knowledge about an issue than you then you should prefer their opinions to any argument you generate. This is because if they disagree with you it is almost certainly because they have considered and rejected your argument, not because they have not considered your argument.
One thing I know about some of my friends is that they consider the word “authority” to be very nasty, but the word “expert” to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson’s use of the expertise terminology, instead of the authority terminology, will matter to them.
If your friends cannot differentiate between the content of an argument and its surface appearance then I would advise you find new friends [/facetious].
They can, but some won’t be interested in researching this.
I think Hanson’s approach to experts (as you describe it) is irrational because it abdicates from thinking. And in particular, if you think you don’t know what you’re talking about (i.e. think your argument isn’t good enough) then don’t use it, but if you think otherwise you should respect your own mind (if you’re wrong to think otherwise, convince yourself).
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
if you think otherwise you should respect your own mind (if you’re wrong to think otherwise, convince yourself).
Hanson would suggest that this is pure, unjustified arrogance. I’m not sure I agree with him; I struggle to fault the argument, but it’s still a pretty tough bullet to bite.
Have you heard of the Outside View? Hanson’s a big fan of it, and if you don’t know about it his thought process won’t always make much sense.
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
You could go with the consensus, or with the majority, or you could come up with a procedure for judging which are most trustworthy. If the experts can’t resolve this issue, what makes you think you can? More importantly, if you know less than the average expert, then aren’t you better off just picking one expert at random rather than trusting yourself?
Is the majority of experts usually right? I don’t think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
Have you heard of the Outside View?
no
if you know less than the average expert, then aren’t you better off just picking one expert at random rather than trusting yourself?
Why not learn something? Why not use your mind? I don’t think that thinking for yourself is arrogant.
In my experience reading (e.g.) academic papers, most experts are incompetent. The single issue of misquoting is ubiquitous. People publish misquotes even in peer reviewed journals. E.g. I discovered a fraudulent Edmund Burke quote which was used in a bunch of articles. Harry Binswanger (an Objectivist expert) posted misquotes (both getting the source wrong, and inserting bracketed explanatory text to explain context which was dead wrong). Thomas Sowell misquoted Godwin in a book that discussed Godwin at length.
I can sometimes think better than experts, in their own field, in 15 minutes. In cases where I should listen to expert advice, I do, without disagreeing with the expert and overruling my judgment (e.g. I’m not a good cook. When I don’t know how to make something I use a recipe. I don’t think I know the answer, so I don’t get overruled. I can tell the difference between when I have an opinion that matters or not).
In the case of cooking, I think the experts I use would approach the issue in the same way I would if I learned the field myself (in the relevant respects). For example, they would ask the same questions I am interested in, like, “If I test this recipe out, does it taste good?” Since I think they already did the same work I would do, there’s no need to reinvent the wheel. In other cases, I don’t think experts have addressed the issue in a way that satisfies me, so I don’t blindly accept their ideas.
To be honest I’m not exactly a passionate Hansonian, I read his blog avidly because what he has to say is almost always original, but if you want to find a proponent of his to argue with you may need to look elsewhere. Still, I can play devil’s advocate if you want.
Is the majority of experts usually right? I don’t think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
At any time, most things are not changing, so most experts will be right about most things. Anyway, the question isn’t whether experts are right, it’s why you think you are more reliable.
Brief introduction to the Outside View:
Cognitive scientists investigating the planning fallacy (in which people consistently and massively underestimate the amount of time it will take them to finish a project) decided to try to find a ‘cure’. In a surprising twist, they succeeded. If you ask the subject “how long have similar projects taken you in the past” and only then ask the question “how long do you expect this project to take” the bias is dramatically reduced.
They attributed this to the fact that in the initial experiment students had been taking the ‘inside view’ of their project. They had been examining each individual part on its own, and imagining how long it was likely to take. They made the mistake of failing to imagine enough unexpected delays. If they instead took the outside view, by looking at other similar projects and seeing how long they took, they ended up implicitly taking those unexpected delays into account, because most of those other projects encountered delays of their own.
In general, the outside view says “don’t focus on specifics, you will end up ignoring unexpected confounding elements from outside your model. Instead, consider the broad reference class of problems to which this problem belongs and reason from them”.
Looking at your third-to-last paragraph I can see a possible application of it. You belong to the broad reference class of “people who think they have proven an expert wrong”. Most such people are either crackpots, confused or misinformed. You don’t think of yourself as any of these things, but neither do most such people. Therefore you should perhaps give your own opinions less weight.
(Not a personal attack. I do not mean to imply that you actually are a crackpot, confused or misinformed, for all I know you may be absolutely right, I’m just demonstrating the principle).
This very liberal use of the theory has come under criticism from other Bayesians, including Yudkowsky. One of its problems is that it is not always clear which reference class to use.
A more serious problem comes when you apply it to its logical extreme. If we take the reference class “people who have believed themselves to be Napoleon” then most of them were/are insane, does this mean Napoleon himself should have applied the outside view and concluding that he was probably insane?
Why not learn something? Why not use your mind? I don’t think that thinking for yourself is arrogant.
Anyway, the question isn’t whether experts are right, it’s why you think you are more reliable.
This question is incompatible with Popperian philosophy. Ideas haven’t got reliability, which is just another word for justification. Trying to give it to them leads to problems like regress.
What we do instead is act on our best knowledge without knowing how reliable it is. That means preferring ideas which we don’t see anything wrong with to those that we do see something wrong with.
When you do see something wrong with an expert view, but not with your own view, it’s irrational to do something you expect not to work over something you expect to work. Of course if you use double standards for criticism of your own ideas and other people’s, you will go wrong. But the solution to that isn’t deferring to experts, it’s improving your mind.
Most such people are either crackpots, confused or misinformed.
Or maybe they have become experts by thinking well. How does one get expert status anyway? Surely if I think I can do better than people with college degrees at various things, that’s not too dumb. I’m e.g. a better programmer than many people with degrees. I have a pretty good sense of how much people do and don’t learn in college, and how much work it is to learn more on one’s own. The credential system isn’t very accurate.
Edit: PS, please don’t argue stuff you don’t think is true. If no true believers want to argue it, then shrug.
Incidentally, someone has been downvoting Curi’s comments and upvoting mine, would they like to step forward and make the case? I’m intrigued to see some of his criticisms answered.
I suspect that the individuals who are downvoting curi’s remarks in this subthread here are doing so because much of what he is saying are things he has already said elsewhere and that people are getting annoyed at him. I suspect that his comments are also being downvoted since he first used the term “authority” and then tried to make a distinction between “expertise” and “authority” when under his definition the first use of such an argument would seem to be in what he classifies as expertise. Finally, I suspect that his comments in this subthread have been downvoted for his apparent general arrogance regarding subject matter experts such as his claim that “I can sometimes think better than experts, in their own field, in 15 minutes.”
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument?
What do you mean by good argument? The Bayesians have an answer to this. They mean that P(claim|argument) > P(claim). Now, one might argue in that framework that if P(claim|argument)/P(claim) is close to 1 then this isn’t a good argument, or that if log(P(claim|argument)/P(claim)) is small compared to the effort to present and evaluate the argument then it isn’t a good argument.
However, that’s obviously not what you mean. It isn’t clear to me what you mean by “good argument” and how this connects to the notion of a fallacy. Please expand your definitions or taboo the terms.
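For what it’s worth, the criterion just described is simple enough to state in code (a sketch with placeholder probabilities; in practice these would come from an actual model):

```python
import math

p_claim = 0.40                 # P(claim)
p_claim_given_argument = 0.55  # P(claim|argument)

# The basic criterion: a good argument raises the claim's probability
is_good = p_claim_given_argument > p_claim

# The refinement mentioned above: how much it raises it, on a log scale,
# to be weighed against the effort of presenting and evaluating the argument
strength = math.log(p_claim_given_argument / p_claim)

print(is_good, round(strength, 3))  # True 0.318: positive but modest support
```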
I’m thinking of saying “By every deity in the full Tegmark ensemble” but maybe I should wait for a better time to use it.

Can I steal this?
Yes, by all means feel free.
The number of major ideas in epistemology is not very large. After Aristotle, there wasn’t very much innovation for a long time. It’s a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven’t seen anything of decent quality.
It works exactly identically to how Popperian epistemology creates any other kind of knowledge. There’s nothing special for morality.
Knowledge is created by an evolutionary process involving conjecture and refutation. By criticizing flaws in ideas, we seek to improve them (by making better conjectures we hope will eliminate the flaws).
I have a lot of familiarity with the other Popperians. But Popper and Deutsch are by far the best. There isn’t really anything non-Popperian that draws on Popper much. Everyone who has understood Popper is a Popperian, IMO. If you disagree, do tell.
Small is not a criticism; substance matters not length. Do you have a criticism of his arguments in LScD or not? Also he dealt with it elsewhere, as I stated.
Err, Bayesian probability doesn’t have anything special for morality either. People on LW tend to be moral non-realists, ie people who deny that there is objective moral knowledge, if that’s what you’re talking about (not sure- sorry!), but that’s completely orthogonal to this discussion: there’s nothing in Bayesianism that leads inevitably to non-realism. (Also, I’m not convinced that moral realism is right, so saying “Bayesianism leads to moral non-realism” isn’t a very effective argument.)
Bayesian epistemology doesn’t create moral knowledge because it only functions when fed in observation data (or assumptions). I get a lot of conflicting statements here, but some people tell me they only care about prediction, they are instrumentalists, and that is what Bayes stuff is for, and they don’t regard it as a bad thing that it doesn’t address morality at all.
Now what you have in mind, I think, is that if you make a ton of assumptions you could then talk about morality using Bayes. Popperism doesn’t require a bunch of arbitrary starting assumptions to create moral knowledge, it just can deal with it directly.
If I’m wrong, explain how you can deal with figuring out, e.g., what are good moral values to have (without assuming a utility function or something).
As I tried to say (and probably explained really poorly- sorry!), the LW consensus is that morality is not objective. Therefore, the idea of figuring out what good moral values would be is, according to moral non-realism, impossible: any decision about what a good moral value is must rely on your pre-existing values, if an objective morality is not out there to be discovered. Using this as a criticism of Bayesianism is sorta like criticizing thermodynamics because it claims it’s impossible to exactly specify the position and velocity of each particle: not only is the criticism unrelated to the subject matter, but satisfying it would require the theory to do something that is to the best of our knowledge incorrect.
I’m inclined to take this formula seriously, but I’d like to start by applying it to innate knowledge, knowledge we are born with, because here we are definitely talking about an evolutionary process involving mutation and natural selection. Some mutations add what amounts to a new innate conjecture (hypothesis, belief) into the cognitive architecture of the creature.
However, what occurs at this point is not that a creature with a false innate conjecture is eliminated. The creature isn’t being tested purely against reality in isolation. It’s being tested against other members of its species. The creature with the least-false, or least-perilously-false conjecture will tend to do better than the competitors. The competition for survival amounts to a competition between rival conjectures. The truest, or most-usefully-true, or least-wrong, or least-dangerously-wrong innate belief will tend to outdo its competitors and ultimately spread through the species. (With the odd usefully-wrong belief surviving.)
The occasional appearance of new innate conjectures resembles the conjecture part of Popperian conjecture and refutation. However, the contest between rival innate conjectures that occurs as the members of the species struggle against each other for survival seems less Popperian than Bayesian.
The relative success of the members of the species who carry the more successful hypothesis vaguely resembles Bayesian updating, because the winners increase their relative numbers and the losers decrease their relative numbers, which resembles the shift in the probabilities assigned to rival hypotheses that occurs in Bayesian updating. Consider the following substitutions applied to Bayes’ formula:
P(H|E) = P(E|H)P(H) / P(E)
P(H|E) is the new proportion (i.e. in the next generation) of the species carrying the hypothesis H, given that event E occurred (E is “everything that happened to the generation”)
P(E|H) is the degree to which H predicts and thus prepares the individual to handle E (measured in expected number of offspring given E)
P(H) is the old proportion (i.e. in the previous generation) of the species carrying H
P(E) is the degree to which the average member of the species predicts and thus is prepared to handle E (measured in expected number of offspring given E)
With these assignments, what the equation means is:
The new proportion of the species with H is equal to the old proportion of the species with H, times the expected number of offspring of members with H, divided by the expected number of offspring of the average member of the species.
One difference between this process and Bayesian updating is that this process allows the occasional introduction of new hypotheses over time, with what amounts to a modest but not vanishing initial prior.
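To make the substitutions concrete, here is a minimal Python sketch of one generation of this process (the proportions and fitness numbers are made up for illustration, not taken from anything above):

    # Two rival innate conjectures spreading through a species.
    p = {"H1": 0.3, "H2": 0.7}           # P(H): proportion carrying each hypothesis
    fitness = {"H1": 2.0, "H2": 1.25}    # P(E|H): expected offspring given event E

    # P(E): expected offspring of the average member of the species
    p_e = sum(p[h] * fitness[h] for h in p)

    # P(H|E): proportion carrying each hypothesis in the next generation
    next_gen = {h: fitness[h] * p[h] / p_e for h in p}
    print(next_gen)  # {'H1': 0.4068..., 'H2': 0.5932...}

The update is exactly Bayes’ formula: the conjecture that better “predicted” E grows its share of the species in proportion to how much it outperformed the species average.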
I’m not sure if we’re interested in the same stuff. But taking up one topic:
I think you regard innate/genetic ideas as important. I do not. Because people are universal knowledge creators, and can change any idea they start with, it doesn’t matter very much.
The reason people are so biased is not in their genes but their memes.
There are two major replication strategies that memes use.
1) A meme can be useful and rational; it spreads because of its value.
2) A meme can sabotage its holder’s creativity, to prevent him from criticizing it and to take away his choice not to spread it.
The second type dominated all cultures on Earth for a long time. The transition to the first type is incomplete.
More details on memes and universality can be found in The Beginning of Infinity by David Deutsch.
You misunderstand. I bring it up as a model of learning, and my choice was based on your own remarks. You said that knowledge is created by an evolutionary process. That way of putting it suggests an analogy with Darwin’s theory of evolution as proceeding by random variation and natural selection. And indeed there is an analogy between Popper’s conjectures and refutations and variation and natural selection, and it is this: a conjecture is something like variation (mutation), and refutation is something like natural selection.
However, what I found was that the closer I looked at knowledge which is actually acquired through natural selection—what we might call innate knowledge or instinctive knowledge—the more the process of acquisition resembled Bayesian updating rather than Popperian conjecture and refutation. I explained why.
In Bayesian updating, there are competing hypotheses, and the one for which actual events are less of a surprise (i.e., the hypothesis Hi for which P(E|Hi) is higher) is strengthened relative to the one for which events are more of a surprise. I find a parallel to this in competition among alleles under natural selection, which I described.
Essential to Bayesian updating is the coexistence of competing hypotheses, and essential to natural selection is the coexistence of competing variants in a species. In contrast, Popper talks about conjecture and refutation, which is a more lonely process that need not involve more than one conjecture and a set of observations which have the potential to falsify it. Popper talks about improving the conjecture in response to refutation, but this process more resembles Lamarckian evolution than Darwinian evolution, because in Lamarckian evolution the individuals improve themselves in response to environmental challenges, much as Popper would have us improve our conjectures in response to observational challenges. Also, in Lamarckian evolution, as in the Popperian process of conjecture and refutation, competing variants (compare: competing hypotheses) do not play an essential role (though I’m sure they could be introduced). Rather, the picture is of a single animal (compare: a single hypothesis) facing existential environmental challenges (compare: facing the potential for falsification) and improving itself in response (which improvement is passed to its offspring).
The Popperian process of conjecture, refutation, and improvement of the conjecture can, as it happens, be understood from a Bayesian standpoint. It does implement Bayesian updating in a certain way. Specifically, when a particular conjecture is refuted and the scientist modifies it, at that point there are two competing hypotheses. So at that point, the process of choosing between these two competing hypotheses can be characterized as Bayesian updating. The less successful hypothesis is weakened, and the more successful hypothesis is strengthened.
In short, if you want to take seriously the analogy that does exist between evolution through natural selection and knowledge acquisition of whatever type, then you may want to take a closer look at Bayesian updating as conforming more closely to the Darwinian model.
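To illustrate the point above about refutation as updating, here is a small Python sketch (the likelihoods are invented for illustration): once a refuted conjecture has a proposed replacement, choosing between the two can be written as a single Bayesian update on the refuting observation E.

    # The scientist's original conjecture and the modified one proposed after E.
    p = {"H_old": 0.5, "H_new": 0.5}            # no initial preference
    likelihood = {"H_old": 0.05, "H_new": 0.8}  # P(E|H): E nearly refutes H_old

    p_e = sum(p[h] * likelihood[h] for h in p)
    posterior = {h: p[h] * likelihood[h] / p_e for h in p}
    print(posterior)  # {'H_old': ~0.06, 'H_new': ~0.94}: the refuted conjecture is weakened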
I wasn’t talking about an analogy.
Evolution is a theory which applies to any type of replicator. It applies not by analogy but literally.
Make sense so far?
That only strengthens my argument.
You said we were discussing an analogy. That was a mistake. How can having made a mistake strengthen your argument? When you make a mistake, and find out, you should think: “Uh oh. Maybe I made 2, or 3. I’d better rethink things a bit more carefully. Maybe the mistake is caused by a misunderstanding that could cause multiple mistakes.” I don’t think glossing over mistakes is rational or wise.
Make sense so far?
Because if there is only an analogy between evolution and knowledge acquisition, there are some aspects of each that are not the same, and it is possible that these differences mean that the specific factor under consideration is not the same; but if the two processes are literally the same, that is not possible.
“How can having made a mistake strengthen your argument?”
Example: During WWII, many American leaders didn’t believe that Germany was actually committing massacres, having been disillusioned by similar but inaccurate WWI propaganda; however, they still believed that Nazi aggression was morally wrong. Later, the death camps were discovered. Clearly, given that they were mistaken in disbelieving in the Holocaust, they were mistaken in believing that the Nazis were morally wrong, because how can making a mistake strengthen your argument?
Your defects would be easier to tolerate if you were less arrogant. A bit of humility would go a long way to keeping the conversation going. My guess is that you picked up your approach because it led to your being the last person standing, winning by attrition—when in reality the other participants were simply too disgusted to continue.
As to a professional, I already referred you to Earman. Incidentally, you seem to be narrowing the claim somewhat. Note that I didn’t say that the set of major ideas in epistemology isn’t small; I referred to the much larger class of philosophical ideas (although I can see how that might not be clear from my wording). And that set is indeed very large. However, I think that your claim about “after Aristotle” is both wrong and misleading. There was a lot of thought about epistemological issues in both the Islamic and Christian worlds during the Middle Ages. Now, you might argue that that’s not helpful or relevant since it gets tangled up in theology and involves bad assumptions. But that’s not to say the material doesn’t exist. And that’s before we get to non-Western stuff (which admittedly I don’t know much about at all).
(I agree when you restrict to professionals, and have already recommended Earman to you.)
This is a deeply puzzling set of claims. First of all, a major point of his epistemological system is falsifiability based on data (at least as I understand it from LScD). How that would at all interact with moral issues is unclear to me. Indeed, the semi-canonical example of a non-falsifiable claim in the Popperian sense is Marxism, a set of ideas that has a large set of attached moral claims.
I also don’t see how this works given that moral claims can always be criticized by the essentially sociopathic argument “I don’t care. Why should you?” Obviously, that line of thinking can be/should be expanded. To use your earlier example, how would you discuss “murder is wrong” in a Popperian framework? I would suggest that this isn’t going to be any different than simply discussing moral ideas based on shared intuitions with particular attention to the edge cases. You’re welcome to expand on these claims, but right now, nothing you’ve said in this regard is remotely convincing or even helpful, since it amounts to just saying “well, do the same thing.”
I’m going to be obnoxious and quote a friend of mine “Everyone who understands Christianity is a Christian.” I don’t have any deep examples of other individuals although I would tentatively say that I understood Popper’s views in Logic of Scientific Discovery just fine.
Sure. The most obvious one is when he is discussing the law of large numbers and frequentist v. Bayesian interpretations. (Incidentally, to understand those passages it helps to note that he uses the term “subjective,” rather than “Bayesian,” to describe Bayesians. That was consistent with the language of the time, but in modern terminology “subjective” has a very different meaning: it distinguishes subjective from objective Bayesians.) In that section he argues (I don’t have the page number, unfortunately, since I’m using my Kindle edition; I have a hard copy somewhere but I don’t know where) that “it must be inadmissible to give after the deduction of Bernoulli’s theorem a meaning to p different from the one which was given to it before the deduction.” This is, simply put, wrong. Mathematicians prove something in one framework and then interpret it in another framework all the time. You just need to show that all the properties of the relevant frameworks overlap in sufficiently non-pathological cases. If someone wrote this as a complaint about, say, using the complex exponential to understand the symmetries of the Euclidean plane, we’d immediately see it as a bad claim. There’s an associated but subtler issue in this section: Popper doesn’t appreciate what you can do with measure theory, L_p spaces, and related ideas to move back and forth between different notions of probability and different metrics on spaces. That’s OK; it was a very new idea when he wrote LScD (although the connections were to some extent definitely there). But it does render a lot of what he says simply irrelevant or outright wrong.
Which you stated you had not read. I have rather low standards for recommendations of things to read, but “I never read it myself” isn’t good enough.
I don’t agree with “restrict to professionals”. How is it to be determined who is a professional? I don’t want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
That is a major point for scientific research where the problem “how do we use evidence?” is important. And the answer is “criticisms can refer to evidence”. Note by “science” here I mean any empirical field. What do you do in non-scientific fields? You simply make criticisms that don’t refer to evidence. Same method, just missing one type of criticism which is rather useful in science but not fundamental to the methodology.
It is not empirically falsifiable. It is criticizable. For example, Popper criticized Marx in The Open Society and Its Enemies.
Any argument which works against everything fails at the task of differentiating better and worse ideas. So it is a bad argument. So we can reject it and all other things in that category, by this criticism.
The short answer is: since we don’t care to have justified foundations, you can discuss it any way you like. You can say it’s bad because it hurts people. You can say it’s good because it prevents overpopulation. You can say it’s bad because it’s mean. These kinds of normal arguments, made by normal people, are not deemed automatically invalid and ignored. Many of them are indeed mistakes. But some make good points.
For more on morality, please join this discussion:
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7
He has like 20 books. There’s way more to it. When one reads a lot of them, a whole worldview comes across that is very hard to understand from just a couple books. And I wasn’t trying to argue with that statement, I was just commenting. I mentioned it because of a comment to do with whether I had studied results of non-Popperians using Popperian ideas.
Are you really telling me that you can prove something, then take the conclusion, redefine a term, work with that, and consider it still proven? You could only do that if you created a second proof that the change doesn’t break anything; you can’t just do it. I’m not sure you took what Popper was saying literally enough; I don’t think your later examples actually do what he criticized. Changing the meaning of a term in a conclusion statement, and considering a conclusion from a different perspective, are different things.
Would you understand if I said this has no relevance at all to 99.99% of Popper’s philosophy? Note that his later books generally have considerably less mention of math or logic.
Earman is a philosopher and the book has gotten positive reviews from other philosophers. I don’t know what else to say in that regard.
Hrrm? You mentioned professionals first. I’m not sure why you are now objecting to the use of professionals as a relevant category.
I’m not at all convinced that this is what Popper intended (but again, I’ve only read LScD), but if this is accurate then Popper isn’t just wrong in an interesting way but is just wrong. Does one mean, for example, to claim that pure mathematics works off of criticism? I’m a mathematician. We don’t do this. Moreover, it isn’t clear what it would even mean for us to try to do this as our primary method of inquiry. Are we supposed to spend all our time going through pre-existing proofs trying to find holes in them?
Yes, and I’m quite sure that I get much more of a worldview if I read all of Hegel rather than just some of it. That doesn’t mean I need to read all of it. Similar remarks would apply to Aquinas or more starkly the New Testament. Do you need to read all of the New Testament to decide that Christianity is bunk? Do you need to read the entire Talmud to decide that Judaism is incorrect? But you get a whole worldview that you don’t obtain from just reading the major texts.
Right, and then we just get the criticism “why bother?” or “and how does that maximize the number of paperclips in the universe?” Or one can say “mean,” “good,” and “bad” are all hideously ill-defined. In any event, does it not bother you that you are essentially claiming that your moral discussion with your great epistemological system looks just like a discussion about morality by a bunch of random individuals? There’s nothing in the above that uses your epistemology in any substantial way.
Right! And conveniently in the case Popper cares about you can prove that.
Do you mean understand or do you mean care? I don’t understand why you are making this statement given that my remark was addressing the question you asked of whether I had specific problems with Popper’s handling of Bayesianism in LScD. This is a specific problem there.
I don’t know what Popper himself would say, but one of his more insightful followers, namely Lakatos, argues for exactly that position.
I read Proofs and Refutations too many years ago to say anything precise about it. I remember finding it interesting but also frustrating. Lakatos seems determined to ignore/deny/downplay the fact of mathematical practice that we only call something a ‘theorem’ when we’ve got a proof, and we only call something a ‘proof’ when it’s logically watertight in such a way that no ‘refutations’ are possible. Still, it’s well-researched (in its use of a historical case study) and he comes up with some decent ideas along the way (e.g. about “monster barring” and “proof-oriented definitions”).
Yes, Lakatos does argue for that in a certain fashion (and I suppose it is right to bring this up, since I’ve myself repeatedly pointed people here on LW to Lakatos when they think that math is completely reliable). However, Lakatos took a more nuanced position than the one curi is apparently taking, that math advances solely through this method of criticism. I also think Lakatos is wrong insofar as the examples he uses are not actually representative samples of what the vast majority of mathematics looks like. Euler’s formula is an extreme example, and it is telling that when one wants to give other similar examples one often reaches for other topological claims from before 1900 or so.
yes
Instead, you make appeals to authority?
You are confused about what that means. An appeal to authority is not intrinsically fallacious. An appeal to authority is problematic when the authority is irrelevant (e.g. a celebrity who plays a doctor on TV endorsing a product) or when one is claiming that one has a valid deduction in some logical system. Someone making an observation about what people in their profession actually do is not a bad appeal to authority in the same way. In any event, you ignored the next line of my comment:
If you do think that mathematicians use Popperian reasoning then please explain how we do it.
It is in Popperian epistemology.
Could you point me to a Bayesian source that says they are OK? I’d love to have a quote of Yudkowsky advocating appeals to authority, for instance. Or could others comment? Do most people here think appeals to authority are good arguments?
An appeal to authority is not logically airtight, and if logic is about mathematical proofs, then it’s going to be a fallacy. But an appeal to an appropriate authority gives Bayesians strong evidence, provided that P(X|Authority believes X) is sufficiently high. In many fields, authorities have sufficient track records that appeals to authority are good arguments. In other fields, not so much.
Of course, the Appeal to Insufficient Force fallacy is a different story from the Appeal to Inappropriate Authority.
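For concreteness, a minimal Python sketch of that kind of update (all the numbers are hypothetical, not anyone’s actual estimates):

    prior = 0.5                  # P(X) before hearing from the authority
    p_assert_given_x = 0.9       # P(authority asserts X | X): a good track record
    p_assert_given_not_x = 0.2   # P(authority asserts X | not X)

    p_assert = prior * p_assert_given_x + (1 - prior) * p_assert_given_not_x
    posterior = prior * p_assert_given_x / p_assert
    print(posterior)  # ~0.82: strong evidence, though not a deductive proof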
How do you judge:
P(X|Authority believes X)
In general I judge it very low. Certainly in this case.
Can you provide a link to Yudkowsky or any well known Bayesian advocating appeals to authority?
Track record of statements/predictions, taking into account the prior likelihood of previous predictions and prior likelihood of current prediction.
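As a sketch of what that could look like (the track-record data is invented, and this is just one possible scoring scheme, not a method anyone here has endorsed):

    # Each entry: (prior probability of the claim, whether the expert called it right).
    track_record = [(0.5, True), (0.2, True), (0.8, True), (0.3, False), (0.1, True)]

    hits = sum(right for _, right in track_record)
    reliability = (hits + 1) / (len(track_record) + 2)  # Laplace-smoothed hit rate
    # Being right about a priori unlikely claims should count for more:
    surprise_credit = sum((1 - pr) if right else -pr for pr, right in track_record)
    print(reliability, surprise_credit)  # ~0.71 and 2.1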
Are you asking us to justify appeals to authority by using an appeal to authority?
edit per wedrifid
I would have said ‘prior’, not ‘a priori’.
No lol. I just wanted one to read. Some of my friends will be interested in it too.
Since the guy who made the appeal to authority has little track record with me, and little of it good in my view, why would he expect me to concede to his appeal to authority?
Robin Hanson does so here.
Too much ambiguity there. e.g. the word authority isn’t used.
This is silly. Whether or not he uses the word authority does not change the fact that he is suggesting we treat the opinions of experts as more accurate than our own opinions.
I had a lot of respect for you before you made this comment, but you have now lost most of it.
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument? I don’t think so, and my friends won’t either. And they won’t know whether Hanson thinks so.
For the purposes I wanted to use it for, this will not work well.
One thing I know about some of my friends is that they consider the word “authority” to be very nasty, but the word “expert” to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson’s use of the expertise terminology, instead of the authority terminology, will matter to them. Explaining that he meant what they call authority will add complexity—and scope for argument—and be distracting. And people will find it boring and ignore it as a terminological debate.
And I’m not even quite sure what Hanson did mean. I don’t think what he meant is identical to what the commenter I was speaking to meant.
Hanson speaks of, for example, “if you plan to mostly ignore the experts”. That you shouldn’t ignore them is a different claim than that appeals to their authority are good arguments.
He’s stated before, I’m not sure where, that if you believe an expert has more knowledge about an issue than you then you should prefer their opinions to any argument you generate. This is because if they disagree with you it is almost certainly because they have considered and rejected your argument, not because they have not considered your argument.
If your friends cannot differentiate between the content of an argument and its surface appearance then I would advise you find new friends [/facetious].
They can, but some won’t be interested in researching this.
I think Hanson’s approach to experts (as you describe it) is irrational because it abdicates from thinking. And in particular, if you think you don’t know what you’re talking about (i.e. think your argument isn’t good enough) then don’t use it, but if you think otherwise you should respect your own mind (if you’re wrong to think otherwise, convince yourself).
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
Hanson would suggest that this is pure, unjustified arrogance. I’m not sure I agree with him; I struggle to fault the argument, but it’s still a pretty tough bullet to bite.
Have you heard of the Outside View? Hanson’s a big fan of it, and if you don’t know about it his thought process won’t always make much sense.
You could go with the consensus, or with the majority, or you could come up with a procedure for judging which are most trustworthy. If the experts can’t resolve this issue what makes you think you can? More importantly, if you know less than the average expert, then aren’t you better off just picking one expert at random rather than trusting yourself?
Is the majority of experts usually right? I don’t think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
no
Why not learn something? Why not use your mind? I don’t think that thinking for yourself is arrogant.
In my experience reading (e.g.) academic papers, most experts are incompetent. The single issue of misquoting is ubiquitous: people publish misquotes even in peer-reviewed journals. E.g. I discovered a fraudulent Edmund Burke quote which was used in a bunch of articles. Harry Binswanger (an Objectivist expert) posted misquotes (both getting the source wrong, and inserting bracketed explanatory text, meant to explain context, which was dead wrong). Thomas Sowell misquoted Godwin in a book that discussed Godwin at length.
I can sometimes think better than experts, in their own field, in 15 minutes. In cases where I should listen to expert advice, I do so without disagreeing with the expert and overruling my judgment (e.g. I’m not a good cook; when I don’t know how to make something I use a recipe. I don’t think I know the answer, so I don’t get overruled. I can tell the difference between when I have an opinion that matters and when I don’t).
In the case of cooking, I think the experts I use would approach the issue in the same way I would if I learned the field myself (in the relevant respects). For example, they would ask the same questions I am interested in, like, “If I test this recipe out, does it taste good?” Since I think they already did the same work I would do, there’s no need to reinvent the wheel. In other cases, I don’t think experts have addressed the issue in a way that satisfies me, so I don’t blindly accept their ideas.
To be honest I’m not exactly a passionate Hansonian, I read his blog avidly because what he has to say is almost always original, but if you want to find a proponent of his to argue with you may need to look elsewhere. Still, I can play devil’s advocate if you want.
At any time, most things are not changing, so most experts will be right about most things. Anyway, the question isn’t whether experts are right, it’s why you think you are more reliable.
Brief introduction to the Outside View:
Cognitive scientists investigating the planning fallacy (in which people consistently and massively underestimate the amount of time it will take them to finish a project) decided to try to find a ‘cure’. In a surprising twist, they succeeded. If you ask the subject “how long have similar projects taken you in the past?” and only then ask “how long do you expect this project to take?”, the bias is dramatically reduced.
They attributed this to the fact that in the initial experiment students had been taking the ‘inside view’ of their project. They had been examining each individual part on its own and imagining how long it was likely to take; they made the mistake of failing to imagine enough unexpected delays. If they instead took the outside view, by looking at other similar projects and seeing how long those took, they ended up implicitly taking those unexpected delays into account, because most of those other projects encountered delays of their own.
In general, the outside view says “don’t focus on specifics, you will end up ignoring unexpected confounding elements from outside your model. Instead, consider the broad reference class of problems to which this problem belongs and reason from them”.
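A toy numerical illustration in Python (all numbers made up):

    # Inside view: sum your imagined subtasks. Outside view: look at the
    # reference class of similar past projects, delays and all.
    inside_view = sum([2, 3, 1, 2])       # days, from imagining each step
    past_projects = [12, 9, 15, 11, 20]   # days similar projects actually took

    outside_view = sorted(past_projects)[len(past_projects) // 2]  # median
    print(inside_view, outside_view)  # 8 vs. 12: the reference class bakes in the delays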
Looking at your third-to-last paragraph, I can see a possible application of it. You belong to the broad reference class of “people who think they have proven an expert wrong”. Most such people are either crackpots, confused, or misinformed. You don’t think of yourself as any of these things, but neither do most such people. Therefore you should perhaps give your own opinions less weight.
(Not a personal attack. I do not mean to imply that you actually are a crackpot, confused, or misinformed; for all I know you may be absolutely right. I’m just demonstrating the principle.)
This very liberal use of the theory has come under criticism from other Bayesians, including Yudkowsky. One of its problems is that it is not always clear which reference class to use.
A more serious problem comes when you apply it to its logical extreme. If we take the reference class “people who have believed themselves to be Napoleon”, then most of them were/are insane. Does this mean Napoleon himself should have applied the outside view and concluded that he was probably insane?
Like I said, tough bullet to bite.
This question is incompatible with Popperian philosophy. Ideas haven’t got reliability, which is just another word for justification. Trying to give it to them leads to problems like regress.
What we do instead is act on our best knowledge without knowing how reliable it is. That means preferring ideas which we don’t see anything wrong with to those that we do see something wrong with.
When you do see something wrong with an expert view, but not with your own view, it’s irrational to do something you expect not to work over something you expect to work. Of course, if you use double standards for criticizing your own ideas and other people’s, you will go wrong. But the solution to that isn’t deferring to experts, it’s improving your mind.
Or maybe they have become experts by thinking well. How does one get expert status anyway? Surely if I think I can do better than people with college degrees at various things, that’s not too dumb. I’m e.g. a better programmer than many people with degrees. I have a pretty good sense of how much people do and don’t learn in college, and how much work it is to learn more on one’s own. The credential system isn’t very accurate.
Edit: PS, please don’t argue stuff you don’t think is true. If no true believers want to argue it, then shrug.
You seemed curious so I explained.
Incidentally, someone has been downvoting Curi’s comments and upvoting mine, would they like to step forward and make the case? I’m intrigued to see some of his criticisms answered.
I suspect that the individuals who are downvoting curi’s remarks in this subthread are doing so because much of what he is saying consists of things he has already said elsewhere, and people are getting annoyed at him. I suspect that his comments are also being downvoted because he first used the term “authority” and then tried to make a distinction between “expertise” and “authority”, when under his definition the first use of such an argument would seem to fall under what he classifies as expertise. Finally, I suspect that his comments in this subthread have been downvoted for his apparent general arrogance regarding subject-matter experts, such as his claim that “I can sometimes think better than experts, in their own field, in 15 minutes.”
What do you mean by good argument? The Bayesians have an answer to this: they mean that P(claim|argument) > P(claim). Now, one might argue in that framework that if P(claim|argument)/P(claim) is close to 1 then this isn’t a good argument, or that if log(P(claim|argument)/P(claim)) is small compared to the effort to present and evaluate the argument then it isn’t a good argument.
However, that’s obviously not what you mean. It isn’t clear to me what you mean by “good argument” and how this connects to the notion of a fallacy. Please expand your definitions or taboo the terms.
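A minimal sketch of those two proposed measures in Python (the probabilities are placeholders):

    import math

    p_claim = 0.4             # P(claim)
    p_claim_given_arg = 0.6   # P(claim | argument)

    ratio = p_claim_given_arg / p_claim   # > 1: the argument supports the claim
    strength = math.log(ratio)            # near 0: a valid but weak argument
    print(ratio, strength)                # 1.5, ~0.405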