You are confused about what that means. An appeal to authority is not intrinsically fallacious. An appeal to authority is problematic when the authority is irrelevant (e.g. a celebrity who plays a doctor on TV endorsing a product) or when one is claiming that one has a valid deduction in some logical system. Someone making an observation about what people in their profession actually do is not a bad appeal to authority in the same way. In any event, you ignored the next line of my comment:
Moreover, it isn’t clear what it would even mean for us to try to do this as our primary method of inquiry. Are we supposed to spend all our time going through pre-existing proofs trying to find holes in them?
If you do think that mathematicians use Popperian reasoning then please explain how we do it.
An appeal to authority is not intrinsically fallacious.
It is in Popperian epistemology.
Could you point me to a Bayesian source that says they are OK? I’d love to have a quote of Yudkowsky advocating appeals to authority, for instance. Or could others comment? Do most people here think appeals to authority are good arguments?
An appeal to authority is not logically airtight, and if logic is about mathematical proofs, then it’s going to be a fallacy. But an appeal to an appropriate authority gives Bayesians strong evidence, provided that P(X|Authority believes X) is sufficiently high. In many fields, authorities have sufficient track records that appeals to authority are good arguments. In other fields, not so much.
Of course, the Appeal to Insufficient Force fallacy is a different story from the Appeal to Inappropriate Authority.
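The Bayesian point above can be made concrete with a toy calculation (all numbers here are hypothetical stand-ins for a real track record, chosen only for illustration): if an authority’s assertions track the truth reasonably well, conditioning on “authority believes X” raises the probability of X without proving it.

```python
# Toy illustration of P(X | authority asserts X) via Bayes' rule.
# All numbers are hypothetical, standing in for a measured track record.

prior_x = 0.5               # P(X) before hearing from the authority
p_assert_given_x = 0.9      # P(asserts X | X): a good track record
p_assert_given_not_x = 0.2  # P(asserts X | not X): occasional error

# P(authority asserts X), by the law of total probability
p_assert = p_assert_given_x * prior_x + p_assert_given_not_x * (1 - prior_x)

# Bayes' rule: the assertion is strong evidence, not a deduction
posterior_x = p_assert_given_x * prior_x / p_assert

print(f"P(X | authority asserts X) = {posterior_x:.3f}")  # ~0.818, up from 0.5
```

With a weaker track record (say, p_assert_given_not_x close to p_assert_given_x), the same calculation moves the posterior barely at all, which matches the “in other fields, not so much” caveat.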
Track record of statements/predictions, taking into account the prior likelihood of previous predictions and prior likelihood of current prediction.
edit per wedrifid
Are you asking us to justify appeals to authority by using an appeal to authority?
No lol. I just wanted one to read. Some of my friends will be interested in it too.
Track record of statements/predictions
Since the guy who made the appeal to authority has little track record with me, and little of it good in my view, why would he expect me to concede to his appeal to authority?
This is silly. Whether or not he uses the word authority does not change the fact he is suggesting that we treat the opinions of experts as more accurate than our own opinions.
I had a lot of respect for you before you made this comment, but you have now lost most of it.
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument? I don’t think so and my friends won’t. They won’t know if Hanson thinks so.
For the purposes I wanted to use it for, this will not work well.
One thing I know about some of my friends is that they consider the word “authority” to be very nasty, but the word “expert” to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson’s use of the expertise terminology, instead of the authority terminology, will matter to them. Explaining that he meant what they call authority will add complexity—and scope for argument—and be distracting. And people will find it boring and ignore it as a terminological debate.
And I’m not even quite sure what Hanson did mean. I don’t think what he meant is identical to what the commenter I was speaking to meant.
Hanson speaks of, for example, “if you plan to mostly ignore the experts”. That you shouldn’t ignore them is a different claim than that appeals to their authority are good arguments.
He’s stated before, I’m not sure where, that if you believe an expert has more knowledge about an issue than you then you should prefer their opinions to any argument you generate. This is because if they disagree with you it is almost certainly because they have considered and rejected your argument, not because they have not considered your argument.
One thing I know about some of my friends is that they consider the word “authority” to be very nasty, but the word “expert” to be OK. They specifically differentiate between expertise (a legitimate concept) and authority (an illegitimate concept). Hanson’s use of the expertise terminology, instead of the authority terminology, will matter to them.
If your friends cannot differentiate between the content of an argument and its surface appearance then I would advise you find new friends [/facetious].
They can, but some won’t be interested in researching this.
I think Hanson’s approach to experts (as you describe it) is irrational because it abdicates from thinking. And in particular, if you think you don’t know what you’re talking about (i.e. think your argument isn’t good enough) then don’t use it, but if you think otherwise you should respect your own mind (if you’re wrong to think otherwise, convince yourself).
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
if you think otherwise you should respect your own mind (if you’re wrong to think otherwise, convince yourself).
Hanson would suggest that this is pure, unjustified arrogance. I’m not sure I agree with him; I struggle to fault the argument, but it’s still a pretty tough bullet to bite.
Have you heard of the Outside View? Hanson’s a big fan of it, and if you don’t know about it his thought process won’t always make much sense.
Besides, in all the interesting real cases, there are experts advocating things on both sides. One expert disagrees with you. Another reaches the same conclusion as you. What now?
You could go with the consensus, or with the majority, or you could come up with a procedure for judging which are most trustworthy. If the experts can’t resolve this issue what makes you think you can? More importantly, if you know less than the average expert, then aren’t you better off just picking one expert at random rather than trusting yourself?
Is the majority of experts usually right? I don’t think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
Have you heard of the Outside View?
no
if you know less than the average expert, then aren’t you better off just picking one expert at random rather than trusting yourself?
Why not learn something? Why not use your mind? I don’t think that thinking for yourself is arrogant.
In my experience reading (e.g.) academic papers, most experts are incompetent. The single issue of misquoting is ubiquitous: people publish misquotes even in peer-reviewed journals. E.g. I discovered a fraudulent Edmund Burke quote which was used in a bunch of articles. Harry Binswanger (an Objectivist expert) posted misquotes (both getting the source wrong, and inserting bracketed explanatory text, meant to explain context, that was dead wrong). Thomas Sowell misquoted Godwin in a book that discussed Godwin at length.
I can sometimes think better than experts, in their own field, in 15 minutes. In cases where I should listen to expert advice, I do so without disagreeing with the expert and overruling my judgment (e.g. I’m not a good cook; when I don’t know how to make something I use a recipe. I don’t think I know the answer, so my judgment doesn’t get overruled. I can tell the difference between when I have an opinion that matters and when I don’t).
In the case of cooking, I think the experts I use would approach the issue in the same way I would if I learned the field myself (in the relevant respects). For example, they would ask the same questions I am interested in, like, “If I test this recipe out, does it taste good?” Since I think they already did the same work I would do, there’s no need to reinvent the wheel. In other cases, I don’t think experts have addressed the issue in a way that satisfies me, so I don’t blindly accept their ideas.
To be honest I’m not exactly a passionate Hansonian; I read his blog avidly because what he has to say is almost always original. But if you want to find a proponent of his to argue with, you may need to look elsewhere. Still, I can play devil’s advocate if you want.
Is the majority of experts usually right? I don’t think so. Whenever there is a new idea, which is an improvement, usually for a while a minority believe it. In a society with rapid progress, this is a common state.
At any time, most things are not changing, so most experts will be right about most things. Anyway, the question isn’t whether experts are right, it’s why you think you are more reliable.
Brief introduction to the Outside View:
Cognitive scientists investigating the planning fallacy (in which people consistently and massively underestimate the amount of time it will take them to finish a project) decided to try to find a ‘cure’. In a surprising twist, they succeeded. If you ask the subject “how long have similar projects taken you in the past” and only then ask “how long do you expect this project to take”, the bias is dramatically reduced.
They attributed this to the fact that in the initial experiment students had been taking the ‘inside view’ of their project. They had been examining each individual part on its own, and imagining how long it was likely to take. They made the mistake of failing to imagine enough unexpected delays. If they instead took the outside view, by looking at other similar projects and seeing how long they took, then they ended up implicitly taking those unexpected delays into account, because most of those other projects encountered delays of their own.
In general, the outside view says “don’t focus on specifics, you will end up ignoring unexpected confounding elements from outside your model. Instead, consider the broad reference class of problems to which this problem belongs and reason from them”.
Looking at your third-to-last paragraph I can see a possible application of it. You belong to the broad reference class of “people who think they have proven an expert wrong”. Most such people are either crackpots, confused, or misinformed. You don’t think of yourself as any of these things, but neither do most such people. Therefore you should perhaps give your own opinions less weight.
(Not a personal attack. I do not mean to imply that you actually are a crackpot, confused, or misinformed; for all I know you may be absolutely right. I’m just demonstrating the principle.)
This very liberal use of the theory has come under criticism from other Bayesians, including Yudkowsky. One of its problems is that it is not always clear which reference class to use.
A more serious problem comes when you apply it to its logical extreme. If we take the reference class “people who have believed themselves to be Napoleon”, then most of them were/are insane. Does this mean Napoleon himself should have applied the outside view and concluded that he was probably insane?
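The inside-view/outside-view contrast described above can be sketched numerically (all figures made up for illustration): the inside view sums optimistic per-task estimates, while the outside view just asks how long similar past projects in the reference class actually took, delays included.

```python
# Toy contrast of inside view vs outside view for a project estimate.
# All figures are hypothetical.

from statistics import median

# Inside view: break the project into parts and add up best guesses,
# which tends to omit unexpected delays outside the model.
task_estimates_days = [2, 3, 1, 4]        # hypothetical per-task guesses
inside_view = sum(task_estimates_days)    # optimistic total: 10 days

# Outside view: ignore the specifics; look at how long similar past
# projects actually took, unexpected delays and all.
past_project_days = [14, 18, 11, 25, 16]  # hypothetical reference class
outside_view = median(past_project_days)  # 16 days

print(inside_view, outside_view)  # the outside view predicts a longer duration
```

The reference-class ambiguity mentioned above shows up directly in the choice of `past_project_days`: pick a different set of “similar” projects and the outside-view number changes.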
Why not learn something? Why not use your mind? I don’t think that thinking for yourself is arrogant.
Anyway, the question isn’t whether experts are right, it’s why you think you are more reliable.
This question is incompatible with Popperian philosophy. Ideas haven’t got reliability, which is just another word for justification. Trying to give it to them leads to problems like regress.
What we do instead is act on our best knowledge without knowing how reliable it is. That means preferring ideas which we don’t see anything wrong with to those that we do see something wrong with.
When you do see something wrong with an expert view, but not with your own view, it’s irrational to do the thing you expect not to work over the thing you expect to work. Of course, if you use double standards for criticism of your own ideas and other people’s, you will go wrong. But the solution to that isn’t deferring to experts; it’s improving your mind.
Most such people are either crackpots, confused or misinformed.
Or maybe they have become experts by thinking well. How does one get expert status anyway? Surely if I think I can do better than people with college degrees at various things, that’s not too dumb. I’m e.g. a better programmer than many people with degrees. I have a pretty good sense of how much people do and don’t learn in college, and how much work it is to learn more on one’s own. The credential system isn’t very accurate.
edit: PS please don’t argue stuff you don’t think is true. if no true believers want to argue it, then shrug.
Incidentally, someone has been downvoting Curi’s comments and upvoting mine, would they like to step forward and make the case? I’m intrigued to see some of his criticisms answered.
I suspect that the individuals who are downvoting curi’s remarks in this subthread are doing so because much of what he is saying repeats things he has already said elsewhere, and people are getting annoyed at him. I suspect that his comments are also being downvoted because he first used the term “authority” and then tried to make a distinction between “expertise” and “authority”, when under his definition the first use of such an argument would seem to fall under what he classifies as expertise. Finally, I suspect that his comments in this subthread have been downvoted for his apparent general arrogance regarding subject-matter experts, such as his claim that “I can sometimes think better than experts, in their own field, in 15 minutes.”
The idea that appeals to authority are good arguments is not identical to the idea that the opinions of experts are more accurate. Suppose they are more accurate, on average. Does that make appealing to one a good argument?
What do you mean by good argument? The Bayesians have an answer to this. They mean that P(claim|argument) > P(claim). Now, one might argue in that framework that if P(claim|argument)/P(claim) is close to 1 then this isn’t a good argument, or that if log P(claim|argument)/P(claim) is small compared to the effort to present and evaluate the argument then it isn’t a good argument.
However, that’s obviously not what you mean. It isn’t clear to me what you mean by “good argument” and how this connects to the notion of a fallacy. Please expand your definitions or taboo the terms.
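That Bayesian criterion can be written out directly (the probabilities below are hypothetical, chosen only to illustrate the definition): an argument counts as evidence for a claim when P(claim|argument) > P(claim), and the log of the ratio measures how much work the argument actually does.

```python
import math

# Bayesian "good argument" criterion, with hypothetical numbers:
# an argument supports a claim iff P(claim | argument) > P(claim).
p_claim = 0.3                 # P(claim) before hearing the argument
p_claim_given_argument = 0.6  # P(claim | argument)

ratio = p_claim_given_argument / p_claim  # 2.0: the argument doubles the probability
log_evidence = math.log2(ratio)           # near 0 would mean the argument barely matters
is_good = p_claim_given_argument > p_claim

print(ratio, log_evidence, is_good)
```

The second caveat in the comment above corresponds to comparing `log_evidence` against the cost of presenting and evaluating the argument: a technically positive but tiny ratio may not be worth anyone’s time.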
yes
Instead, you make appeals to authority?
How do you judge:
P(X|Authority believes X)
In general I judge it very low. Certainly in this case.
Can you provide a link to Yudkowsky or any well known Bayesian advocating appeals to authority?
I would have said ‘prior’, not ‘a priori’.
Robin Hanson does so here.
Too much ambiguity there. e.g. the word authority isn’t used.
Like I said, tough bullet to bite.
You seemed curious so I explained.