I strongly endorse drawing a distinction, but I think I want to draw it a bit differently. The reason is that I feel like I would still defend the smart-people quote as non-fallacy-like if someone else had said it, and if that’s true, it can’t be because I have some respect for Yang as a thinker.
How about this (which I give you total credit for because I only came up with it after reading your comment):
(1) A statement is, by itself, not evidence to a rational observer who doesn’t know the speaker, but it’s possible that adding more information could turn it into evidence
(2) A statement contains enough information for a rational observer to conclude that the argument is fallacious
I would agree that #1 applies to Yang’s quote, and if that’s sufficient for being a fallacy, then Yang’s quote is a fallacy. There is some logic to this, since people could mistakenly treat the quote as evidence all by itself. However, I myself often say things for which #1 applies, and I believe that a lot of pretty rational people do as well. Then again, perhaps some don’t. I probably do it much less on LW than on other sites where I put much less effort into my posts.
I think the explanation for my intuition that the quote’s inclusion in the book is stupid is that avoiding #1 is a relatively high standard, and in fact lots of politicians routinely fail #2. I bet you could even find clear-cut examples of politicians failing #2 with regard to arguments from authority. There just is a difference between “this is stupid” and “this is incomplete and may or may not be stupid once I hear the rest”, and that difference seems to capture my reaction even on reflection.
Conversely, I feel like the personality of the speaker should not be an input to the fallacy-deciding function. I agree with everything in your last four paragraphs, but I think #1 remains less bad than #2 even if you think it’s unlikely that additional information would make the argument non-stupid.
I think we’re getting closer! Here’s even another alternative.
Let’s first admit that arguments can gain information-content both from their text and from their context. Through text and context, they can define, describe, and make claims and predictions.
An appropriate argument from authority sufficiently defines (via text and context) a meaningful authority figure, describes with sufficient accuracy their level of credibility on the subject, and makes a sufficiently specific claim. We expect that putting in the effort to update on this claim will improve the accuracy of our own predictions. Furthermore, once the prediction resolves, we will use that result to increase or decrease the credibility we ascribe to that authority figure.
A fallacious argument from authority fails one or more of these tests of sufficiency, even when taking context into account. The authority figure may be too vaguely referenced, their credibility may be exaggerated, or the claim may be too imprecise. The accuracy of our predictions will be worsened if we update based on this claim, and the resolution of the claim does not allow us to update the credibility we ascribe to the referenced authority figure.
Another possible division is:
A non-fallacious argument from authority creates an expectation that more context about the referenced authority figure would help us better assess the truth-content of the claim.
A fallacious argument from authority creates an expectation that further context would make the quote feel just as stupid as it seemed before.
I think it’s only worth worrying about these divisions and fearing arguments from authority in a relatively serious context. If you’re having dinner with your friends and happen to vaguely reference an authority figure to back your claim, that’s fine. It ain’t that deep.
But if a serious statesman does it on TV in the context of a debate or speech, then we have every right to complain that their claims contain fallacious arguments from authority.
I think that Yang’s quote has a serious-enough context, is stupid-sounding enough on its own to make me uninterested in more context, and fails all these tests of sufficiency. For that reason, I consider it fallacious. It ticks all three boxes on my “fallacy checklist.”
I suspect that when you make potentially-fallacious arguments from authority, they’re usually in a relatively non-serious context, and that your audience believes that, if they took the time to interrogate you about the authority figures you reference, they’d be persuaded that those figures are plausibly meritorious, even though you were vague in your initial presentation. Hence, you would probably not be committing a fallacy, in the way I am defining it here.
Taking your second division:

[...] creates an expectation that more context about the referenced authority figure would help us better assess the truth-content of the claim.
[...] creates an expectation that further context would make the quote feel just as stupid as it seemed before.
I want to add 3
[...] contains enough information that you already know the quote is stupid
You implied Yang’s quote is #2; I would say it’s #1. But (more importantly?) I would draw the bigger distinction between #2 and #3, and I don’t see any reason to choose something in #2 as an example of irrationality when examples in #3 are available. This, I think, is still my main argument.
I agree with everything else in your post; in particular, with the importance of context which I hadn’t considered. I concede that my behavior isn’t analogous to that of Yang in this example.
Glad we are converging on some sort of agreement! I find this helpful to talk out with you.
I’m not clear on the distinction between #2 and #3. What’s the difference between predicting the quote will still seem stupid after further research, and finding the quote to be stupid now? By conservation of expected evidence, aren’t they the same thing?
One difference is just uncertainty. Conservation of expected evidence only works if you’re certain. I think I was assuming that your probability on Yang being unable to expand on his claim meaningfully is like ~67%, not like 98%.
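The conservation-of-expected-evidence identity under discussion can be checked with a quick numerical sketch. All probabilities below are made up for illustration (the ~67% figure echoes the one above; the likelihoods are hypothetical):

```python
# Conservation of expected evidence: the prior equals the
# probability-weighted average of the possible posteriors.
# All numbers are illustrative, not taken from the discussion.

p_stupid = 0.67               # prior: the quote is a vapid appeal to authority
p_confirm_if_stupid = 0.9     # P(further research finds nothing | vapid)
p_confirm_if_not = 0.2        # P(further research finds nothing | substantive)

# Total probability that research turns up nothing
p_confirm = (p_stupid * p_confirm_if_stupid
             + (1 - p_stupid) * p_confirm_if_not)

# Posterior in each branch, via Bayes' rule
post_if_confirm = p_stupid * p_confirm_if_stupid / p_confirm
post_if_disconfirm = (p_stupid * (1 - p_confirm_if_stupid)
                      / (1 - p_confirm))

# Expected posterior equals the prior, whatever numbers are chosen
expected_posterior = (p_confirm * post_if_confirm
                      + (1 - p_confirm) * post_if_disconfirm)
assert abs(expected_posterior - p_stupid) < 1e-9
```

The identity holds for any choice of numbers; what a prior of ~67% rather than ~98% changes is how far the two branch posteriors can sit apart, which is the uncertainty being pointed at here.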
What I suspect happened is that he talked to various big names in tech, including the CEOs of companies who make decisions about automation, and they were bullish on the timelines. Would that kind of scenario qualify as being non-stupid?
The other difference is that you can’t demonstrate it. The leading question wasn’t about whether one should update from the quote; it was whether it’s a good idea to choose the quote as a negative example in a book about rationality. Even if Steven Pinker were 100% sure that there is no reason to update here, it’s still not a good example if he can’t prove it. I mean, if you accept that the quote could plausibly be non-stupid, then the honest way to talk about it would have been “Here is Yang saying {quote}. If it turns out that ‘some of the smartest people in the world’ is made-up or refers to people who aren’t actually impressive, this will have been an example of a vapid appeal-to-authority. I can’t prove that this is so, but here are my reasons for suspecting it.” And again, I can’t imagine it’s hard to find examples of appeals-to-authority that are clear-cut fallacies.
What I suspect happened is that he talked to various big names in tech, including the CEOs of companies who make decisions about automation, and they were bullish on the timelines. Would that kind of scenario qualify as being non-stupid?
No. That’s a good point. It seems like “fallacious argument from authority” lends itself to black-and-white thinking that’s just not appropriate in many cases. Reading the tea leaves has its value. If I had to guess, Pinker was looking for a timely quote by a politician his readers are likely to be sympathetic to, and this one was convenient.
I still think that there are many times when it’s best as a rule to just dismiss statements with the form of “arguments from authority.” This fits the criteria, and it might be that sometimes you throw out the baby with the bathwater this way. Then again, there could be equal value in becoming sensitive to when it’s appropriate to “tune in” to this sort of evidence. That probably depends on the individual and their goals.