We could substitute any nonsense assertion we like into that Yang quote.
Isn’t that trivially false? If Yang said the Kanye thing, that would be a lie, so if you think he’s trying to be honest, he can’t say that. I agree that he doesn’t give you any way to verify his claim, but that’s not the standard I use to decide whether something is an appeal to authority. If you say, ‘some of the smartest people in the world are religious’, that’s an appeal to authority and probably a weak argument even though it’s true.
Yang often uses the phrase ‘my friends in Silicon Valley’; he was probably talking about important people in tech in that quote. I wouldn’t trust those people, but I certainly think their opinions are evidence.
OK, I think there are two ways to look at this question.
One is to separate the quote from the speaker, and ask if we’d still consider the context-free quote to be a piece of evidence. This is what I am advocating.
The other is to consider who’s speaking as key to interpreting the meaning of the quote, which is what you’re doing.
I think both are valid. For example, in the TV show “Firefly,” one of the characters, Simon Tam, receives letters from his highly intelligent sister. They read as “perfectly normal” to his parents, but their trivial content and occasional misspellings make him suspect—correctly—that they contain a code saying that she’s being harmed at the boarding school she’s been sent to. Here, considering the message in light of the speaker (or sender, in this case) is crucial to understanding it as a piece of evidence that his sister is in danger.
Another example is when my mentor in my MS program tells me that it’s best to automatically accept the predictions our PI makes about research ideas. If he likes them, they’re good. If he doesn’t like them, they’re bad. He’s a credible authority figure to whom we can and should appeal as a strong form of evidence.
Alternatively, there are many cases in which we might find it very difficult to predict how the identity of the speaker, or the context, should influence our interpretation of their quote. In the case of Yang, I have very little insight into whether or not his reference to “the smartest people in the world” is evidence that “this job-loss prediction is believed by more smart and well-qualified analysts than I, AllAmericanBreakfast, had thought prior to reading this Yang quote.”
If it is evidence of this, then yes, I agree that an ideal reasoning process would take it as some evidence that the prediction is true. But realistically, politicians often play fast-and-loose with their evidence. It’s often wise for me to actively choose not to read anything into the quote beyond its context-free content. When I read this quote, I willfully stop my imagination from conjuring up images of the supposed “smartest people in the world” whom Yang is ventriloquizing, to prevent my brain from being tricked into thinking this quote ought to update my belief.
Perhaps a more precise description of the fallacy here is “argument from an illusory authority.” When we say that something is a fallacious argument from authority, we’re implicitly saying that “proper epistemics in this context is to disregard references to the opinions of nonspecific ‘authorities,’ because the rhetoric is designed to trick you into accepting the statement, rather than to convey credible opinions to your mind.”
In response to your question, “isn’t this trivially false,” well—no, it’s trivially true. We can substitute anything we like into that statement. Watch me!
The smartest people in the world say that raisin bran is the best cereal.
The smartest people in the world say that I, AllAmericanBreakfast, am right about all this argument-from-authority stuff.
The smartest people in the world say that it’s trivially true that you can append any nonsense statement to “the smartest people in the world.”
Context-free, you shouldn’t be (and, I’m sure, are not) taking “the smartest people in the world” as any evidence at all of the truth-value of my claims. In context, you also aren’t doing that, because you recognize that my intent in uttering these quotes is not to convince you of their literal truth.
Note that it’s only in very special circumstances—particular combinations of speaker, referenced authority figure, and object-level claim—that you would consider the referenced authority figure to lend greater evidential weight to the claim. Perhaps most references to authority figures in the context of argument are selected for actually being relevant as evidence. By this, I mean that during actual debate, people may strive to avoid statements like “my uncle Bob says that Apple stock is going to double in price by next year,” because referencing uncle Bob as an authority figure lends no support to their claim, but undermines their general credibility as a debater. So when an authority figure does get referenced, we ought to take it seriously.
But I don’t actually buy that argument. I think vague or non-credible authority figures get referenced all the time, and it’s only in select circumstances that we should actually respect this form of evidence. Generally, I think we should “tag” references to authority figures as fallacious and to be ignored, unless we have made a considered judgment that a specific speaker, on a specific topic, referencing a specific authority figure, is offering actually useful evidence. A conservative guardedness with occasional permissions, rather than a liberal acceptance with occasional rejections.
In an ideal reasoning process with unlimited compute, we might wish to consider fully the credibility of each referenced authority figure, no matter how vague. But in a real process with our puny minds, I think it’s best to generally choose to ignore and actively (epistemically) punish arguments from authority, unless they’re done right.
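If it helps, here’s a minimal sketch of that “conservative guardedness with occasional permissions” policy in Python (every name below is hypothetical; this illustrates the policy, not an actual procedure anyone runs): appeals to authority default to “ignore,” unless the exact (speaker, topic, authority) combination has been deliberately vetted in advance.

```python
# Hypothetical sketch of "conservative guardedness with occasional
# permissions": ignore appeals to authority by default, unless a
# specific (speaker, topic, authority) triple was deliberately vetted.

# Built by considered judgment, one entry at a time.
VETTED: set[tuple[str, str, str]] = {
    ("my MS mentor", "research ideas", "our PI"),  # the example above
}

def weigh_appeal_to_authority(speaker: str, topic: str, authority: str) -> str:
    """Decide how to treat an appeal to authority."""
    if (speaker, topic, authority) in VETTED:
        return "treat as evidence"             # the occasional permission
    return "tag as fallacious and ignore"      # the conservative default

# The Yang quote fails: "the smartest people in the world" picks out no
# specific authority, so no vetted triple can match it.
print(weigh_appeal_to_authority(
    "Andrew Yang", "automation job loss", "the smartest people in the world"))
```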
I strongly endorse drawing a distinction, but I think I want to draw it a bit differently. The reason is that I feel like I would still defend the smart-people quote as non-fallacy-like if someone else had said it, and if that’s true, it can’t be because I have some respect for Yang as a thinker.
How about this (which I give you total credit for because I only came up with it after reading your comment):
(1) A statement is, by itself, not evidence to a rational observer who doesn’t know the speaker, but it’s possible that adding more information could turn it into evidence
(2) A statement contains enough information for a rational observer to conclude that the argument is fallacious
I would agree that #1 applies to Yang’s quote, and if that’s sufficient for being a fallacy, then Yang’s quote is a fallacy. This has some logic to it because people could mistakenly believe that the quote is evidence by itself, and that would be a mistake. However, I myself often say things for which #1 applies, and I believe that a lot of pretty rational people do as well. Then again, perhaps some don’t. I probably do it much less on LW than on other sites where I put much less effort into my posts.
I think the explanation for my intuition that the inclusion in the book is stupid is that avoiding #1 is a relatively high standard, and in fact lots of politicians routinely fail #2. I bet you could even find clear-cut examples of politicians failing #2 with regard to arguments from authority. There just is a difference between “this is stupid” and “this is incomplete and may or may not turn out to be stupid once I hear the rest”, and that difference seems to capture my reaction even on reflection.
Conversely, I feel like the personality of the speaker should not be an input to the fallacy-deciding function. I agree with everything in your last four paragraphs, but I think #1 remains less bad than #2 even if you think it’s unlikely that additional information would make the argument non-stupid.
I think we’re getting closer! Here’s yet another alternative.
Let’s first admit that arguments can gain information-content both from their text and from their context. Through text and context, they can define, describe, and make claims and predictions.
An appropriate argument from authority sufficiently defines (via text and context) a meaningful authority figure, describes with sufficient accuracy their level of credibility on the subject, and makes a sufficiently specific claim. We predict that our own forecasts will become more accurate if we put in the effort to update them based on this claim. Furthermore, once the prediction resolves, we will use that result to increase or decrease the credibility we ascribe to that authority figure.
A fallacious argument from authority fails one or more of these tests of sufficiency, even when taking context into account. The authority figure may be too vaguely referenced, their credibility may be exaggerated, or the claim may be too imprecise. The accuracy of our predictions will be worsened if we update based on this claim, and the resolution of the claim does not allow us to update the credibility we ascribe to the referenced authority figure.
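To lay those sufficiency tests out mechanically, here’s a toy checklist in Python (all names hypothetical; just a restatement of the three tests above, not a serious proposal):

```python
# Toy checklist for the three sufficiency tests described above.
from dataclasses import dataclass

@dataclass
class AuthorityCitation:
    authority_identified: bool  # a meaningful authority figure is defined (text + context)
    credibility_accurate: bool  # their credibility on this subject is described accurately
    claim_specific: bool        # the claim is specific enough to resolve and update on

def is_fallacious(c: AuthorityCitation) -> bool:
    """Fallacious if it fails one or more of the sufficiency tests."""
    return not (c.authority_identified and c.credibility_accurate and c.claim_specific)

# As assessed in this thread, the Yang quote references only a vague
# authority, so it fails at the first test (and arguably all three).
yang_quote = AuthorityCitation(authority_identified=False,
                               credibility_accurate=False,
                               claim_specific=False)
assert is_fallacious(yang_quote)
```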
Another possible division is:
A non-fallacious argument from authority creates an expectation that more context about the referenced authority figure would help us better assess the truth-content of the claim.
A fallacious argument from authority creates an expectation that further context would make the quote feel just as stupid as it seemed before.
I think it’s only worth worrying about these divisions and fearing arguments from authority in a relatively serious context. If you’re having dinner with your friends and happen to vaguely reference an authority figure to back your claim, that’s fine. It ain’t that deep.
But if a serious statesman does it on TV in the context of a debate or speech, then we have every right to complain that their claims contain fallacious arguments from authority.
I think that Yang’s quote has a serious-enough context, is stupid-sounding enough on its own to make me uninterested in more context, and fails all these tests of sufficiency. For that reason, I consider it fallacious. It hits all three check boxes on my “fallacy check-list.”
I suspect that when you make potentially-fallacious arguments from authority, they’re usually in a relatively non-serious context, and that your audience believes that if they took the time to interrogate you about the authority figures you reference, they’d feel persuaded that those figures are plausibly meritorious, even though you were vague in your initial presentation. Hence, you would probably not be committing a fallacy, in the way I am defining it here.
Taking your second division:

[...] creates an expectation that more context about the referenced authority figure would help us better assess the truth-content of the claim.
[...] creates an expectation that further context would make the quote feel just as stupid as it seemed before.
I want to add a #3:
[...] contains enough information that you already know the quote is stupid
You implied Yang’s quote is #2; I would say it’s #1. But (more importantly?) I would draw the bigger distinction between #2 and #3, and I don’t see any reason to choose something in #2 as an example of irrationality when examples in #3 are available. This, I think, is still my main argument.
I agree with everything else in your post; in particular, with the importance of context which I hadn’t considered. I concede that my behavior isn’t analogous to that of Yang in this example.
Glad we are converging on some sort of agreement! I find it helpful to talk this out with you.
I’m not clear on the distinction between #2 and #3. What’s the difference between predicting the quote will still seem stupid after further research, and finding the quote to be stupid now? By conservation of expected evidence, aren’t they the same thing?
One difference is just uncertainty. Conservation of expected evidence only makes the two the same if you’re certain. I think I was assuming that your probability on Yang being unable to expand on his claim meaningfully is like ~67%, not like 98%.
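To make the bookkeeping explicit (a rough sketch: the 67% and 98% are the figures above, while the conditional credences are purely illustrative), let q be your probability that further context leaves the quote looking stupid, and let V be “the quote is vapid”:

```latex
% Conservation of expected evidence: current credence is the mixture
% of the two post-context credences.
\[
  P(V) = q\,P(V \mid \text{still stupid}) + (1 - q)\,P(V \mid \text{redeemed})
\]
% Illustrative values: P(V | still stupid) = 0.95, P(V | redeemed) = 0.10.
\[
  q = 0.98:\quad P(V) = 0.98 \cdot 0.95 + 0.02 \cdot 0.10 = 0.933
\]
\[
  q = 0.67:\quad P(V) = 0.67 \cdot 0.95 + 0.33 \cdot 0.10 \approx 0.670
\]
```

At 98%, “it will still seem stupid” and “it is stupid now” nearly coincide; at 67%, a third of your probability mass sits on the quote being redeemed, so the two come apart.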
What I suspect happened is that he talked to various big names in tech, including the CEOs of companies who make decisions about automation, and they were bullish on the timelines. Would that kind of scenario qualify as being non-stupid?
The other difference is that you can’t demonstrate it. The leading question wasn’t about whether one should update from the quote, it was whether it’s a good idea to choose the quote as a negative example in a book about rationality. Even if Steven Pinker were 100% sure that there is no reason to update here, it’s still not a good example if he can’t prove it. I mean, if you accept that the quote could plausibly be non-stupid, then the honest way to talk about it would have been “Here is Yang saying {quote}. If it turns out that ‘some of the smartest people in the world’ is made-up or refers to people who aren’t actually impressive, this will have been an example of a vapid appeal-to-authority. I can’t prove that this is so, but here are my reasons for suspecting it.” And again, I can’t imagine it’s hard to find examples of appeals-to-authority that are clear-cut fallacies.
What I suspect happened is that he talked to various big names in tech, including the CEOs of companies who make decisions about automation, and they were bullish on the timelines. Would that kind of scenario qualify as being non-stupid?
No. That’s a good point. It seems like “fallacious argument from authority” lends itself to black-and-white thinking that’s just not appropriate in many cases. Reading the tea leaves has its value. If I had to guess, Pinker was looking for a timely quote by a politician his readers are likely to be sympathetic to, and this one was convenient.
I still think that there are many times when it’s best, as a rule, to just dismiss statements with the form of “arguments from authority.” Yang’s quote fits that form, and it might be that sometimes you throw out the baby with the bathwater this way. Then again, there could be equal value in becoming sensitive to when it’s appropriate to “tune in” to this sort of evidence. That probably depends on the individual and their goals.