It seems the “No True Elite” fallacy would involve:
(1) Elite common sense seeming to say that I should believe X because on my definition of “elites,” elites generally believe X.
(2) X being an embarrassing thing to believe
(3) Me replying that someone who believed X wouldn’t count as an “elite,” but doing so in a way that couldn’t be justified by my framework
In this example I am actually saying we should defer to the cryptographers if we know their opinions, but that they don’t get to count as part of elite common sense immediately because their opinions are too hard to access. And I’m actually saying that elite common sense supports a claim which it is embarrassing to believe.
So I don’t understand how this is supposed to be an instance of the “No True Scotsman” fallacy.
There are always reasons why the Scotsman isn’t a Scotsman. What I’m worried about is more the case where these kinds of considerations are selected post facto and seem perfectly reasonable because they produce the correct answer there, but then in a new case, someone cries ‘cherry-picking’ when similar reasoning is applied.
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that’s just an obvious sort of reweighting you might try, though anyone who’s had experience with machine learning knows that most clever reweightings you try don’t work. To someone else it might be cherry-picking of gullible physicists, and they might say, “You have violated Beckstead’s rules!”
To me it might be obvious that AI ‘elites’ are exceedingly poorly motivated to come up with good answers about FAI. Someone else might think that the world being at stake would make them more motivated. (Though here it seems to me that this crosses the line into blatant empirical falsity about how human beings actually think, and brief acquaintance with AI people talking about the problem ought to confirm this. Yet most such evidence seems to be discarded, either because ‘Oh, they’re not true elites’ or because ‘Even though it’s completely predictable that we’re going to run into this problem later, it’s not a warning sign for them to drop their epistemic trousers right now, since they arrived at the judgment that AI is far away via some line of reasoning which is itself reliable and will update accordingly as doom approaches, suddenly causing them to raise their epistemic standards again.’ But now I’m diverging into a separate issue.)
I’d be happy with advice along the lines of, “First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update.” I’m much more worried about alleged rules for deciding who the elites are that are supposed to substitute for “Eh, take your best guess” and if you’re applying complex reasoning to say, “Well, but that rule didn’t really fail for cryptographers” then it becomes more legitimate for me to reply, “Maybe just ‘take your best guess’ would better summarize the rule?” In turn, I’m espousing this because I think people will have a more productive conversation if they understand that the rule is just ‘best guess’ and itself something subject to dispute rather than hard rules, as opposed to someone thinking that someone else violated a hard rule that is clearly visible to everyone in targeting a certain ‘elite’.
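The “take elite opinion as a prior with an appropriate degree of concentrated probability density, then update” advice can be made concrete with a toy Bayesian sketch. All numbers below are hypothetical illustrations, not anything from the discussion: elite credence in a claim becomes a Beta prior whose concentration encodes how much the elites are trusted, and independent evidence then updates it.

```python
def beta_prior(p_elite, concentration):
    """Convert an elite-consensus probability into Beta(a, b) pseudocounts.

    Higher concentration = more trust in the elites = a prior that takes
    more contrary evidence to move.
    """
    return p_elite * concentration, (1 - p_elite) * concentration

def update(a, b, successes, failures):
    """Standard conjugate Beta-Bernoulli update on observed evidence."""
    return a + successes, b + failures

def posterior_mean(a, b):
    return a / (a + b)

# Hypothetical: elites assign the claim 20% credence; we trust them
# moderately (concentration 10, i.e. worth 10 observations).
a, b = beta_prior(0.20, 10)
# We then observe 4 confirming and 1 disconfirming piece of evidence.
a, b = update(a, b, successes=4, failures=1)
print(posterior_mean(a, b))  # the posterior has shifted above the 0.20 prior
```

With a larger concentration (say 100), the same five observations would barely move the posterior off 0.20; the disagreement in the thread is largely over who gets counted into the prior and how concentrated it should be, not over the update step.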
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that’s just an obvious sort of reweighting you might try, though anyone who’s had experience with machine learning knows that most clever reweightings you try don’t work. To someone else it might be cherry-picking of gullible physicists, and they might say, “You have violated Beckstead’s rules!”
Just to be clear: I would count this as violating my rules because you haven’t used a clear indicator of trustworthiness that many people would accept.
ETA: I’d add that people should generally pick their indicators in advance and stick with them, and not add them in to tune the system to their desired bottom lines.
Could you maybe just tell me what you think my framework is supposed to imply about Wei Dai’s case, if not what I said it implies? To be clear: I say it implies that the executives should have used an impartial combination of the epistemic standards used by the upper crust of Ivy League graduates, and that this gives little weight to the cryptographers because, though the cryptographers are included, they are a relatively small portion of all people included. So I think my framework straightforwardly doesn’t say that people should be relying on info they can’t use, which is how I understood Wei Dai’s objection. (I think that if they were able to know what the cryptographers’ opinions are, then elite common sense would recommend deferring to the cryptographers, but I’m just guessing about that.) What is it you think my framework implies—with no funny business and no instance of the fallacy you think I’m committing—and why do you find it objectionable?
ETA:
I’d be happy with advice along the lines of, “First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update.”
This is what I think I am doing and am intending to do.
So in my case I would consider elite common sense about cryptography to be “Ask Bruce Schneier”, who might or might not have declined to talk to those companies or consult with them. That’s much narrower than trying to poll an upper crust of Ivy League graduates, from whom I would not expect a particularly good answer. If Bruce Schneier didn’t answer I would email Dad and ask him for the name of a trusted cryptographer who was friends with the Yudkowsky family, and separately I would email Jolly and ask him what he thought or who to talk to.
But then if Scott Aaronson, who isn’t a cryptographer, blogged about the issue saying the cryptographers were being silly and even he could see that, I would either mark it as unknown or use my own judgment to try and figure out who to trust. If I couldn’t follow the object-level arguments and there was no blatantly obvious meta-level difference, I’d mark it unresolvable-for-now (and plan as if both alternatives had substantial probability). If I could follow the object-level arguments and there was a substantial difference of strength which I perceived, I wouldn’t hesitate to pick sides based on it, regardless of the eliteness of the people who’d taken the opposite side, so long as there were some elites on my own side who seemed to think that yes, it was that obvious. I’ve been in that epistemic position lots of times.
I’m honestly not sure about what your version is. I certainly don’t get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case. If anything I think your rules would endorse my ‘Bruce Schneier’ output more strongly than the 10%, at least as I briefly read them.
I think we don’t disagree about whether elite common sense should defer to cryptography experts (I assume this is what Bruce Schneier is a stand-in for). Simplifying a bit, we are disagreeing about the much more subtle question of whether, given that elite common sense should defer to cryptography experts, in a situation where the current views of cryptographers are unknown, elite common sense recommends adopting the current views of cryptographers. I say elite common sense recommends adopting their views if you know them; if you don’t know the cryptographers’ opinions, it recommends going with what e.g. the upper crust of Ivy League graduates would say if they had access to your information. I also suspect elite common sense recommends finding out about the opinions of elite cryptographers if you can. But Wei Dai’s example was one in which you didn’t know and maybe couldn’t find out, so that’s why I said what I said. Frankly, I’m pretty flummoxed about why you think this is the “No True Scotsman” fallacy. I feel that one of us is probably misunderstanding the other on a basic level.
A possible confusion here is that I doubt the cryptographers have very different epistemic standards as opposed to substantive knowledge and experience about cryptography and tools for thinking about it.
I certainly don’t get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case.
I agree with this, and tried to make this clear in my discussion. I went with a rough guess that would work for a decent chunk of the audience rather than only saying something very abstract. It’s subtle, but I think reasonable epistemic frameworks are subtle if you want them to have much generality.