I worry a bit that this has a flavor of ‘No True Elite’ or informal respecification of the procedure—suddenly, instead of consulting the best-trained subject matter experts, we are to poll a broad coalition of smart people. Why? Well, because that’s what might have delivered the best answer in this case post-facto. But how are we to know in advance which to do?
(One possible algorithm is to first arrive at the correct answer, then pick an elite group which delivers that answer. But in this case the algorithm has an extra step. And of course you don’t advocate this explicitly, but it looks to me like that’s what you just did.)
I’m not sure I understand the objection/question, but I’ll respond to the objection/question I think it is.
Am I changing the procedure to avoid a counterexample from Wei Dai?
I think the answer is No. If you look at the section titled “An outline of the framework and some guidelines for applying it effectively” you’ll see that I say you should try to use a prior that corresponds to an impartial combination of what the people who are most trustworthy in general think. I say a practical approximation of being an “expert” is being someone elite common sense would defer to. If the experts won’t tell elite common sense what they think, then what the experts think isn’t yet part of elite common sense. I think this is a case where elite common sense just gets it wrong, not one where they clearly could have done anything about it. But I do think it’s a case where you can apply elite common sense, even if it gives you the wrong answer ex post. (Maybe it doesn’t give you the wrong answer, though; maybe some better investigation would have been possible and they didn’t do it. This is hard to say from our perspective.)
Why go with what generally trustworthy people think as your definition of elite common sense? It’s precisely because I think it is easier to get in touch with what generally trustworthy people think, rather than what all subject matter experts in the world think. As I say in the essay:
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs.... If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded off against each other in a way that is practically most helpful probably varies substantially from person to person.
In principle, if you could get a sense for what all subject matter experts thought about every issue, that would be a great place to start for your prior. But I think that’s not possible in practice. So I recommend using a more general group as your starting point.

Does this answer your question?
It seems the “No True Elite” fallacy would involve:
(1) Elite common sense seeming to say that I should believe X because on my definition of “elites,” elites generally believe X.
(2) X being an embarrassing thing to believe.
(3) Me replying that someone who believed X wouldn’t count as an “elite,” but doing so in a way that couldn’t be justified by my framework.
In this example I am actually saying we should defer to the cryptographers if we know their opinions, but that they don’t get to count as part of elite common sense immediately because their opinions are too hard to access. And I’m actually saying that elite common sense supports a claim which it is embarrassing to believe.
So I don’t understand how this is supposed to be an instance of the “No True Scotsman” fallacy.
There are always reasons why the scotsman isn’t a Scotsman. What I’m worried about is more the case where these types of considerations are selected post-facto and seem perfectly reasonable since they produce the correct answer there, but then in a new case, someone cries ‘cherry-picking’ when similar reasoning is applied.
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that’s just an obvious sort of reweighting you might try, though anyone who’s had experience with machine learning knows that most clever reweightings you try don’t work. To someone else it might look like cherry-picking of gullible physicists, and they might say, “You have violated Beckstead’s rules!”
To me it might be obvious that AI ‘elites’ are exceedingly poorly motivated to come up with good answers about FAI. Someone else might think that the world being at stake would make them more motivated. (Though here it seems to me that this crosses the line into blatant empirical falsity about how human beings actually think, and brief acquaintance with AI people talking about the problem ought to confirm this, except that most such evidence seems to be discarded because ‘Oh, they’re not true elites’ or ‘Even though it’s completely predictable that we’re going to run into this problem later, it’s not a warning sign for them to drop their epistemic trousers right now because they have arrived at the judgment that AI is far away via some line of reasoning which is itself reliable and will update accordingly as doom approaches, suddenly causing them to raise their epistemic standards again’. But now I’m diverging into a separate issue.)
I’d be happy with advice along the lines of, “First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update.” I’m much more worried about alleged rules for deciding who the elites are that are supposed to substitute for “Eh, take your best guess,” and if you’re applying complex reasoning to say, “Well, but that rule didn’t really fail for cryptographers,” then it becomes more legitimate for me to reply, “Maybe just ‘take your best guess’ would better summarize the rule?” In turn, I’m espousing this because I think people will have a more productive conversation if they understand that the rule is just ‘best guess’, itself something subject to dispute rather than a hard rule, as opposed to someone thinking that someone else has violated a hard rule that is clearly visible to everyone in targeting a certain ‘elite’.
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that’s just an obvious sort of reweighting you might try, though anyone who’s had experience with machine learning knows that most clever reweightings you try don’t work. To someone else it might look like cherry-picking of gullible physicists, and they might say, “You have violated Beckstead’s rules!”
Just to be clear: I would count this as violating my rules because you haven’t used a clear indicator of trustworthiness that many people would accept.
ETA: I’d add that people should generally pick their indicators in advance and stick with them, and not add them in to tune the system to their desired bottom lines.
Could you maybe just tell me what you think my framework is supposed to imply about Wei Dai’s case, if not what I said it implies? To be clear: I say it implies that the executives should have used an impartial combination of the epistemic standards used by the upper crust of Ivy League graduates, and that this gives little weight to the cryptographers because, though the cryptographers are included, they are a relatively small portion of all people included. So I think my framework straightforwardly doesn’t say that people should be relying on info they can’t use, which is how I understood Wei Dai’s objection. (I think that if they were able to know what the cryptographers’ opinions are, then elite common sense would recommend deferring to the cryptographers, but I’m just guessing about that.) What is it you think my framework implies—with no funny business and no instance of the fallacy you think I’m committing—and why do you find it objectionable?
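To make the “relatively small portion of all people included” point concrete, here is a toy sketch of an impartial combination as a linear opinion pool, in which each group’s credence is weighted by its size and by a trustworthiness multiplier. The function, the numbers, and the reduction of trustworthiness to a single multiplier are illustrative assumptions rather than anything specified in the essay.

```python
# Toy linear opinion pool: weight each group's credence by group size
# times a trustworthiness multiplier. All numbers are illustrative.

def pooled_credence(groups):
    """groups: list of (credence, group_size, trust_weight) tuples."""
    total_weight = sum(size * trust for _, size, trust in groups)
    return sum(p * size * trust for p, size, trust in groups) / total_weight

# A large pool of generally trustworthy people at 30%, plus a small group
# of subject-matter experts at 90%: the experts barely move the result.
print(pooled_credence([(0.30, 1000, 1.0), (0.90, 10, 3.0)]))  # ~0.32
```

On these made-up weights the experts shift the pooled credence from 0.30 to only about 0.32, which is the sense in which a small expert group counts for little within the combination.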
ETA:
I’d be happy with advice along the lines of, “First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update.”
This is what I think I am doing and am intending to do.
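For concreteness, here is a minimal sketch of that agreed-upon procedure, “take elite opinion as a prior with an appropriate degree of concentrated probability density, then update,” for the special case of estimating a rate with a conjugate Beta-Binomial step. The setting, the function names, and all the numbers are illustrative assumptions, not anything from this exchange.

```python
# Minimal sketch: the elite consensus fixes the prior mean, a concentration
# parameter controls how much density sits near that consensus, and the
# prior is then updated on observed outcomes. Illustrative only.

def beta_prior_from_consensus(consensus_rate, concentration):
    """Return (alpha, beta) for a Beta prior with mean consensus_rate;
    larger concentration puts more density near the elite view."""
    return consensus_rate * concentration, (1.0 - consensus_rate) * concentration

def bayes_update(alpha, beta, successes, failures):
    """Standard conjugate Beta-Binomial update."""
    return alpha + successes, beta + failures

a, b = beta_prior_from_consensus(consensus_rate=0.2, concentration=20.0)
a, b = bayes_update(a, b, successes=8, failures=2)
print(a / (a + b))  # posterior mean: 0.4
```

A higher concentration (say 200 instead of 20) would leave the posterior mean much closer to the elite consensus after the same evidence, which is one way to read “how much they ought to be trusted in this case.”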
So in my case I would consider elite common sense about cryptography to be “Ask Bruce Schneier”, who might or might not have declined to talk to those companies or consult with them. That’s much narrower than trying to poll an upper crust of Ivy League graduates, from whom I would not expect a particularly good answer. If Bruce Schneier didn’t answer I would email Dad and ask him for the name of a trusted cryptographer who was friends with the Yudkowsky family, and separately I would email Jolly and ask him what he thought or who to talk to.
But then if Scott Aaronson, who isn’t a cryptographer, blogged about the issue saying the cryptographers were being silly and even he could see that, I would either mark it as unknown or use my own judgment to try and figure out who to trust. If I couldn’t follow the object-level arguments and there was no blatantly obvious meta-level difference, I’d mark it unresolvable-for-now (and plan as if both alternatives had substantial probability). If I could follow the object-level arguments and there was a substantial difference of strength which I perceived, I wouldn’t hesitate to pick sides based on it, regardless of the eliteness of the people who’d taken the opposite side, so long as there were some elites on my own side who seemed to think that yes, it was that obvious. I’ve been in that epistemic position lots of times.
I’m honestly not sure what your version is. I certainly don’t get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case. If anything I think your rules would endorse my ‘Bruce Schneier’ output more strongly than the 10%, at least as I briefly read them.
I think we don’t disagree about whether elite common sense should defer to cryptography experts (I assume this is what Bruce Schneier is a stand-in for). Simplifying a bit, we are disagreeing about the much more subtle question of whether, given that elite common sense should defer to cryptography experts, in a situation where the current views of cryptographers are unknown, elite common sense recommends adopting the current views of cryptographers. I say elite common sense recommends adopting their views if you know them, but, if you don’t know the cryptographers’ opinions, going with what e.g. the upper crust of Ivy League graduates would say if they had access to your information. I also suspect elite common sense recommends finding out about the opinions of elite cryptographers if you can. But Wei Dai’s example was one in which you didn’t know and maybe couldn’t find out, so that’s why I said what I said. Frankly, I’m pretty flummoxed about why you think this is the “No True Scotsman” fallacy. I feel that one of us is probably misunderstanding the other on a basic level.
A possible confusion here is that I doubt the cryptographers have very different epistemic standards; what they have is substantive knowledge and experience about cryptography and tools for thinking about it.
I certainly don’t get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case.
I agree with this, and tried to make this clear in my discussion. I went with a rough guess that would work for a decent chunk of the audience rather than only saying something very abstract. It’s subtle, but I think reasonable epistemic frameworks are subtle if you want them to have much generality.