One problem with this is that you often can’t access the actual epistemic standards of other people, because they have no incentive to reveal them to you. Consider the case of the Blu-ray copy protection system BD+ (which is fresh in my mind because I just used it as an example elsewhere). I’m not personally involved with this case, but my understanding based on what I’ve read is that the Blu-ray consortium bought the rights to the system from a reputable cryptography consulting firm for several million dollars (presumably after checking with other independent consultants), and many studios chose Blu-ray over HD DVD because of it. (From Wikipedia: Several studios cited Blu-ray Disc’s adoption of the BD+ anti-copying system as the reason they supported Blu-ray Disc over HD DVD. The copy protection scheme was to take “10 years” to crack, according to Richard Doherty, an analyst with Envisioneering Group.) And yet one month after Blu-ray discs using the system were released, it was broken, and those discs became copyable by anyone with a commercially available piece of software.
I think the actual majority opinion in the professional cryptography community, when they talked about this privately among themselves, was that such copy protection systems are pretty hopeless (i.e., likely to be easily broken by people with the right skills), but the elite decision makers had no access to this information. The consultants they bought the system from had no reason to tell them this, or were just overconfident in their own ideas. The other consultants they checked with couldn’t personally break the system, probably because they lacked quite the right sub-specialization (which perhaps only a handful of people in the world had), and since it would have been embarrassing to say “this is probably easily breakable, but I can’t break it”, they just stuck with “I can’t break it”. (Or again, they may have been overconfident and translated “I can’t break it” into “it can’t be broken”, even if they had agreed with the majority opinion before personally studying the system.)
In order to correct for things like this, you have to take the elite opinion that you do have access to and use it as evidence to update some other prior, instead of using it directly as a prior. In other words, ask questions like “If BD+ were likely to be easily breakable by people with the right skills, would I learn this from these consultants?” (But of course doing that exposes you to your own biases, which perhaps make elite opinion too easy to “explain away”.)
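As a rough illustration of the kind of update this points at, here is a minimal sketch with made-up numbers (the 0.7 prior and both likelihoods are assumptions for illustration, not anything from the actual case): if the consultants would report “I can’t break it” almost regardless of whether the system is really breakable, then hearing that report barely moves the prior.

```python
# Toy Bayesian update; all numbers are hypothetical.
prior_breakable = 0.7          # assumed prior that a scheme like BD+ is easily breakable
p_report_if_breakable = 0.90   # consultants say "I can't break it" even though it is breakable
p_report_if_secure = 0.95      # consultants say "I can't break it" and it really is secure

# P(report) by the law of total probability
p_report = (p_report_if_breakable * prior_breakable
            + p_report_if_secure * (1 - prior_breakable))

# Bayes' rule: P(breakable | report)
posterior_breakable = p_report_if_breakable * prior_breakable / p_report

print(round(posterior_breakable, 3))  # 0.689 -- the consultants' report is very weak evidence
```

The point of the sketch is only that the likelihood ratio is close to 1, so the report does almost nothing to the prior, whatever that prior is.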
If I understand this objection properly, the objection is:
(1) The executives making decisions didn’t have access to what the cryptographers thought.
(2) In order for the executives to apply the elite common sense framework, they would need to have access to what the cryptographers thought.
(3) Therefore, the executives could not apply the elite common sense framework in this case.
I would agree with the first premise but reject the second. If this all happened as you say—which seems plausible—then I would frame this as a case where the elite decision makers didn’t have access to the opinions of some relevant subject matter experts, rather than a case where they didn’t have access to elite common sense. In my framework, you can have access to elite common sense without having access to what relevant subject matter experts think, though in this kind of situation you should be extremely modest in your opinions. The elite decision makers still had reasonable access to elite common sense insofar as they were able to stress-test their views about what to expect if they bought this copy protection system by presenting their opinions to a broad coalition of smart people and seeing what others thought.
I agree that you have to start from your own personal standards in order to get a grip on elite common sense. But note that this point generally applies to anyone recommending that you use any reasoning standards at all other than the ones you happen to presently have. And my sense is that people can get reasonably well in touch with elite common sense by trying to understand how other trustworthy people think and applying the framework that I have advocated here. I acknowledge that it is not easy to know about the epistemic standards that others use; what I advocate here is doing your best to follow the epistemic standards of the most trustworthy people.
Ok, I think I misunderstood you earlier and thought “elite common sense” referred to the common sense of elite experts, rather than of elites in general. (I don’t share Eliezer’s “No True Elite” objection, since elites in general is probably what you originally intended.)
In view of my new understanding I would revise my criticism a bit. If the Blu-ray and studio executives had asked the opinions of a broad coalition of smart people, they likely would have gotten back the same answer that they already had: “hire some expert consultants and ask them to evaluate the system”. An alternative would be to instead learn about Bayesian updating and the heuristics-and-biases literature (in other words learn LW-style rationality), which could have enabled the executives to realize that they’d probably be reading the same reports from their consultants even if BD+ was actually easily breakable by a handful of people with the right skills. At that point maybe they could have come up with some unconventional, outside-the-box ideas about how to confirm or rule out this possibility.
I worry a bit that this has a flavor of ‘No True Elite’ or informal respecification of the procedure—suddenly, instead of consulting the best-trained subject matter experts, we are to poll a broad coalition of smart people. Why? Well, because that’s what might have delivered the best answer in this case post-facto. But how are we to know in advance which to do?
(One possible algorithm is to first arrive at the correct answer, then pick an elite group which delivers that answer. But in this case the algorithm has an extra step. And of course you don’t advocate this explicitly, but it looks to me like that’s what you just did.)
I’m not sure I understand the objection/question, but I’ll respond to the objection/question I think it is.
Am I changing the procedure to avoid a counterexample from Wei Dai?
I think the answer is No. If you look at the section titled “An outline of the framework and some guidelines for applying it effectively”, you’ll see that I say you should try to use a prior that corresponds to an impartial combination of what the people who are most trustworthy in general think. I say a practical approximation of being an “expert” is being someone elite common sense would defer to. If the experts won’t tell elite common sense what they think, then what the experts think isn’t yet part of elite common sense. I think this is a case where elite common sense just gets it wrong; it’s not clear they could have done anything about it. But I do think it’s a case where you can apply elite common sense, even if it gives you the wrong answer ex post. (Maybe it doesn’t give you the wrong answer, though; maybe some better investigation would have been possible and they didn’t do it. This is hard to say from our perspective.)
Why go with what generally trustworthy people think as your definition of elite common sense? It’s precisely because I think it is easier to get in touch with what generally trustworthy people think, rather than what all subject matter experts in the world think. As I say in the essay:
How should we assign weight to different groups of people? Other things being equal, a larger number of people is better, more trustworthy people are better, people who are trustworthy by clearer indicators that more people would accept are better, and a set of criteria which allows you to have some grip on what the people in question think is better, but you have to make trade-offs.... If I went with, say, the 10 most-cited people in 10 of the most intellectually credible academic disciplines, 100 of the most generally respected people in business, and the 100 heads of different states, I would have a pretty large number of people and a broad set of people who were very trustworthy by clear standards that many people would accept, but I would have a hard time knowing what they would think about various issues because I haven’t interacted with them enough. How these factors can be traded off against each other in a way that is practically most helpful probably varies substantially from person to person.
In principle, if you could get a sense for what all subject matter experts thought about every issue, that would be a great place to start for your prior. But I think that’s not possible in practice. So I recommend using a more general group as your starting point.
Does this answer your question?
It seems the “No True Elite” fallacy would involve:
(1) Elite common sense seeming to say that I should believe X because on my definition of “elites,” elites generally believe X.
(2) X being an embarrassing thing to believe
(3) Me replying that someone who believed X wouldn’t count as an “elite,” but doing so in a way that couldn’t be justified by my framework
In this example I am actually saying we should defer to the cryptographers if we know their opinions, but that they don’t get to count as part of elite common sense immediately because their opinions are too hard to access. And I’m actually saying that elite common sense supports a claim which it is embarrassing to believe.
So I don’t understand how this is supposed to be an instance of the “No True Scotsman” fallacy.
There are always reasons why the scotsman isn’t a Scotsman. What I’m worried about is more the case where these types of considerations are selected post-facto and seem perfectly reasonable since they produce the correct answer there, but then in a new case, someone cries ‘cherry-picking’ when similar reasoning is applied.
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that’s just an obvious sort of reweighting you might try, though anyone who’s had experience with machine learning knows that most clever reweightings you try don’t work. To someone else it might look like cherry-picking of gullible physicists, and they might say, “You have violated Beckstead’s rules!”
To me it might be obvious that AI ‘elites’ are exceedingly poorly motivated to come up with good answers about FAI. Someone else might think that the world being at stake would make them more motivated. (Though here it seems to me that this crosses the line into blatant empirical falsity about how human beings actually think, and brief acquaintance with AI people talking about the problem ought to confirm this, except that most such evidence seems to be discarded because ‘Oh, they’re not true elites’ or ‘Even though it’s completely predictable that we’re going to run into this problem later, it’s not a warning sign for them to drop their epistemic trousers right now because they have arrived at the judgment that AI is far away via some line of reasoning which is itself reliable and will update accordingly as doom approaches, suddenly causing them to raise their epistemic standards again’. But now I’m diverging into a separate issue.)
I’d be happy with advice along the lines of, “First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update.” I’m much more worried about alleged rules for deciding who the elites are that are supposed to substitute for “Eh, take your best guess”, and if you’re applying complex reasoning to say, “Well, but that rule didn’t really fail for cryptographers”, then it becomes more legitimate for me to reply, “Maybe just ‘take your best guess’ would better summarize the rule?” In turn, I’m espousing this because I think people will have a more productive conversation if they understand that the rule is just ‘best guess’ and itself something subject to dispute rather than a hard rule, as opposed to someone thinking that someone else violated a hard rule that is clearly visible to everyone in targeting a certain ‘elite’.
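As a toy sketch of that “concentrated prior, then update” recipe (the Beta parameterization and every number below are assumptions chosen purely for illustration, not anything either side has proposed):

```python
# Toy version of "elite opinion as a concentrated prior, then update on your own evidence".
# All numbers are hypothetical.
elite_estimate = 0.8   # assumed elite consensus probability that some claim is true
concentration = 20.0   # assumed trust in the elites: higher = prior more tightly concentrated

# Encode the prior as a Beta(alpha, beta) distribution centered on the elite estimate.
alpha = elite_estimate * concentration              # 16.0
beta_param = (1 - elite_estimate) * concentration   # 4.0

# Suppose your own checks produce 2 confirming and 6 disconfirming observations.
confirming, disconfirming = 2, 6

# Conjugate update: the posterior mean shifts away from the elite view, but only partly.
posterior_mean = (alpha + confirming) / (alpha + beta_param + confirming + disconfirming)

print(round(posterior_mean, 3))  # 0.643
```

The only point of the sketch is that “how much the elites ought to be trusted in this case” shows up as a single knob (the concentration), and setting that knob is itself the best-guess judgment under dispute.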
Suppose I selected from among all physicists who accept MWI and asked them what they thought about FAI arguments. To me that’s just an obvious sort of reweighting you might try, though anyone who’s had experience with machine learning knows that most clever reweightings you try don’t work. To someone else it might look like cherry-picking of gullible physicists, and they might say, “You have violated Beckstead’s rules!”
Just to be clear: I would count this as violating my rules because you haven’t used a clear indicator of trustworthiness that many people would accept.
ETA: I’d add that people should generally pick their indicators in advance and stick with them, and not add them in to tune the system to their desired bottom lines.
Could you maybe just tell me what you think my framework is supposed to imply about Wei Dai’s case, if not what I said it implies? To be clear: I say it implies that the executives should have used an impartial combination of the epistemic standards used by the upper crust of Ivy League graduates, and that this gives little weight to the cryptographers because, though the cryptographers are included, they are a relatively small portion of all the people included. So I think my framework straightforwardly doesn’t say that people should be relying on info they can’t use, which is how I understood Wei Dai’s objection. (I think that if they were able to know what the cryptographers’ opinions are, then elite common sense would recommend deferring to the cryptographers, but I’m just guessing about that.) What is it you think my framework implies—with no funny business and no instance of the fallacy you think I’m committing—and why do you find it objectionable?
ETA:
I’d be happy with advice along the lines of, “First take your best guess as to who the elites really are and how much they ought to be trusted in this case, then take their opinion as a prior with an appropriate degree of concentrated probability density, then update.”
This is what I think I am doing and am intending to do.
So in my case I would consider elite common sense about cryptography to be “Ask Bruce Schneier”, who might or might not have declined to talk to those companies or consult with them. That’s much narrower than trying to poll an upper crust of Ivy League graduates, from whom I would not expect a particularly good answer. If Bruce Schneier didn’t answer I would email Dad and ask him for the name of a trusted cryptographer who was friends with the Yudkowsky family, and separately I would email Jolly and ask him what he thought or who to talk to.
But then if Scott Aaronson, who isn’t a cryptographer, blogged about the issue saying the cryptographers were being silly and even he could see that, I would either mark it as unknown or use my own judgment to try and figure out who to trust. If I couldn’t follow the object-level arguments and there was no blatantly obvious meta-level difference, I’d mark it unresolvable-for-now (and plan as if both alternatives had substantial probability). If I could follow the object-level arguments and there was a substantial difference of strength which I perceived, I wouldn’t hesitate to pick sides based on it, regardless of the eliteness of the people who’d taken the opposite side, so long as there were some elites on my own side who seemed to think that yes, it was that obvious. I’ve been in that epistemic position lots of times.
I’m honestly not sure about what your version is. I certainly don’t get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case. If anything I think your rules would endorse my ‘Bruce Schneier’ output more strongly than the 10%, at least as I briefly read them.
I think we don’t disagree about whether elite common sense should defer to cryptography experts (I assume this is what Bruce Schneier is a stand-in for). Simplifying a bit, we are disagreeing about the much more subtle question of whether, given that elite common sense should defer to cryptography experts, elite common sense recommends adopting the current views of cryptographers in a situation where those views are unknown. I say elite common sense recommends adopting their views if you know them, but, if you don’t know the cryptographers’ opinions, going with what e.g. the upper crust of Ivy League graduates would say if they had access to your information. I also suspect elite common sense recommends finding out about the opinions of elite cryptographers if you can. But Wei Dai’s example was one in which you didn’t know and maybe couldn’t find out, so that’s why I said what I said. Frankly, I’m pretty flummoxed about why you think this is the “No True Scotsman” fallacy. I feel that one of us is probably misunderstanding the other on a basic level.
A possible confusion here is that I doubt the cryptographers have very different epistemic standards, as opposed to substantive knowledge of and experience with cryptography and tools for thinking about it.
I certainly don’t get the impression that one can grind well-specified rules to get to the answer about polling the upper 10% of Ivy League graduates in this case.
I agree with this, and tried to make this clear in my discussion. I went with a rough guess that would work for a decent chunk of the audience rather than only saying something very abstract. It’s subtle, but I think reasonable epistemic frameworks are subtle if you want them to have much generality.
bought the rights to the system from a reputable cryptography consulting firm for several million dollars
That’s pocket change—consider big-studio movie budgets for proper context.
but the elite decision makers had no access to this information
I am pretty sure they did—but it’s hard to say whether they discounted it to a low probability, or whether their whole incentive structure was such that it made sense for them to ignore this information even if they believed it to be true. I’m inclined towards the latter.