Here’s my attempt to give concise definitions for both Gell-Mann Amnesia (GMA) and your hypothesis, which I’ll call Shminux Amnesia (SA). I’ll present them in soft form (“adequately”). For the hard form, replace “fail to adequately update” with “fail to update.”
GMA: People systematically fail to adequately update their priors on a source’s general credibility based on their knowledge of its credibility in their field of expertise.
Example: A COVID-19 epidemiologist considers the reporting on COVID-19 epidemiology in a certain newspaper to be terrible, but believes that its reportage is generally adequate. They mildly downgrade their assessment of the reliability of its economic reportage from “adequate” to “mildly problematic.” However, economists generally consider that newspaper’s economic reportage to be terrible, not just mildly problematic.
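To make the soft-versus-hard distinction concrete, here’s a minimal numerical sketch. The linear shrinkage rule and every number in it are my own illustrative assumptions, not part of the definition above:

```python
# Toy model of the GMA update: how much should terrible reporting in your
# own field shift your estimate of the paper's quality in other fields?
# All values below are made up purely for illustration.

prior_quality = 0.8            # prior: the paper's reportage is "adequate"
observed_in_my_field = 0.2     # quality the expert actually sees in their field
cross_field_correlation = 0.6  # assumed: quality is shared across sections

# Linear shrinkage: pull the estimate for *other* fields toward the
# observed quality, in proportion to the assumed correlation.
updated = prior_quality + cross_field_correlation * (
    observed_in_my_field - prior_quality
)

print(f"updated estimate for other fields: {updated:.2f}")  # -> 0.44
# An adequate updater lands at 0.44, well below "adequate". The GMA
# reader in the example barely moves ("adequate" to "mildly problematic");
# the hard form of GMA doesn't move at all and stays at 0.8.
```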
SA: People systematically fail to adequately update their priors on the credibility of specific statements based on their knowledge of the credibility the source itself assigns them.
Example: A reader’s newspaper reports what it calls “totally off-base speculation” that a tax increase is about to be announced. The newspaper also refers to a starfish die-off as “extremely likely to be caused by a fungal infection.” The reader regards the newspaper as moderately reliable in general and, prior to reading these articles, had no specific information about the likelihood of a tax increase or the cause of the starfish die-off.
After reading the two articles, they believe the prediction of a tax increase to be “unlikely, but not too out there,” and the prediction of a fungal cause of the starfish die-off to be “fairly likely, but nowhere close to a sure thing.”
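For contrast, here’s a sketch of what adequate updating in the SA case might look like, assuming a simple blending rule of my own invention: the source’s general reliability weights its stated credence against a 50% base rate, and all numbers are illustrative.

```python
# Toy model of the SA update: weight the credence the source itself
# assigns to a claim by the source's general reliability.
# The blending rule and all numbers are illustrative assumptions.

def blended_credence(source_credence, reliability, base_rate=0.5):
    """Adopt the source's stated credence in proportion to its
    reliability; fall back to the base rate otherwise."""
    return reliability * source_credence + (1 - reliability) * base_rate

reliability = 0.7  # "moderately reliable in general"

# The paper calls the tax-increase claim "totally off-base speculation".
print(blended_credence(0.05, reliability))  # -> 0.185
# The paper calls the fungal cause "extremely likely".
print(blended_credence(0.95, reliability))  # -> 0.815
```

Under this toy rule, the SA reader compresses both claims toward their generic view of the paper: too high on the speculation and too low on the fungal claim, relative to what the source’s own stated credences warrant.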
It’s not super clear to me that either GMA or SA is an example of a poor epistemic strategy, or that they’re especially common. Even if they’re real, important, and common, it’s not clear what their magnitude is. My personal experience is that I form fairly detailed models of how reliable certain sources are on the topics I care about, and don’t bother with the topics I don’t care about. I’ll also admit that I don’t take reliability claims into account to the full extent that would be ideal.
To add on to this, I’d throw in one more form of amnesia: Assumption Amnesia. This is the tendency to ignore or neglect the assumptions that motivate theorems and inferences, in areas ranging from mathematics to politics. If people are presented with the following statement:
“We tend to equate the reliability of the data with the subjectively perceived trustworthiness of the source of data whenever we have no independent means of checking the veracity of the data.”
What they will remember is this (it sounds unobjectionable on the surface):
“We tend to equate the reliability of the data with the subjectively perceived trustworthiness of the source of data.”
And neglect this:
“Whenever we have no independent means of checking the veracity of the data.”
The conclusions differ radically if we ignore that assumption.
With the assumption, we take away the idea that people lean on the general credibility of the source if they have nothing else to go on.
Without the assumption (due to Assumption Amnesia), we take away the idea that people will believe whatever they’re told as long as it comes from a source they think is credible, even if it contradicts their own senses.
In general, this cluster of “amnesias” points to an overall tendency of the human mind to radically simplify its models of the world. This can be beneficial in preventing overfitting. But if a key assumption or constraint gets lost, it can lead to major misfires of cognition.
That’s… a surprisingly detailed and interesting analysis, potentially worthy of a separate post. My prototypical example would be something like:
Your friend, who is a VP at public company XCOMP, says “this quarter has been exceptionally busy, we delivered a record number of widgets and have a backlog of new orders enough to last a year. So happy about having all these vested stock options.”
You decide that XCOMP is a good investment, since your friend is trustworthy, has accurate info, and would not benefit from you investing in XCOMP.
You plunk a few grand into XCOMP stock.
The stock value drops after the next quarterly report.
You mention it to your friend, who says “yeah, it’s risky to invest in a single stock, no matter how good the company looks, I always diversify.”
What happened here is that your friend’s odds of the stock going up were maybe 50%, while you, because you find them 99% trustworthy, estimated the odds of XCOMP going up at 90%. That is the uninformed elevation of trust I am talking about.
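To put that arithmetic in one place: a minimal sketch, assuming a simple linear trust-weighting rule (the rule is my assumption; the 50%, 90%, and 99% figures come from the example above).

```python
# The "uninformed elevation of trust", in numbers.

trust_in_friend = 0.99   # how much you trust the friend's report
friend_credence = 0.50   # the friend's actual odds of the stock going up
your_prior = 0.50        # your view before the conversation (assumed)

# Trust weights how much of the friend's credence you adopt; it is not
# itself convertible into confidence about the stock.
calibrated = (trust_in_friend * friend_credence
              + (1 - trust_in_friend) * your_prior)
print(f"calibrated credence: {calibrated:.2f}")  # -> 0.50

# The mistake: reading 99% trust in the *source* as ~90% odds for the
# *claim*. A fully trusted report of a 50% proposition is still ~50%.
mistaken_estimate = 0.90
```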
Another example: Elon Musk says “We will have full self-driving ready to go later this year.” You, as an Elon fanboy, take it as gospel and rush to buy the FSD option for your Model 3. Whereas, if pressed, Elon would say “I am confident that we can stick to this aggressive timeline if everything goes smoothly” (which it never does).
So it’s closer to what you call Assumption Amnesia, as I understand it.
As one further remark, I actually think it’s often good to practice Gell-Mann Amnesia.
Just because someone is an expert in one domain does not mean they should be assumed to be an expert in other domains. Likewise, just because someone lacks knowledge in one domain does not mean they should be assumed to lack knowledge in others.
It seems epistemically healthy to practice identifying the specific areas in which a particular person or source is expert, and distinguishing them carefully from the areas where they are not.
One of the tricky bits is that the newspaper format makes this difficult. By purporting to cover all topics while actually aggregating the views of a wide range of journalists and editors, a newspaper makes it very hard to build stable knowledge about its epistemics. It would be better to pick a particular journalist and get a sense of how much they know about a topic they cover frequently, but this isn’t easy to do in a newspaper format.
Ultimately, possession of a sophisticated prior on the credibility of any source on any topic is an achievement not lightly obtained.
I think there’s a difference between ignoring a stated assumption and failing to infer an unstated assumption. In the example I generated from your OP as an illustration of Assumption Amnesia, the problem was ignoring a stated assumption (“Whenever we have no independent means of checking the veracity of the data.”).
By contrast, in the hypothetical cases you present, the problem is failing to infer an unstated assumption (“it’s risky to invest in a single stock, no matter how good the company looks, I always diversify” and “if everything goes smoothly, which it never does”).
My central case for Assumption Amnesia is the former: ignoring a stated assumption. I think the latter is at least as important, but also more forgivable, since avoiding it depends on sheer expertise and the sophisticated application of heuristics. Taken literally, the hypothetical Musk statement would justify buying the FSD option. It seems related to the problem of knowing when to take a religious, political, poetic, or joking statement literally, figuratively, or as an exaggeration, and when it’s meant specifically and seriously.
In any case, all these seem to be component challenges of the overall problem of interpreting statements in context. It does seem quite useful to break that skill up into factors that can be individually examined and practiced.