These are not evidence at all! They are the opposite of evidence. Evidence is something from the territory that you use to update your map—what you are describing goes in the opposite direction: it comes from the map to say something specific about the territory.
“Using the map to say something about the territory” sounds like “predictions”, but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true—in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
So… maybe you could call it “application”? Since you are applying your knowledge?
Or, since they explicitly go against the empirical evidence, how about we just call it “stubbornness”?
These are not evidence at all! They are the opposite of evidence.
I believe that what I am trying to point at is indeed evidence, in the Bayesian sense of the word. For example, consider masks and COVID. Imagine that we empirically observe that they are effective 20% of the time and ineffective 80% of the time. Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!
Suppose now that we know that when someone with COVID breathes, particles containing COVID remain in the air. Further suppose that our knowledge of physics would tell us that someone standing two feet away is likely to breathe in these particles at some concentration. And further suppose that our knowledge of how other diseases work tells us that when that concentration of virus is ingested, it is likely that you will get infected. When you incorporate all of this knowledge about physics and biology, it should shift your belief that masks are effective. It shouldn’t stay put at 20%. We’d want to shift it upward, to something like 75% maybe.
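To make the shape of that update concrete, here is a minimal sketch in Python. The specific numbers are invented for illustration: it simply treats the physics/biology argument as one more likelihood ratio applied on top of the 20% empirical estimate, which is enough to produce the kind of 20%-to-75% shift described above.

```python
def update(prob, likelihood_ratio):
    """Bayesian update on the odds scale: posterior odds = prior odds * LR."""
    odds = prob / (1 - prob)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

empirical_estimate = 0.20   # belief after the (hypothetical) 20%-effective observations
theory_lr = 12              # invented strength of the aerosol/dose argument

print(update(empirical_estimate, theory_lr))  # -> 0.75
```

The point is not the particular numbers, just that the mechanistic argument enters the calculation the same way any other evidence does.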
Evidence is something from the territory that you use to update your map—what you are describing goes in the opposite direction: it comes from the map to say something specific about the territory.
“Using the map to say something about the territory” sounds like “predictions”, but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true—in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
I agree that evidence comes from the territory, and that from there, you use it to update your map. For example, an apple falling from a tree is evidence for gravity. From that, you have a model of how gravity works in your map. You can then use this model of how gravity works to say something about the territory, e.g. to make predictions.
Relating this back to the masks example, perhaps our model of how gravity works would imply that these aerosol particles start falling to the ground and are thus present at a much lower concentration six feet away from the person who breathed them than two feet away. From there, we should use that prediction to update our belief about how likely it is that masks are effective.
but in this case it does not seem like you intend to update your beliefs based on whether or not the predictions come true—in fact, you specify that the empirical evidence is already going against these predictions, and you seem perfectly content with that.
That is not the case. It’s just that I believe that theoretical evidence should be used in addition to empirical evidence, not as a replacement. They should both be incorporated into your beliefs. I.e. in this example with masks, we should factor in both the (hypothetical?) empirical evidence that masks aren’t effective and the theoretical evidence that I described.
So… maybe you could call it “application”? Since you are applying your knowledge?
That sounds like a promising idea. It seems like it needs some tweaking though. I want to be able to say something like “the theoretical evidence suggests”. If you replace “theoretical evidence” with “application”, it wouldn’t make sense. You’d have to replace it with something like “application of what we know about X”, but that is too wordy.
(I feel like my explanation for why theoretical evidence is in fact evidence didn’t do it justice. It seems like an important thing and I can’t think of a place where it is explained well, so I’m interested in hearing explanations from people who can explain/articulate it well.)
Imagine that you are working on a product. A/B tests are showing that option A is better, but your instincts, based on your understanding of how the gears turn, suggest that B is better.
Imagining it now. “are showing” makes it sound like your A/B tests are still underway, in which case wait for the study to end (presumably you designed a good study with enough power that the end results would give you a useful answer on A vs. B). But if the tests show A > B, why would you hold on to your B > A prior? Or if you think the tests are only 50% conclusive, why would you not at least update the certainty or strength of your B > A prior?
I think this is why Idan said, “Or, since they explicitly go against the empirical evidence, how about we just call it ‘stubbornness’?”
Or if you think the tests are only 50% conclusive, why would you not at least update the certainty or strength of your B > A prior?
I would.
But if the tests show A > B, why would you hold on to your B > A prior?
I wouldn’t necessarily do that. The test results are empirical evidence in favor of A > B. The intuition is theoretical evidence in favor of B > A. My position is that they both count and you should update your beliefs according to how strong each of them is.
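For concreteness, here is a hedged sketch of what “they both count, weighted by how strong each is” could look like, if you treat the test result and the gears-level intuition as two roughly independent likelihood ratios. The 5:1 and 3:1 figures are made up purely for illustration.

```python
import math

def combine(prior_prob, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio (done on the log-odds scale)."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Hypothesis: "A > B". The A/B test gives (say) 5:1 evidence for it;
# the gears-level intuition gives (say) 3:1 evidence against it, i.e. 1/3.
print(combine(0.5, [5, 1 / 3]))  # ~0.625: A still favoured, but less confidently
```

On these made-up numbers the empirical result still wins, but the intuition drags the posterior down from the ~83% you would get from the test alone.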
Okay, thank you for engaging. Those answers weren’t clear to me from the parent piece.
Maybe I reacted strongly because my current prior on my own intuitions is something like “Your intuition is just savannah-monkey-brain cognitive shortcuts and biases layered over your weird life experiences”. One thing I’ve been thinking about lately is how often that prior is actually justified versus how often it’s merely a useful heuristic (or a shortcut/bias? ha!) to remind me to shut up and Google/multiply.
Sure thing :)

Speaking generally (and not assuming that you are doing this), I think that there is a bit of a taboo against hedgehog-thinking. Perhaps there is a tendency for people to overuse that type of thinking, so perhaps it can make sense to be wary of it.
But it is clear that some situations call for us to be more like foxes, and other situations to be more like hedgehogs. I don’t think anyone would take the position that hedgehogs are to be completely dismissed in 100% of situations. So then, it would be helpful to have the right terminology at your disposal for when you do find yourself in a hedgehog situation.
This clarification gave me enough context to write a proper answer.
That sounds like a promising idea. It seems like it needs some tweaking though. I want to be able to say something like “the theoretical evidence suggests”. If you replace “theoretical evidence” with “application”, it wouldn’t make sense. You’d have to replace it with something like “application of what we know about X”, but that is too wordy.
Just call it “the theory” then—“the theory suggests” is both concise and conveys the meaning well.
Should we stop there and take it as our belief that there is a 20% chance that they are effective? No!
You need not stop there, but getting an answer that conflicts with your intuitions does not give you free rein to fight it with non-evidence. If you think there’s a chance the empirical evidence so far has some bias, you can look for the bias. If you think the empirical evidence could be bolstered by further experimentation, you can perform further experimentation. Trying to misalign your prior in light of the evidence with the goal of sticking to your original intuitions, however, is not OK. What you’re doing is giving in to motivated reasoning and then trying post hoc to find some way to say that’s OK. I would call that meta-level rationalization.
They are definitely evidence; the “theory” or knowledge being applied itself came from the territory. As long as a map was generated from the territory in the first place, the map provides evidence which can be extrapolated into other parts of the territory.
You need to be very careful with this approach, as it can easily lead to circular logic where map X is evidence for map Y because they both come from the same territory, and map Y is evidence for map X because they both come from the same territory, so you get a positive feedback loop that updates them both to approach 100% confidence.
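A toy sketch of that failure mode, with invented numbers: if the “mutual support” between two maps is naively counted as fresh evidence on every pass, both confidences run away toward 1 without any new observation of the territory.

```python
def update(prob, lr):
    """One naive Bayesian update: posterior odds = prior odds * lr."""
    odds = prob / (1 - prob) * lr
    return odds / (1 + odds)

p_x, p_y = 0.6, 0.6  # starting confidence in map X and map Y
for step in range(10):
    p_y = update(p_y, lr=2)  # "map X supports map Y"
    p_x = update(p_x, lr=2)  # "map Y supports map X"
    print(step, round(p_x, 4), round(p_y, 4))
# Both climb toward 1.0 even though nothing new was observed in the territory.
```

The fix, of course, is to only count the underlying observation once, rather than letting the two maps launder it back and forth.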