If you can notice when you’re confused, how do you notice when you’re ignorant?
You don’t need to notice that you’re ignorant if you already know that you are.
One of the structural commitments of Korzybski (of “the map is not the territory” fame) is that abstractions always leave out some facts. My concept of a thing is not the thing itself—the map is not the territory. That consciousness of abstraction entails a consciousness of ignorance.
When he had eliminated the impossible, whatever remained, however low its prior, must be true.
Eliminated by his calculations, with his priors, with his abstractions. What’s the probability that those are wrong? What’s the probability that he hadn’t taken everything into account? And then, what’s the chance that he hadn’t been thorough enough in his enumeration of “whatever remained”?
Jaynes has a nice example of how to reject “whatever remained”: put a “something else” theory into the analysis and assign it some small probability.
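To make the arithmetic concrete, here’s a minimal sketch of that move (not Jaynes’s own example; the hypotheses, priors, and likelihoods below are invented purely for illustration):

```python
# Toy Bayesian update over a discrete hypothesis set.
# All numbers are made up for illustration; only the shape of the argument matters.

def posterior(priors, likelihoods):
    # Bayes' rule: P(H | D) is proportional to P(H) * P(D | H).
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: round(p / total, 4) for h, p in joint.items()}

# Named hypotheses only. The data is wildly improbable under both,
# yet elimination still crowns a winner: "whatever remained must be true".
print(posterior({"A": 0.6, "B": 0.4},
                {"A": 1e-6, "B": 1e-8}))
# ≈ {'A': 0.9934, 'B': 0.0066}

# Same data, same named hypotheses, plus a low-prior "something else"
# that makes no sharp predictions and so doesn't nearly forbid the data.
print(posterior({"A": 0.594, "B": 0.396, "something else": 0.01},
                {"A": 1e-6, "B": 1e-8, "something else": 1e-3}))
# ≈ {'A': 0.056, 'B': 0.0004, 'something else': 0.9436}
```

The catch-all doesn’t need to predict the data well; it only needs to predict it better than theories that nearly forbid it, and then it soaks up the posterior instead of forcing you to accept whatever remained.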
Also, like Korzybski, Jaynes encourages a consciousness of abstraction by conditioning all probabilities on background knowledge I, as in P(X | a_1, a_2, …, I). There’s my background knowledge I, staring back at me. What if it’s incorrect?
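A toy illustration of how much work that I does (again, the numbers are invented; the point is only that the same data under a different background model yields a different answer):

```python
# P(X | D, I) for two different bodies of background knowledge.
# I is what fixes the likelihoods; the data D and the prior on X are identical.
# All numbers are invented for illustration.

def posterior_x(prior_x, p_d_given_x, p_d_given_not_x):
    # Bayes' rule for a binary hypothesis: P(X | D, I).
    num = prior_x * p_d_given_x
    return num / (num + (1 - prior_x) * p_d_given_not_x)

# Under background knowledge I1, the data is evidence for X...
print(posterior_x(0.5, p_d_given_x=0.9, p_d_given_not_x=0.1))  # ≈ 0.90

# ...under I2, the very same data counts against it.
print(posterior_x(0.5, p_d_given_x=0.2, p_d_given_not_x=0.6))  # ≈ 0.25
```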
So there are two main failures in these proof-by-contradiction scenarios. The first is failing to include a valid alternative. The second is that your I, your model and assumptions, suck. They are wrong, or worse, not even wrong.