This bit irked me because it is inconsistent with a foundational way of checking and improving my brain that might be enough by itself to recover the whole of the art:
Being wrong feels exactly like being right.
This might be true in some specific situation where a sort of Epistemic Potemkin Village is being constructed for you with the goal of making it true… but otherwise, with high reliability, I think it is wrong.
Being confident feels very similar in both cases, but being confidently right enables you to predict things at the edge of your perceptions and keep “guessing right” until you kinda just get bored, whereas being confidently wrong feels different at the edges of your perceptions: blindness there, or an aversion to looking, or a lack of curiosity, or a certainty that whatever is out there is neither interesting nor important nor good.
If you go confidently forth in an area where you are wrong, you feel surprise over and over and over (unless something is watching your mind and creating what you expect in each place you look). If you’re wrong about something, you either go there and get surprised, or “just feel” like not going there, or something is generating the thing you’re exploring.
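To make “surprise over and over” concrete, here’s a toy sketch (my own illustration, nothing rigorous; the fair-coin reality and the 90%-heads model are both made up). Measuring surprise as surprisal (negative log probability, in bits), the confidently wrong model pays extra on average, forever:

```python
import math
import random

random.seed(1)

# Surprisal = -log2 P(observation | model): how "surprised" a model is.
def surprisal(p_heads, obs):
    return -math.log2(p_heads if obs == 1 else 1 - p_heads)

# Made-up models for illustration: reality is 50/50, the wrong model says 90% heads.
right_total = wrong_total = 0.0
for _ in range(1000):
    obs = random.randint(0, 1)          # reality: a fair coin
    right_total += surprisal(0.5, obs)  # calibrated model
    wrong_total += surprisal(0.9, obs)  # confidently wrong model

print(f"avg surprisal, right model: {right_total / 1000:.2f} bits")
print(f"avg surprisal, wrong model: {wrong_total / 1000:.2f} bits")
```

The gap (about 0.74 bits per flip here) is the KL divergence between reality and the wrong model: it never goes away, which is exactly the “over and over and over” part.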
I think this is part of how it is possible to be genre-savvy. In fiction, there IS an optimization process that IS laying out a world, with surprises all queued up “as if you had been wrong about an objective world that existed by accident, with all correlations caused by accident and physics iterated over time”. Once you’re genre-savvy, you’ve learned to “see past the so-called surprises to the creative optimizing author of those surprises”.
There are probably theorems lurking here (none that I’ve found on Wikipedia and checked for myself, but it makes sense) that sort of invert Aumann, and show that if the Author ever makes non-trivial choices, then an ideal Bayesian reasoner will eventually catch on.
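As a cartoon of what “catching on” might look like (a sketch, not a theorem: the Author’s reversal quirk, its strength, and the reasoner being spotted the exact form of the quirk are all invented for illustration): an Author biases a story toward dramatic reversals, and a Bayesian weighs “blind fair coin” against “Author with that quirk”. The log odds favoring the Author grow roughly linearly with the evidence:

```python
import math
import random

random.seed(0)

FLIP_P = 0.7  # invented authorial quirk: a taste for reversals

def author_bit(prev):
    # The "Author" avoids boring streaks by flipping 70% of the time.
    return 1 - prev if random.random() < FLIP_P else prev

def loglik_blind(bits):
    # H0: blind mechanism, i.i.d. fair coin.
    return (len(bits) - 1) * math.log(0.5)

def loglik_author(bits):
    # H1: an Author with the reversal quirk.
    return sum(math.log(FLIP_P if a != b else 1 - FLIP_P)
               for a, b in zip(bits, bits[1:]))

bits = [0]
for _ in range(500):
    bits.append(author_bit(bits[-1]))

# Log-odds for "there is an Author" (equal priors) vs. amount of story seen.
for n in (10, 50, 200, 500):
    chunk = bits[: n + 1]
    print(n, round(loglik_author(chunk) - loglik_blind(chunk), 1))
```

With equal priors those log-odds are the posterior log-odds for “there is an Author”, and they keep climbing: any non-trivial authorial choice leaks statistical evidence that an ideal reasoner eventually collects.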
If creationism were true, and our demiurge had done a big complicated thing, then eventually “doing physics” and “becoming theologically genre-savvy” would be the SAME thing.
This not working (and hypotheses that suppose “blind mechanism” working very well) is evidence that either (1) naive creationism is false, (2) we haven’t studied physics long enough, or (3) we have a demiurge and it is a half-evil fuckhead who aims to subvert the efforts of “genre-savvy scientists” by exploiting the imperfections of our ability to update on evidence.
(A fourth hypothesis is: the “real” god (OntoGod?) is something like “math itself”. Then “math” conceives of literally every universe as a logically possible data structure, including our entire spacetime and so on, often almost by accident, like how our universe is accidentally simulated as a side effect every time anyone anywhere in the multiverse runs Solomonoff Induction on a big enough computer. Sadly, this is basically just a new way of talking that is maybe a bit more rigorous than older ways of talking, at the cost of being unintelligible to most people. It doesn’t help you predict coin flips or know the melting point of water any more precisely, so like: what’s the point?)
But anyway… it all starts with “being confidently wrong feels different (out at the edges, where aversion and confusion can lurk) than being confidently right”. If that were false, then we couldn’t do math… but we can do math, so yay for that! <3
How do you know that this approach doesn’t miss entire categories of error?
I do NOT know that “the subjective feeling of being right” is an adequate approach to purge all error.
Also, I think that hypotheses are often wrong, but they motivate new, careful, systematic observation, and this “useful wrongness” is often a core part of a larger OODA loop of guessing and checking ideas in the course of learning and discovery.
My claim is that “the subjective feeling of being right” is a tool whose absence disqualifies at least some wrongnesses, marking them “maybe true, maybe false, but not confidently and clearly known to be true in that way that feels very, very hard to get wrong”.
Prime numbers fall out of simple definitions, and I know in my bones that five is prime.
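A minimal sketch of how little machinery that conviction needs (primality read straight off the definition: an integer above 1 with no divisor strictly between 1 and itself):

```python
# Primality straight from the definition: n > 1 with no divisor
# d satisfying 2 <= d <= n - 1.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d != 0 for d in range(2, n))

assert is_prime(5)  # five is prime, in the bones
print([n for n in range(2, 30) if is_prime(n)])  # 2, 3, 5, 7, 11, ...
```

Nothing empirical can wobble this; the result falls out of the definition the same way every time.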
There are very few things that I know with as much certainty as this, but I’m pretty sure that being vividly and reliably shown to be wrong about this would require me to rebuild my metaphysics and epistemics in radical ways. I’ve been wrong a lot, but the things I was wrong about were not like my mental state(s) around “5 is prime”.
And in science, seeking reliable generalities about the physical world, there’s another sort of qualitative difference that is similar. For example, I grew up in northern California, and I’ve seen so many Sequoia sempervirens that I can often “just look” and “simply know” that that is the kind of tree I’m seeing.
If I visit other biomes, the feeling of “looking at a forest and NOT knowing the names of >80% of the plants I can see” is kind of pleasantly disorienting… there is so much to learn in other biomes!
(I’ve only ever seen one Metasequoia glyptostroboides, planted as a specimen at the entrance to a park, and probably can’t recognize them, but my understanding is that they just don’t look like a coastal redwood or even grow very well where coastal redwoods naturally grow. My confidence for Sequoiadendron giganteum is in between. There could hypothetically be a fourth kind of redwood that is rare. Or it might be that half the coastal redwoods I “very confidently recognize” are male and half are female in some weird way (or maybe 10% have an even weirder polyploid status than you’d naively expect?) and I just can’t see the subtle distinctions (yet)? With science and the material world, in my experience, I simply can’t achieve the kind of subjective feeling of confident correctness that exists in math.)
In general, subjectively, for me, “random-ass guesses” (even the ones that turn out right, though by random chance you’d expect them to mostly be wrong) feel very, very different from coherently-justified, well-understood, broadly-empirically-supported, central, contextualized, confident, “correct” conclusions, because the guesses lack a subjective feeling of “confidence”.
And within domains where I (and presumably other people?) am basically confident, I claim that there’s a distinct feeling which shows up in one’s aversions to observing or contemplating things at the edge of awareness. This is less reliable; attaching the feelings to Bayesian credence levels is challenging, I don’t know how to teach it, and I do it imperfectly myself...
...but (1) without subjective awareness of confidence and (2) the ability to notice aversion (or lack thereof) to tangential and potentially relevant evidence...
...I wouldn’t say that epistemic progress is impossible. Helicopters, peregrine falcons, F-16s, and bees show that there are many ways to fly.
But I am saying that if I had these subjective senses of confidence and confusion lesioned from my brain, I’d expect to be, mentally, a bit like a “bee with only one wing” and not expect to be able to make very much intellectual progress. I think I’d have a lot of difficulty learning math, much less being able to tutor the parts of math I’m confident about.
(I’m not sure if I’d be able to notice the lesion or not. It is an interesting question whether or how such things are neurologically organized, and whether modular parts of the brain are “relevant to declarative/verbal/measurable epistemic performance” in coherent or redundant or complementary ways. I don’t know how to lesion brains in the way I propose, and maybe it isn’t even possible, except as a low-resolution thought experiment?)
In summary, I don’t think “feeling the subjective difference between believing something true and believing something false” is necessary or sufficient for flawless epistemology, just that it is damn useful, and not something I’d want to do without.