It especially annoys me when people respond to evidence-based arguments that LessWrong is not a cult with, “Well where did you come to believe all that stuff about evidence, LessWrong?”
Before LessWrong, my epistemology was basically a clumsier version of what it is now. If you described my present self to my past self and asked, “Is this guy a cult victim?” he would ask for evidence. He wouldn’t be thinking in terms of Bayes’s theorem, but he would be thinking with a bunch of verbally expressed heuristics and analogies that usually added up to the same thing. I used to say things like “Absence of evidence is actually evidence of absence, but only if you would expect to see the evidence if the thing was true and you’ve checked for the evidence,” which I was later delighted to see validated and formalized by probability theory.
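To sketch that formalization roughly, in my own notation (with $H$ for the hypothesis and $E$ for the evidence you would expect to see if $H$ were true):

$$P(H \mid \neg E) = \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} < P(H) \quad \text{whenever } P(E \mid H) > P(E \mid \neg H).$$

Because $P(\neg E \mid H) < P(\neg E \mid \neg H)$, the denominator $P(\neg E)$ is a weighted average of the two and sits strictly above $P(\neg E \mid H)$ (for $0 < P(H) < 1$), so checking and not finding the evidence really does count against the hypothesis, in proportion to how strongly you expected to find it.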
You could of course say, “Well, that’s not actually your past self, that’s your present self (the cult victim)’s memories, which are distorted by mad thinking,” but then you’re getting into brain-in-a-vat territory. I have to think using some process. If that process is wrong but unable to detect its own wrongness, I’m screwed. Adding infinitely recursive meta-doubt to the process just creates a new process to which the same problem applies.
I’m not particularly worried that my epistemology is completely wrong, because the pieces of my epistemology, when evaluated by my epistemology, appear to do what they’re supposed to. I can see why they would do what they’re supposed to by simulating how they would work, and they have a track record of doing what they’re supposed to. There may be other epistemologies that would evaluate mine as wrong. But they are not my epistemology, so I don’t believe what they recommend I believe.
This is what someone with a particular kind of corrupt epistemology (one that was internally consistent) would say. But it is also the best anyone with an optimal epistemology could say. So why Mestroyer::should my saying it be cause for concern? (this is an epistemic “Mestroyer::should”)
I can identify with this. Reading through the Sequences wasn’t a magical journey of enlightenment, it was more “Hey, this is what I thought as well. I’m glad Eliezer wrote all this down so that I don’t have to.”