I haven’t really understood where the fakeness in the framework is.
Well, by my model of epistemic hygiene, it’s therefore especially important to label it “fake” as you step into using it. Otherwise you risk forgetting that it’s an interpretation you’re adding, and when you can’t notice the interpretations you’re adding anymore then you have a much harder time Looking at what’s true.
In my usage, “fake” doesn’t necessarily mean “wrong”. It means something more like “illusory”. The point of a framework, to me, is that it pumps intuition and highlights clusters and possible Gears. But all of that is coming from your mind, not the territory. When you don’t yet know how much to trust a framework, I think it’s especially helpful to have clear signs on its boundaries saying “You are now entering a domain of intentional self-induced hallucination.”
Like, it’s worth remembering that you don’t see molecules. When you look at a glass of water and think “Oh, that’s dihydrogen monoxide”, if you can’t tell that that’s a thought you’re adding and not what you’re seeing, then it’s very easy for you to get confused. That same kind of thought process goes into things like, “Oh, that person must be unstable.” If you think you see how it’s objectively true that water makes sense in terms of chemistry, then it starts to seem an awful lot like (e.g.) your judgments of people are observations rather than interpretations.
I think this kind of thing is super important to keep track of when you’re using a framework for pragmatic effects. Otherwise you run the risk of either (a) being incapable of benefitting from the framework because there are parts you’re suspicious of, or (b) coming to believe the framework wholeheartedly because it seems to produce results for you. It’s worth remembering that astrology sure seemed to a lot of people to produce results for centuries, which caused people to speculate about very strange powers radiating from the stars.
So, I’m saying “This is a fake framework” as a reminder to track that it’s adding an interpretive layer… which I think is especially important if the framework comes across as obviously true.
I’ll have a lot more to say about this general point later in the sequence, by the way.
Wait a second. Aw fuck me, this is exactly what’s happening to me right now! My mood instantly improved by a ton and I kept laughing for several minutes.
:-)
I loved this story. Thank you for sharing it.
PS: It’s probably more helpful to point your Attachment Theory link to here instead.
Ah, you know, I just agree with you. I’ll go edit that right after I post this reply.
(For posterity: the original link went here.)
Oh, haha, I should be more careful when using a phone interface to read these comments. I visually missed that you’d said:
So, yep, basically that.
In my usage, “fake” doesn’t necessarily mean “wrong”. It means something more like “illusory”. The point of a framework, to me, is that it pumps intuition and highlights clusters and possible Gears. But all of that is coming from your mind, not the territory.
Like, it’s worth remembering that you don’t see molecules. When you look at a glass of water and think “Oh, that’s dihydrogen monoxide”, if you can’t tell that that’s a thought you’re adding and not what you’re seeing, then it’s very easy for you to get confused.
I’d just like to point out that this leads to an interpretation of map and territory that is really weird from the perspective of the Bayesian-skeptical correspondence theory given in the Sequences. If I were to give a name pointing at what this metaphysics is, I’d say something like “direct realism”. This is not to say that it is wrong.
Yep, I agree.
I’m also concerned that the theory you’re pointing out has an ontology problem. I’m hoping to get to spelling my concern out — but that’s several posts later in the sequence.