The value of the framework has been demonstrated (at least to the author) by the use cases given in the post. The fact that it might be ‘humans imposing order on chaos’ does not affect the question of its usefulness: usefulness is a separate question from whether someone ‘just made it up’. They’re orthogonal, which I feel was one of the points of Val’s post on Fake Frameworks.
Consider horoscopes. Suppose I give you a horoscope each month for a year (consisting of super-generic but specific-sounding claims) and it seems to match your experience very well. Do you then start believing in horoscopes, or do you realise that, once a certain kind of narrative has been proposed, we are surprisingly likely to accept it because of confirmation bias? So I dispute the claim that the “value has been demonstrated”.
I feel like there’s an easy way to rule out the Forer Effect for a typology, which is when someone looks at it and says “oh, I’m definitely not an X.” If observers agree that Picard cannot be well-described by Red, this is more impressive than his being well-described by White/Blue, because the theory is confidently predicting that certain things won’t happen, rather than assigning high probability to everything.
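One way to make this concrete is as a likelihood-ratio comparison. Here is a toy sketch, with all numbers invented purely for illustration: let H_real be ‘the typology tracks real structure’ and H_Forer be ‘every description sounds like it fits’; let E_fit be ‘Picard seems well-described by White/Blue’ and E_not-Red be ‘observers agree Picard is definitely not Red’. Then, roughly:

\[
\frac{P(E_{\text{fit}} \mid H_{\text{real}})}{P(E_{\text{fit}} \mid H_{\text{Forer}})} \approx \frac{0.9}{0.8} \approx 1.1
\qquad \text{vs.} \qquad
\frac{P(E_{\text{not-Red}} \mid H_{\text{real}})}{P(E_{\text{not-Red}} \mid H_{\text{Forer}})} \approx \frac{0.7}{0.1} = 7
\]

Because nearly every description ‘fits’ under the Forer hypothesis, the fit observation barely discriminates between the two hypotheses, while an agreed-upon confident ‘not X’ is rare under Forer and so carries real evidential weight.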
I can’t shake the feeling that you’re making the type error described here:
I suspect it’s a type error to think of an ontology as correct or wrong. Ontologies are toolkits for building maps. It makes sense to ask whether an ontology carves reality at its joints, but that’s a different question; that’s looking at fit. Something weird happens to your epistemology when you start asking whether quarks are real independently of an ontology.
In particular, I’m confused by the phrase ‘start believing in horoscopes’, which implies that I’d take horoscopes as some kind of truth about the universe. That is not the stance of someone using a fake framework the way Val describes. I want to avoid ‘believing’ in a fake framework; I want to hold it very lightly. This is how one gets the benefits of ‘ki’ while still understanding it as conflicting with physics. So I’ll use physics to predict most observed physical phenomena, and I’ll use ki to learn martial arts, and the two don’t have to sit badly with each other.
I hear your worry about confirmation bias, and it’s good to watch out for it. But my preference is not to avoid all systems that might lead to confirmation bias; it’s to use the system while ‘holding it lightly’, so that evidence of its ‘bad fit’ can be raised to my attention as it comes up. I do not want to live in fear of ‘confirmation bias’.
In general, my stance is to try wearing any sufficiently interesting fake frameworks as they come into my view, and then see what happens. If they seem good/useful/insightful, I keep using them. If not, I discard them, probably in favor of other, better frameworks. I could see myself starting off using horoscopes, but I imagine I’d quickly find better things.