I’m not a fan of this framework.
Myers-Briggs works as a fake framework because its usefulness isn’t contingent on its four scales being a “special” way of carving up people-space (see Slatestarcodex). All you have to do is pick four broad, reasonable-sounding categories that are somewhat orthogonal and you’ll end up with a good number of strong correlations between certain combinations and other attributes.
On the other hand, with this model, you’ve divided up people-space into five categories, then you’ve added all these theories of adjacent categories, opposite categories, contrasting categories, etc. Unlike Myers-Briggs, you are trying to get the model to put out more than you’ve put in. What are the odds that all of this symmetry just happens to exist, as opposed to this being the result of reality being bent into shape to produce a clean model? Parts of this feel psychologically compelling, but my best rational analysis is that it’s the result of the human tendency to impose order and find meaning in chaos.
I was mixed on whether to upvote or downvote this. On one hand, I appreciated the time and effort taken to write up and convey the framework so clearly. On the other hand, I wasn’t convinced that there was much value in the object-level framework.
My impression of what happened here is something like:
OP: “episteme of how the color wheel works, gnosis-claim that it’s useful”
Chris: “I don’t see the episteme of why this framework is useful, and am pretty suspicious of claims that frameworks are useful.”
Conor: “I made a gnosis claim, and you should update based on gnosis claims using a different mechanism. I agree that I don’t have the necessary support for an episteme claim.”
In that view, everything here seems right; the right way to update on the relevance of gnosis claims is indeed by querying your models of the person making the claim.
But I worry this comment will land poorly without that explicit framing, especially if a reader doesn’t see that the thing coming out of the other end is doxa instead of episteme or gnosis. That is, the end state of knowledge looks something like “some respectable people appreciate the color pie typology” as opposed to “the color pie typology is correct” or “the color pie typology is useful.”
I feel that this summary doesn’t quite capture what I was saying.
We should definitely be suspicious of anecdotal evidence for frameworks, but my core claim was that the particular structure of this framework makes it more suspicious than normal (as described in my comment above). I offered Myers-Briggs as a contrasting example where the structure of the claim makes it less suspicious, i.e. sort of an anti-prediction.
Your belief causes me to update a little, but I worry that the “best” framework will be very subjective as people have very different experiences of the world. Some people will find that everyone seems to act one way, while others will find that everyone seems to act another.
I feel confused by this comment.
The value of the framework has been demonstrated (at least to the author) by the use cases given in the post. The fact that it might be ‘humans imposing order on chaos’ does not affect the question of its usefulness. Usefulness is a separate question from whether someone ‘just made it up’; they’re orthogonal, which I feel was one of the points of Val’s post on Fake Frameworks.
Consider horoscopes. Suppose I give you a horoscope each month for a year (consisting of super-generic but specific-sounding claims) and it seems to match your experience very well. Do you then start believing in horoscopes, or do you realise that once a certain kind of narrative has been proposed, we are surprisingly likely to accept it because of confirmation bias? So I dispute the claim that the “value has been demonstrated”.
I feel like there’s an easy way to rule out the Forer Effect for a typology, which is whenever someone looks at it and says “oh, I’m definitely not an X.” If observers agree that Picard cannot be well-described by Red, this is more impressive than him being well-described by White/Blue, because the theory is confidently predicting that things won’t happen instead of always providing high probabilities.
I can’t shake the feeling that you’re making the type error described here:
I suspect it’s a type error to think of an ontology as correct or wrong. Ontologies are toolkits for building maps. It makes sense to ask whether it carves reality at its joints, but that’s different. That’s looking at fit. Something weird happens to your epistemology when you start asking whether quarks are real independent of ontology.
In particular, I’m confused by the phrase ‘start believing in horoscopes’, which implies that I’d take horoscopes as some kind of truth about the universe. That is not the stance of someone using a fake framework the way Val describes. I want to avoid ‘believing’ in a fake framework—I want to hold it very lightly. This is how one gets the benefits of ‘ki’ while being able to understand that it conflicts with physics. So I’ll use physics to predict most observed physical phenomena, while I’ll use ki to learn martial arts, and the two don’t have to sit badly with each other.
I hear your worry about confirmation bias, and it’s good to watch out for it. But my preference is not to avoid all systems that might lead to confirmation bias; it’s to use the system and ‘hold it lightly’, so that evidence of its ‘bad fit’ can be raised to my attention as it comes up. I do not want to live in fear of ‘confirmation bias’.
In general, my stance is to try wearing any sufficiently interesting fake framework as it comes into my view, and then see what happens. If it seems good/useful/insightful, I keep using it. If not, I discard it, probably in favor of other, better frameworks. I could see myself starting off using horoscopes, but I imagine I’d quickly find better things.