Insights Over Frameworks
Once someone has made several insightful self-help observations, it’s often tempting to then go and find some sort of cohering framework that can explain all of those observations. In this essay, I’ll argue that it can be better to skip the post-hoc theorizing altogether and just present the insights in a modular way.
First, let me be clear about definitions for the terms I’m using.
By “insight”, I mean some sort of observation about how the world works. For example, someone trying to figure out how to do better in social situations might realize that finishing their thoughts before speaking aloud helps them be more eloquent. Of course, this doesn’t mean that insights are free of underlying frameworks; it just means no attempt is made to bring those assumptions and models into the foreground. The focus is on some small mechanism or model.
By “framework”, I mean a set of beliefs that forms an overarching model of how the world works. For example, some people have tried to model all social interaction as an intelligent social web. Insights about social interactions are then contextualized under this framework. Perhaps, under this framework, finishing your thoughts helps with eloquence because it gives off the impression that you are thoughtful, which coheres with the framework’s claims about interactions being similar to a theatrical performance. The focus is on the framework, and the insights are corollaries.
As I mentioned above, I think people like coming up with frameworks. I think people like having neat explanations that can explain all of the neat things they’ve been observing. I am also guilty of this. There is something satisfying about finding some crystallization of all your past thoughts, like putting them through a strainer.
However, recently, I’ve bowed to pragmatism, and I think that frameworks are not as useful as I’d previously thought.
Firstly, there are too many frameworks. Most pieces of self-help content do not just promise a few (potentially) helpful observations, but rather an entire system. Sure, the system might have some good parts, but it’s difficult to evaluate an entire framework without putting in the mental effort to really inhabit it for a while. And this is costly; you can’t expect to do this each time a new self-help guru comes out with another new analogy or metaphor for the world.
Secondly, frameworks are often post-hoc. In self-help, the explanations basically have to come after the observations. This isn’t necessarily bad. After all, if your goal is to maximize personal effectiveness (or effectiveness for a group of people similar to you), it’s probably better to just go out and try a bunch of things than to armchair-theorize about what would or wouldn’t work. You are the source of empirical data, which can be cheap to collect.
What this means is that the framework itself is likely the least interesting part of what you have to say! If I’m reading your self-help guide, I really just want to know about the places where it can excel for me locally, or what sorts of effects you saw in yourself. All of that information is in the insights, not in the framework itself, which could be nonsense for all I know.
Thirdly, I think it’s important to respect your audience. By that, I mean that they likely already have their own self-help systems and frameworks coming into your content; most people are not impressionable blank slates. And, as the saying goes, the best self-help is the one that works for you. If your audience already has some system that’s 70% optimal, it’s possibly not worth their effort to try and relearn your system to get to 75% optimality. (Assuming for the moment that percentages like these even make sense.)
Insights, however, can be much more easily incorporated into any existing worldview. Furthermore, if the audience finds many of your insights compelling, they may come up with some sort of coherent framework themselves (as our brains are wont to do), which could end up looking similar to the framework you wanted to give them. Except in this case they’ll likely appreciate it much more, as they’ve done the messy work of theorizing themselves.
Thus, instead of giving your audience some sort of grand theory, I think it can be better to just give the disparate list of insights. This allows your audience to pick and choose which observations they think can be useful for them, and it puts the focus back where I think it should be. There’s also something to be said for gradually changing your worldview to subsume many insights, rather than always doing a hard switch.
As a parting note of caution, this approach also runs into its own type of perverse selection pressure. While a focus on large frameworks can lead to sprawling essays requiring major intellectual buy-in, modular insights can go the way of Buzzfeed listicles, often too trite and regularly doled out in small dopamine hits that can lead to a false sense of illumination.
My anchor for this is that most frameworks are descriptive, not functionally prescriptive. That is, most such systems are just the author describing something that seemed to work for them for a while. The version they describe is probably a lot more systematic than what they actually do.
The piece as a whole seems to assume insights are tiny and specific, without generality.
Attempted aphorism: A theory is useful if it tells you what experiments to perform.
You’re right that I’m making assumptions about insights which may not always be applicable. And I don’t mean to claim that theory isn’t useful. This post is partially also for me to push back against some default theorizing that happens.
I think that sometimes the right thing to do is to focus on just “reporting the data”, to borrow an analogy from research papers. There are experimental papers which might do some speculation, but their focus is on the results. Then there are also papers which try to do more theorizing and synthesis.
I guess I’m trying to discourage what I see as experimental papers focusing too much on the theorizing aspect.
So, it’s basically a lack of balance?
That seems reasonable, yeah.
I personally believe insights more if I can tie them to some theory. Even if I don’t quite believe in that theory, the mere fact that there is a possible theory under which the insight is true increases that insight’s credibility in my mind. I think this is somewhat of a cognitive error (though it might also be a necessary evil of a learning brain, i.e., necessary for building models), but it is what it is.