Thanks for writing this up; I enjoy reading attempts to categorize experience, and this one seems plausibly useful as a thing to point to in conversations when jumping between levels.
My attempt at a summary if no one else wants to wade through the dense wall of text (feel free to correct me, gworley):
[gworley draws from sources in phenomenology (the study of experience), systems modeling, meta-levels, and Kegan to try to create some useful distinctions for describing how people study experience.
If you’re already familiar with the 5 stages of Kegan’s stuff, you’ll see lots of parallels.
gworley starts with an introduction to phenomenology, which is summed up by a {subject, experience, object} tuple, where a subject experiences an object. In a manner similar to Drescher’s Cartesian Camcorder (if anyone’s familiar with Good and Real), he claims that it is the experiencing of our experiences that metaphorically maps onto the feeling we typically call consciousness.
From there, he uses a more formal approach to rebuild Kegan’s stages. There is the conscious experience of “things” (like “chairs” and other things our ontology classifies as basic), followed by the conscious experience of “systems”, which can be more abstract. This is followed by a system-relationship worldview, which looks at different competing ontologies.
gworley ends by pointing to the idea of a “Holon”, which seems similar to Hofstadter’s idea of a strange loop. He posits a worldview where there’s a sort of meta-framework for connecting with different system relationships. Also, there’s an attempt to try and ground all this in a similar way to complexity classes in computing, but that part seemed sketchy.]
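For the code-minded, the {subject, experience, object} tuple and the “experience of an experience” move from the summary can be sketched roughly as follows (a toy illustration; the names are mine, not gworley’s):

```python
from typing import Any, NamedTuple


class Experience(NamedTuple):
    """A minimal model of the {subject, experience, object} tuple:
    the tuple itself stands in for the 'experience' relation."""
    subject: Any
    object: Any


# First-order experience: a subject experiencing a "thing" like a chair.
seeing_chair = Experience(subject="me", object="chair")

# The Cartesian Camcorder idea: the object of an experience can itself
# be another experience. This self-referential nesting is what gworley
# metaphorically maps onto consciousness.
noticing_seeing = Experience(subject="me", object=seeing_chair)

# The nesting can continue to higher orders (experiencing the noticing).
reflecting = Experience(subject="me", object=noticing_seeing)

assert isinstance(noticing_seeing.object, Experience)
```

The point of the sketch is just that nothing special is needed for the higher levels: the same tuple structure applies whether the object is a “thing,” a system, or another experience.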
My thoughts:
As with most analyses of this kind, it’s frustrating that it doesn’t immediately point to things we can do differently; such theories are fairly far removed from practice compared to, say, new debiasing research. However, I think that this kind of thinking can be good, especially since ideas in rationality like Actually Trying do sort of skirt the idea of breaking away from assumed social expectations. So I do think there’s value in directly explicating these things, if only for the benefit of more clearly building our internal worldviews.
But it does seem to me that the sort of people who tend to coherently understand this stuff are already past the level these sorts of models refer to, which makes them seem less useful as a way to “level up” people, or as a thing to give to aspiring friends.
Thanks, I appreciate the summary!
Apologies that the complexity stuff seems a bit sketchy. I had to cut the strongest formalisms because developing them was very slow going and will probably take me months. I have a sketch of a mathematical approach to phenomenology as topology over state space, but I need time to fully develop it before I can flesh out a rigorous explanation of what I mean by complexity here.
You’re right that this is not especially applicable in this form. It’s like describing topology when what you really need to do is integrate a function over real numbers. But given my hermeneutical preferences I think just trying to understand it is likely to lead to some “leveling up”, as you put it.
Thanks for giving more information about your theory.
I’d like to express skepticism that people would be able to level up after reading such a treatise. Maybe my mental models of past me and other people aren’t very good, but my impression is that giving this sort of stuff to people is not an ideal way to boost people’s mental models.
As in, the act of explicating the whole phenomenon behind breaking down what we mean by concepts, systems, etc. seems very different from the act of giving people the tools, or leading them to the point, where they can start to update their mental models and explore different systems. (That is to say, the act of writing about Leveling Up seems very different from the things you’d need to do to help someone Level Up.)
Happy to extend this if it turns out that we’ve got differing ideas on the matter.
Sure, my program for helping people achieve more phenomenological complexity is not to point them at this. It’s instead to follow the advice I’ve previously outlined: act into fear and abandon all hope. That can be hard to apply, though, so folks often need to be specifically induced to face particular fears and abandon particular hopes. Once they’ve done this and experienced worldview disintegration, they can reintegrate with whatever they like (at least until they have to disintegrate again to make progress), and basically any choice seems fine there, since repeated disintegration eventually forces convergence by requiring the worldview to accept all of reality.
So basically my advice is keep breaking down your assumptions and rebuilding them until you have none left. Then you will be enlightened.
Hm, I think the thing I’m trying to point at is my general intuition about how people tend to react (it’s just an internal model, so feel free to counter with better-informed info). It says something like: if you want people to even get to the point where they can look at written essays on rationality and think to themselves, “Wait, this could apply to me!”, you need some sort of baseline of rationality to catalyze the whole thing.
My claim is that getting people to this sort of optimizing step, whereupon everything else can work, requires something different from what conventional wisdom might dictate (e.g. writing things and/or giving people general advice and telling them to go with it).
Something like “personally interact with promising individuals, send off social signals that you know cool stuff, and pique their interest; then slowly get them to care and start them off on their journey via small tidbits that cultivate their interest” is, I claim, more effective than just finding someone and saying, “Hey! Read this; it’ll shatter your worldview!”
I agree: interactively working in person is more effective.