I can’t point to a specific post without doing more digging than I care to do right now, and I wouldn’t be too shocked to find out I’m drastically wrong. It’s just my impression from (a) years of interacting with Less Wrong in the past, plus (b) popping in every now and again to see what social dynamics have and haven’t changed.
With that caveat… here are a couple of frames to triangulate what I was referring to:
In Ken Wilber’s version of Spiral Dynamics, Less Wrong is the best display of Orange I know of. Most efforts at Orange these days are weaksauce, like “I Fucking Love Science” (which is more like Amber with an Orange aesthetic) or Richard Dawkins’ “Brights” campaign. I could imagine a Less Wrong that wants to work hard at holding Orange values as it transitions into 2nd Tier (i.e., Wilber’s Teal and Turquoise Altitudes), but that’s not what I see. What I see instead is a LW that wants to continue to embody Orange more fully and perfectly, importing and translating other frameworks into Orange terms. In other words, LW seems to me to have devoted itself to keep playing in 1st Tier, which seems like a fine choice. It’s just not the one I make.
There’s a mighty powerful pull on LW to orient toward propositional knowing. The focus is super-heavy on languaging and explicit models. Questions about deeper layers of knowing (e.g., John Vervaeke’s breakdown in terms of procedural, perspectival, and participatory forms of knowing) undergo pressure to be framed in propositional terms and evaluated analytically in order to be held here. The whole thing with “fake frameworks” is an attempt to acknowledge perspectival knowing… but there’s still a strong alignment I see here with such knowing being seen as preliminary or lacking in some sense unless and until there’s a propositional analysis that shows what’s “really” going on. I notice the reverse isn’t really the case: there isn’t a demand that a compelling model or idea be actionable, for instance. This overall picture is amazing for ensuring that propositional strengths (e.g., logic) get integrated into one’s worldview. It’s quite terrible at navigating metacognitive blindspots, though.
From what I’ve seen, LW seems to want to say “yes” maximally to this direction. Which is a fine choice. There aren’t other groups that can make this choice with this degree of skill and intelligence as far as I know.
There’s just some friction with this view when I want to point at certain perspectival and participatory forms of knowing, e.g. about the nature of the self. You can’t argue an ego into recognizing itself. The whole OP was an attempt to offer a perspective that would help transform what was seeable and actionable; it was never meant to be a logical argument, really. So when asked “What can I do with this knowledge?”, it’s very tricky to give a propositional model that is actually actionable in this context — but it’s quite straightforward to give some instructions that someone can try so as to discover for themselves what they experience.
I was just noticing that bypassing theory to offer participatory forms of knowing was a mild violation of norms here as I understand them. But I was guessing it was a forgivable violation, and that the potential benefit justified the mild social bruising.
I don’t think everyone playing on the propositional level is unaware of its shortcomings; many just recognize that propositional knowledge is the knowledge that scales and is therefore worth investing in despite those shortcomings. And on the other side of things you have Kegan 3 people (I don’t like Integral terms for reasons related to this very topic) with some awareness of Kegan 5 who have skipped a healthy Kegan 4 and therefore have big holes, which they tend to paper over with spiritual bypassing. They are the counterpart to the rationalist strawmen who skipped over a healthy Kegan 3 (many of us here do have shades of this) and run into big problems when they try to go from 4 to 5 because of those holes from 3.
I don’t think everyone playing on the propositional level is unaware of its shortcomings…
I didn’t mean to imply that everyone was unaware this way. I meant to point at the culture as a whole. Like, if the whole of LW were a single person, then that person strikes me as being unaware this way, even if many of that person’s “organs” have a different perspective.
…propositional knowledge is the knowledge that scales…
That’s actually really unclear to me. Christendom would have been better defined by a social order (and thus by individuals’ knowing how to participate in that culture) than it would have been by a set of propositions. Likewise, #metoo spread because it was a viable knowing-how: read a #metoo story with the hashtag, then feel moved to share your own with the hashtag such that others see yours.
tl;dr: that raised some interesting points. I’m not sure “actionable” is the right lens but something nearby resonated.
My current take is something like “yes, LessWrong is pretty oriented towards propositional knowledge”. Not necessarily because it’s the best or only way, but, as Romeo said, because it’s a thing that can scale in a particular way and so is useful to build around.
Your point that “fake frameworks that are actionable are seen as preliminary, but there doesn’t seem to be a corresponding sense that compelling-but-inactionable-models are also ‘preliminary’” was interesting. I hadn’t quite thought through that lens before.
Thinking through that lens a bit now, what I’d guess is that “actually, yes, non-actionable-things are also sort of preliminary.” (I think part of the point of the LW Review was to check ‘okay, has anyone actually used these ideas in a way that either connected directly with reality, or showed some signs of eventually connecting.’ A concept I kept pointing at during the Review process was “which ideas were true, and also useful?”)
But, I think there is still some kind of tradeoff being made here, and it isn’t quite about actionability vs vetted-explicit-knowledge. The tradeoff is instead along some vaguer axis of “the sort of stuff I imagine Val is excited about”, which has more to do with… like, in an environment that’s explicitly oriented towards bridging gaps between explicit and tacit knowledge, with tacit knowledge treated as something that should eventually get type-checked into explicit knowledge and vetted if possible, some frames are going to have an easier time being talked about than others.
So, I do think there are going to be some domains that LessWrong is weaker at, and that’s okay. I don’t think actionability is the thing though.
Some of this is just about tacit or experiential knowledge being real-damn-hard-to-convey in writing. A lot of the point of the original sequences was to convey tacit knowledge about how-to-think. A lot of the currently-hard-to-talk-about-explicitly stuff is stuff that’s really important to figure out how to convey and write up nuanced sequences about. (I do think it’s necessary to figure out how to convey it in writing, as much as possible, because there are serious limits to in-person-workshop scalability.)
I’m not sure “actionable” is the right lens but something nearby resonated.
Agreed. I meant actionability as one example of the type of thing I’m pointing at. A different sort of example would be Scott introducing the frame of Moloch. His essay didn’t really offer new explicit models or explanations, and it didn’t really make any action pathways viable for the individual reader. But it was still powerful in a way that I think importantly counts.
By way of contrast, way back in the day when CFAR was but a glimmer in Eliezer’s & Anna’s eyes, there was an attempted debiasing technique against the sunk cost fallacy called “Pretend you’re a teleporting alien”. The idea was to imagine that you had just teleported into this body and mind, with memories and so on, but that your history was something other than what this human’s memory claimed. Anna and Eliezer offered this to a few people, presumably because the thought experiment worked for them, but by my understanding it fell flat. It was too boring to use. It sure seems actionable, but in practice it neither lit up a meaningful new perspective (the way Meditations on Moloch did) nor afforded a viable action pathway (despite having specific steps that people could in theory follow).
Knowing (in a way that matters) why that technique didn’t work means being able to share a debiasing technique with others that they can and do use. Models and ideas might be helpful for getting there… but something goes really odd when the implicit goal is the propositional model. Too much room for conversational Goodharting.
But a step in the right direction (I think) is noticing that the “alien” frame doesn’t in practice have the kind of “kick” that the Moloch idea does. Despite having in-theory actionable steps, it doesn’t galvanize a mind with meaning. Turns out, that’s actually really important for a viable art of rationality.
Not necessarily because it’s the best or only way, as Romeo said, because it’s a thing that can scale in a particular way and so is useful to build around.
I’m wanting to emphasize that I’m not trying to denigrate this. In case that wasn’t clear. I think this is valuable and good.
…an environment that’s explicitly oriented towards bridging gaps between explicit and tacit knowledge…
This resonates pretty well with where my intuition tends to point.
Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing.
That’s something of an illusion. It’s a habit we’ve learned in how to relate to writing. (Although it’s kind of true because we’ve all learned it… but it’s possible to circumvent this by noticing what’s going on, which a subcommunity like LW can potentially do.)

Contrast with e.g. Lectio Divina.
More generally, one can dialogue with the text rather than just scan it for information. You can read a sentence and let it sink in. How does it feel to read it? What is it like to wear the perspective that would say that sentence? What’s the feel on the inside of the worldview being espoused? How can you choose to allow the very act of reading to transform you?
A lot of Buddhist texts seem to have been designed to be read this way. You read the teachings slowly, letting them really absorb, and in doing so they guide your mind to mimic the way of being that lets you slip into insight.
This is also part of the value of poetry. What makes poetry powerful and important is that it’s writing designed specifically to create an impact beneath the propositional level. There’s a reason Rumi focused on poetry after his enlightenment:
“Sit down, be still, and listen.
You are drunk
and this is
the edge of the roof.”
~Rumi
Culture has quite a few tools like these for powerfully conveying deep ways of knowing. Along the same lines as I mentioned in my earlier comment above, I can imagine a potential Less Wrong that wants to devote energy and effort toward mastering this multimodal communication process in order to dynamically create a powerful community of deep practice of rationality. But it’s not what I observe. I doubt that three months from now there’ll be any relevant uptick in how much poetry appears on LW, for instance. It’s just not what the culture seems to want, which, again, seems like a fine choice.
(Note: I found this comment helpful in thinking about LessWrong, though don’t have much to say in response)
Thanks a bunch, Val. I’d say you saved me dozens if not hundreds of hours, because I was (am) pretty confused about the big picture around here.
The associated Ken Wilber image helps a lot with understanding. Now, if I don’t really get nearly half of the articles on LW, does that mean I’m redder than Orange? Are there tests on the internet where I can pretty reliably tell where I stand on that scale? Also, I’m quite sure that my goal is to get to the Turquoise level. What online resources should I study and/or what “groups” should I join, in your personal recommendation?
I’m glad to have helped. :)
I’ll answer the rest by PM. Diving into Integral Theory here strikes me as a bit off topic (though I certainly don’t mind the question).