I don’t know if I’ll ever get to a full editing of this. I’ll jot notes here of how I would edit it as I reread this.
I’d ax the whole opening section.
That was me trying to (a) brute force motivation for the reader and (b) navigate some social tension I was feeling around what it means to be able to make a claim here. In particular I was annoyed with Oli and wanted to sidestep discussion of the lemons problem. My focus was actually on making something in culture salient by offering a fake framework. The thing speaks for itself once you look at it. After that point I don’t care what anyone calls it.
This would, alas, leave out the emphasis that it’s a fake framework. But I’ve changed my attitude about how much hand-holding to do for stuff like that. Part of the reason I put that in the beginning was to show the LW audience that I was taking it as fake, so as to sidestep arguments about how justified everything is or isn’t. At this point I don’t care anymore. People can project whatever they want on me because, uh, I can’t really stop them anyway. So I’m not going to fret about it.
I had also intended the opening to have a kind of conversational tone, as part of a Sequence that I never finished (on “ontology-cracking”). I probably never will finish it at this point. So no point in making this stand-alone essay pretend to be part of an ongoing conversation.
A minor nitpick: I open the meat of the idea by telling some facts about improv theater. I suspect it’d be more engaging if I had written it as a story illustrating the experience. “Bob walked onto the stage, his heart pounding. ‘God, what do I say?’” Etc. The whole thing would have felt less abstract if I had done that. But it clearly communicated well for this audience, so that’s not a big concern.
One other reviewer mentioned how the strong examples end up obfuscating my overall point. That was actually a writing strategy: I didn’t want the point stated early on and elucidated throughout. I wanted the reader to resonate with what I was describing, and then use that resonance to point out an implication of the reader’s own life. That said, I bet I could do that with more punch and precision these days.
Reading over the “abuser”/”victim”/”rescuer” stuff, I’m now reminded of Karpman’s Triangle. I didn’t know about that at the time. Karpman was a grad student under Eric Berne, the father of Transactional Analysis. These days many folk know it as “the drama triangle”. Were I writing this essay today I might reference this triangle.
I feel like most of the value of the improv analogy is actually in the contrast between player and character. When I hear about people being impacted by this article, most of what I hear has to do with the mechanics of how the social scene unfolds and how that creates constraints (anti-slack). Which is wonderful! But if I had to choose one illumination for people to experience from this whole thing, I’d rather they get a glimpse of who they are as the player, and how much that really really isn’t the character that’s usually talking and saying “I”, “me”, and “my”. It’s immensely freeing to see this clearly. But there’s a lot of pleasure to be taken in playing genre-naïve characters, and I don’t mean to dismiss that. That’s just not the scene type I want to play in anymore. So on net, this wish of mine probably wouldn’t meaningfully affect how I’d edit this piece.
The reason for referencing Omega was to foreshadow a later post on Newcomblike self-deception.
The short version is: If Omega is modeling your self-model instead of your actual source code to predict your actions, then you’re highly incentivized to separate your self-model from your method of choosing your actions. Then you can two-box while convincing Omega you’ll one-box by sincerely but falsely believing you’re going to one-box. This paints a pretty vivid picture if you view the intelligent social web as the real-world version of Omega with “social role” playing the part of “self-model”.
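To make that incentive concrete, here’s a toy sketch (illustrative names and payoffs only, nothing from the essay itself) of what happens when the predictor reads the self-model rather than the actual decision procedure:

```python
def omega_fills_opaque_box(self_model: str) -> bool:
    """Omega predicts from the agent's self-model, not from its actual policy."""
    return self_model == "one-box"

def payout(actual_choice: str, opaque_box_full: bool) -> int:
    transparent = 1_000                       # the visible $1k box
    opaque = 1_000_000 if opaque_box_full else 0
    return opaque if actual_choice == "one-box" else transparent + opaque

# (what the agent sincerely believes it will do, what it actually does)
agents = {
    "honest one-boxer":    ("one-box", "one-box"),
    "honest two-boxer":    ("two-box", "two-box"),
    "self-deceived agent": ("one-box", "two-box"),
}

for name, (self_model, actual) in agents.items():
    print(f"{name}: ${payout(actual, omega_fills_opaque_box(self_model)):,}")
```

The self-deceived agent comes out ahead of both honest agents, which is the incentive described above.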
I’d now skip that whole reference. It made sense only in my mind. And even if I had finished the Sequence this was part of, the references to Omega would make sense only to those who had finished it and then went back to reread this essay.
There’s something about how this essay uses the concept of slack that nags at me. I suspect it’s fine for the purposes of the 2018 review, but I’d be remiss not to mention it. The intuition about slack is itself interpreted from within the social web. But slack affects only the character. So although slack is a genre-savvy concept, it’s still a concept within the web itself. That introduces a dimension of self-reference that might be elegantly self-reinforcing, paradoxical, or something else. I honestly don’t know.
This has me wondering about a type of construct: genre-savvy concepts. This whole model is an example, as is the concept of genre-savviness. I suspect that’s a gateway to an insight type that’s usually called “spiritual”.
There’s a bit where I refer to the possibility of using Looking to shift roles. I have a much more sophisticated view of this now. I think I was being truthful and reasonably accurate… and yet for the sake of the essay I would either expand on that reference to clarify it, or remove the reference entirely. It’s not helpful to say “There’s a magic consciousness thingie you can do that’ll do things your character can’t understand” if that’s literally all I say about it.
So, with all that said, here are the edits I’d make:
Cut the opening section.
Add a hyperlink to Karpman’s Triangle.
Erase references to Omega, maybe expanding a bit where needed instead.
Either delete references to changing one’s fate by Looking, or spell it out in less mysterious terms.
I’ve made my edits. I think my most questionable call was to go ahead and expand the bit on how to Look in this case.
If I understand the review plan correctly, I think this means I’m past the point where I can get feedback on that edit before voting happens for this article. Alas. I’m juggling a tension between (a) what I think is actually most helpful and (b) what I imagine is most fitting to where Less Wrong culture seems to want to go.
If it somehow makes more sense to include the original and ignore this edit, I’m actually fine with that. I had originally planned on not making edits.
But I do hope this new version is clearer and more helpful. I think it has the same content as the original, just clarified a bit.
Thanks! Probably will end up with a couple more thoughts but definitely appreciate you making some time for this. :)
Indeed, thank you a lot for taking the time for this!
May be off-topic, but can you elaborate on where LW culture wants to go? Or point to a specific post...
I can’t point to a specific post without doing more digging than I care to do right now. I wouldn’t be too shocked to find out I’m drastically wrong. It’s just my impression from (a) years of interacting with Less Wrong before plus (b) popping in every now and again to see what social dynamics have and haven’t changed.
With that caveat… here are a couple of frames to triangulate what I was referring to:
In Ken Wilber’s version of Spiral Dynamics, Less Wrong is the best display of Orange I know of. Most efforts at Orange these days are weaksauce, like “I Fucking Love Science” (which is more like Amber with an Orange aesthetic) or Richard Dawkins’ “Brights” campaign. I could imagine a Less Wrong that wants to work hard at holding Orange values as it transitions into 2nd Tier (i.e., Wilber’s Teal and Turquoise Altitudes), but that’s not what I see. What I see instead is a LW that wants to continue to embody Orange more fully and perfectly, importing and translating other frameworks into Orange terms. In other words, LW seems to me to have devoted itself to continuing to play in 1st Tier, which seems like a fine choice. It’s just not the one I make.
There’s a mighty powerful pull on LW to orient toward propositional knowing. The focus is super-heavy on languaging and explicit models. Questions about deeper layers of knowing (e.g., John Vervaeke’s breakdown in terms of procedural, perspectival, and participatory forms of knowing) undergo pressure to be framed in propositional terms and evaluated analytically to be held here. The whole thing with “fake frameworks” is an attempt to acknowledge perspectival knowing… but there’s still a strong alignment I see here with such knowing being seen as preliminary or lacking in some sense unless and until there’s a propositional analysis that shows what’s “really” going on. I notice the reverse isn’t really the case: there isn’t a demand that a compelling model or idea be actionable, for instance. This overall picture is amazing for ensuring that propositional strengths (e.g., logic) get integrated into one’s worldview. It’s quite terrible at navigating metacognitive blindspots though.
From what I’ve seen, LW seems to want to say “yes” maximally to this direction. Which is a fine choice. There aren’t other groups that can make this choice with this degree of skill and intelligence as far as I know.
There’s just some friction with this view when I want to point at certain perspectival and participatory forms of knowing, e.g. about the nature of the self. You can’t argue an ego into recognizing itself. The whole OP was an attempt to offer a perspective that would help transform what was seeable and actionable; it was never meant to be a logical argument, really. So when asked “What can I do with this knowledge?”, it’s very tricky to give a propositional model that is actually actionable in this context — but it’s quite straightforward to give some instructions that someone can try so as to discover for themselves what they experience.
I was just noticing that bypassing theory to offer participatory forms of knowing was a mild violation of norms here as I understand them. But I was guessing it was a forgivable violation, and that the potential benefit justified the mild social bruising.
I don’t think everyone playing on the propositional level is unaware of its shortcomings; many just recognize that propositional knowledge is the knowledge that scales and is therefore worthy of investment despite those shortcomings. And on the other side of things you have Kegan 3 (I don’t like Integral terms for reasons related to this very topic) people with some awareness of Kegan 5 but having skipped a healthy Kegan 4 and therefore having big holes which they tend to paper over with spiritual bypassing. They are the counterpart to the rationalist strawmen who skipped over a healthy Kegan 3 (many of us here do have shades of this) and run into big problems when they try to go from 4 to 5 because of those holes from 3.
I don’t think everyone playing on the propositional level is unaware of its shortcomings…
I didn’t mean to imply that everyone was unaware this way. I meant to point at the culture as a whole. Like, if the whole of LW were a single person, then that person strikes me as being unaware this way, even if many of that person’s “organs” have a different perspective.
…propositional knowledge is the knowledge that scales…
That’s actually really unclear to me. Christendom would have been better defined by a social order (and thus by individuals’ knowing how to participate in that culture) than it would have been by a set of propositions. Likewise #metoo spread because it was a viable knowing-how: read a #metoo story with the hashtag, then feel moved to share your own with the hashtag such that others see yours.
tl;dr: that raised some interesting points. I’m not sure “actionable” is the right lens but something nearby resonated.
My current take is something like “yes, LessWrong is pretty oriented towards propositional knowledge”. Not necessarily because it’s the best or only way but, as Romeo said, because it’s a thing that can scale in a particular way and so is useful to build around.
Your point that “fake frameworks that are actionable are seen as preliminary, but there doesn’t seem to be a corresponding sense that compelling-but-inactionable-models are also ‘preliminary’” was interesting. I hadn’t quite thought through that lens before.
Thinking through that lens a bit now, what I’d guess is that “actually, yes, non-actionable-things are also sort of preliminary.” (I think part of the point of the LW Review was to check ‘okay, has anyone actually used these ideas in a way that either connected directly with reality, or showed some signs of eventually connecting.’ A concept I kept pointing at during the Review process was “which ideas were true, and also useful?”)
But, I think there is still some kind of tradeoff being made here, that isn’t quite about actionability vs vetted-explicit-knowledge. The tradeoff is instead along some vaguer axis of “the sort of stuff I imagine Val is excited about”, that has more to do with… like, in an environment that’s explicitly oriented towards bridging gaps between explicit and tacit knowledge, with tacit knowledge treated as something that should eventually get type-checked into explicit knowledge and vetted if possible, some frames are going to have an easier time being talked about.
So, I do think there are going to be some domains that LessWrong is weaker at, and that’s okay. I don’t think actionability is the thing though.
Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing. A lot of the point of the original sequences was to convey tacit knowledge about how-to-think. A lot of the currently-hard-to-talk-about-explicitly-stuff is stuff that’s real important to figure out how to convey and write up nuanced sequences about. (I do think it’s necessary to figure out how to convey it in writing, as much as possible, because there are serious limits to in-person-workshop scalability)
I’m not sure “actionable” is the right lens but something nearby resonated.
Agreed. I mean actionability as an example type. A different sort of example would be Scott introducing the frame of Moloch. His essay didn’t really offer new explicit models or explanations, and it didn’t really make any action pathways viable for the individual reader. But it was still powerful in a way that I think importantly counts.
By way of contrast, way back in the day when CFAR was but a glimmer in Eliezer’s & Anna’s eye, there was an attempted debiasing technique vs. the sunk cost fallacy called “Pretend you’re a teleporting alien”. The idea was to imagine that you had just teleported into this body and mind, with memories and so on, but that your history was something other than what this human’s memory claimed. Anna and Eliezer offered this to a few people, presumably because the thought experiment worked for them, but by my understanding it fell flat. It was too boring to use. It sure seems actionable, but in practice it neither lit up a meaningful new perspective (the way Meditations on Moloch did) nor afforded a viable action pathway (despite having specific steps that people could in theory follow).
What it means to know (in a way that matters) why that technique didn’t work is that you can share a debiasing technique with others that they can and do use. Models and ideas might be helpful for getting there… but something goes really odd when the implicit goal is the propositional model. Too much room for conversational Goodharting.
But a step in the right direction (I think) is noticing that the “alien” frame doesn’t in practice have the kind of “kick” that the Moloch idea does. Despite having in-theory actionable steps, it doesn’t galvanize a mind with meaning. Turns out, that’s actually really important for a viable art of rationality.
Not necessarily because it’s the best or only way but, as Romeo said, because it’s a thing that can scale in a particular way and so is useful to build around.
I’m wanting to emphasize that I’m not trying to denigrate this. In case that wasn’t clear. I think this is valuable and good.
…an environment that’s explicitly oriented towards bridging gaps between explicit and tacit knowledge…
This resonates pretty well with where my intuition tends to point.
Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing.
That’s something of an illusion. It’s a habit we’ve learned in terms of how to relate to writing. (Although it’s kind of true because we’ve all learned it… but it’s possible to circumnavigate this by noticing what’s going on, which a subcommunity like LW can potentially do.)
Contrast with e.g. Lectio Divina.
More generally, one can dialogue with the text rather than just scan it for information. You can read a sentence and let it sink in. How does it feel to read it? What is it like to wear the perspective that would say that sentence? What’s the feel on the inside of the worldview being espoused? How can you choose to allow the very act of reading to transform you?
A lot of Buddhist texts seem to have been designed to be read this way. You read the teachings slowly, to let them really absorb, and in doing so they guide your mind to mimic the way of being that lets you slip into insight.
This is also part of the value of poetry. What makes poetry powerful and important is that it’s writing designed specifically to create an impact beneath the propositional level. There’s a reason Rumi focused on poetry after his enlightenment:
“Sit down, be still, and listen.
You are drunk
and this is
the edge of the roof.”
~Rumi
Culture has quite a few tools like these for powerfully conveying deep ways of knowing. Along the same lines as I mentioned in my earlier comment above, I can imagine a potential Less Wrong that wants to devote energy and effort toward mastering this multimodal communication process in order to dynamically create a powerful community of deep practice of rationality. But it’s not what I observe. I doubt three months from now that there’ll be any relevant uptick in how much poetry appears on LW, for instance. It’s just not what the culture seems to want — which, again, seems like a fine choice.
(Note: I found this comment helpful in thinking about LessWrong, though don’t have much to say in response)
Thanks a bunch, Val. I say you saved me dozens if not hundreds of hours, because I was (am) pretty confused about the big picture around here.
The associated Ken Wilber image helps with the understanding a lot. Now, if I don’t really get nearly half of the articles on LW, does that mean I’m redder than orange? Are there tests on the internet where I can pretty reliably tell where I’m standing on that scale? Also, I’m quite sure that my goal is to get to the turquoise level. What online resources should I learn from and/or what “groups” should I join, in your personal recommendation?
I’m glad to have helped. :)
I’ll answer the rest by PM. Diving into Integral Theory here strikes me as a bit off topic (though I certainly don’t mind the question).
Some thoughts I wanted to share on this aspect (speaking only for myself, not Oli or anyone else)
[quick meta note: the deadline for editing was extended till the 13th, and I think there’s a chance we may extend it further]
That was me trying to (a) brute force motivation for the reader and (b) navigate some social tension I was feeling around what it means to be able to make a claim here. In particular I was annoyed with Oli and wanted to sidestep discussion of the lemons problem. My focus was actually on making something in culture salient by offering a fake framework. The thing speaks for itself once you look at it. After that point I don’t care what anyone calls it.
This would, alas, leave out the emphasis that it’s a fake framework. But I’ve changed my attitude about how much hand-holding to do for stuff like that. Part of the reason I put that in the beginning was to show the LW audience that I was taking it as fake, so as to sidestep arguments about how justified everything is or isn’t. At this point I don’t care anymore. People can project whatever they want on me because, uh, I can’t really stop them anyway. So I’m not going to fret about it.
I agree that axing the previous opening section was mostly good – it was a bit overwrought and skipping to the meat of the article seems better. I think what I’d personally prefer (over the new version), is a quick: “Epistemic Status: Fake Framework”. You sort of basically have that with the new version (linking to Fake Frameworks at the beginning), but we have the Epistemic Status convention to handle it slightly more explicitly, without taking up much space.
What I think I actually prefer, overall (for LW culture) is something like:
Individual posts can give a quick disclaimer to let readers know how they’re supposed to relate to an article, epistemically. Fake Frameworks are a fine abstraction. This should be an established concept that doesn’t require much explanation each time.
Over the long term, there is an expectation that if Fake Frameworks stick around, they get grounded out into “real” frameworks, or at least that the limits of the framework are more clearly spelled out. This often takes lots of exploration, experimentation, modeling, and explanatory work, which can take years. It makes sense to have a shared understanding that it takes years (esp. because often it’s not people’s full time job to be writing this sort of thing up), but I think it’s pretty important to the intellectual culture for people to trust that that’s part of the longterm goal (for things discussed on LessWrong anyhow).
I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.
I generally prefer to handle things with “escalating rewards and recognition” rather than rules that crimp people’s ability to brainstorm, or write things that explain things to people with some-but-not-all-of-a-set-of-prerequisites.
So one of the things I’m pretty excited about for the review process is creating a more robust system for (and explicit answer to the question of) “when/how do we re-examine things that aren’t rigorously grounded?”.
I don’t think things necessarily need to be ‘rigorously grounded’ to be in the 2018 Book, but I do think the book should include “taking stock of ‘what the epistemic status of each post is’ and checking for community consensus on whether the claims of the post hold up’”, with some posts flagged as “this seems straightforwardly true” and others flagged as “this seems to point in an interesting and useful thing, but further work is needed.”
This is all to say: I have gotten value out of this post and think it’s pointing at a true thing, but it’s also a post that I’d be particularly interested in people reviewing, from a standpoint of “okay, what actual claims is the post implying? What are the limits of the fake framework here? How does this connect to the rest of our best understanding of what’s going on in the brain?” (the previous round of commenters explored this somewhat but only in very vague terms).
I think what I’d personally prefer (over the new version), is a quick: “Epistemic Status: Fake Framework”.
Like so? (See edit at top.) I’m familiar with the idea behind this convention. Just not sure how LW has started formatting it, or if there’s desire to develop much precision on this formatting.
I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.
Mmm. That makes sense.
My impression looking back now is that the dynamic was something like:
[me]: Here’s an epistemic puzzle that emerges from whether people have or haven’t experienced flibble.
[others]: I don’t believe there’s an epistemic puzzle until you show there’s value in experiencing flibble.
[me]: Uh, I can’t, because that’s the epistemic puzzle.
[others]: Then I’m correct not to take the epistemic puzzle seriously given my epistemic state.
[me]: You realize you’re assuming there’s no puzzle to conclude there’s no puzzle, right?
[others]: You realize you’re assuming there is a puzzle to conclude there is, right? Since you’re putting the claim forward, the onus is on you to break the symmetry to show there’s something worth talking about here.
[me]: Uh, I can’t, because that’s the epistemic puzzle.
(Proceed with loop.)
What I wasn’t acknowledging to myself (and thus not to anyone else either) at the time was that I was loving the frustration of being misunderstood. Which is why I got exasperated instead of just… being clearer given feedback about how I wasn’t clear.
I’m now much better at just communicating. Mostly by caring a heck of a lot more about actually listening to others.
I think you’re naming something I didn’t hear back then. And if nothing else, it’s something you value now, and I can see how it makes sense as a value to want to ground Less Wrong in. Thanks for speaking to that.
I don’t think things necessarily need to be ‘rigorously grounded’ to be in the 2018 Book, but I do think the book should include “taking stock of ‘what the epistemic status of each post is’ and checking for community consensus on whether the claims of the post hold up’”, with some posts flagged as “this seems straightforwardly true” and others flagged as “this seems to point in an interesting and useful thing, but further work is needed.”
That seems great. Kind of like what Duncan did with the CFAR handbook.
This is all to say: I have gotten value out of this post and think it’s pointing at a true thing, but it’s also a post that I’d be particularly interested in people reviewing, from a standpoint of “okay, what actual claims is the post implying? What are the limits of the fake framework here? How does this connect to the rest of our best understanding of what’s going on in the brain?” (the previous round of commenters explored this somewhat but only in very vague terms).
Mmm. That’s a noble wish. I like it.
I won’t respond to that right now. I don’t know enough to offer the full rigor I imagine you’d like, either. So I hope for your sake that others dive in on this.
I won’t respond to that right now. I don’t know enough to offer the full rigor I imagine you’d like, either. So I hope for your sake that others dive in on this.
Yeah, to be clear I am expecting this sort of thing to take years to do. (and, part of the point of the review process is that it can be more of a collective effort to either flag issues or resolve them)
What seems like an achievable thing to shoot for this year, by someone-or-other (and I think worth doing whether this post ends up getting included in the book or not), is something like
a) if anyone does think the post is actually misleading in some way, now’s the time for them to say so. (Obviously this isn’t something I’d generally expect authors to do, unless they’ve actually changed their mind on a thing).
b) write out a list of pointers for “what sort of places might you look to figure out how this connects to the rest of the psych literature or neuroscience, or what experiments you’d want to see run or models built if there isn’t yet existing literature on this”. Not as a “fully ground this out in one month”, but “notes for future people to follow up on.”