tl;dr: that raised some interesting points. I’m not sure “actionable” is the right lens but something nearby resonated.
My current take is something like “yes, LessWrong is pretty oriented towards propositional knowledge”. Not necessarily because it’s the best or only way, but, as Romeo said, because it’s a thing that can scale in a particular way and so is useful to build around.
Your point that “fake frameworks that are actionable are seen as preliminary, but there doesn’t seem to be a corresponding sense that compelling-but-inactionable-models are also ‘preliminary’” was interesting. I hadn’t quite thought through that lens before.
Thinking through that lens a bit now, what I’d guess is that “actually, yes, non-actionable-things are also sort of preliminary.” (I think part of the point of the LW Review was to check ‘okay, has anyone actually used these ideas in a way that either connected directly with reality, or showed some signs of eventually connecting.’ A concept I kept pointing at during the Review process was “which ideas were true, and also useful?”)
But, I think there is still some kind of tradeoff being made here, that isn’t quite about actionability vs vetted-explicit-knowledge. The tradeoff is instead along some vaguer axis of “the sort of stuff I imagine Val is excited about”, which has more to do with… like, in an environment that’s explicitly oriented towards bridging gaps between explicit and tacit knowledge, with tacit knowledge treated as something that should eventually get type-checked into explicit knowledge and vetted if possible, some frames are going to have an easier time being talked about than others.
So, I do think there are going to be some domains that LessWrong is weaker at, and that’s okay. I don’t think actionability is the thing though.
Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing. A lot of the point of the original sequences was to convey tacit knowledge about how-to-think. A lot of the currently-hard-to-talk-about-explicitly stuff is real important to figure out how to convey and write up nuanced sequences about. (I do think it’s necessary to figure out how to convey it in writing, as much as possible, because there are serious limits to in-person-workshop scalability.)
I’m not sure “actionable” is the right lens but something nearby resonated.
Agreed. I meant actionability as just one example of the type. A different sort of example would be Scott introducing the frame of Moloch. His essay didn’t really offer new explicit models or explanations, and it didn’t really make any action pathways viable for the individual reader. But it was still powerful in a way that I think importantly counts.
By way of contrast, way back in the day when CFAR was but a glimmer in Eliezer’s & Anna’s eye, there was an attempted debiasing technique vs. the sunk cost fallacy called “Pretend you’re a teleporting alien”. The idea was to imagine that you had just teleported into this body and mind, with memories and so on, but that your history was something other than what this human’s memory claimed. Anna and Eliezer offered this to a few people, presumably because the thought experiment worked for them, but by my understanding it fell flat. It was too boring to use. It sure seems actionable, but in practice it neither lit up a meaningful new perspective (the way Meditations on Moloch did) nor afforded a viable action pathway (despite having specific steps that people could in theory follow).
What it means to know (in a way that matters) why that technique didn’t work is that you can share a debiasing technique with others that they can and do use. Models and ideas might be helpful for getting there… but something goes really odd when the implicit goal is the propositional model. Too much room for conversational Goodharting.
But a step in the right direction (I think) is noticing that the “alien” frame doesn’t in practice have the kind of “kick” that the Moloch idea does. Despite having in-theory actionable steps, it doesn’t galvanize a mind with meaning. Turns out, that’s actually really important for a viable art of rationality.
Not necessarily because it’s the best or only way, but, as Romeo said, because it’s a thing that can scale in a particular way and so is useful to build around.
I want to emphasize, in case that wasn’t clear, that I’m not trying to denigrate this. I think this is valuable and good.
…an environment that’s explicitly oriented towards bridging gaps between explicit and tacit knowledge…
This resonates pretty well with where my intuition tends to point.
Some of this is just about tacit or experiential knowledge just being real-damn-hard-to-convey in writing.
That’s something of an illusion. It’s a habit we’ve learned in how we relate to writing. (Although it’s kind of true because we’ve all learned it… but it’s possible to circumvent this by noticing what’s going on, which a subcommunity like LW can potentially do.)

Contrast with e.g. Lectio Divina.
More generally, one can dialogue with the text rather than just scan it for information. You can read a sentence and let it sink in. How does it feel to read it? What is it like to wear the perspective that would say that sentence? What’s the feel on the inside of the worldview being espoused? How can you choose to allow the very act of reading to transform you?
A lot of Buddhist texts seem to have been designed to be read this way. You read the teachings slowly, letting them really absorb, and in doing so they guide your mind to mimic the way of being that lets you slip into insight.
This is also part of the value of poetry. What makes poetry powerful and important is that it’s writing designed specifically to create an impact beneath the propositional level. There’s a reason Rumi focused on poetry after his enlightenment:
“Sit down, be still, and listen.
You are drunk
and this is
the edge of the roof.”
~Rumi
Culture has quite a few tools like these for powerfully conveying deep ways of knowing. Along the same lines as my earlier comment above, I can imagine a potential LessWrong that wants to devote energy and effort toward mastering this multimodal communication process in order to create a powerful community of deep practice of rationality. But it’s not what I observe. I doubt that three months from now there’ll be any relevant uptick in how much poetry appears on LW, for instance. It’s just not what the culture seems to want — which, again, seems like a fine choice.