Some thoughts I wanted to share on this aspect (speaking only for myself, not Oli or anyone else)
[quick meta note: the deadline for editing was extended till the 13th, and I think there’s a chance we may extend it further]
That was me trying to (a) brute force motivation for the reader and (b) navigate some social tension I was feeling around what it means to be able to make a claim here. In particular I was annoyed with Oli and wanted to sidestep discussion of the lemons problem. My focus was actually on making something in culture salient by offering a fake framework. The thing speaks for itself once you look at it. After that point I don’t care what anyone calls it.
This would, alas, leave out the emphasis that it’s a fake framework. But I’ve changed my attitude about how much hand-holding to do for stuff like that. Part of the reason I put that in the beginning was to show the LW audience that I was taking it as fake, so as to sidestep arguments about how justified everything is or isn’t. At this point I don’t care anymore. People can project whatever they want on me because, uh, I can’t really stop them anyway. So I’m not going to fret about it.
I agree that axing the previous opening section was mostly good – it was a bit overwrought, and skipping to the meat of the article seems better. I think what I’d personally prefer (over the new version) is a quick: “Epistemic Status: Fake Framework”. You basically have that in the new version (linking to Fake Frameworks at the beginning), but we have the Epistemic Status convention to handle it slightly more explicitly, without taking up much space.
What I think I actually prefer, overall (for LW culture) is something like:
Individual posts can give a quick disclaimer to let readers know how they’re supposed to relate to an article, epistemically. Fake Frameworks are a fine abstraction. This should be an established concept that doesn’t require much explanation each time.
Over the long term, the expectation is that if Fake Frameworks stick around, they eventually get grounded out into “real” frameworks, or at least that the limits of the framework are more clearly spelled out. This often takes lots of exploration, experimentation, modeling, and explanatory work, which can take years. It makes sense to have a shared understanding that it takes that long (especially since it’s often not people’s full-time job to write this sort of thing up), but I think it’s pretty important to the intellectual culture for people to trust that this is part of the long-term goal (for things discussed on LessWrong, anyhow).
I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.
I generally prefer to handle things with “escalating rewards and recognition” rather than rules that crimp people’s ability to brainstorm, or to write explanations for people who have some-but-not-all of a set of prerequisites.
So one of the things I’m pretty excited about for the review process is creating a more robust system for (and explicit answer to the question of) “when/how do we re-examine things that aren’t rigorously grounded?”.
I don’t think things necessarily need to be ‘rigorously grounded’ to be in the 2018 Book, but I do think the book should include “taking stock of ‘what the epistemic status of each post is’ and checking for community consensus on ‘whether the claims of the post hold up’”, with some posts flagged as “this seems straightforwardly true” and others flagged as “this seems to point at an interesting and useful thing, but further work is needed.”
This is all to say: I have gotten value out of this post and think it’s pointing at a true thing, but it’s also a post that I’d be particularly interested in people reviewing, from a standpoint of “okay, what actual claims is the post implying? What are the limits of the fake framework here? How does this connect to the rest of our best understanding of what’s going on in the brain?” (the previous round of commenters explored this somewhat but only in very vague terms).
I think what I’d personally prefer (over the new version) is a quick: “Epistemic Status: Fake Framework”.
Like so? (See edit at top.) I’m familiar with the idea behind this convention. Just not sure how LW has started formatting it, or if there’s desire to develop much precision on this formatting.
I think a lot of the earlier disagreements or concerns at the time had less to do with flagging frameworks as fake, and more to do with not trusting that they were eventually going to ground out as “connected more clearly to the rest of our scientific understanding of the world”.
Mmm. That makes sense.
My impression looking back now is that the dynamic was something like:
[me]: Here’s an epistemic puzzle that emerges from whether people have or haven’t experienced flibble.
[others]: I don’t believe there’s an epistemic puzzle until you show there’s value in experiencing flibble.
[me]: Uh, I can’t, because that’s the epistemic puzzle.
[others]: Then I’m correct not to take the epistemic puzzle seriously given my epistemic state.
[me]: You realize you’re assuming there’s no puzzle to conclude there’s no puzzle, right?
[others]: You realize you’re assuming there is a puzzle to conclude there is, right? Since you’re putting the claim forward, the onus is on you to break the symmetry to show there’s something worth talking about here.
[me]: Uh, I can’t, because that’s the epistemic puzzle.
(Proceed with loop.)
What I wasn’t acknowledging to myself (and thus not to anyone else either) at the time was that I was loving the frustration of being misunderstood. Which is why I got exasperated instead of just… being clearer given feedback about how I wasn’t clear.
I’m now much better at just communicating. Mostly by caring a heck of a lot more about actually listening to others.
I think you’re naming something I didn’t hear back then. And if nothing else, it’s something you value now, and I can see how it makes sense as a value to want to ground Less Wrong in. Thanks for speaking to that.
I don’t think things necessarily need to be ‘rigorously grounded’ to be in the 2018 Book, but I do think the book should include “taking stock of ‘what the epistemic status of each post is’ and checking for community consensus on ‘whether the claims of the post hold up’”, with some posts flagged as “this seems straightforwardly true” and others flagged as “this seems to point at an interesting and useful thing, but further work is needed.”
That seems great. Kind of like what Duncan did with the CFAR handbook.
This is all to say: I have gotten value out of this post and think it’s pointing at a true thing, but it’s also a post that I’d be particularly interested in people reviewing, from a standpoint of “okay, what actual claims is the post implying? What are the limits of the fake framework here? How does this connect to the rest of our best understanding of what’s going on in the brain?” (the previous round of commenters explored this somewhat but only in very vague terms).
Mmm. That’s a noble wish. I like it.
I won’t respond to that right now. I don’t know enough to offer the full rigor I imagine you’d like, either. So I hope for your sake that others dive in on this.
I won’t respond to that right now. I don’t know enough to offer the full rigor I imagine you’d like, either. So I hope for your sake that others dive in on this.
Yeah, to be clear, I am expecting this sort of thing to take years. (And part of the point of the review process is that it can be more of a collective effort to either flag issues or resolve them.)
What seems like an achievable thing to shoot for this year, by someone-or-other (and I think worth doing whether this post ends up getting included in the book or not), is something like:
a) If anyone does think the post is actually misleading in some way, now’s the time for them to say so. (Obviously this isn’t something I’d generally expect authors to do unless they’ve actually changed their mind on something.)
b) Write out a list of pointers for “what sort of places might you look to figure out how this connects to the rest of the psych or neuroscience literature, or what experiments you’d want to see run or models built if there isn’t yet existing literature on this”. Not as a “fully ground this out in one month”, but as “notes for future people to follow up on.”