(Thanks for laying out your position in this level of depth. Sorry for how long this comment turned out. I guess I wanted to back up a bunch of my agreement with words. It’s a comment for the sake of everyone else, not just you.)
I think there’s something to what you’re saying: the mentality itself could be better. The Sequences have been criticized because Eliezer didn’t cite previous thinkers all that much, but at least as far as the science goes, as you said, he was drawing on academic knowledge. I also think we’ve lost something precious with the absence of epic topic reviews like the ones Luke used to write. Kaj Sotala still draws heavily on outside knowledge, John Wentworth did a great review of Biological Circuits, and we get SSC crossposts that do this, but otherwise posts aren’t heavily referencing or building upon outside material. I concede that I would like to see a lot more of that.
I think Kaj was rightly disappointed that he didn’t get more engagement with his post whose gist was “this is what the science really says about S1 & S2, one of your most cherished concepts, LW community”.
I wouldn’t say the typical approach is strictly bad. There’s value in thinking freshly for oneself, and failing to reference previous material shouldn’t be a crime or make a text unworthy. But yeah, it’d be pretty cool if, after Alkjash laid out Babble & Prune (which intuitively feels so correct), someone had dug through what empirical science we have to see whether the picture lines up. Or heck, actually gone and done some kind of experiment. I bet it would turn up something interesting.
And I think what you’re saying is that the issue isn’t just that people aren’t following up with scholarship and empiricism on new ideas and models, but that they’re actually forgetting that these are the next steps. Instead, they’re overconfident in our homegrown models, as though LessWrong were the one place able to come up with good ideas. (Sorry, some of this might be my own words.)
A label I’d apply to a lot of LessWrong posts is “engaging articulation of a point which is intuitive in hindsight”, or “creation of common vocabulary around such points”. That’s pretty valuable, but I do think solving the hardest problems will take more.
-----
You use the word “reliably” in a few places. It feels like it’s doing some work in your statements, and I’m not entirely sure what you mean or why it’s important.
-----
Here’s a model which is interesting, though its connection may not be obvious. I was speaking to a respected rationalist thinker this week, and they classified potential writing on LessWrong into three categories:
1. Writing to help oneself figure things out. Like a diary, but publicly shared.
2. People exchanging “letters” as they attempt to figure things out. Like old-school academic journals.
3. Someone who has something mostly figured out but faces a large inferential distance to bridge. They write a large collection of posts trying to cover that distance. One example is The Sequences; more recent examples come from John Wentworth and Kaj Sotala.
I mention this because I recall you (alongside the rationalist thinker) complaining about the lack of people “presenting their worldviews on LessWrong”.
The kinds of epistemic norms I think you’re advocating feel like a natural fit for the 2nd kind of writing, but it’s less clear to me how they should apply to people presenting worldviews. Maybe it’s no more complicated than: it’s fine to present your worldview without a tonne of evidence, but people shouldn’t forget that the evidence hasn’t been presented, and that its feeling intuitively correct isn’t enough.
-----
There’s something in here about Epistemic Modesty, something, something. Some part of me reads you as calling for more of that, which I’m wary of, but I don’t currently have more to say than flagging it as maybe a relevant variable in any disagreements here.
We probably do disagree about the value of academic sources, or about what it takes to get value from them. Hmm. Maybe it’s that there’s something to be said for thinking about models and assessing their plausibility yourself rather than relying on likely very flawed empirical studies.
Maybe I’m in favor of large careful reviews of what science knows but less in favor of trying to find sources for each idea or model that gets raised. I’m not sure.
-----
I can’t recall whether I’ve written publicly much about this, but a model I’ve had for a year or more is that for LW to make intellectual progress, we need to become a “community of practice”, not just a “community of interest”. Martial arts vs. literal stamp collecting. (Streetfighting might be better still, since it actually tests real fighting ability.) It’s great that many people find LessWrong a guilty pleasure they feel less guilty about than Facebook, but for us to make progress, people need to see LessWrong as a place where one of the things you do is show up and do Serious Work, some of which is relatively hard and boring, like writing and reading lit reviews.
I suspect that the cap on the epistemic standards people hold material to is downstream of the level of effort people are calibrated on applying. But maybe it goes in the other direction, so I don’t know.
The 2018 Review is probably biased towards the posts which are most widely read, i.e., those easiest and most enjoyable to read, rather than solely rewarding those with the best contributions. Not overwhelmingly, but enough. Maybe the same is true for karma. I’m not sure how to relate to that.
-----
“3. Insofar as many of these scattered plausible insights are actually related in deep ways, trying to combine them so that the next generation of LW readers doesn’t have to separately learn about each of them, but can rather download a unified generative framework.”
This sounds partially like distillation work plus extra integration. And sounds pretty good to me too.
-----
I still remember my feeling of disillusionment with the LessWrong community relatively soon after I joined in late 2012. I realized that the bulk of members didn’t seem serious about advancing the Art. I never heard people discussing new results from cognitive science and how to apply them, even though that’s what the Sequences were in large part, and the Sequences hardly claimed to be complete! I guess I do relate somewhat to your “desperate effort” comment, though we’ve got some people trying pretty hard whom I wouldn’t want to shortchange.
We do good stuff, but more is possible. I appreciate the reminder. I hope we succeed at pushing the culture and mentality in directions you like.
This is only tangentially relevant, but adding it here as some of you might find it interesting:
Venkatesh Rao has an excellent Twitter thread on why most independent research only reaches this kind of initial exploratory level (he tried it for a bit before moving to consulting). It’s pretty pessimistic, but there is a somewhat more optimistic follow-up thread on potential new funding models. The key point is that the later stages are just really effortful and time-consuming, in a way that keeps out a lot of people trying to do this as a side project alongside a separate main job (which I think is the case for a lot of LW contributors?).
Quote from that thread:
Research =
a) long time between having an idea and having something to show for it that even the most sympathetic fellow crackpot would appreciate (not even pay for, just get)
b) a >10:1 ratio of background invisible thinking in notes, dead-ends, eliminating options etc
With a blogpost, it’s like a week of effort at most from idea to mvp, and at most a 3:1 ratio of invisible to visible. That’s sustainable as a hobby/side thing.
To do research-grade thinking you basically have to be independently wealthy and accept 90% deadweight losses
Also, I just wanted to say good luck! I’m a relative outsider here with pretty different interests from LW’s core topics, but I do appreciate people trying to do serious work outside academia. I’ve been trying to do this myself and have thought a fair bit about what’s currently missing (I wrote that in a kind of jokey style, but I’m serious about the topic).
Thanks, these links seem great! I think this is a good (if slightly harsh) way of making a similar point to mine:
“I find that autodidacts who haven’t experienced institutional R&D environments have a self-congratulatory low threshold for what they count as research. It’s a bit like vanity publishing or fan fiction. This mismatch doesn’t exist as much in indie art, consulting, game dev etc”
Also, I liked your blog post! More generally, I strongly encourage bloggers to have a “best of” page, or something that directs people to good posts. I’d be keen to read more of your posts but have no idea where to start.
Thanks! I have been meaning to add a ‘start here’ page for a while, so it’s good to have the extra push :) It seems particularly worthwhile in my case because a) there’s no one clear theme, and b) I’ve been writing a lot of low-quality experimental posts this year because the pandemic trashed my motivation, so recent posts aren’t really representative of my normal output.
For now, some of my better posts from the last couple of years might be Cognitive decoupling and banana phones (tracing back the original precursor of Stanovich’s idea), The middle distance (a writeup of a useful and somewhat obscure idea from Brian Cantwell Smith’s On the Origin of Objects), and the negative probability post and its followup.