I’m not sure I know what “rationalist culture” refers to anymore. Several candidate referents have blurred together and new ones have been introduced. It could be: lesswrong.com culture; humanity’s rationalist cultures of various stripes; the rationalist cultures descended from lesswrong (but those are many at this point); the sequences view; the friend networks I have (which mostly don’t have the problems I’d complain about, since I filter my friends for people I want to be friends with!); the agi safety research field (which seems to be mostly not people who think of themselves as “rationalists” anymore); the berkeley rat crowd; “rationalist-adjacent” people on twitter; the thing postrats say is rationalist; a particular set of discords; some other particular set of discords; scott alexander fans; some vague combination of the above; people who like secular solstice...
straw vulcan is more accurate than people give it credit for. a lot of people around these parts undervalue academia’s output and independent scholarship, and reinvent a lot of stuff. folks tend to have an overly reductive view of politics: either “only individuals exist and cannot be aggregated” or “only the greater good exists, individual needs not shared by others don’t exist”. you know, one of the main dimensions of variation that people in general are confused about. I dunno, it seems like the main thing wrong with rationalist culture is that it thinks of itself as rationalist, when in fact it’s “just” another science-focused culture. shrug.
Do you have any favorite examples of the straw vulcan thing?
I don’t have any examples ready at hand. It tends to be a pattern I see in people who strike me as somehow new to the concept of “rationalism”, people who just read the sequences and are excited to tell everyone about how they’re ingroup now.
I dunno. Wait, maybe I could cite this comment by tropicalfruit as having the kind of vibe I’m thinking of: “oh no, are emotions truly useless? is morality fake?” It’s an understandable question, but still! https://www.lesswrong.com/posts/z4Rp6oBtYceZm7Q8s/what-do-you-think-is-wrong-with-rationalist-culture?commentId=HkKjobkvT6sfvRnG2
I get the sense that the rationalist vibe involves downregulating brain networks that implement important decision theory, because there’s no explicit description available of why those systems are key to the human {genome+memeplex}’s approximate learned decision theory.
a lot of people around these parts undervalue academia’s output and independent scholarship and reinvent a lot of stuff.
That’s certainly my impression. I’ve been peeking in here off and on for several years, but became more active last June when I started (cross-)posting here and commenting a bit.
I have a PhD that’s traditional in the sense that I learned to search, read, value, and cite the existing literature on a topic I’m working on. That skill set seems to be missing here, leading, yes, to unnecessary reinvention. I recall reading a post several months ago making the point that some of the posts here are of high enough quality that they should be posted to, for example, arXiv. Seems reasonable to me. But if you want to do that, you need to pay more attention to what other people are thinking and writing about.
I’d love to hear your thoughts on how to compress the training one gets at the start of and throughout a PhD in how to learn effectively from ongoing research. Many folks on here either don’t have time, or don’t think they have time, to go to school, so it would be nice to gather resources on how to learn it quickly. I’ve also been asking AIs questions like this, and I share the good answers when they come up.
That’s a tough one, in part because the fields vary so much. I was in an English department, so that’s what my degree is in. But my real training came as part of a research group in computational linguistics that was in the linguistics department. I didn’t actually do any programming. I worked on knowledge representation, a big part of old-school computational linguistics (before NLP).
But there are two aspects to this. One is getting the level of intellectual maturity and sophistication you need to function as a disciplined independent thinker. The other is knowing the literature. In some ways they interact and support one another, but in some ways they are orthogonal.
I learned the most when I found a mentor, the late David G. Hays. I wanted to learn his approach to semantics. He tutored me for an hour or two once a week for a semester. That’s the best. It’s also relatively rare to get that kind of individual attention. Still, finding a mentor is the best possible thing you could do.
At the same time I had a job preparing abstracts for The American Journal of Computational Linguistics. That meant I had to read a wide variety of material and prepare abstracts four times a year. You need to learn to extract the gist of an article without reading the whole thing. Look at the introduction and conclusion. Does that tell you what you need? Scan the rest. You should be able to do that – scan the article and write the abstract – in no more than an hour or two.
Note that many, if not most, of the abstracts that come with an article are not very good. The idea is that if you trust the journal and the author and don’t need the details, the abstract should tell you all you need. Working up that skill is good discipline.
If you’re working on a project with others here, each of you could agree to produce three to five abstracts a week to contribute to the project. Post them to a place where you can all get at them. It becomes your project library.
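To make that shared library easy to scan, even a little automation helps. Here’s a minimal sketch, purely hypothetical, assuming each abstract lives in its own plain-text file with the paper’s title on the first line; a short script could then build a single index of everything collected so far:

```python
# Hypothetical sketch of a project-library index builder.
# Assumes one abstract per .txt file in a shared folder, with the
# paper's title on the first line and the abstract body below it.
from pathlib import Path

def build_index(library_dir: str, index_name: str = "INDEX.txt") -> None:
    entries = []
    for path in sorted(Path(library_dir).glob("*.txt")):
        if path.name == index_name:
            continue  # don't index the index itself
        lines = path.read_text(encoding="utf-8").splitlines()
        title = lines[0].strip() if lines else "(untitled)"
        entries.append(f"{title}  [{path.name}]")
    # Write one line per abstract: the title plus the file it lives in.
    Path(library_dir, index_name).write_text(
        "\n".join(entries) + "\n", encoding="utf-8"
    )

if __name__ == "__main__":
    build_index("abstracts")  # "abstracts" is a made-up folder name
```

Nothing fancy, and the folder layout is just an assumption, but it saves everyone from remembering which file covers which paper.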
As for the level of intellectual maturity, the only way to acquire that is to pick a problem, work on it, and come up with a coherent written account of what you’ve done. The account should be intelligible to others. I don’t know whether a formal dissertation is required, but you need to tackle a problem that is both interesting to you and has “weight.”