In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.
LW-style rationality is not a normal scientific field. It’s a community dialog centered around a shared set of wisdom-stories. These wisdom-stories are based on the authors’ own lives, and are interpreted through the lens of psychology, economics, statistics, and game theory.
I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles. One of those foibles will be the perception that the leaders know what they’re doing and that we can trust them as guides.
Another will be a redirection of focus toward building an outside model of ourselves as we are and aligning it with a model of our ideal, and a consequent downplaying of the idea that we’re messy human beings whose internal experiences are hard to pin down.
And a third will be a perception that we cannot access the same wisdom that the original writers drew from their own lives and generalized into concepts and rules for living. Instead, we read their concepts and rules, internalize them, and try to fit them to our lived experience. They had an experience, a reaction to it, built on it, and when the time was right, built a conceptual tool out of it. We take the tool and look for nails to pound in our own lives. That can be a powerful strategy, if you know precisely what your hammers and nails are, can truly focus in on one problem, and know when you’ve solved it.
My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. Anyone familiar with the dilemma of AI alignment should see that this is a fundamental problem for any intelligence.
One way of dealing with this is to create more proxy metrics. Rationalist writings and techniques are so voluminous and accessible that I suspect we over-weight them, and that our community has succumbed to some extent to Goodhart’s law and the streetlight effect. We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I’d like to see more storytelling, more attempts to draw original wisdom from those stories, and more contrarian takes/heresy.
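To make the proxy-metric worry concrete, here is a minimal toy sketch (my own construction for illustration, not anything from the canon): we can only select on a noisy proxy of the thing we actually care about, and the harder we select on it, the more the winner’s proxy score overstates its true value.

```python
import random
import statistics

# Toy illustration of optimizing a proxy: the "true" quantity is what we care
# about, the proxy is a noisy measurement of it, and we pick whichever
# candidate scores best on the proxy.

random.seed(0)

def sample_candidate():
    true = random.uniform(0, 10)          # what we actually care about
    proxy = true + random.gauss(0, 3)     # what we can see and optimize
    return true, proxy

def best_of(n, trials=2000):
    """Average true value and proxy score of the proxy-maximizing pick."""
    trues, proxies = [], []
    for _ in range(trials):
        winner = max((sample_candidate() for _ in range(n)), key=lambda c: c[1])
        trues.append(winner[0])
        proxies.append(winner[1])
    return statistics.mean(trues), statistics.mean(proxies)

for n in (1, 10, 100, 1000):
    true_mean, proxy_mean = best_of(n)
    print(f"best of {n:>4}: proxy score {proxy_mean:5.2f}, true value {true_mean:5.2f}")
```

As the selection pressure grows, the winner’s proxy score keeps climbing while its true value levels off, so the gap between what we measure and what we actually wanted widens. That divergence under optimization is the Goodhart-style failure gestured at above.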
However, as I wrote this response, I also realized that maybe it would be interesting to just pick one of the systematized “hammers” of rationality technique, and just start pounding as many nails as possible with it to see what happens. So I changed my own mind to some extent. Maybe I haven’t taken these ideas seriously enough.
Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don’t have the time to make it sharper.
LessWrong itself seems to me a fairly broad forum where a lot of different ideas are discussed. As far as the broader community goes, it seems to me that instead of Goodharting, the community often pursues different goals, maybe even goals that are different enough that some older ideas aren’t pursued anymore because people are less interested in certain framings.
In a normal scientific field, you build a theory, push it to the limit with experimental evidence, and then replace it with something better when it breaks down.
LW-style rationality is not a normal scientific field.
I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they’re doing. If not… uh-oh! (Uh-oh in the sense that I believed something for no reason, not in the sense that CFAR would therefore be badwrong in my eyes.)
It’s a community dialog centered around a shared set of wisdom-stories. [...] I posit that we are likely to be an average example of such a community, with an average amount of wisdom and an average set of foibles.
I’m not sure I know what kind of community you’re talking about. Are there other readily-available examples?
One of those foibles will be [...] Another will be [...] And a third will be [...]
How do you know?
More charitably, I do think these are real risks. Especially the first, which I think I may fall victim to, at least with Eliezer’s writings.
My anxiety is that I/we are getting off-track, alienated from ourselves, and obsessed with proxy metrics for rationality. [...] We focus on what life changes fit into the framework or what will be interesting to others in this community, rather than what we actually need to do. I’d like to see more storytelling and attempts to draw original wisdom from them, and more contrarian takes/heresy.
My current belief (and strong hope) is that the attitude of this community is exactly such that if you are right about that, you will be able to convince people of it. “You’re not making improvements, you’re just roleplaying making improvements” seems like the kind of advice a typical LessWronger would be open to hearing.
By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they’re good stuff. The more I think on this, the more I wonder if the need for “contrarian takes” of LW content has been a blind spot for me in my first year of rationality. It’s an especially insidious one if so, because I normally spit out contrarian takes as naturally as I breathe.
Sorry that this is all horrible horrible punditry, darkly hinting and with no verifiable claims, but I don’t have the time to make it sharper.
I was under the impression that CFAR was doing something like this, using evidence to figure out which techniques actually do what they seem like they’re doing.
They sort of are, in that they grade the epistemic status of the techniques they teach by anecdotal reports from their users, or by any scientific evidence or empirically-backed theory that seems related. To my knowledge, they’re not running RCTs.
The Wikipedia page on normal science states:
Kuhn stressed that historically, the route to normal science could be a difficult one. Prior to the formation of a shared paradigm or research consensus, would-be scientists were reduced to the accumulation of random facts and unverified observations, in the manner recorded by Pliny the Elder or Francis Bacon, while simultaneously beginning the foundations of their field from scratch through a plethora of competing theories.
Arguably at least the social sciences remain at such a pre-paradigmatic level today.
If the whole field of social science is pre-normal-science, then undoubtedly the work of CFAR is as well. They don’t have even an appreciable fraction of the scientific resources that are thrown at social science. Of course, this might just be some STEM-lord Wikipedia editor’s snark, though it’s got a reference to some article in a Philosophy of Science book.
My understanding of the reason it gets characterized in this way is that in the social sciences, we don’t see this shift from one uniform consensus about how reality works to a different uniform consensus. We do see that in the physical sciences and in math. This is by definition what differentiates normal from pre-paradigmatic science. That doesn’t have to be a marker for what’s true, useful, or legitimate, but it’s something to be aware of.
I’m not sure I know what kind of community you’re talking about. Are there other readily-available examples?
Any religious community, professional association, support group, or tribal society would be an example.
How do you know?
Anecdotal evidence based on common sense and life experience.
“You’re not making improvements, you’re just roleplaying making improvements” seems like the kind of advice a typical LessWronger would be open to hearing.
Hearing, but listening? Part of my fear is that we’re practicing the performance of skepticism and open-mindedness, rather than the genuine article. That’s not necessarily a bad thing—fake it ’til you make it, and all that. And if that’s the best we have, then I’m glad we have it.
By the way, I saw your two recent posts (criticism of popular LW posts, praise of popular LW posts) and I think they’re good stuff. The more I think on this, the more I wonder if the need for “contrarian takes” of LW content has been a blind spot for me in my first year of rationality.
Thank you! I don’t think that contrarianism is the most important aspect of my posts. Others in the comments of the original articles posted objections similar to my own, and they’re not canon by any means. And heck, one of them was an article of praise.
Instead, I think it’s the idea of generativity. In mainstream scientific scholarship, there’s a sense that articles not only reference each other, but also test, apply, and respond to each other. We have lots of referencing here on LW, but not as much of the latter. This can make it hard to separate ideas from the individuals who convey them, or from the original language they get presented in. It also means that writers on this blog anticipate only short-term scrutiny in the form of comments. Long-term scrutiny, in the form of posts that might not just reference but be exclusively focused on confronting a piece from a year or a decade ago, is much less common.
I actually wish we had this in journalism and especially in punditry.
The result of that dynamic is that the hottest takes produce long-lasting memes that take on truthiness merely because everybody uses them and references the article that produced them. To actually confront the writer would be to confront a whole community that’s used to those ideas, damn it! And the incentive remains to optimize writing for the hot take, not the deeply considered argument that can stand up to direct, careful scrutiny that goes beyond one comment buried amongst 10 or 100 others.
Figuring out how to shift this culture productively will take some experimentation. You don’t want to create too many bad feelings that will result in a chilling effect for the whole forum. Nor do you want to raise the bar for posting so high that people feel like it’s too much effort. Nobody’s getting paid to post here, and lots of people already feel intimidated by this forum.
I think that a couple ideas have the most general promise. One is praise articles, especially of lesser-known articles or writers. Another is criticism of weak articles by well-known authors, since they have enough social capital that you don’t look like you’re punching down. But I have no evidence to back that up. This is emphatically not normal science!