Think about why that is and adjust strategy and norms correspondingly. (Sorry that’s underspecified, but it actually depends on the reasons). I don’t know what happened to LW1, but it did have pretty high intellectual generativity for a while.
I think Wei Dai said that too elsewhere. When each of you says intellectual generativity, do you mean the site as a whole (posts + discussions), or specifically that the discussions in the comments were more generative?
Another question: do you think you can quantitatively state some factor by which LW1 was more generative than LW2? If it was only 2x, that would suggest less generativity per person/comment than current LW, since old LW had much more than double the number of users and comments. If it was 10x, then LW1 was qualitatively better in some way.
(I’d expect the output to be a right-tailed distribution over individuals. LW2 could be less generative than LW1 because the top N users who produced 80% of the value left, so it’s not really about the raw number of users/comments.
The most interesting scenario would be if it were all the same people, but they were being less generative.)
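The arithmetic behind the 2x-vs-10x question, and the heavy-tail point, can be sketched in a few lines. All numbers here are hypothetical (the ~4x activity ratio is the eyeballed figure mentioned later in the thread), and the Pareto shape is just an illustrative assumption:

```python
def per_capita_ratio(total_generativity_ratio, active_user_ratio):
    """Per-user generativity of LW1 relative to LW2, given a ratio of
    total generativity and a ratio of active users/commenters."""
    return total_generativity_ratio / active_user_ratio

# If LW1 was only 2x as generative overall but had ~4x the active users,
# the average LW1 user was *less* generative than the average LW2 user:
print(per_capita_ratio(2, 4))   # 0.5

# At 10x overall, LW1 users were more generative per capita too:
print(per_capita_ratio(10, 4))  # 2.5

# The heavy-tail point: if per-user output follows a Pareto(alpha)
# distribution, the share of total output produced by the top fraction p
# of users is p ** (1 - 1/alpha). A shape of alpha ~ 1.16 reproduces the
# classic 80/20 split, so losing the top users dominates raw headcounts:
def top_share(p, alpha):
    return p ** (1 - 1 / alpha)

print(round(top_share(0.2, 1.16), 2))  # ~0.8
```

Under that kind of distribution, total generativity tracks who the top contributors are far more than how many users or comments there are, which is the scenario the parenthetical above describes.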
The site as a whole.
I wasn’t around in early LW, so this is hard for me to estimate. My very, very rough guess is 5x. (Note: IMO the recent good content is disproportionately written by people willing to talk about adversarial optimization patterns in a somewhat forceful way despite pressures to be diplomatic.)
I have noticed this as well, and I find it worrisome. Many recent interesting conversations are more about social and interpersonal communication / alignment than about personal or theoretical rationality and decision-making. I like this because those are genuinely interesting topics, but I worry that they’re crowding out or hiding a painful decline of more core rationality discussions. I don’t worry that they’re too close to politics (I think they are close to politics, but are narrow enough that they seem to fall prey to the standard problems more because they’re trying to skate around the issue rather than being direct).
I had not framed them as “adversarial optimization patterns”, mostly because those conversations seriously bury that lede. A direct acknowledgement would be useful that almost all groups of more than one person (and, in some models, even an individual human) contain multiple simultaneous games, with very different payoff matrices, and with equilibria that impact the other games. Values start out divergent, and this can’t be assumed away for any part of reality.
This is maybe half or more of what Robin Hanson wrote about back when it was all still on overcomingbias.com.
Yeah, granted that it’s going to be rough.
5x seems consistent with the raw activity numbers, though. Eyeballing it, LW1 looks roughly 4x more active in terms of comments and commenters, while the number of posts is pretty close.
One of my current beliefs, based on skimming older posts periodically (esp. since recommendations), is that a lot of the old comments just weren’t that good. Not sure about posts.