I appreciate your thoughtful list of changes. But I don’t agree that weighted voting is bad. Overall I see the following things happening: 1) new writers coming to the site, writing excellent posts, gaining reputation, and going on to positively shape the site’s culture (e.g. Alkjash, Wentworth, TurnTrout, Daniel Kokotajlo, Evan Hubinger, Lsusr, Alex Flint, Jameson Quinn, and many more); 2) very small amounts of internal politicking; and 3) very small amounts of brigading from external groups or the dominant culture. I agree that adopting weighted karma is a strong bet on the current culture (or the culture at the time) being healthy and being able to grow into good directions, and I think overall that bet seems to be going fairly well.
I don’t want most people on the internet to have an equal vote here, I want the people who’ve proven themselves to have more say.
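To make the mechanism under debate concrete, here is a minimal sketch of karma-weighted voting. The weight thresholds below are assumptions chosen for illustration, not LessWrong’s actual vote-power table.

```python
# Hypothetical sketch of karma-weighted voting: users who have "proven
# themselves" (accumulated karma) cast votes that count for more.
# The thresholds are illustrative assumptions, not LW's real table.

def vote_weight(karma: int) -> int:
    thresholds = [(10_000, 5), (1_000, 4), (100, 3), (10, 2)]
    for cutoff, weight in thresholds:
        if karma >= cutoff:
            return weight
    return 1  # new or low-karma users still get a baseline vote

def tally(votes):
    """votes: iterable of (voter_karma, direction), direction is +1 or -1."""
    return sum(direction * vote_weight(karma) for karma, direction in votes)

# One 10,000-karma upvote (weight 5) vs. three 10-karma downvotes (weight 2):
print(tally([(10_000, +1), (10, -1), (10, -1), (10, -1)]))  # 5 - 6 = -1
```

The design choice this illustrates: a single long-standing contributor can nearly offset several brand-new accounts, which is exactly the property being defended above and criticized below.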
I do think that people goodhart on short-term rewards (e.g. karma, number of comments, etc.), and to build more aligned long-term incentives the team has organized the annual review (2018, 2019), the results of which notably do not just track post-karma, and we have published the top 40 posts in a professionally designed book set.
I agree the theoretical case alone would be pretty compelling, and I agree that some part of you should always be terrified by group incentive mechanisms in your environment, but I don’t feel the theoretical case here is strong, and I think ‘what I see when I look’ is a lot of thoughtful and insightful writing about interesting and important ideas.
I also am pretty scared of messing up group incentives and coordination mechanisms – for example there are many kinds of growth that I have explicitly avoided because I think they would overwhelm our current reward and credit-allocation systems.
What I see when I look is almost nothing of value which is less than five years old, and comment sections which have nothing of value at all and are complete wastes of time to read. And I see lackluster posts by well-known names getting tons of praise and little-to-no meaningful argument; the one which ultimately prompted this post to be written now was Anna’s post about PR, which is poorly reasoned and doesn’t seem to be meant to endure scrutiny.
The annual reviews are no exception; I’ve read partway through several, and gave up because they were far lower in quality than random posts from personal blogs. Sample purely at random from Zvi’s archives or the SSC archives and you’ll get something better than the best of the annual review nine times out of ten, and I get far more value out of an RSS subscription to a dozen ‘one or two posts a year’ blogs, like those of Nate Soares or Jessica Taylor, than the annual review has even approached.
You think that the bet on “the current culture (or the culture at the time) being healthy and being able to grow into good directions[...] seems to be going fairly well.” I do not see any reason to believe this is going well. The culture has been bad since nuLW went up, and it has been getting steadily worse; things were better when the site was old, Reddit-based, and mostly dead. The site maintainers are among the groups of people receiving the benefit of undeserved social proof, and this is among the significant factors responsible for the steady degradation. (Also, half of the team are people with a demonstrated history of getting this kind of dynamic badly wrong and doing the collective-epistemic-rationality equivalent of enthusiastically juggling subcritical uranium masses, so this outcome was predictable; I did in fact predict it.)
I also resent the characterization of my list as ‘babble’; this implies that it is a bunch of ideas thrown against the wall rather than a considered list. It is a well-considered list, presented in long form because I don’t expect any action to be taken on any of it, but I know no action would be taken if all I presented were the things I thought would be sufficient.
I have some inclination to engage with you on particular posts, but I don’t know what would be compelling/cruxy for you.
I could say that Anna’s post is in some ways like Wentworth’s recent post “Making Vaccine”: not notably successful/groundbreaking, but a move in an important direction that deserves reward — I think making your own vaccine is relatively easy, and I am facepalming that I did not try to make a proper mRNA vaccine back in April. Similarly, I think Anna’s post correctly takes a common naive consequentialist refrain that I think is very damaging and contrasts it with a virtue ethics perspective that I think is a healthy remedy, and I regularly see people failing to live up to virtues when faced with naive consequentialist reasoning. No, it was not especially rigorous, nor as brilliantly communicated as Tim Urban explaining how Neuralink works. But I think that there’s space for rigorous, worked-out research like Cartesian Frames or Radical Probabilism, as well as off-the-cuff ideas like the PR/Honor one.
Or I could talk about how valuable new ideas have been explained, built on, and discussed. I could talk about Mesa-Optimizers and then follow-on work where people have written Explain Like I’m 12 versions. I could talk about discussion building on the Simulacra Levels ideas that I think LW has helped move along (although I expect you’ll point out that many of the people writing on it, like Benquo, Zvi, and Elizabeth, have their own blogs). I could talk about the time Jameson Quinn spent a month or two writing up a research question he had in voting theory and a commenter came in and solved it. I don’t know if you’ll find this stuff compelling; in each case I can imagine a reason not to be excited. But in my mind all of this is contributing to our understanding of rationality and how minds work, and I think it’s pretty positive. And maybe you’ll agree and just think it’s nowhere near enough progress. On that I might even agree with you, and would say I am planning something rather more ambitious than this in the longer term.
The single best thing on LessWrong 2.0 so far, I’d say, is the Embedded Agency sequence. This was a lot of work done primarily by Scott and Abram (employed by MIRI), but LessWrong gave it a home and encouraged Abram to do it in the cartoon style (after the hit success of An Untrollable Mathematician), which I think improved it massively, making it more Feynman-esque in its attempt at simplicity. Had the LW audience not been around for it, it would probably have stayed in the long drought of editing for far longer, been read far less, and been built on far less. I would call this a big deal and a major insight. That would be somewhat cruxy for me: I’d be overall quite surprised if I came to think it didn’t represent philosophical progress in our understanding of rationality, or that LessWrong hadn’t helped it (and follow-up work like this and this) get written up well.
Added: You’re right, it wasn’t a babble, it was quite thoughtful. Edited.
“I could talk about the time Jameson Quinn spent a month or two writing up an open research question in voting theory and a commenter came in and solved it.”
I do think this oversells it a little, given that the Shapley value already existed. [Like, ‘open research question’ feels to me like “the field didn’t know how to do this”, when it was more like “Jameson Quinn discounted the solution to his problem after knowing about it, and then reading a LW comment changed his mind about that.”]
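(For readers who haven’t met it: the Shapley value referenced above is a standard formula from cooperative game theory. A minimal sketch on a toy game follows; the game is made up for the example and is unrelated to Quinn’s actual voting-theory question or the commenter’s solution.)

```python
# Toy illustration of the Shapley value: average each player's marginal
# contribution over every order in which the coalition could assemble.
from itertools import permutations

def shapley_values(players, value):
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            totals[p] += value(coalition) - before
    return {p: totals[p] / len(orders) for p in players}

# Made-up game: any coalition containing A is worth 10, plus 2 per extra member.
def v(coalition):
    return 10 + 2 * (len(coalition) - 1) if "A" in coalition else 0

print(shapley_values(["A", "B", "C"], v))  # {'A': 12.0, 'B': 1.0, 'C': 1.0}
```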
Thx, edited.
This is a much clearer statement of the problem you are pointing at than the post.
(I don’t see how it’s apparent that the voting system deserves significant blame for the overall low standard, in your estimation, of LW posts. A more apparent effect is probably bad-in-your-estimation posts getting heavily upvoted or winning in annual reviews, but it’s less clear where to go from that observation.)