I have some sense that I could engage you on particular posts, but I don’t know what would be compelling/cruxy for you.
I could say that Anna’s is in some ways like johnswentworth’s recent post “Making Vaccine”: not notably successful/groundbreaking, but a move in an important direction that deserves to be rewarded — I think making your own vaccine is relatively easy, and I am facepalming that I did not try to make a proper mRNA vaccine back in April. Similarly, I think Anna’s post correctly takes a common naive consequentialist refrain that I think is very damaging and contrasts it with a virtue ethics perspective that I think is a healthy remedy, and I regularly see people failing to live up to virtues when faced with naive consequentialist reasoning. No, it was not especially rigorous, or as brilliantly communicated as Tim Urban explaining how Neuralink works. But I think there’s space for rigorous, worked-out research like Cartesian Frames or Radical Probabilism, as well as off-the-cuff ideas like the PR/Honor one.
Or I could talk about how valuable new ideas have been explained, built on, and discussed. I could talk about Mesa-Optimizers and the follow-on work where people have done Explain Like I’m 12 versions. I could talk about discussion building on the Simulacra Levels ideas, which I think LW has helped move along (although I expect you’ll point out that many of the people writing on it, like Benquo, Zvi, and Elizabeth, have their own blogs). I could talk about the time Jameson Quinn spent a month or two writing up a research question he had in voting theory, and a commenter came in and solved it. I don’t know if you’ll find this stuff compelling; in each case I can imagine a reason not to be excited. But in my mind these are all contributions to our understanding of rationality and how minds work, and I think that’s pretty positive. And maybe you’ll agree and just think it’s nowhere near enough progress. On that I might even agree with you, and would say I am planning something rather more ambitious than this in the longer term.
The single best thing on LessWrong 2.0 so far, I’d say, is the Embedded Agency sequence. This was a lot of work done primarily by Scott and Abram (employed by MIRI), and I think LessWrong gave it a home and encouraged Abram to do it in the cartoon style (after the hit success of An Untrollable Mathematician), which improved it massively, making it more Feynman-esque in its attempt at simplicity. Had the LW audience not been around for it, it would probably have stayed in the long drought of editing for far longer, been read far less, and been built on far less. I would call this a big deal and a major insight. That would be somewhat cruxy for me: I’d be overall quite surprised if I came to think it didn’t represent philosophical progress in our understanding of rationality, and that LessWrong hadn’t helped it (and follow-up work like this and this) get written up well.
Added: You’re right; it wasn’t a babble, it was quite thoughtful. Edited.
I could talk about the time Jameson Quinn spent a month or two writing up an open research question in voting theory and a commenter came in and solved it.
I do think this oversells it a little, given that the Shapley value already existed. [Like, ‘open research question’ feels to me like “the field didn’t know how to do this”, when it was more like “Jameson Quinn discounted the solution to his problem after knowing about it, and then reading a LW comment changed his mind about that.”]
Thx, edited.