As I’ve been talking about on my shortform, I’d be excited about attracting more “programmer’s programmers”. AFAICT, a lot of LW users are programmers, but a large fraction of these users are either more interested in transitioning into theoretical alignment research or just don’t post much about programming. As a small piece of evidence for this claim, I’ve been consistently surprised to see the relatively lukewarm reaction to Martin Sustrik’s posts on LW. I read Sustrik’s blog before he started posting here and consistently find his posts there and here pretty interesting (I am admittedly a bit biased because I was already impressed by Sustrik’s work on ZeroMQ).
I think that’s a bit of a shame, because I personally have found LW-style thinking useful for programming. My debugging process has especially benefited from applying some combination of informal probabilistic reasoning and “making beliefs pay rent”, which has enabled me to make more principled decisions about which hypotheses to falsify first when hunting for root causes. For a longer example, see this blog post about reproducing a deep RL paper, which discusses how noticing confusion helped the author make progress (CFAR is specifically mentioned). LW-style thinking has also helped me stop obsessing over the debate around some of the more mindkiller-y topics in programming, like “should you always write tests first?” or “are type-safe languages always better than dynamic ones?”. In my ideal world, LW-style thinking applied to fuzzier questions about programming would help us move past these “wrong questions”.
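To make that a bit more concrete, here’s a toy sketch of the kind of prioritization I have in mind. The hypotheses, subjective probabilities, and time estimates are all invented for illustration, and in practice this happens informally in my head rather than in code, but the underlying tradeoff is the same: falsify the hypotheses that are most likely relative to how cheap they are to check.

```python
# Toy sketch: deciding which debugging hypotheses to falsify first.
# The hypotheses, priors, and time estimates below are made up for illustration.

hypotheses = [
    # (description, subjective prior that this is the root cause, minutes to check)
    ("stale cache entry being served", 0.40, 5),
    ("config drift between staging and prod", 0.20, 10),
    ("race condition in the worker pool", 0.25, 60),
    ("bug in the third-party client library", 0.15, 120),
]

# Rank by prior probability per minute of effort (a crude proxy for expected
# progress per unit of debugging time) and check the best candidates first.
for desc, prior, minutes in sorted(hypotheses, key=lambda h: h[1] / h[2], reverse=True):
    print(f"{prior / minutes:.3f} payoff/min -> {desc}")
```

Writing it out like this mostly just forces me to notice when I’m about to spend an hour falsifying a hypothesis I don’t actually believe in.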
Programming already has a few other internet loci, such as Hacker News and lobste.rs, but I think those places have fewer “people who know how to integrate evidence and think probabilistically in confusing domains.”
Assuming this seems appealing, one way to get more people of the type I’m talking about would be to reach out to prominent bloggers who already seem somewhat sympathetic to the LW meme-plex and see if they’d be willing to cross-post their content. Examples of the sort of people I’m thinking of include:
Hillel Wayne, who writes about empiricism in software engineering and formal methods.
Jimmy Koppel, who writes about insights for programming gleaned from his “day job” as a programming tools researcher (I think he already has a LW account).
Julia Evans, who writes about programming practice and questions she’s interested in. A blog post of hers that seems especially LW-friendly is “What does debugging a program look like?”
Last, I do want to add a caveat to all this, which I think applies to reaching out to basically any group: there’s a big risk of culture clash/dilution if the outreach effort succeeds (see Geeks, MOPs, and sociopaths for one exploration of this topic). How to mitigate this is probably a separate question, but I did want to call it out in case it seems like I’m just recommending blindly trying to get more users.
Jimrandomh recently had the interesting observation that there might have been legitimately fewer rationalists in the world prior to the invention of programming, because it actually forces you to notice when your model is broken, form new hypotheses, and test them, all with short feedback loops.
Yeah, I have a ton of confirmation bias pushing me to agree with this (because for me the two are definitely related), but I’ll add that I also think spending a lot of time programming helped me make reductionism “a part of me” in a way it wasn’t before. There are very few other activities where you’re forced to express what you want, or a concept you have, to something that fundamentally can only understand a limited logical vocabulary. Math is similar, but I think programming makes the reductionist element more salient because of the compiler and because programming tends to involve more mundane work.
Yeah, in our office-discussion at the time I think the claim was something like “Prior to programming, Math Proofs were the best way to Get The Thing, and they were slower and the feedback less clear.”
(My sense is that programming _hasn’t_ deeply given me the thing, until perhaps recently when I started getting more intentional about deliberate debugging practice. But it definitely makes sense that programming would at least open up the possibility of gaining the skill. The main remaining question in my mind is “how much does the skill transfer, by default, if you’re not deliberately trying to transfer it?”)
As someone who landed on your comment specifically by searching for what LW has said about software engineering in particular, I’d love to read more about your methods, experiences, and thoughts on the subject. Have you written about this anywhere?
Sadly, not much. I wrote one blog post a few years back about my take on why “reading code” isn’t something people should do the way they read literature, but not much (publicly) other than that. I’ll think about whether there’s anything relevant to stuff I’ve been doing recently that I could write up.
I would recommend the other writers I linked, though! They are much more insightful than I am, anyway!
(minor note: the Jimmy and Julia links didn’t work properly, because external links need to be prefaced with “https://www.”)
This link is broken. It goes to: https://www.lesswrong.com/posts/KFBhguD7dSjtmRLeg/lobste.rs
Fixed, same HTTPS problem Raemon commented on above.