I have some difficulty distinguishing the personal growth I’ve experienced due to the culture on LessWrong from growth due to other parts of society, culture, and myself. But here are some things that feel substantially downstream of interacting with the ideas and culture in this small intellectual community.
(I imagine others will give very different answers.)
Help me focus more on what I care about, and less on what people and society expect of me.
I’m a better classical musician. These days I’m better able to do deliberate practice on the parts of the music I need to improve. To give a dumb/oversimplified quantitative measure, I’m able to learn-and-memorise pieces of music maybe 5-10x more efficiently. When I was at music school as a teenager, there were pieces of music I liked that I didn’t finish memorising for years, because when I was in the practice room I was ‘going through the motions’ of practising far more than ‘actually trying to get better according to my own taste’. In recent weeks and months I’ve picked up a dozen or so pieces by Bach and others in maybe 5-10 hours of playing each, have memorised them all, and am able to play them more musically and emotionally than before.
I did weird things during my undergrad at Oxford that were better for my career than being ‘a good student’. The university wanted me to care about things like academic prestige and grades in all of my classes, but I realised that I wasn’t very interested in the goals it had for me. The academic setting rarely encouraged genuine curiosity about maths and science, and felt fairly suffocating. I focused on finding interesting people and working on side-projects I was excited about, and in retrospect I think I ended up doing things far more valuable for my career than getting good grades.
Help me think about modern technology clearly and practically.
Quit social media. Writings and actions by people in the LessWrong intellectual community have helped me, more than any other public dialogue on the subject, think about how to interact with social media. Zvi (author of many LessWrong sequences) ran some very simple experiments on the Facebook newsfeed and wrote about his experiences with it, in a way that helped me see Facebook as an actively adversarial force, optimised to get me hooked, and fundamentally not something we can build a healthy community on. I found his two simple experiments more informative than anything I’ve seen come out of academia on the subject. The fact that he, and a few more friends, quit Facebook cold turkey has caused me to move off it too. I now check all social media in a single 2-hour period on Saturdays, don’t write or react on any of it, and think this has been very healthy.
Using Google Docs for meetings. More generally, this community has helped me think better using modern technology. One user wrote this post about social modelling, which advised using Google Docs to run meetings. At work I now regularly have the meeting conversation primarily inside a Google Doc, where the 3-5 people in a meeting can hold many parallel conversations at once. I’ve personally found this really valuable, both in letting us use the time more effectively (rather than one person talking at a time, five of us can be writing in different parts of the document simultaneously), and in producing a record of our thought processes, reasoning, and decisions to share with others and reflect on months and years down the line.
Help me figure out who I am and build my life.
Take bets. I take bets on things regularly. That’s a virtue, and something respected, in this intellectual community. I get to find out I’m wrong / prove other people wrong, and generally move conversations forward. (A toy sketch of the betting arithmetic appears below.)
Avoid politics. Overall I think I’ve successfully avoided getting involved in politics or building a political identity throughout my teenage and university years, and instead focused on learning what real knowledge looks like in science and other practical matters, which I think has been healthy. I have a sense that this means that when I eventually have to build up my understanding of more political domains, I’ll be able to keep a better sense of what is true and what is convenient narrative. This is partly due to other factors (e.g. personal taste), but has been aided by the LessWrongian dislike of politics.
Learn to trust better. Something about the intellectual honesty and rigour of the LessWrong intellectual community has helped me learn to ‘kill your darlings’: just because I respect someone doesn’t mean they’re infallible. The correct answer isn’t to always trust someone, or to never trust someone, but to build up an understanding of when they are trustworthy and when they aren’t. (This post states that idea fairly clearly in a way I found helpful.)
Lots of other practices. I learned to think carefully from reading some of the fiction and stories written by LessWrongers. A common example is the Harry Potter fanfiction “Harry Potter and the Methods of Rationality”, which communicates a lot of the experiences of someone who lives up to the many virtues we care about on LessWrong. There are lots of experiences I could write about: empirically testing your beliefs (including your political beliefs), being curious about how the world works, taking responsibility, and thinking for yourself. I have more I could say here, but it would take me a while to say it while avoiding spoilers. Nonetheless, it’s had a substantial effect on how I live and work and collaborate with other people.
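Since the betting item above may sound odd if you haven’t tried it, here is a toy sketch (in Python, with made-up numbers rather than anything from this post) of why betting at odds halfway between two people’s credences is appealing: both parties expect to profit by their own lights.

```python
# Toy example: I believe a claim with probability 0.8, you with probability 0.4.
# We bet at the midpoint odds q = 0.6: I put q*pot into the pot, you put in
# (1 - q)*pot, and whoever turns out to be right takes the whole pot.
p_me, p_you, pot = 0.8, 0.4, 100.0
q = (p_me + p_you) / 2             # 0.6, the agreed betting odds
my_stake = q * pot                 # $60: I lose this if the claim is false
your_stake = (1 - q) * pot         # $40: you lose this if the claim is true

# Expected profit for each of us, computed under our *own* beliefs:
my_ev = p_me * your_stake - (1 - p_me) * my_stake        # 0.8*40 - 0.2*60 = +$20
your_ev = (1 - p_you) * my_stake - p_you * your_stake    # 0.6*60 - 0.4*40 = +$20
print(f"At {q:.0%} odds: my expected profit ${my_ev:+.0f}, yours ${your_ev:+.0f}")
```

Whoever turns out wrong pays for the information, which is part of how bets move conversations forward.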
Other people write great things too; I won’t try to find all of them. This recent one by Scott Alexander is one I think about a fair amount.
I guess there are a ton of things; the above are just a couple of examples that occurred to me in the ~30 mins I spent writing this answer.
By the way, while we care about understanding the human mind in a very concrete way on LessWrong, we are more often focused on an academic pursuit of knowledge. We recently did a community vote on the best posts from 2018. If you look at the top 10-20 or so posts, as well as a bunch of niche posts about machine learning and AI, you’ll see the sort of discussion we tend to have best on LessWrong. I don’t come here to get ‘life-improvements’ or ‘self-help’; I come here much more to be part of a small intellectual community that’s very curious about human rationality.
I wanted to follow up on this a bit.
TLDR: While LessWrong readers do care about self-improvement (though somewhat tangentially), reading forums alone likely won’t have a big effect on life success. But that’s not really that relevant; the most relevant thing to look at is how much progress the community has made on the technical mathematical and philosophical questions it has focused on most. Unfortunately, that discussion is very hard to have without spending a lot of time doing actual maths and philosophy (though if you wanted to do that, I’m sure there are people who would be really happy to discuss those things).
___
If what you want to achieve is life-improvements, reading a forum seems like a confusing approach.
Things that I expect to work better are:
personally tailored 1-on-1 advice (e.g. seeing a sleep psychologist, a therapist, a personal trainer or a life coach)
working with great mentors or colleagues and learning from them
deliberate practice: applying techniques for having more productive disagreements when you actually disagree with colleagues, implementing different productivity systems and seeing how well they work for you, and regularly turning your beliefs into predictions and bets to check how well you’re actually reasoning (see the calibration sketch after this list)
taking on projects that step the right distance beyond your comfort zone
just changing whatever part of your environment makes things bad for you (changing jobs, moving to another city, leaving a relationship, starting a relationship, changing your degree, buying a new desk chair, …)
And even then, realistic self-improvement might be quite slow. (Though the magic comes when you manage to compound such slow improvements over a long time period.)
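On the ‘predictions and bets’ item above, here is a minimal sketch (in Python, with hypothetical predictions invented for illustration) of the simplest version of the practice: write predictions down with probabilities, record what happened, and score yourself with the Brier score (mean squared error between stated probability and outcome; 0 is perfect, and always saying 50% scores 0.25).

```python
# Minimal prediction log (hypothetical entries, for illustration only).
# Each record: (claim, probability assigned, whether it came true).
predictions = [
    ("I'll finish this project by the end of the month", 0.8, True),
    ("My colleague will disagree with the new design",   0.6, False),
    ("This productivity system will stick for 3 months", 0.3, False),
]

# Brier score: mean squared error between stated probability and outcome.
brier = sum((p - float(came_true)) ** 2
            for _, p, came_true in predictions) / len(predictions)
print(f"Brier score over {len(predictions)} predictions: {brier:.3f}")
```

The score matters less than the habit: writing the number down beforehand makes it much harder to feel, after the fact, that you ‘knew it all along’.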
There’s previously been some discussion here around whether being a LessWrong reader correlates with increased life success (see e.g. this and this).
For the community as a whole, the answer seems to be overwhelmingly positive. In the span of roughly a decade, people who combined ideas about how to reason under uncertainty with impartial altruistic values, and used those to conclude that it would be important to work on issues like AI alignment, have done some very impressive things (as judged from an outside perspective). They’ve launched billion-dollar foundations, set up research institutes with 30+ employees at some of the world’s most prestigious universities, and gotten endorsements from some of the world’s richest and most influential people, like Elon Musk and Bill Gates. (NOTE: I’m going to caveat these claims below.)
The effects on individual readers are a more complex issue, and the relevant variables are harder to measure. (Personally I think there will be some improvements in something like “the ability to think clearly about hard problems”, but that this will largely stem from readers of LessWrong already being selected for being the kinds of people who are good at that.)
Regardless, as Ben hints at, this partly seems like the wrong metric to focus on. This is the caveat.
While people on LessWrong are interested in self-improvement, one of the key things they have been trying to do is reason safely about superintelligences: to take a problem that’s far in the future, where the stakes are potentially very high, where there is no established field of research, and where thinking about it can feel weird and disorienting… and still think about it in a way where you get to the truth.
So personally I think the biggest victories are some impressive technical progress in this domain. Like, a bunch of maths and much conceptual philosophy.
I believe this because I have my own thoughts about what seems important to work on and what kinds of thinking make progress on those problems. Sharing those with someone who hasn’t spent much time around LessWrong could take many hours of conversation, and I think they would often remain unconvinced. It’s just hard to think and talk about complex issues in any domain. It would be similarly hard for me to understand why a biology PhD student thinks one theory is more important than another relying only on the merits of the theories, without any appeal to what other senior biologists think.
It’s a situation where, to understand why I think this is important, someone might need to do a lot of maths and philosophy… which they probably won’t do unless they already think it is important. I don’t know how to solve that chicken-and-egg problem (except by talking to people who were independently curious about that kind of stuff). But my not being able to solve it doesn’t change the fact that it’s there, nor that I did spend hundreds of hours engaging with the relevant content and now do have detailed opinions about it.
So, to conclude… people on LessWrong are trying to make progress on AI and rationality, and one important perspective for thinking about LessWrong is whether people are actually making progress on AI and rationality. I’d encourage you (Jon) to engage with that perspective as an important lens through which to understand LessWrong.
Having said that, I want to note that I’m glad that you seem to want to engage in good faith with people from LessWrong, and I hope you’ll have some interesting conversations.
Thanks so much for this—reading now...