Okay, that sounds really bad, I agree. Definitely different from e.g. Vienna.
Let’s go one level deeper and ask “why”.
It is tempting to interact with fellow rationalists; I also consider them preferable to non-rationalists, ceteris paribus. But even if there were a hundred or a thousand rationalists available around me, I still have family, friends, colleagues, neighbors, and people who share my hobbies, so I would keep interacting with many non-rationalists anyway. I suspect that in the Bay Area, many community members are either university students or people who moved there recently to join a local startup or an EA organization; in other words, people who have lost access to their previous social connections.
So the obvious move is to remind them regularly to create and maintain connections outside the rationalist community, and to treat any attempt to convince them otherwise (e.g. by their employer) as a huge red flag.
And this is less likely to happen in a community where many members already lived in the city before joining.
The belief that the Singularity is near encourages you to throw all the usual long-term planning out of the window: if in a year you will either be dead or living in paradise, it does not matter much whether during that year you burned out, kept in contact with your family and friends, etc.
I am not going to object to a belief by appealing to consequences. In a world where the Singularity actually comes in a year, and you have a 0.1% chance to change the outcome from hell to heaven, working as hard as you can is the right thing to do.
Instead, I suggest that people adjust both their timeline and the probability of their actual impact. With regard to the timeline, consider that there was already a rationalist minicamp on existential risk in 2011, that is, 13 years ago. And yet the world did not end in a year, in two years, in five years, or in ten years. Analogously, there is a chance that the world will not end in the following five or ten years, in which case burning out in one year is a bad strategy. From a psychological perspective, ten years is a lot of time; you should keep working towards the good end, but you should also take care of your health, including your mental health. Run a marathon, not a sprint. (People have criticized Eliezer for taking time to write fan fiction and indulge in polyamorous orgies, but notice that he hasn’t burned out, despite worrying about AI for decades. Imagine a parallel timeline in which he burned out in 2012, went crazy in 2013, and committed suicide in 2014. Would that have helped AI safety?)
And if you are considering your personal impact on the outcome of the Singularity, it is most likely indistinguishable from zero. Before you go full Pascal and multiply that tiny probability by the number of potential future inhabitants of all the galaxies in the universe, please consider that you don’t even know whether that number indistinguishable from zero is positive or negative, so you can’t automatically assume that multiplying it by 3^^^3 necessarily results in a huge positive number. Working so hard that you burn out increases the absolute value a tiny bit, but still gives no guarantee about the sign, especially if other people afterwards use you as an example of how everyone who cares about AI safety goes crazy.
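To spell out the sign problem with a toy expected-value sketch (the symbols are purely illustrative, not an estimate of anything): suppose your effort flips the outcome with some tiny probability $p$, the stake is an astronomically large value $V$, and $q$ is the probability that your influence points in the good direction rather than the bad one. Then

$$\mathbb{E}[\Delta U] = q \cdot pV - (1-q) \cdot pV = (2q-1)\,pV.$$

Making $V$ as large as $3\uparrow\uparrow\uparrow 3$ does not rescue the argument unless you can show $q > 1/2$; it only scales up whatever sign $(2q-1)$ already has.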
Ironically, unless you are one of the top AI safety researchers, your best contribution as a Bay Area resident would probably be keeping the rationalist community sane. Don’t take drugs, don’t encourage others to take drugs, help people avoid cults, be nice to the people around you and help them relax, and notice the bad actors in the community and call them out (but in a calm way). If this helps the important people stay sane longer, or prevents them from burning out, or just protects them from being dragged into some scandal that would otherwise have happened around them, your contribution to the final victory is more likely to be positive (although still indistinguishable from zero). Generally speaking, being hysterical does not necessarily mean being more productive.
I have a bit of a different prescription than you do: instead of aiming to make the community saner, aim to make yourself saner, and especially in ways as de-correlated from the rest of the community as possible. That often means staying far away from community drama, talking with more people who think very differently from most in the community, following strings of logic in strange and unintuitive directions, asking yourself whether claims are actually true in proportion to how confident community members seem to be in them (people tend to be most confident precisely where they are most wrong, for reasons of groupthink, tails coming apart, and unexamined assumptions), and learning a lot. A kind of “put on your own mask before others’” sort of approach.
People have criticized Eliezer for taking time to write fan fiction and indulge in polyamorous orgies, but notice that he hasn’t burned out, despite worrying about AI for decades.
Not really relevant to your overall point, but I in fact think Eliezer has burnt out. He doesn’t really work on alignment anymore as far as I know.