I have read that entire thread, and… it is hard to say anything coherent in reply, and I am probably missing a lot of context… but it seems to me that bad things are happening, but also that the people complaining about them draw the wrong conclusions (mostly of the form: I see something bad happening, so I point at the most visible thing nearby and say: this is the cause of the bad things happening).
Makes me wonder what would have happened if, instead of living on the opposite side of the planet, I lived in the middle of all that chaos. Would I be part of the insanity? Or a lonely voice of reason? Or just a random low-status guy whose opinion is irrelevant, because no one listens to it and no one is going to remember it anyway? (Probably the last one.)
Basically, it confuses me when people point at things I consider good, and call them causes of things that I consider obviously bad and stupid. What is the proper lesson to take here? Maybe I am the stupid one, unable to see the obvious causality, and protected from my own stupidity by being far away from where important things happen. Or maybe other people are simply doing things wrong.
I keep dreaming about having a rationalist group with more than five members in my country, but if my wishes came true, would that automatically also mean getting our local version of Zizians/Leverage/etc.? Do these things happen automatically as a consequence of trying to be rational, or did someone just accidentally build the Bay Area community on top of an ancient Indian burial ground?
...but that’s basically what this article is about.
I think in Denver we’ve lucked into a default culture that puts the emphasis on first getting your life in order and functioning in default society, with rationalism complementing that rather than overriding it. Is this common?
The rationalist scene in Vienna is also sane, as far as I know. We need more data points from other cities.
Or maybe it’s something unrelated to “antibodies”, like people in the Bay Area taking an order of magnitude more drugs than people anywhere else, with everything else just downstream of that. (Or, from another perspective, perhaps “don’t take drugs just because some guys who call themselves ‘rationalists’ told you it was a good idea” is the most relevant normie antibody.) The obvious counter-argument is that everyone in the Bay Area takes drugs, so the fact that drugs were visibly involved in all the craziest cases is not as strong evidence as I make it out to be. The obvious counter-counter-argument is that this is probably the reason why the crazy cases happen in the Bay Area, as opposed to other places.
And, perhaps, a resource that organizers can turn to if they notice someone slipping into fanaticism would be nice. As far as I know, there isn’t a Best Practices Doc for this sort of thing.
My first idea is to write a short text that documents the existing bad cases and highlights the relevant parts of the Sequences. Document the bad cases to show that the problem exists and is serious. Quote the Sequences to… dunno, probably as a way to tell people: “hey, if you decide to ignore all of these warnings and do your own thing anyway, at least do not publicly blame Eliezer when the shit hits the fan”.
Do these things happen automatically as a consequence of trying to be rational, or did someone just accidentally build the Bay Area community on top of an ancient Indian burial ground?
As someone “on the ground” in the Bay Area, my first guess would be that the EA and rationality community here (and they are mostly a single community here) is very insular. Many have zero friends they meet up with regularly who aren’t rationalists or EAs.

A recipe for insane cults in my book.
Okay, that sounds really bad, I agree. Definitely different from e.g. Vienna.
Let’s go one level deeper and ask “why”.
It is tempting to interact with fellow rationalists; I also consider them preferable to non-rationalists, ceteris paribus. But even if there were a hundred or a thousand rationalists available around me, I would still have a family, friends, colleagues, neighbors, people who share the same hobby, so I would keep interacting with many non-rationalists anyway. I suspect that in the Bay Area, many community members are either university students, or people who moved to the Bay Area recently to join a local startup or an EA organization—in other words, people who lost access to their previous social connections.
So the obvious move is to remind them regularly to create and maintain connections outside the rationalist community, and to treat any attempt to convince them otherwise (e.g. by their employer) as a huge red flag.
And this is less likely to happen in a community where many members have lived in the city all along.
The belief that the Singularity is near encourages you to throw all the usual long-term planning out of the window: if in a year you will either be dead or living in paradise, it is not so important whether during that year you have burned out, kept in contact with your family and friends, etc.
I am not going to object to a belief by appealing to consequences. In a world where the Singularity actually comes in a year, and you have a 0.1% chance to change the outcome from hell to heaven, working as hard as you can is the right thing to do.
Instead, I suggest that people adjust both their timeline and the probability of their actual impact. With regard to the timeline, consider the fact that there was already a rationalist minicamp on existential risk in 2011, that is, 13 years ago. And yet the world did not end in a year, in two years, in five years, or in ten years. By analogy, there is a chance that the world will not end in the following five or ten years. In which case, burning out in one year is a bad strategy. From a psychological perspective, ten years is a lot of time; you should keep working towards the good end, but you should also take care of your health, including your mental health. Run a marathon, not a sprint. (People have criticized Eliezer for taking time to write fan fiction and indulge in polyamorous orgies, but notice that he hasn’t burned out, despite worrying about AI for decades. Imagine a parallel timeline in which he burned out in 2012, went crazy in 2013, and committed suicide in 2014. Would that have helped AI safety?)
And if you are considering your personal impact on the outcome of the Singularity, most likely it is indistinguishable from zero. And before you go full Pascal and multiply the tiny probability by the number of potential future inhabitants of all galaxies in the universe, please consider that you don’t even know whether that number indistinguishable from zero is positive or negative (so you can’t automatically assume that multiplying it by 3^^^3 necessarily results in a huge positive number). Working so hard that you burn out increases the absolute value a tiny bit, but still gives no guarantee about the sign, especially if other people afterwards use you as an example of how everyone who cares about AI safety goes crazy.
Ironically, unless you are one of the top AI safety researchers, if you live in the Bay Area, your best contribution would probably be keeping the rationalist community sane. Don’t take drugs, don’t encourage others to take drugs, help people avoid cults, be nice to people around you and help them relax, notice the bad actors in the community and call them out (but in a calm way). If this helps the important people stay sane longer, or prevents them from burning out, or just protects them from being dragged into some scandal that would have otherwise happened around them, your contribution to the final victory is more likely to be positive (although still indistinguishable from zero). Generally speaking, being hysterical does not necessarily mean being more productive.
I have a bit of a different prescription than you do: instead of aiming to make the community saner, aim to make yourself saner, and especially in ways as de-correlated from the rest of the community as possible. Which often means staying far away from community drama, talking with more people who think very differently from most in the community, following strings of logic in strange and un-intuitive directions, asking yourself whether claims are actually true in proportion to how confident community members seem to be in them (people tend to be most confident where they are most wrong, for reasons of groupthink, tails coming apart, and un-analyzed assumptions), and learning a lot.

A kind of “put on your own oxygen mask before helping others” sort of approach.
People have criticized Eliezer for taking time to write fan fiction and indulge in polyamorous orgies, but notice that he hasn’t burned out, despite worrying about AI for decades.
Not really relevant to your overall point, but I actually think Eliezer has burned out. He doesn’t really work on alignment anymore, as far as I know.
What is QC?
The person whose tweets were linked above when mentioning “they become Zealots, doing lasting damage to their lives, and then burning out spectacularly.”