First, thank you for writing this.
Second, I want to jot down a thought I’ve had for a while now, and which came to mind when I read both this and Zoe’s Leverage post.
To me, it looks like there is a recurring phenomenon in the rationalist/EA world where people...
...become convinced that the future is in their hands: that the fate of the entire long-term future (“the future light-cone”) depends on the success of their work, and the work of a small circle of like-minded collaborators
...become convinced that (for some reason) only they, and their small circle, can do this work (or can do it correctly, or morally, etc.) -- that in spite of the work’s vast importance, in spite of the existence of billions of humans and surely at least thousands with comparable or superior talent for this type of work, it is correct/necessary for the work to be done by this tiny group
...become less concerned with the epistemic side of rationality—“how do I know I’m right? how do I become more right than I already am?”—and more concerned with gaining control and influence, so that the long-term future may be shaped by their own (already-obviously-correct) views
...spend more effort on self-experimentation and on self-improvement techniques, with the aim of turning themselves into a person capable of making world-historic breakthroughs—if they do not feel like such a person yet, they must become one, since the breakthroughs must be made within their small group
...become increasingly concerned with a sort of “monastic” notion of purity or virtue: some set of traits which few-to-no people possess naturally, which are necessary for the great work, and which can only be attained through an inward-looking process of self-cultivation that removes inner obstacles, impurities, or aversive reflexes (“debugging,” making oneself “actually try”)
...suffer increasingly from (understandable!) scrupulosity and paranoia, which compete with the object-level work for mental space and emotional energy
...involve themselves in extreme secrecy, factional splits with closely related thinkers, analyses of how others fail to achieve monastic virtue, and other forms of zero- or negative-sum conflict which do not seem typical of healthy research communities
...become probably less productive at the object-level work, and at least not obviously more productive, and certainly not productive in the clearly unique way that would be necessary to (begin to) justify the emphasis on secrecy, purity, and specialness
I see all of the above in Ziz’s blog, for example, which is probably the clearest and most concentrated example I know of the phenomenon. (This is not to say that Ziz is wrong about everything, or even to say Ziz is right or wrong about anything—only to observe that her writing is full of factionalism, full of concern for “monastic virtue,” much less prone to raise the question “how do I know I’m right?” than typical rationalist blogging, etc.) I got the same feeling reading about Zoe’s experience inside Leverage. And I see many of the same things reported in this post.
I write from a great remove, as someone who’s socially involved with parts of the rationalist community, but who has never even been to the Bay Area—indeed, as someone skeptical that AI safety research is even very important! This distance has the obvious advantages and disadvantages.
One of the advantages, I think, is that I don’t even have inklings of fear or scrupulosity about AI safety. I just see it as a technical/philosophical research problem. An extremely difficult one, yes, but one that is not clearly special or unique, except possibly in its sheer level of difficulty.
So, I expect it is similar to other problems of that type. Like most such problems, it would probably benefit from a much larger pool of researchers: a lot of research is just perfectly-parallelizable brute-force search, trying many different things most of which will not work.
It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem—that would mean applying vastly less parallel “compute” to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with.
Of course, if this were really true, then one ought to believe that it is true. But it surprises me how quick many rationalists are to accept this type of claim, on what looks from the outside like very little evidence. And it also surprises me how quickly the same people accept unproven self-improvement techniques, even ideas that look like wishful thinking (“I can achieve uniquely great things if I just actually try, something no one else is doing...”), as substitutes for what they lose by accepting insularity. Ways to make up for the loss in parallel compute by trying to “overclock” the few processors left available.
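To make this "parallel compute" framing concrete, here is a toy calculation with invented numbers (nothing here depends on the specific values; it is only a sketch of the intuition): if progress came from many independent long-shot attempts, cutting the number of attempts from thousands to a handful costs far more than any plausible "overclocking" of the remaining few could recover.

    # Toy model with made-up numbers: each independent research attempt has a
    # small chance p of panning out; the chance that at least one of n
    # attempts succeeds is 1 - (1 - p)**n.
    def p_any_success(p_per_attempt: float, n_attempts: int) -> float:
        return 1 - (1 - p_per_attempt) ** n_attempts

    p = 0.001  # hypothetical per-attempt success probability

    # A large, open research community: many parallel long-shot attempts.
    print(round(p_any_success(p, 5000), 3))    # 0.993

    # A tiny closed group, even if each member is "overclocked" 5x.
    print(round(p_any_success(5 * p, 10), 3))  # 0.049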
From where I stand, this just looks like a hole people go into, which harms them while—sadly, ironically—not even yielding the gains in object-level productivity it purports to provide. The challenge is primarily technical, not personal or psychological, and it is unmoved by anything but direct attacks on its steep slopes.
(Relevant: in grad school, I remember feeling envious of some of my colleagues, who seemed able to do research easily, casually, without any of my own inner turmoil. I put far more effort into self-cultivation, but they were far more productive. I was, perhaps, “trying hard to actually try”; they were probably not even trying, just doing. I was, perhaps, “working to overcome my akrasia”; they simply did not have my akrasia to begin with.
I believe that a vast amount of good technical research is done by such people, perhaps even the vast majority of good technical research. Some AI safety researchers are like this, and many people like this could do great AI safety research, I think; but they are utterly lacking in “monastic virtue” and they are the last people you will find attached to one of these secretive, cultivation-focused monastic groups.)
I worked for CFAR full-time from 2014 until mid-to-late 2016, and have worked for CFAR part-time or as a frequent contractor ever since. My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present, weren't really the main thing. I do think CFAR has not made as much research progress as I would like, but I think the reasons for that are much more mundane and less esoteric than the pattern you describe here.
The fact of the matter is that for almost all the time I’ve been involved with CFAR, there just plain hasn’t been a research team. Much of CFAR’s focus has been on running workshops and other programs rather than on dedicated work towards extending the art; while there have occasionally been people allocated to research, in practice even these would often end up getting involved in workshop preparation and the like.
To put things another way, I would say it’s much less “the full-time researchers are off unproductively experimenting on their own brains in secret” and more “there are no full-time researchers”. To the best of my knowledge CFAR has not ever had what I would consider a systematic research and development program—instead, the organization has largely been focused on delivering existing content and programs, and insofar as the curriculum advances it does so via iteration and testing at workshops rather than a more structured or systematic development process.
I have historically found this state of affairs pretty frustrating (and am working to change it), but I think that it’s a pretty different dynamic than the one you describe above.
(I suppose it's possible that the systematic and productive full-time CFAR research team was so secretive that I didn't even know it existed, but this seems unlikely...)

Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it.
The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking about how difficult the work is, or how important it is to the world, or how you need some self-improvement before you can do it effectively, these thoughts will slow you down, and surprisingly often they'll also be completely wrong. It always turns out later that your best work wasn't the work that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.
Does anyone have thoughts about avoiding failure modes of this sort?
Especially in the “least convenient possible world” where some of the bullet points are actually true—like, if we’re disseminating principles for wannabe AI Manhattan Projects, and we’re optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?
Most of my ideas are around "staying grounded"—spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, and have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.). Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to the presence of homeless people, at least.)
But I’m just guessing, and I encourage others to share their thoughts. Especially people who’ve observed/experienced mental health crises firsthand—how could they have been prevented?
EDIT: I’m also curious how to think about scrupulosity. It seems to me that team members for an AI Manhattan Project should ideally have more scrupulosity/paranoia than average, for obvious reasons. (“A bit above the population average” might be somewhere around “they can count on one hand the number of times they blacked out while drinking”—I suspect communities like ours already select for high-ish levels of scrupulosity.) However, my initial guess is that instead of directing that scrupulosity towards implementation of some sort of monastic ideal, they should instead direct that scrupulosity towards trying to make sure their plan doesn’t fail in some way they didn’t anticipate, trying to make sure their code doesn’t have any bugs, monitoring their power-seeking tendencies, seeking out informed critics to learn from, making sure they themselves aren’t a single point of failure, making sure that important secrets stay secret, etc. (what else should be on this list?) But, how much paranoia/scrupulosity is too much?
IMO, a large number of mental health professionals simply aren't a good fit for highly intelligent people having philosophical crises. People know this and intuitively avoid the large hassle and expense of sorting through many bad matches. Finding solid people to refer to, who are not otherwise associated with the community in any way, would be helpful.
I know someone who may be able to help with finding good mental health professionals for those situations; anyone who’s reading this is welcome to PM me for contact info.
There’s an “EA Mental Health Navigator” now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator
I don’t know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.
Unfortunately, by participating in this community (LW/etc.), we’ve disqualified ourselves from asking Scott to be our doctor (should I call him “Dr. Alexander” when talking about him-as-a-medical-professional while using his alias when he’s not in a clinical environment?).
I concur with your comment about having trouble finding a good doctor for people like us. p(find a good doctor) is already low, given the small n (also known as the doctor shortage). If you multiply in p(doctor works well with people like us), the result rapidly approaches epsilon.
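A minimal sketch of that multiplication, with entirely made-up numbers purely to show how quickly the product shrinks:

    # Invented numbers, purely to illustrate the multiplication.
    p_good_doctor = 0.2     # chance a given doctor is competent and taking patients
    p_fit_given_good = 0.1  # chance a competent doctor also works well with "people like us"

    p_useful_match = p_good_doctor * p_fit_given_good
    print(round(p_useful_match, 3))  # 0.02 -- roughly one workable match per fifty doctors tried

Under numbers like these you would expect to burn through dozens of doctors before finding one workable match, which is the motivation for making n bigger, as below.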
It seems that the best advice is to make n bigger by seeking care in a place with a high per-capita density of the doctors you need. For example, by combining https://nccd.cdc.gov/CKD/detail.aspx?Qnum=Q600 with the US Census ACS 2013 population estimates (https://data.census.gov/cedsci/table?t=Counts,%20Estimates,%20and%20Projections%3APopulation%20Total&g=0100000US%240400000&y=2013&tid=ACSDT1Y2013.B01003&hidePreview=true&tp=true), we see that the following states had >=0.9 primary care doctors per 1,000 people:
District of Columbia (1.4)
Vermont (1.1)
Massachusetts (1.0)
Maryland (0.9)
Minnesota (0.9)
Rhode Island (0.9)
New York (0.9)
Connecticut (0.9)
Meredith from Status451 here. I’ve been through a few psychotic episodes of my own, often with paranoid features, for reasons wholly unrelated to anything being discussed at the object-level here; they’re unpleasant enough, both while they’re going on and while cleaning up the mess afterward, that I have strong incentives to figure out how to avoid these kinds of failure modes! The patterns I’ve noticed are, of course, only from my own experience, but maybe relating them will be helpful.
Instrumental scrupulousness is a fantastic tool. By “instrumental scrupulousness” I simply mean pointing my scrupulousness at trying to make sure I’m not doing something I can’t undo. More or less what you describe in your edit, honestly. As for how much is too much, you absolutely don’t want to paralyse yourself into inaction through constantly second-guessing yourself. Real artists ship, after all!
Living someplace with good mental health care has been super crucial for me. In my case that’s Belgium. I’ve only had to commit myself once, but it saved my life and was, bizarrely, one of the most autonomy-respecting experiences I’ve ever had. The US healthcare system is caught in a horrifically large principal-agent problem, and I don’t know if it can extricate itself. Yeeting myself to another continent was literally the path of least resistance for me to find adequate, trustworthy care.
Secrecy is overrated and most things are nothingburgers. I’ve learned to identify certain thought patterns—catastrophisation, for example—as maladaptive, and while it’ll probably always be a work in progress, the worst thing that actually does happen is usually far less awful than I imagined.
The “quit trying so hard and just do it” approach that you and nostalgebraist are gesturing at pays rent, IMO. Christian’s and Avi’s advice about cultivating stable and rewarding friendships and family relationships also comports with my experience.
I do think that encouraging people to stay in contact with their family and work to have good relationships is very useful. Family can provide a form of grounding that making small talk with normies while going dancing or pursuing other hobbies doesn't provide.
When deciding whether a personal development group is culty, I think a good test is to ask whether the work of the group leads to the average member having better or worse relationships with their parents.
I agree, and think it’s important to ‘stay grounded’ in the ‘normal world’ if you’re involved in any sort of intense organization or endeavor.
You’ve made some great suggestions.
I would also suggest that having a spouse (preferably one who isn't too involved, or involved at all) and maybe even some kids is another commonality among people who find it easier to avoid going too far down these rabbit holes. Having a family is also positive in countless other ways, and is part of what I consider the 'good life' for most people.
I put substantial probability on an even worse state: there are *multiple* people or groups of people, *each* of which is *separately* necessary for AGI to go well. Like, metaphorically, your liver, heart, and brain would each be justified in having a "rarity narrative". In other words, yes, the parallel compute is necessary—there's lots of data and ideas and thinking that has to happen—but there's a continuum of how fungible the compute is relative to the problems that need to be solved, and there's plenty of stuff at the "not very fungible but very important" end. Blood is fungible (though you definitely need it), but you can't just lose a heart valve, or your hippocampus, and be fine.
I didn’t mention it in the comment, but having a larger pool of researchers is not only useful for doing “ordinary” work in parallel—it also increases the rate at which your research community discovers and accumulates outlier-level, irreplaceable genius figures of the Euler/Gauss kind.
If there are some such figures already in the community, great, but there are presumably others yet to be discovered. That their impact is currently potential, not actual, does not make its sacrifice any less damaging.
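A toy illustration of this, using invented numbers purely for intuition: if outlier figures are some tiny fraction of potential researchers, the expected number you actually have on hand scales linearly with the size of the pool you recruit from.

    # Invented numbers, purely for intuition: the expected count of rare
    # outliers in a pool is (pool size) x (fraction who are outliers).
    outlier_fraction = 1e-4  # hypothetical: 1 in 10,000 potential researchers

    for pool_size in (50, 5_000, 500_000):
        print(pool_size, round(pool_size * outlier_fraction, 3))
    # 50 0.005      (a tiny closed group: almost certainly none)
    # 5000 0.5
    # 500000 50.0   (a large open field: dozens)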
Yep. (And I’m happy this overall discussion is happening, partly because, assuming rarity narratives are part of what leads to all this destructive psychic stuff as you described, then if a research community wants to work with people about whom rarity narratives would actually be somewhat *true*, the research community has as an important subgoal to figure out how to have true rarity narratives in a non-harmful way.)
Most of these bullet points seem to apply to some degree to every new and risky endeavor ever started. How risky things are is often unclear at the start. Such groups are built from committed people. Small groups develop their own dynamics. Fast growth leads to social growing pains. Lack of success leads to a lot of additional difficulties. Also: evaporative cooling. And if (partial) success happens, even more growth creates the need for management layers, etc. And later: hindsight bias.
Without commenting on the object level, I am really happy to see someone lay this out in terms of patterns that apply to a greater or lesser extent, with correlations but not in lockstep.
Best. Comment. Ever.
Mod note: I don’t think LessWrong is the right place for this kind of comment. Please don’t leave more of these. I mean, you will get downvoted, but we might also ban you from this and similar threads if you do more of that.
It seems worthwhile to give a little more of the “why” here, lest people just walk away with the confusing feeling that there are invisible electric fences that they need to creep and cringe away from.
I’ll try to lay out the why, and if I’m wrong or off, hopefully one of the mods or regular users will elaborate.
Some reasons why this type of comment doesn’t fit the LW garden:
Low information density. We want readers to be rewarded for each comment that strays across their visual field.
Cruxless/opaque/nonspecific. While it’s quite valid to leave a comment in support of another comment, we want it to be clear to readers why the other comment was deserving of more-support-than-mere-upvoting.
Self-signaling. We want LW to both be, and feel, substantially different from the generic internet-as-a-whole, meaning that some things which are innocuous but strongly reminiscent of run-of-the-mill internetting provoke a strong “no, not that” reaction.
Driving things toward “sides.” There’s the good stuff and the bad stuff, the good people and the bad people. Fundamental bucketing, less attention to detail and gradients and complexity.
Having just laid out this case, I now feel bad about a similar comment that I made today, and am going to go either edit or delete it, in the pursuit of fairness and consistency.
Ah, sorry, yeah, I agree my mod notice wasn't specific enough. Most of my mod notice was actually about a mixture of this comment and this other comment, the latter of which felt like it was written by the same generator but seems more obviously bad to me (and probably to others too).
Like, the other comment that TAG left on this post felt like it was really trying to just be some kind of social flag that is common on the rest of the internet. Like, it felt like some kind of semi-ironic “Boo, outgroup” comment, and this comment felt like it was a parallel “Yay, ingroup!” comment, both of which felt like two sides of the same bad coin.
I think occasional "woo, this is great!" comments are kind of good, if they are generated by a genuine sense of excitement and compassion, though I also wouldn't want them to become as ever-present here as they are on the rest of the internet. But I would want those comments to not come from the same generator that then produces a snarky "oh, just like this idiot..." comment. And if I had to choose between having both or neither, I would choose neither.
Are you going to tell Eliezer the same thing? https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe#EJPSjPv7nNzsam947
No, Eliezer’s comment seems like a straightforward “I am making a non-anonymous upvote” which is indeed a functionality I also sometimes want, since sometimes the identity of the upvoter definitely matters. The comment above seems like it’s doing something different, especially in combination with the other comment I linked to.