I’m not sure I understand what you mean by “something to protect.” Can you give an example?
[Answered by habryka]
[Possibly digging a bit too far into the specifics so no worries if you’d rather bow out.]
Do you think these confusions[1] are fairly evenly dispersed throughout the community (besides what you already mentioned: “People semi-frequently have them at the beginning and then get over them.”)?
Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisations than among other people who “take singularity scenarios seriously.”[2] (B) I’m very uncertain, but they also seem less prevalent to me in the EA community than in the rationalist community (to the extent the communities can be separated).[3] [4]
Do A and B sound right to you? If so, do you have a take on why that is?
If A or B *is* true, do you think this is in any part caused by the respective groups taking the singularity [/x-risk/the future/the stakes] less seriously? If so, are there important costs from this?
[1] Using your word while withholding my own judgment as to whether every one of these is actually a confusion.
[2] If you’re right that a lot of people have them at the beginning and then get over them, a simple potential explanation would be that by the time you’re working at one of these orgs, that’s already happened.
Other hypotheses: (a) selection effects; (b) working FT in the community gives you additional social supports and makes it more likely others will notice if you start spiraling; (c) the cognitive dissonance with the rest of society is a lot of what’s doing the damage. It’s easier to handle this stuff psychologically if the coworkers you see every day also take the singularity seriously.[i]
[3] For example, perhaps less common at Open Phil, GPI, 80k, and CEA than at CFAR and MIRI, but I also think this holds outside of professional organisations.
[4] One potential reason for this is that a lot of EA ideas are more “in the air” than rationalist/singularity ones. So a lot of EAs may have had their ‘crisis of faith’ before arriving in the community. (For example, I know plenty of EAs (myself included) who did some damage to themselves in their teens or early twenties by “taking Peter Singer really seriously.”)
[i] I’ve seen this kind of dissonance offered as a (partial) explanation of why PTSD has become so common among veterans & why it’s so hard for them to reintegrate after serving a combat tour. No clue if the source is reliable/widely held/true. It’s been years, but I think I got it from *Odysseus in America* or perhaps its predecessor, *Achilles in Vietnam*.
My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.”
[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.]
Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision (alternatively, how long have you been playing with this vision)? Is it possible to describe what the most recent previous vision was?
This seemed really useful. I suspect you’re planning to write up something like this at some point down the line, but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.).
The need to coordinate in this way holds just as much for consequentialists or anyone else.
I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.
Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.
“Things like PJ EBY’s excellent ebook.”
FYI—this link goes to an empty shopping cart. Which of his books did you mean to refer to?
The best links I could find quickly were:
“I think I also damaged something psychologically, which took 6 months to repair.”
I’ve been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided, I’d be interested.
I expect, though, that this is too sensitive/personal so please feel free to ignore.
Note that criminal intent is *not* required for a civil fraud suit, which could be brought simultaneously with or after a criminal proceeding.
“For example, we spent a bunch of time circling for a while”
Does this imply that CFAR now spends substantially less time circling? If so and there’s anything interesting to say about why, I’d be curious.
This doesn’t look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it’s odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn’t seem like there’s a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?
I guess I interpreted Rob’s statement that “the EA Funds are usually a better fallback option than GiveWell” as shorthand for “the EA Fund relevant to your values is in expectation a better fallback option than GiveWell.” “The EA Fund relevant to your values” does seem like a useful abstraction to me.
Here’s a potentially more specific way to get at what I mean.
Let’s say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let’s say she believes that (absent funging) giving $1 to the EA Long Term Future Fund would be as valuable as giving $100 to GW recommended charities.
You’re saying that she should reduce her estimate because Open Phil may change its strategy, or because the blog post may be an imprecise guide to Open Phil’s strategy, so there’s some probability that giving $1 to GW recommended charities could cause Open Phil to reallocate some money from GW recommended charities toward the orgs funded by the Long Term Future Fund.
In expectation, how much money do you think is reallocated from GW recommended charities toward orgs like those funded by the Long Term Future Fund for every $1 given to GW recommended charities? In other words, by what percent should this person adjust down their estimate of the difference in effectiveness?
Personally, I’d guess it’s lower than 15% and I’d be quite surprised to hear you say you think it’s as high as 33%. This would still leave a difference that easily clears the bar for “large enough to pay attention to.”
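To make the arithmetic behind that guess explicit, here is a minimal sketch in my own notation (f is just my label for the fraction of each GW dollar that effectively gets reallocated toward orgs like those the Long Term Future Fund supports), using the 100:1 ratio above and measuring everything in units where $1 to GW recommended charities = 1:

$$\text{Value}(\$1\text{ to GW}) = (1-f)\cdot 1 + f\cdot 100 \qquad \text{vs.} \qquad \text{Value}(\$1\text{ to the Fund}) = 100$$

$$f = 0.15 \;\Rightarrow\; \frac{100}{0.85 + 15} \approx 6.3\times\text{ better}, \qquad f = 0.33 \;\Rightarrow\; \frac{100}{0.67 + 33} \approx 3\times\text{ better}$$

So even at f = 0.33 the Fund would still look roughly 3x better by her lights, which is the sense in which I think the difference easily clears the bar.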
Fwiw, to the extent that donors to GW are getting funged, I think it’s much more likely that they are funging with other developing world interventions (e.g. one recommended org hits diminishing returns and so funding already targeted toward developing world interventions goes to a different developing world health org instead).
I’m guessing that you have other objections to EA Funds (some of which I think are expressed in the posts you linked although I haven’t had a chance to reread them). Is it possible that funging with GW top charities isn’t really your true objection?
I see you as arguing that GW/Open Phil might change its strategic outlook in the future and that their disclosures aren’t high precision so we can’t rule out that (at some point in the future or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.
That doesn’t strike me as sufficient to argue that GW recommended charities funge so heavily against EA funds that it’s “odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies.”
What’s the reason to think EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities? My guess would have been that increased donations to GiveWell’s recommended charities would not cause many other donors (including Open Phil or Good Ventures) to give instead to orgs like those supported by the Long-Term Future, EA Community, or Animal Welfare EA Funds.
In particular, this seems to me to be in tension with Open Phil’s last public writing on its current thinking about how much to give to GW recommendations versus these other cause areas (“world views” in Holden’s terminology). In his January “Update on Cause Prioritization at Open Philanthropy,” Holden wrote:
“We will probably recommend that a cluster of ‘long-termist’ buckets collectively receive the largest allocation: at least 50% of all available capital. . . .
We will likely recommend allocating something like 10% of available capital to a “straightforward charity” bucket (described more below), which will likely correspond to supporting GiveWell recommendations for the near future.”
There are some slight complications here, but overall it doesn’t seem to me that Open Phil/GV’s giving to long-termist areas is very sensitive to other donors’ decisions about giving to GW’s recommended charities. Contra Ben H, I therefore think it does currently make sense for donors to spend attention distinguishing between EA Funds and GW’s recommendations.
For what it’s worth, there might be a stronger case that EA Funds funges against long-termist/EA community/animal welfare grants that Open Phil would otherwise make, but I think that’s actually an effect with substantially different consequences.
[Disclosure—I formerly worked at GiveWell and Open Phil but haven’t worked there for over a year and I don’t think anything in this comment is based on any specific inside information.]
[Edited to make my disclosure slightly more specific/nuanced.]
Thanks! Forgot about that post.