I work for Open Philanthropy. I spent 2019-2021 at the Centre for Effective Altruism, where I ran the EA Forum. I’m also a semi-professional gamer and streamer who placed second at the 2020 Magic: the Gathering world championship (“Grand Finals”).
Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams
Non-anonymous reacts feel less scary to me as a writer, and don’t feel scary to me as a reactor, though I’d expect most people to be more nervous about publicly sharing a negative reaction than I am.
Overall, inline anonymous reacts feel better to me than named non-inline reacts. I care much more about getting specific feedback on my writing than seeing which specific people liked or disliked it.
This post led me to remove Chrome from my phone, which gave me back a few productive minutes today. Hoping to keep it up and compound those minutes into a couple of solid workdays over the rest of the year. Thanks for the inspiration!
On the Devil’s Advocate side: “Wife” just rolls off the tongue in a way “husband” doesn’t. That’s why we have “wife guys” and “my wife!” jokes, but no memes that do much with the word “husband”. (Sometimes we substitute the one-syllable word “man”, as in “it’s raining men” or “get you a man who can do both”.)
You could also parse “wife years” as “years of being a wife” from the female perspective, though of course this still fails to incorporate couples where no wife-identifying person is involved.
...so it doesn’t work well in a technical sense, but it remains very catchy.
Thanks for the further detail. It sounds like this wasn’t actually a case of “no one in EA has funded X”, which makes my list irrelevant.
(Maybe the first item on the list should be “actually, people in EA are definitely funding X”, since that’s something I often find when I look into claims like Christian’s, though it wasn’t obvious to me in this case.)
Thanks for sharing a specific answer! I appreciate the detail and willingness to engage.
I don’t have the requisite biopolitical knowledge to weigh in on whether the approach you mentioned seems promising, but it does qualify as something someone could have been doing pre-COVID, and a plausible intervention at that.
My default assumptions for cases of “no one in EA has funded X”, in order from most to least likely:
1. No one ever asked funders in EA to fund X.
2. Funders in EA considered funding X, but it seemed like a poor choice from a (hits-based or cost-effectiveness) perspective.
3. Funders in EA considered funding X, but couldn’t find anyone who seemed like a good fit for it.
4. Various other factors, including “X seemed like a great thing to fund, but would have required acknowledging something the funders thought was both true and uncomfortable”.
In the case of this specific plausible thing, I’d guess it was (2) or (3) rather than (1). While anything involving China can be sensitive, Open Phil and other funders have spent plenty of money on work that involves Chinese policy. (CSET got $100 million from Open Phil, and runs a system tracking PRC “talent initiatives” that specifically refers to China’s “military goals” — their newsletter talks about Chinese AI progress all the time, with the clear implication that it’s a potential global threat.)
That’s not to say that I think (4) is impossible — it just doesn’t get much weight from me compared to those other options.
FWIW, as far as I’ve seen, the EA community has been unanimous in support of the argument “it’s totally fine to debate whether this was a lab leak”. (This is different from the argument “this was definitely a lab leak”.) Maybe I’m forgetting something from the early days when that point was more controversial, or I just didn’t see some big discussion somewhere. But when I think about “big names in EA pontificating on leaks”, things like this and this come to mind.
*****
Do you know of anyone who was trying to build out the gain-of-function project you mentioned in the years before the pandemic? Did they ever approach anyone in EA about funding, or did any organizations actually consider it internally?
Thanks for sharing your experience.
I’ve been writing the EA Newsletter and running the EA Forum for three years, and I’m currently a facilitator for the In-Depth EA Program, so I think I’ve learned enough about EA not to be too naïve.
I’m also an employee of Open Philanthropy starting January 3rd, though I don’t speak for them here.
Given your hypothetical and a few minutes of thought, I’d want Open Phil to write the check. It seems like an incredible buy given their stated funding standards for health interventions and reasonable assumptions about the “fewer manufacturing plants” counterfactual. (This makes me wonder whether Alexander Berger is among the leaders you mentioned, though I assume you can’t say.)
Are any of the arguments that you heard against doing so available for others to read? And were the people you heard back from unanimous?
I ask not in the spirit of doubt, but in the spirit of “I’m surprised and trying to figure things out”.
(Also, David Manheim is a major researcher in the EA community, which makes the whole situation/debate feel especially strange. I’d guess he has had more influence on actual EA funding decisions around COVID than most of the people I’d classify as “EA leaders”.)
Nearly two years into the pandemic, the core EA organizations still seem to show no sign of caring that they didn’t prevent it, despite their mission including fighting biorisks.
Which core organizations are you referring to, and which signs are you looking for?
This has been discussed to some extent on the Forum, particularly in this thread, where multiple orgs were explicitly criticized. (I want to see a lot more discussions like these than actually exist, but I would say the same thing about many other topics — EA just isn’t very big and most people there, as anywhere, don’t like writing things in public. I expect that many similar discussions happened within expert circles and didn’t appear on the Forum.)
I worked at CEA until recently, and while our mission isn’t especially biorisk-centric (we affect EA bio work in indirect ways, on multi-year timescales), our executive director insisted that the opening talk at the EA Picnic mention that EA clearly fell short of where it should have been on COVID. It’s not much, but I think it reflects a broader consensus that we could have done better and didn’t.
That said, the implication that EA not preventing the pandemic is a problem for EA seems reasonable only in a very loose sense (better things were possible, as they always are). Open Phil invested less than $100 million into all of its biosecurity grants put together prior to February 2020, and that’s over a five-year period. That this funding (and direct work from a few dozen people, if that) failed to prevent COVID seems very unsurprising, and hard to learn from.
Is there a path you have in mind whereby Open Phil (or anyone else in EA) could have spent that kind of money in a way that would likely have prevented the pandemic, given the information that was available to the relevant parties in the years 2015-2019?
Doing so would require asking uncomfortable questions and accepting uncomfortable truths and there seems to be no willingness to do so.
I find this kind of comment really unhelpful, especially in the context of LessWrong being a site about explaining your reasoning and models.
What are the uncomfortable questions and truths you are talking about? If you don’t even explain what you mean, it seems impossible to verify your claim that no one was asking/accepting these “truths”, or even whether they were truths at all.
EA Forum Creative Writing Contest: $10,000 in prizes for good stories
One Study, Many Results (Matt Clancy)
Reminds me of an old essay I wrote (not fully representative of Aaron!2021) about experiences with a dog who lived with a family but not other dogs, and could never get enough stimulation to meet his needs. A section I think still holds up:
The only “useful” thing he ever fetches is the newspaper, once per day. For thirty seconds, he is doing purposeful work, and his family is genuinely thankful for his help. But every other object he’s fetched has been something a person threw, for the express purpose of fetching. We all smile at him out of politeness or vague amusement and keep throwing the tennis balls and rubber bones, so he gets a constant stream of positive reinforcement for fetching.
This means his life is built around convincing people to throw things, and then bringing the things back to be thrown again. Literally running in circles. I’ve seen him play fetch for well over an hour before getting tired, taking a short break, drinking some water, and then coming back for more fetch.
And he really believes that his fetching is important: When a tennis ball rolls under a couch and he can’t reach it, he’ll sniff around as though it were lost. If he smells it, he’ll paw frantically trying to reach it. If he can’t, he’ll stand there looking miserable until someone reaches under and takes out the ball.
(I wonder how he feels in those moments: An impending sense of doom? Fear that the ball, lost out of sight, may cease to exist? A feeling of something-not-finished, as when a melody is cut short before the final note?)
Tom Chivers, author of “The AI Does Not Hate You”, is running an AMA on the EA Forum
Sounds great, thanks!
Would you be interested in crossposting this to the EA Forum? I think your points are equally relevant for those discussions, and I’d be interested to see how posters there would react.
As a mod, I could also save you some time by crossposting it under your account. Let me know if that would be helpful!
Epistemic status: Neither unique nor surprising, but something I felt like idly cataloguing.
An interesting example of statistical illiteracy in the field: This complaint thread about the shuffling algorithm on Magic: the Gathering Arena, a digital version of the card game. Thousands of unique players seem to be represented here.
MTG players who want to win games have a strong incentive to understand basic statistics. Players like Frank Karsten have been working for years to explain the math behind good deckbuilding. And yet, the “rigged shuffler” is a persistent belief even among reasonably engaged players; I’ve seen quite a few people try to promote it on my stream, which is not at all aimed at beginners.
(The shuffler is, of course, appropriately random, save for some “hand smoothing” in best-of-one matches to increase the chance of a “normal” draw.)
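(For the curious: the exact smoothing algorithm isn’t spelled out here, but a minimal sketch of the commonly described approach — deal more than one candidate hand and keep the one whose land count best matches the deck’s land ratio — looks something like this. The function name and the two-candidate parameter are my illustrative assumptions, not Arena’s actual code.)

```python
import random

def smoothed_opening_hand(deck, hand_size=7, candidates=2):
    # Deal several candidate hands; keep the one whose land count is
    # closest to the deck's overall land ratio. (Hypothetical sketch --
    # the real smoothing algorithm may differ.)
    target = hand_size * sum(card == "land" for card in deck) / len(deck)
    hands = [random.sample(deck, hand_size) for _ in range(candidates)]
    return min(hands, key=lambda h: abs(sum(c == "land" for c in h) - target))

deck = ["land"] * 24 + ["spell"] * 36  # a typical 60-card deck
hand = smoothed_opening_hand(deck)
print(sum(card == "land" for card in hand), "lands in the opening hand")
```

Even with just two candidate hands, this pulls the land-count distribution toward the mean, which is why best-of-one draws feel “smoother” without the shuffle itself being non-random.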
A few quotes from the thread:
How is that no matter how many people are playing the game, or how strong your deck is, or how great your skill level, I bet your winning percentage is 30% or less. This defies the laws of probability.
(No one ever seems to think the shuffler is rigged in their favor.)
As I mentioned in a prior post you never see these problems when they broadcast a live tournament.
(People who play in live tournaments are much better at deckbuilding, leading to fewer bad draws. Still, one recent major tournament was infamously decided by a player’s atrocious draw in the last game of the finals.)
In the real world, land draw will not happens as frequent as every turns for 3 times or more. Or less than 2 to 3 turns, not drawing a land
(Many people have only played MTG as a paper game when they come to Arena. In paper, it’s very common for people to “cheat” when shuffling by sorting their initial deck in a particular way, even with innocuous intent. When people are exposed to true randomness, they often can’t tolerate it.)
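To make the “true randomness” point concrete, here’s a quick simulation sketch (ordinary 60-card, 24-land assumptions I chose for illustration, not anything from Arena’s codebase) estimating how often a fair shuffle puts three or more lands in a row somewhere in the first 17 cards — an opening hand plus ten draws:

```python
import random

def has_land_run(cards, run_length=3):
    # True if the sequence contains run_length consecutive lands.
    streak = 0
    for card in cards:
        streak = streak + 1 if card == "land" else 0
        if streak >= run_length:
            return True
    return False

deck = ["land"] * 24 + ["spell"] * 36
trials = 100_000
hits = 0
for _ in range(trials):
    random.shuffle(deck)
    hits += has_land_run(deck[:17])  # 7-card hand plus ten draws
print(f"Shuffles with a 3+ land clump in the first 17 cards: {hits / trials:.1%}")
```

The printed fraction is much higher than most complainants would guess; clumps are simply what fair shuffles look like.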
Other common conspiracy theories about Arena:
“Rigged matchmaking” (the idea that the developers somehow know which decks will be good against yours, and make sure you’re matched against them; again, I never see this theory in reverse)
“Poker hands” (the idea that people get multiple copies of a card more often than would be expected)
“50% bias” (the idea that the game arranges good/bad draws to keep players at a 50% win rate; admirably, these players recognize that they do draw well sometimes, but they don’t understand what it means to be in the middle of a binomial distribution)
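To sketch that last point (the 100-game season and exact 50% win rate are assumptions I picked for the example, not Arena data): a perfectly fair coin still produces lopsided records and long losing streaks.

```python
import random

def season(games=100, p_win=0.5):
    # One season of games for a player whose true win rate is p_win.
    return [random.random() < p_win for _ in range(games)]

def longest_losing_streak(results):
    longest = current = 0
    for won in results:
        current = 0 if won else current + 1
        longest = max(longest, current)
    return longest

trials = 20_000
records = sorted(sum(season()) for _ in range(trials))
streaks = [longest_losing_streak(season()) for _ in range(trials)]

lo, hi = records[int(0.025 * trials)], records[int(0.975 * trials)]
print(f"Middle 95% of records: {lo}-{hi} wins out of 100")
print(f"Seasons with a 5+ game losing streak: {sum(s >= 5 for s in streaks) / trials:.0%}")
```

The spread of records is wide (roughly 40–60 wins), and most seasons include at least one five-game skid — exactly the pattern that gets read as a “rigged” 50% bias.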
aarongertler’s Shortform
Consider cross-posting this question to the EA Forum; discussion there is more focused on giving, so you might get a broader set of answers.
Another frame around this question: “How can one go about evaluating the impact of a year’s worth of ~$500 donations?” If you’re trying to get leverage with small donations, you might expect a VC-like set of returns, where you can’t detect much impact from most donations but occasionally see a case of really obvious impact. If you spend an entire year making, say, a dozen such donations, and none of them make a really obvious impact, this is a sign that you either aren’t having much impact or don’t have a good way to measure it (in either case, it’s good to rethink your giving strategy).
You could also try making predictions—“I predict that X will happen if I give/don’t give”—and then following up a few months later. What you learn will depend on what you predict, but you’ll at least be able to learn more about whether your donations are doing what you expect them to do.
Here’s a link to Manson’s announcement. Always good to see more people trying to swim up toward the sanity waterline.
I’ve been enjoying the Sold a Story podcast, which explains how many schools stopped teaching kids to read over the last few decades, replacing phonics with an unscientific theory that taught kids to pretend to read (cargo cult vibes). It features a lot of teachers and education scholars who come face-to-face with evidence that they’ve been failing kids, and respond in many different ways — from pro-phonics advocacy and outright apology to complete refusal to engage. I especially liked one teacher musing on how disconcerting it was to realize her colleagues were “refuse to engage” types.
The relatable topic and straightforward reporting make the podcast very accessible. It’s a good way to give people outside the LessWrong bubble a story that may get them angry in a way that supports rationalist virtues.