Elizabeth
There’s a lot here and if my existing writing didn’t answer your questions, I’m not optimistic another comment will help[1]. Instead, how about we find something to bet on? It’s difficult to identify something both cruxy and measurable, but here are two ideas:
I see a pattern of:
1. CEA takes some action with the best of intentions
2. It takes a few years for the toll to come out, but eventually there’s a negative consensus on it.
3. A representative of CEA agrees the negative consensus is deserved, but since it occurred under old leadership, doesn’t think anyone should draw conclusions about new leadership from it.
4. CEA announces a new program with the best of intentions.

So I would bet that within 3 years, a CEA representative will repudiate a major project occurring under Zach’s watch.
I would also bet on more posts similar to Bad Omens in Current Community Building or University Groups Need Fixing coming out in a few years, talking about 2024 recruiting.
- ^
Although you might like Change my mind: Veganism entails trade-offs, and health is one of the axes (the predecessor to EA Vegan Advocacy is not Truthseeking) and Truthseeking when your disagreements lie in moral philosophy and Love, Reverence, and Life (dialogues with a vegan commenter on the same post)
Seeing my statements reflected back is helpful, thank you.
I think Effective Altruism is upper case and has been for a long time, in part because it aggressively recruited people who wanted to follow[1]. In my ideal world it both has better leadership and needs less of it, because members are less dependent.
I think rationality does a decent job here. There are strong leaders of individual fiefdoms, and networks of respect and trust, but it’s much more federated.
- ^
Which is noble and should be respected; the world needs more followers than leaders. But if you actively recruit them, you need to take responsibility for providing leadership.
I’m curious why this feels better, and for other opinions on this.
How much are you arguing about wording, vs. genuinely believing, and being willing to bet money, that in 3-5 years my work will have moved EA to something I can live with?
The desire for crowdfunding is less about avoiding bias[1] and more that this is only worth doing if people are listening, and small donors are much better evidence on that question than grants. If EV gave explicit instructions to donate to me it would be more like a grant than spontaneous small donors, although I in general agree people should be looking for opportunities they can beat GiveWell.
ETA: we were planning on waiting on this but since there’s interest I might as well post the fundraiser now.
- ^
I’m fortunate to have both a long runway and sources of income outside of EA and rationality. One reason I’ve pushed as hard as I have on EA is that I had a rare combination of deep knowledge of and financial independence from EA. If I couldn’t do it, who could?
there are links in the description of the video
Maybe you just don’t see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn’t have access to, and even longer for them to be externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That’s why I’m pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn’t become apparent to you though.
I’ve repeatedly had interactions with ~leadership EA that ask me to assume there’s a shadow EA cabal (positive valence) that is both skilled and aligned with my values. Or that put the burden on me to prove it doesn’t exist, which of course I can’t do. And what you’re saying here is close enough to trigger the rant.
I would love for the aligned shadow cabal to be real. I would especially love if the reason I didn’t know how wonderful it was was that it was so hypercompetent I wasn’t worth including, despite the value match. But I’m not going to assume it exists just because I can’t definitively prove otherwise.
If shadow EA wants my approval, it can show me the evidence. If it decides my approval isn’t worth the work, it can accept my disapproval while continuing its more important work. I am being 100% sincere here: I treasure the right to take action without having to reach consensus, but this doesn’t spare you from the consequences of hidden action or reasoning.
This is a good point. In my ideal movement it makes perfect sense to disagree with every leader and yet still be a central member of the group. LessWrong has basically pulled that off. EA somehow managed to be bad at having leaders (both in the sense that the closest things to leaders don’t want to be closer, and that I don’t respect them), while being the sort of thing that requires leaders.
If people in EA would consider her critiques to have real value, then the obvious step is to give Elizabeth money to write more [...] If she would get paid decently, I would expect she would feel she’s making an impact.
First of all, thank you, love it when people suggest I receive money. Timothy and I have talked about fundraising for a continued podcast. I would strongly prefer most of the funding be crowdfunding, for the reason you say. If we did this it would almost certainly be through Manifund. Signing up for Patreon and noting this as the reason also works, although for my own sanity this will always be a side project.
I should note that my work on EA up through May was covered by a Lightspeed grant, but I don’t consider that EA money.
Reading this makes me feel really sad because I’d like to believe it, but I can’t, for all the reasons outlined in the OP.
I could get into more details, but it would be pretty costly for me, for (I think) no benefit. The only reason I came back to EA criticism was that talking to Timothy feels wholesome and good, as opposed to the battery acid feeling I get from most discussions of EA.
There were ~20 in round 2, and I’ve gotten reports of other people being inspired by the post to get tested themselves that I estimate at least double that.
I think not enforcing an “in or out” boundary is a big contributor to this degradation; like, majorly successful religions required all kinds of sacrifice.
I feel ambivalent about this. On one hand, yes, you need to have standards, and I think EA’s move towards big-tentism degraded it significantly. On the other hand I think having a sharp inclusion function is bad for people in a movement[1], cuts the movement off from useful work done outside itself, selects for people searching for validation and belonging, and selects against thoughtful people with other options.
I think I’m reasonably Catholic, even though I don’t know anything about the living Catholic leaders.
I think being a Catholic with no connection to living leaders makes more sense than being an EA who doesn’t have a leader they trust and respect, because Catholicism has a longer tradition, and you can work within that. On the other hand… I wouldn’t say this to most people, but my model is you’d prefer I be this blunt… my understanding is that Catholicism is about submission to the hierarchy, and if you’re not doing that, or don’t actively believe the hierarchy is worthy of it, you’re LARPing. I don’t think this is true of (most?) protestant denominations: working from books and a direct line to God is their jam. But Catholicism cares much more about authority and authorization.
It feels like AI safety is the best current candidate for [lifeboat], though that is also much less cohesive and not a direct successor in a bunch of ways. I too have been lately wondering what “Post EA” looks like.
I’d love for this to be true because I think AIS is EA’s most important topic. OTOH, I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the ponzi scheme recruiting is all aimed at AIS and meta (which is more AIS). ETG was a much more viable role for GD than for AIS.
- ^
If you’re only as good as your last 3 months, no one can take time to rest and reflect, much less recover from burnout.
Some related posts:
one example among many of a long runway letting me make more moral choices
ongoing twitter thread on frying pan agency
I get to the airport super early because any fear of being late turns me into an asshole.
Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now)
I used the word obligation because it felt too hard to find a better one, but I don’t like it, even for saving children in shallow ponds. In my mind, obligations are for things you signed up for. In our imperfect world I also feel okay using it for things you got signed up for and benefit from (e.g. I never agreed to be born in the US as a citizen, but I sure do benefit from it, so taxes are an obligation). In my world obligations are always to a specific entity, not general demands.
I think that for some people, rescuing drowning children is an obligation to society, similar to taxes. Something feels wrong about that to me, although I’d think very badly of someone who could have trivially saved a child and chose not to.
A key point for me is that people are allowed to be shitty. This right doesn’t make them not-shitty or free them from the consequences of being shitty, but it is an affordance available to everyone. Not being shitty requires a high average on erogatory actions, plus some number of supererogatory ones.
How many supererogatory actions? The easiest way to define this is relative to capacity, but that seems toxic to me, like people don’t have a right to their own gains. It also seems likely to drive lots of people crazy with guilt. I don’t know what the right answer is.
TBH I’ve been really surprised at my reaction to “~obligation to maximal growth”. I would have predicted it would feel constraining and toxic, but it feels freeing and empowering, like I’ve been given more chances to help people at no cost to me. I feel more powerful. I also feel more permission to give up on what is currently too hard, since sacrificing myself for one short-term goal hurts my long-term obligation.
Maybe the key is that this is a better way to think about achieving goals I already had. It’s not a good frame for deciding what one’s goals should be.
[cross-posted from What If You Lived In the Least Convenient Possible World]
I came back to this post a year later because I really wanted to grapple with the idea I should be willing to sacrifice more for the cause. Alas, even in a receptive mood I don’t think this post does a very good job of advocating for this position. I don’t believe this fictional person weighed the evidence and came to a conclusion she is advocating for as best she can: she’s clearly suffering from distorted thoughts and applying post-hoc justifications. She’s clearly confused about what convenient means (having to slow down to take care of yourself is very inconvenient), and I think this is significant and not just a poor choice of words. So I wrote my own version of the position.
Let’s say Bob is right that the costs exceed the benefits of working harder or suffering. Does that need to be true forever? Could Bob invest in changing himself so that he could better live up to his values? Does he have an ~obligation[1] to do that?
We generally hold that people who can swim have obligations to save drowning children in lakes[2], but there’s no obligation for non-swimmers to make an attempt that will inevitably drown them. Does that mean they’re off the hook, or does it mean their moral failure happened when they chose not to learn how to swim?
One difficulty with this is that there are more potential emergencies than we could possibly plan for. If someone skipped the advanced swim lesson where you learn to rescue panicked drowning people because they were learning wilderness first aid, I don’t think that’s a moral failure.
This posits a sort of moral obligation to maximally extend your capacity to help others or take care of yourself in a sustainable way. I still think obligation is not quite the right word for this, but to the extent it applies, it applies to long term strategic decisions and not in-the-moment misery.
can you elaborate on “this format”?