What I think is more likely than EA pivoting is that a handful of people launch a lifeboat and recreate a high-integrity version of EA.
Thoughts on how this might be done:
Interview a bunch of people who became disillusioned. Try to identify common complaints.
For each common complaint, research organizational psychology, history of high-performing organizations, etc. and brainstorm institutional solutions to address that complaint. By “institutional solutions”, I mean approaches which claim to e.g. fix an underlying bad incentive structure, so it won’t require continuous heroic effort to address the complaint.
Combine the most promising solutions into a charter for a new association of some kind. Solicit criticism/red-teaming for the charter.
Don’t try to replace EA all at once. Start small by aiming at a particular problem present in EA, e.g. bad funding incentives, criticism (it’s currently too costly both to give and to receive), or bad feedback loops in the area of AI safety. Initially focus on solving that particular problem, but also build in the capability to scale up and address additional problems if things are going well.
Don’t market this as a “replacement for EA”. There’s no reason to have an adversarial relationship. When describing the new thing, focus on the specific problem which was selected as the initial focus, plus the distinctive features of the charter and the problems they are supposed to solve.
Think of this as an experiment, where you’re aiming to test one or more theses about what charter content will cause organizational outperformance.
I think it would be interesting if someone put together a reading list on high-performing organizations, social movement history, etc. I suspect this is undersupplied on the current margin, compared with observing and theorizing about EA as it exists now. Without any understanding of history, you run the risk of being a “general fighting the last war”—addressing the problems EA has now while inadvertently introducing a new set of problems. Seems like the ideal charter would exist in the intersection of “inside view says this will fix EA’s current issues” and “outside view says this has worked well historically”.
A reading list might be too much work, but there’s really no reason not to do an LLM-enabled literature review of some kind, at the very least.
I also think a reading list for leadership could be valuable. One impression of mine is that “EA leaders” aren’t reading books about how to lead, research on leadership, or what great leaders did.