I would also be interested to see this. Also, could you clarify:
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Are you talking here about ‘the extended EA-Alignment ecosystem’, or do you mean you’ve aimed at getting the global poverty/animal welfare/other non-AI-related EA community to shut down or disappear?
The leadership of these is mostly shared. There are many good parts of EA, and reform would be better than shutting down, but reform seems unlikely at this point.
My world model mostly predicts that effects on technological development and the long-term future dominate, so insofar as the non-AI-related parts of EA are good or bad, I think what matters is their effect on that. Mostly that effect seems small, and quibbling over its sign doesn't seem worth it.
I do think there is often an annoying motte and bailey going on, where people try to critique EA for its negative effects on the important things, and those critiques get redirected to "but you can't possibly be against bednets". Insofar as the bednet people are willingly participating in that (as seems likely to be the case for e.g. Open Phil's reputation), that seems bad.
What do you mean by the leadership being shared? That seems much less true now that Effective Ventures has started spinning off its orgs. It seems like the funding is still largely shared, but that's a different claim.