Thanks for giving some answers here to these questions; it was really helpful to have them laid out like this.
1. In hindsight, I was probably talking more about moves towards decentralization of leadership rather than decentralization of funding. I agree that greater decentralization of funding is a good thing, but it seems to me that, within the organizations funded by a given funder, decentralization of leadership is likely either to be useless (if leadership decisions are still made through informal networks between orgs rather than formal ones) or to lead to a lack of clarity and direction.
3. I understand the dynamics that may cause the overrepresentation of women. However, that still doesn’t completely explain why there is an overrepresentation of white women specifically, even when compared to racial demographics within EA at large. It also doesn’t explain why the overrepresentation of women here isn’t seen as a problem on CEA’s part, even if just from an optics perspective.
4. Makes sense, but I’m still concerned that, say, if CEA had an anti-Stalinism team, they’d be reluctant to ever say “Stalinism isn’t a problem in EA.”
5. Again, this was a question that was badly worded on my end. I was referring more specifically to organizations within AI safety rather than EA at large. I know that AMF, GiveDirectly, The Humane League, etc. fundraise outside EA.
6. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.
7. That makes sense. That was one of my hypotheses (hence my phrase “at least upon initial examination”), and I guess in hindsight it’s probably the best one.
10. Starting an AI capabilities company that does AI safety as a side project generally hasn’t gone well, and yet people keep doing it. The fact that something hasn’t gone well in the past doesn’t seem to me to be a sufficient explanation for why people stop doing it, especially because Leverage largely seems to have failed for Leverage-specific reasons (e.g. too much engagement with woo). Additionally, your argument here seems to prove too much: the Manhattan Project was a large scientific project operating under an intense structure, and yet it was able to maintain good epistemics (e.g. not fixating too hard on designs that wouldn’t work) under those conditions. The same goes for a lot of really intense start-ups.
11. They may not be examples of the unilateralist’s curse in the original sense, but the term seems to have been expanded well past its original meaning, and they’re examples of that expanded meaning.
12. It seems to me that this is a different type of work from technical alignment, and could likely be done by hiring people other than those already working on technical alignment, so it’s not directly trading off against that work.
I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.
I intended my answer to be descriptive. EAs generally avoid making weak arguments (or at least I like to think we do).