I think this is a better pointer, but I think “Bay Area alignment community” is still a bit too broad. I think e.g. Lightcone and MIRI are very separate from Constellation and Open Phil and it doesn’t make sense to put them into the same bucket.
I am kinda confused by these comments. Obviously you can draw categories at higher or lower levels of resolution. Saying that it doesn’t make sense to put Lightcone and MIRI in the same bucket as Constellation and OpenPhil, or Bengio in the same bucket as the Bay Area alignment community, feels like… idk, like a Protestant Christian saying it doesn’t make sense to put Episcopalians and Baptists in the same bucket. The differences loom large for insiders but are much smaller for outsiders.
You might be implicitly claiming that AI safety people aren’t very structurally power-seeking unless they’re Bay Area EAs. I think this is mostly false, and in fact it seems to me that people often semi-independently reason themselves into power-seeking strategies after starting to care about AI x-risk. I also think that most proposals for AI safety regulation are structurally power-seeking, because they will make AI safety people arbitrators of which models are allowed (implicitly or explicitly). But a wide range of AI safety people support these (and MIRI, for example, supports some of the strongest versions of these).
I’ll again highlight that just because an action is structurally power-seeking doesn’t make it a bad idea. It just means that it comes along with certain downsides that people might not be tracking.
I don’t know, I think I’ll defend that Lightcone is genuinely not very structurally power-seeking, and neither is MIRI, and also that both of these organizations are not meaningfully part of some kind of shared power-base with most of the EA AI Alignment community in Berkeley (Lightcone is banned from receiving any kind of OpenPhil funding, for example).
I think you would at least have to argue that there are two separate power-seeking institutions here, each seeking power for themselves, but I also do genuinely think that Lightcone is not a very structurally power-seeking organization (I feel a bit more confused about MIRI, though would overall still defend that).
Suppose I’m an atheist, or a Muslim, or a Jew, and an Episcopalian living in my town came up to me and said “I’m not meaningfully in a shared power-base with the Baptists. Sure, there’s a huge amount of social overlap, we spend time at each other’s churches, and we share many similar motivations and often advocate for many of the same policies. But look, we often argue about theological disagreements, and also the main funder for their church doesn’t fund our church (though of course many other funders fund both Baptists and Episcopalians).”
I just don’t think this is credible, unless you’re using a very strict sense of “meaningfully”. But at that level of strictness it’s impossible to do any reasoning about power-bases, because factional divides are fractal. What it looks like to have a power-base is to have several broadly-aligned and somewhat-overlapping factions that are each seeking power for themselves. In the case above, the Episcopalian may legitimately feel very strongly about their differences with the Baptists, but this is a known bug in human psychology: the narcissism of small differences.
Though I am happy to agree that Lightcone is one of the least structurally power-seeking entities in the AI safety movement, and I respect this. (I wouldn’t say the same of current-MIRI, which is now an advocacy org focusing on policies that strongly centralize power. I’m uncertain about past-MIRI.)
I think you’re making an important point here, and I agree that, given the moral valence, people here will be quite tempted to gerrymander themselves out of the relevant categories (also, pretending to be the underdog, or participating in bravery debates, is an extremely common pattern in conversations like this).
I do agree that a few years ago things would have been better modeled as a shared power base, but I think a lot of this has genuinely changed post-FTX.
I also think there are really crucial differences in how structurally power-seeking different sub-parts of this ecosystem are, and that those are important to model (and also, importantly, that the structural power-seeking of some of these parts puts those parts into conflict with the others, inasmuch as they are not participating in the same power-seeking strategies).
Like, the way I have conceptualized most of my life’s work so far has been to try to build neutral non-power-seeking institutions, that inform other people and help them make better decisions, and that generally try to actively avoid plans that route through “me and my friends get powerful and then solve our problems” because I think this kind of plan will almost inevitably end up just running into conflict with other power-seeking entities and then spend most of its resources on that.
And I think there are thousands of others who have similar intuitions about how to relate to the world, within the broader AI-Alignment/Rationality ecosystem, and I think those parts are genuinely not structurally power-seeking in the same way. And I agree they are all very enmeshed with parts that are power-seeking, and this makes distinguishing them harder, but I think there are really quite important differences.
I don’t actually know how much we disagree. I do think that modeling the AI Safety space as a single power-base is wrong and not really carving reality along structural lines. Like, I don’t think the situation is “look, we often argue about theological disagreements”; I think the situation is often much more “these two groups that care about safety are actively in conflict with each other and are taking active steps to eradicate the other party”, and at that point I just really don’t think it makes sense to model them as one.
I do think that modeling the AI Safety space as a single power-base is wrong and not really carving reality along structural lines.
This is the thing that feels most like talking past each other. You’re treating this as a binary and it’s really, really not a binary. Some examples:
There are many circumstances in which it’s useful to describe “the US government” as a power base, even though Republicans and Democrats are often at each other’s throats, and also there are many people within the US government who are very skeptical of it (e.g. libertarians).
There are many circumstances in which it’s useful to describe “big tech” as a power base, even though the companies in it are competing ferociously.
I’m not denying that there are crucial differences to model here. But this just seems like the wrong type of argument to use to object to accusations of gerrymandering, because every example of gerrymandering will be defended with “here are the local differences that feel crucial to me”.
So how should we evaluate this in a principled way? One criterion: how fierce is the internal fighting? Another: how many shared policy prescriptions do the different groups have? On the former, while I appreciate that you’ve been treated badly by OpenPhil, I think “trying to eradicate each other” is massive hyperbole. I would maybe accept that as a description of the fighting between AI ethics people, AI safety people, and accelerationists, but the types of weapons they use (e.g. public attacks in major news outlets, legal measures, etc.) are far harsher than anything I’ve heard about inside AI safety. If I’m wrong about this, I’d love to know.
On the latter: when push comes to shove, a lot of these different groups band together to support stuff like interpretability research, raising awareness of AI risk, convincing policymakers it’s a big deal, AI legislation, etc. I’d believe that you don’t do this; I don’t believe that there are thousands of others who have deliberately taken stances against these things, because I think there are very few people as cautious about this as you (especially when controlling for influence over the movement).
Like, the way I have conceptualized most of my life’s work so far has been to try to build neutral non-power-seeking institutions, that inform other people and help them make better decisions, and that generally try to actively avoid plans that route through “me and my friends get powerful and then solve our problems” because I think this kind of plan will almost inevitably end up just running into conflict with other power-seeking entities and then spend most of its resources on that.

As above, I respect this a lot.
This is the thing that feels most like talking past each other. You’re treating this as a binary and it’s really, really not a binary. Some examples:
Yeah, I think this makes sense. I wasn’t particularly trying to treat it as just a binary, and I agree that there are levels of abstraction where it makes sense to model these things as one, and this also applies to the whole extended AI-Alignment/EA/Rationality ecosystem.
I do feel like this lens loses a lot of its validity at the highest levels of abstraction. (Like, I think there is a valid sense in which you should model AI-x-risk-concerned people as part of big tech, but if you do that, you kind of ignore the central dynamic that is going on with the x-risk-concerned people. Maybe that’s the right call sometimes, but in terms of “what will the future of humanity be”, I think that in making that simplification you have kind of lost the plot.)
If I’m wrong about this, I’d love to know.
My best guess is you are underestimating the level of adversarialness going on, though I am also uncertain about this. I would be interested in sharing notes some time.
As one concrete example, my guess is we both agree it would not make sense to model OpenAI as part of the same power base. Like, yeah, a bunch of EAs used to be on OpenAI’s board, but even during that period, they didn’t have much influence on OpenAI. I think basically all throughout, it made the most sense to model these as separate communities/institutions/groups with regards to power-seeking.
I also personally do straightforwardly think that most of the efforts of the extended EA-Alignment ecosystem are bad, and would give up a large chunk of my resources to reduce their influence on the world. Not because I am in competition with them (indeed, I think I do tend to get more power as they get more power), but because I think they genuinely have really bad consequences for the world. I also care a lot about cooperativeness, and so I don’t tend to go around getting into conflicts with lots of collateral damage or reciprocal escalation, but also, I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
I also personally do straightforwardly think that most of the efforts of the extended EA-Alignment ecosystem are bad
Do you have a diagnosis of the root cause of this?
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Why not try to reform EA instead? (This is related to my previous question. If we could diagnose what’s causing EA to be harmful, maybe we can fix it?)
I have spent like 40% of the last 1.5 years trying to reform EA. I think I had a small positive effect, but it’s also been extremely tiring and painful, and I consider my duty with regards to this done. Buy-in for reform among leadership is very low, and people seem primarily interested in short-term power-seeking and ass-covering.
The memo I mentioned in another comment has a bunch of analysis; I’ll send it to you tomorrow when I am at my laptop.

For some more fundamental analysis I also have this post, though it’s only a small part of the picture: https://www.lesswrong.com/posts/HCAyiuZe9wz8tG6EF/my-tentative-best-guess-on-how-eas-and-rationalists
As a datapoint, I think I was likely underestimating the level of adversarialness going on & this thread makes me less likely to lump Lightcone in with other parts of the community.
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
@habryka are you able to share details/examples RE the actions you’ve taken to get the EA community to shut down or disappear?
I also personally do straightforwardly think that most of the efforts of the extended EA-Alignment ecosystem are bad, and would give up a large chunk of my resources to reduce their influence on the world
I would also be interested in more of your thoughts on this. (My Habryka sim says something like “the epistemic norms are bad, and many EA groups/individuals are more focused on playing status games. They are spending their effort saying and doing things that they believe will give them influence points, rather than trying to say true and precise things. I think society’s chances of getting through this critical period would be higher if we focused on reasoning carefully about these complex domains, making accurate arguments, and helping ourselves & others understand the situation.” Curious if this is roughly accurate or if I’m missing some important bits. Also curious if you’re able to expand on this or provide some examples of the things in the category “Many EA people think X action is a good thing to do, but I disagree.”)
I would also be interested in more of your thoughts on this
I have a memo I thought I had shared with you at one point that I wrote for EA Coordination Forum 2023. It has a bunch of wrong stuff in it, and fixing it has been too difficult, but I could share it with you privately (with disclaimers on what is wrong). Feel free to DM me if I haven’t.
@habryka are you able to share details/examples RE the actions you’ve taken to get the EA community to shut down or disappear?
Sharing my memo at the coordination forum is one such action I have taken. I have also advocated for various people to be fired, and have urged a number of external and internal stakeholders to reconsider their relationship with EA. Most of this has been kind of illegible and flaily, with me not really knowing how to do anything in the space without ending up with a bunch of dumb collateral damage and reciprocal escalation.

I would be keen to see the memo if you’re comfortable sharing it privately.

Sure, sent a DM.
I would also be interested to see this. Also, could you clarify:
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Are you talking here about ‘the extended EA-Alignment ecosystem’, or do you mean you’ve aimed at getting the global poverty/animal welfare/other non-AI-related EA community to shut down or disappear?
The leadership of these is mostly shared. There are many good parts of EA, and reform would be better than shutting down, but reform seems unlikely at this point.
My world model mostly predicts that effects on technological development and the long-term future dominate, so inasmuch as the non-AI-related parts of EA are good or bad, I think what matters is their effect on that. Mostly the effect seems small, and quibbling over the sign doesn’t seem super worth it.
I do think there is often an annoying motte-and-bailey going on where people try to critique EA for its negative effects on the important things, and those critiques get redirected to “but you can’t possibly be against bednets”; inasmuch as the bednet people are willingly participating in that (as seems likely the case for e.g. Open Phil’s reputation), that seems bad.
What do you mean the leadership is shared? That seems much less true now that Effective Ventures have started spinning off their orgs. It seems like the funding is still largely shared, but that’s a different claim.
I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Wow, what a striking thing for you to say without further explanation.
Personally, I’m a fan of EA. I’m also an EA: I signed the GWWC/10% pledge and all that. So, I wonder what you mean.
I mean, it’s in the context of a discussion with Richard who knows a lot of my broader thoughts on EA stuff. I’ve written quite a lot of comments with my thoughts on EA on the EA Forum. I’ve also written a bunch more private memos I can share with people who are interested.
we both agree it would not make sense to model OpenAI as part of the same power base
Hmm, I’m not totally sure. At various points:
OpenAI was the most prominent group talking publicly about AI risk
Sam Altman was the most prominent person talking publicly about large-scale AI regulation
A bunch of safety-minded people at OpenAI were doing OpenAI’s best capabilities work (GPT-2, GPT-3)
A bunch of safety-minded people worked on stuff that led to ChatGPT (RLHF, John Schulman’s team in general)
Elon tried to take over, and the people who opposed that were (I’m guessing) a coalition of safety people and the rest of OpenAI
It’s really hard to step out of our own perspective here, but when I put myself in the perspective of, say, someone who doesn’t believe in AGI at all, these all seem pretty indicative of a situation where OpenAI and AI safety people were to a significant extent building a shared power base, and just couldn’t keep that power base together.
[this comment is irrelevant to the point you actually care about and is just nit-picking about the analogy]
There is a pretty big divide between “liberal” and “conservative” Christianity that is in some ways bigger than the divide between different denominations. In the US, people who think of themselves as “Episcopalians” tend to be more liberal than people who call themselves “Baptists”. In the rest of this comment, I’m going to assume we’re talking about conservative Anglicans rather than Episcopalians (those terms referring to the same denominational family), and also about conservative Baptists, since they’re more likely to be up to stuff / doing meaningful advocacy, and more likely to care about denominational distinctions. That said, liberal Episcopalians and liberal Baptists are much more likely to get along, and also openly talk about how they’re in cooperation.
My guess is that conservative Anglicans and Baptists don’t spend much time at each other’s churches, at least during worship, given that they tend to have very different types of services and very different views about the point of worship (specifically about the role of the eucharist). Also there’s a decent chance they don’t allow each other to commune at their church (more likely on the Baptist end). Similarly, I don’t think they are going to have that much social overlap, altho I could be wrong here. There’s a good chance they read many of the same blogs tho.
In terms of policy advocacy, on the current margin they are going to mostly agree—common goals are going to be stuff like banning abortion, banning gay marriage, and ending the practice of gender transition. Anglican groups are going to be more comfortable with forms of state Christianity than Baptists are, altho this is lower-priority for both, I think. They are going to advocate for their preferred policies in part through denominational policy bodies, but also by joining common-cause advocacy organizations.
Both Anglican and Baptist churches are largely going to be funded by members, and their members are going to be disjoint. That said it’s possible that their policy bodies will share large donor bases.
They are also organized pretty differently internally: Anglicans have a very hierarchical structure, and Baptists have a very decentralized structure (each congregation is its own democratic polity, and is able to e.g. vote to remove the pastor and hire a new one).
Anyway: I’m pretty sympathetic to the claim of conservative Anglicans and Baptists being meaningfully distinct power bases, altho it would be misleading to not acknowledge that they’re both part of a broader conservative Christian ecosystem with shared media sources, fashions, etc.
Part of the reason this analogy didn’t vibe for me is that Anglicans and Baptists are about as dissimilar as Protestants can get. If it were Anglicans and Presbyterians or Baptists and Pentecostals that would make more sense, as those denominations are much more similar to each other.
Lightcone is banned from receiving any kind of OpenPhil funding
Why?
The reason for the ban is pretty crux-y. Is Lightcone banned because OpenPhil dislikes you, because you’re too close so that it would be a conflict of interest, or something else?
Good Ventures have banned OpenPhil from recommending grants to organizations working in the “rationalist community building” space (including for their non-“rationalist community building” work). I understand this to be because Dustin doesn’t believe in that work and feels he suffers a bunch of reputational damage for funding it (IIRC, he said he’d be willing to suffer that reputational damage if he was personally excited by it). Lots more detail in the discussion on this post.
I think “Bay Area alignment community” is still a bit too broad.

“Bay Area EA alignment community”/“Bay Area EA community”? (Most EAs in the Bay Area are focused on alignment rather than on other causes.)