> This is the thing that feels most like talking past each other. You’re treating this as a binary and it’s really, really not a binary. Some examples:
Yeah, I think this makes sense. I wasn’t particularly trying to treat it as just a binary, and I agree that there are levels of abstraction where it makes sense to model these things as one, and this also applies to the whole extended AI-Alignment/EA/Rationality ecosystem.
I do feel like this lens loses a lot of its validity at the highest levels of abstraction. Like, there is a valid sense in which you should model AI-x-risk-concerned people as part of big tech, but if you do that, you kind of ignore the central dynamic going on with the x-risk-concerned people. Maybe that’s the right call sometimes, but in terms of “what will the future of humanity be”, I think in making that simplification you have kind of lost the plot.
If I’m wrong about this, I’d love to know.
My best guess is you are underestimating the level of adversarialness going on, though I am also uncertain about this. I would be interested in sharing notes some time.
As one concrete example, my guess is we both agree it would not make sense to model OpenAI as part of the same power base. Like, yeah, a bunch of EAs used to be on OpenAI’s board, but even during that period, they didn’t have much influence on OpenAI. I think basically all throughout, it made most sense to model these as separate communities/institutions/groups with regards to power-seeking.
I also personally do straightforwardly think that most of the efforts of the extended EA-Alignment ecosystem are bad, and would give up a large chunk of my resources to reduce their influence on the world. Not because I am in a competition with them (indeed, I think I tend to get more power as they get more power), but because I think they genuinely have really bad consequences for the world. I also care a lot about cooperativeness, and so I don’t tend to go around getting into conflicts with lots of collateral damage or reciprocal escalation, but also, I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
> I also personally do straightforwardly think that most of the efforts of the extended EA-Alignment ecosystem are bad
Do you have a diagnosis of the root cause of this?
> I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Why not try to reform EA instead? (This is related to my previous question. If we could diagnose what’s causing EA to be harmful, maybe we can fix it?)
I have spent something like 40% of my time over the last 1.5 years trying to reform EA. I think I had a small positive effect, but it’s also been extremely tiring and painful, and I consider my duty with regards to this done. Buy-in for reform among leadership is very low, and people seem primarily interested in short-term power-seeking and ass-covering.
The memo I mentioned in another comment has a bunch of analysis. I’ll send it to you tomorrow when I am at my laptop.

For some more fundamental analysis I also have this post, though it’s only a small part of the picture: https://www.lesswrong.com/posts/HCAyiuZe9wz8tG6EF/my-tentative-best-guess-on-how-eas-and-rationalists
As a datapoint, I think I was likely underestimating the level of adversarialness going on & this thread makes me less likely to lump Lightcone in with other parts of the community.
> I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
@habryka are you able to share details/examples RE the actions you’ve taken to get the EA community to shut down or disappear?
> I also personally do straightforwardly think that most of the efforts of the extended EA-Alignment ecosystem are bad, and would give up a large chunk of my resources to reduce their influence on the world
I would also be interested in more of your thoughts on this. (My Habryka sim says something like “the epistemic norms are bad, and many EA groups/individuals are more focused on playing status games. They are spending their effort saying and doing things that they believe will give them influence points, rather than trying to say true and precise things. I think society’s chances of getting through this critical period would be higher if we focused on reasoning carefully about these complex domains, making accurate arguments, and helping ourselves & others understand the situation.” Curious if this is roughly accurate or if I’m missing some important bits. Also curious if you’re able to expand on this or provide some examples of things in the category “Many EA people think X action is a good thing to do, but I disagree.”)
> I would also be interested in more of your thoughts on this
I have a memo I thought I had shared with you at one point that I wrote for EA Coordination Forum 2023. It has a bunch of wrong stuff in it, and fixing it has been too difficult, but I could share it with you privately (with disclaimers on what is wrong). Feel free to DM me if I haven’t.
> @habryka are you able to share details/examples RE the actions you’ve taken to get the EA community to shut down or disappear?
Sharing my memo at the coordination forum is one such action I have taken. I have also advocated for various people to be fired, and have urged a number of external and internal stakeholders to reconsider their relationship with EA. Most of this has been kind of illegible and flaily, with me not really knowing how to do anything in the space without ending up with a bunch of dumb collateral damage and reciprocal escalation.

I would be keen to see the memo if you’re comfortable sharing it privately.

Sure, sent a DM.
I would also be interested to see this. Also, could you clarify:
> I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Are you talking here about ‘the extended EA-Alignment ecosystem’, or do you mean you’ve aimed at getting the global poverty/animal welfare/other non-AI-related EA community to shut down or disappear?
The leadership of these is mostly shared. There are many good parts of EA, and reform would be better than shutting down, but reform seems unlikely at this point.
My world model mostly predicts that effects on technological development and the long-term future dominate, so inasmuch as the non-AI-related parts of EA are good or bad, I think what matters is their effect on that. Mostly that effect seems small, and quibbling over its sign doesn’t seem super worth it.
I do think there is often an annoying motte-and-bailey going on where people try to critique EA for its negative effects on the important things, and those critiques get redirected to “but you can’t possibly be against bednets”. Inasmuch as the bednet people are willingly participating in that (as seems likely the case for e.g. Open Phil’s reputation), that seems bad.
What do you mean the leadership is shared? That seems much less true now that Effective Ventures has started spinning off its orgs. It seems like the funding is still largely shared, but that’s a different claim.
> I have definitely taken actions within the bounds of what seems reasonable that have aimed at getting the EA community to shut down or disappear (and will probably continue to do so).
Wow, what a striking thing for you to say without further explanation.
Personally, I’m a fan of EA. Also am an EA; signed the GWWC/10% pledge and all that. So, I wonder what you mean.
I mean, it’s in the context of a discussion with Richard who knows a lot of my broader thoughts on EA stuff. I’ve written quite a lot of comments with my thoughts on EA on the EA Forum. I’ve also written a bunch more private memos I can share with people who are interested.
> we both agree it would not make sense to model OpenAI as part of the same power base
Hmm, I’m not totally sure. At various points:
- OpenAI was the most prominent group talking publicly about AI risk
- Sam Altman was the most prominent person talking publicly about large-scale AI regulation
- A bunch of safety-minded people at OpenAI were doing OpenAI’s best capabilities work (GPT-2, GPT-3)
- A bunch of safety-minded people worked on stuff that led to ChatGPT (RLHF, John Schulman’s team in general)
- Elon tried to take over, and the people who opposed that were (I’m guessing) a coalition of safety people and the rest of OpenAI
It’s really hard to step out of our own perspective here, but when I put myself in the perspective of, say, someone who doesn’t believe in AGI at all, these all seem pretty indicative of a situation where OpenAI and AI safety people were to a significant extent building a shared power base, and just couldn’t keep that power base together.