My general impression, based on numerous interactions, is that many EA orgs are specifically looking to hire and work with other EAs, many longtermist orgs are specifically looking to work with longtermists, and many AI safety orgs are specifically looking to hire people who are passionate about existential risks from AI. I get this to a certain extent, but I strongly suspect that it may ultimately be very counterproductive if we are truly playing to win.
And it’s not just in terms of who gets hired. Maybe I’m wrong about this, but my impression is that many EA funding orgs are primarily looking to fund other EA orgs. I suspect that a new and inexperienced EA org may have an easier time getting funded to work on a given project than a highly experienced non-EA org applying for funding to pursue the same idea. (Again, it’s entirely possible I’m wrong about that, and apologies to EA funding orgs if I am mischaracterizing how things work. On the other hand, if I am wrong about this, then that is an indication that EA funding orgs might need to do a better job communicating how their funding decisions are made, because I am virtually positive that many other people have gotten this impression as well.)
One reason this selectivity makes at least some sense in areas like AI safety is infohazard concerns: if we involve people who are not focused on the long term, they might use our money to do capability enhancement research instead of pursuing longtermist goals. Again, I get this to a certain extent, but I think that if we are really playing to win, then we can probably use our collective ingenuity to find ways around this.
Right now this focus on looking only for other EAs appears (to me, at least) to be causing an enormous bottleneck for achieving the goals we are ultimately aiming for.
I’m shocked, shocked, to find gambling in this establishment.
There is a precedent for doing secret work of high strategic importance, which is every intelligence agency and defense contractor ever.
The CIA’s mission is to protect the Constitution of the United States. In practice, the CIA constantly violates the Constitution of the United States.
Defense contractors constantly push for global politics to move in directions that increase military budgets, and that often involves making the world a less safe place.
Neither of those fields is well-aligned.
Yeah, the CIA isn’t aligned. Defense contractors are quite aligned with the interests of their shareholders.
The equivalent of defense contractors being aligned with making money for their shareholders would be an NGO like MIRI being aligned with increasing its budget through donations. A lot of NGOs are aligned in that way instead of being aligned with their mission.
I’m reasonably certain that the people currently in MIRI genuinely want to prevent the rise of unfriendly AI.
Yes, but they don’t know how to hire other people to do that. In particular, they don’t know how to get people who come mainly because they are paid a lot of money to care about things besides that money.
This appears to be a problem with the whole organization, rather than a secrecy problem per se. The push from defense contractors in particular is highly public. It looks like there are three problems to solve here:
Alignment of the org.
Effectiveness of the org.
Secrecy of the org.
There is clearly tension between these three, but just because they aren’t fully independent doesn’t mean they are mutually exclusive.