if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one’s own.
Nitpicking: there are reasons to have multiple projects. For example, it’s convenient to be in the same geographic location, but not everyone can relocate to any given place.
Sure—and MIRI/FHI are a decent complement to each other, the latter providing a respectable academic face to weird ideas.
Generally though, it’s far more productive to have ten top researchers in the same org rather than having five orgs each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that.
A “secondary concern” in the sense that we should work remotely? Or in the sense that everyone should relocate? Because the latter is unrealistic: people have families, friends, and communities; not everyone can uproot themselves.
A secondary concern in that it’s better to have one org that has some people in different locations, but everyone communicating heavily, than to have two separate organizations.
I think this is much more complex than you’re assuming. As a sketch of why: costs of communication scale poorly, and the benefits of being small and coordinating centrally often beat the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)
This might be the right approach, but notice that no existing AI risk org does that. They all require physical presence.
Anthropic does not require consistent physical presence.
AFAICT, Anthropic is not an existential AI safety org per se; they’re just doing a very particular type of research which might help with existential safety. But also, why do you think they don’t require physical presence?
If you’re asking why I believe that they don’t require presence, I’ve been interviewing with them and that’s my understanding from talking with them. The first line of copy on their website is:
“Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.”
Sounds pretty much like a safety org to me.
Are you talking about “you can work from home and come to the office occasionally”, or “you can live on a different continent”?
I found no mention of existential risk on their web page. They seem to be a commercial company, aiming at short-to-mid-term applications. I doubt they have any intention to do, e.g., purely theoretical research, especially if it has no applications to modern systems. So, what they do can still be meritorious and relevant to reducing existential risk. But the context of this discussion is: can we replace all AI safety orgs with just one org? And Anthropic is too specialized to serve such a role.
I believe Anthropic doesn’t expect its employees to be in the office every day, but I think this is more pandemic-related than it is a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.