This might be the right approach, but notice that no existing AI risk org does that. They all require physical presence.

Anthropic does not require consistent physical presence.

AFAICT, Anthropic is not an existential AI safety org per se; they're doing a very particular type of research which might help with existential safety. But also, why do you think they don't require physical presence?

If you're asking why I believe that they don't require presence, I've been interviewing with them, and that's my understanding from talking with them. The first line of copy on their website is:

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Sounds pretty much like a safety org to me.

Are you talking about "you can work from home and come to the office occasionally", or "you can live on a different continent"?

I found no mention of existential risk on their website. They seem to be a commercial company aiming at short-to-mid-term applications, and I doubt they have any intention to do e.g. purely theoretical research, especially research with no applications to modern systems. What they do can still be meritorious and relevant to reducing existential risk. But the context of this discussion is whether we can replace all AI safety orgs with just one org, and Anthropic is too specialized to serve such a role.

I believe Anthropic doesn't expect its employees to be in the office every day, but I think this is more pandemic-related than a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.