If you’re asking why I believe that they don’t require presence, I’ve been interviewing with them and that’s my understanding from talking with them. The first line of copy on their website is
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.
Sounds pretty much like a safety org to me.

Are you talking about “you can work from home and come to the office occasionally”, or “you can live on a different continent”?
I found no mention of existential risk on their website. They seem to be a commercial company aiming at short-to-mid-term applications, and I doubt they intend to do, e.g., purely theoretical research, especially research with no applications to modern systems. What they do can still be meritorious and relevant to reducing existential risk. But the context of this discussion is whether we can replace all AI safety orgs with just one org, and Anthropic is too specialized to serve such a role.