I collect AI progress monitoring resources here.
I am currently focusing on pivoting, at least part-time, to AI safety. My blog page lists my priorities.
I value kindness and empathy.
I am a holistic polymath with a lifelong curiosity about science and knowledge.
Unlike many people, I have a short inferential distance across many topics. This makes me take certain things for granted that merit explanation, and I often struggle to calibrate for this, even with smart people.
Currently working at a big corporation, personally supporting 11 countries as a TR specialist and tackling senior stakeholder management and coordination challenges across three continents. Collecting skills.
Peak theory interests: Thermodynamics & information theory, ontology, epistemology, ethics, all biology, negotiation, strategic problem solving, social dynamics.
M.Sc. in molecular and cellular biology
If you believe they meant that they have a path to reliably make safe AI, then the point that Anthropic is making this claim in bad faith holds true. Importantly, it holds even if you don't consider existential risk, or don't believe Anthropic is on any direct trajectory to ASI right now.
I read it to mean that they believe they have a reliable way to accelerate risk mitigation. That still doesn't matter if the tail risk increases beyond a critical threshold, but it is a more honest reading.
It's a good callout, actually; it says something about how they reason.