[I’ve talked to Zach about this project]
I think this is cool, thanks for building it! In particular, it’s great to have a single place where all these facts have been collected.
I can imagine this growing into the default reference that people use when talking about whether labs are behaving responsibly.
One reason I’m particularly excited about this: AI-x-risk-concerned people are often accused of supporting Anthropic over other labs for reasons related to social affiliation rather than substantive differences. I think these accusations have some merit: if you ask AI-x-risk-concerned people exactly how Anthropic differs from e.g. OpenAI, they often turn out to have a pretty shallow understanding of the differences. This resource makes it easier for them to develop a firmer understanding of the concrete differences.
I also hope this project helps AI-x-risk-concerned people better allocate the social pressure they apply to labs.