Read a bit into it, with disclaimers: "I'm in the bay" / "my sphere is especially aware of Anthropic stuff," and "OpenAI and Anthropic do more of something like talking publicly or making commitments. This is good, but it entails that they have more integrity incidents. For example, I don't know of any xAI integrity incidents (outside of Musk personal stuff), since xAI never talks about safety stuff — but you shouldn't infer from that silence that xAI is virtuous or trustworthy."
Originally I wanted this page to have higher-level analysis/evaluation/comparison. I gave up on that because I have little confidence in my high-level judgments on the topic, especially the high-level judgments I could legibly justify. It's impossible to summarize the page well, and it's easy to overindex on the length of a section. But yeah, yay DeepMind for mostly avoiding being caught lying, breaking promises, or being shady (as far as I'm aware), to some small but positive degree.