I think it’s totally fine to think that Anthropic is a net positive. Personally, right now, I broadly also think it’s a net positive. I have friends on both sides of this.
I’d flag, though, that your previous comment suggested more to me than “this is just you giving your probability”:
> Give me your model, with numbers, that shows supporting Anthropic to be a bad bet, or admit you are confused and that you don’t actually have good advice to give anyone.
I feel like there are much nicer ways to phrase that last bit. I suspect that this is much of the reason you got disagreement points.
Fair enough. I’m frustrated and worried, and should have phrased that more neutrally. I wanted to make stronger arguments for my point, and then partway through my comment realized I didn’t feel good about sharing my thoughts.
I think the best I can do is gesture at strategy games that involve private information and strategic deception, like Diplomacy and Stratego and MtG and Poker, and say that in situations with high stakes and politics and hidden information, perhaps don’t take all moves made by all players literally at face value. Think a bit to yourself about what each player might have in their hands, what their incentives look like, what their private goals might be. Maybe someone whose mind is clearer on this could help lay out a set of alternative hypotheses which all fit the available public data?
The private data is, pretty consistently, Anthropic being very similar to OpenAI where it matters most, and failing to mention, in private policy-related settings, its publicly stated belief about the risk that smarter-than-human AI will kill everyone.
I don’t feel free to share my model, unfortunately. Hopefully someone else will chime in. I agree with your point and that this is a good question!
I am not trying to say I am certain that Anthropic is going to be net positive, just that that’s the outcome I assign the higher probability.