Cheers, I did see that and wondered whether to still post the comment. But I do think that a gigantic company owning a large chunk of Anthropic, and presumably having a lot of leverage over it, is a new form of pressure, so it would be reassuring to see some discussion of how that relationship will be managed.
Didn’t Google previously own a large share? So now there are two gigantic companies each owning a large share, which makes me think each has much less leverage, since Anthropic could seek further funding from the other.
Yeah, I agree that that’s a reasonable concern, but I’m not sure what they could possibly discuss about it publicly. If the public, legible, legal structure hasn’t changed, and the concern is that the implicit dynamics might have shifted in some illegible way, what could they say publicly that would address that? Any sort of “Trust us, we’re super good at managing illegible implicit power dynamics” would presumably carry no information, no?
That it is so difficult for Anthropic to reassure people stems from the contrast between Anthropic’s responsibility-focused mission statements and the hard reality of them receiving billions of dollars of profit-motivated investment.
It is rational to draw conclusions by weighting a company’s actions more heavily than its PR.
Yeah, I’m very on board with this. I think people tend to put far too much weight on nice-sounding PR rather than focusing on concrete evidence, past actions, hard commitments, etc. If you focus on nice-sounding PR, then GenericEvilCo can very cheaply gain your favor by manufacturing it for you, whereas actually making concrete commitments is much more expensive.
So yes, I think your opinion of Anthropic should mostly be priors + hard evidence. If you learned that there was an AI lab that had taken a $4B investment from Amazon and had also committed to the LTBT governance structure and a Responsible Scaling Policy, what would you then think about that company, updating on no other evidence? Ditto for OpenAI and Google DeepMind: I think you should judge each of them in approximately the same way. You’ll end up relying on your priors a lot if you do this, but you’ll also be able to operate much more safely in an epistemic environment where some of the major players might be trying to game your approval.