This translation tool would also require rationalists and such to make arguments of the form “I think supporting Anthropic (by, e.g., going to work there or giving it funding) is a good thing to do because they sort of have a feeling right now that it would be good not to push the AI frontier”, rather than of the form “… because they’re committed to not pushing the frontier”.
Which are arguments one could make! But they are pretty different arguments, and I think people would behave differently if they were the only arguments in favour of supporting a new scaling lab.
I think that’s how people should generally react in the absence of harder commitments and accountability measures.