What were the other options? Have you considered advising xAI privately, or redirecting xAI to be advised by someone else? And would the default be clearly worse?
As you are surely aware, one of the bigger fights about AI safety across academia, policymaking and public spaces right now is whether AI safety is a “distraction” from immediate social harms, and whether it is in fact the agenda favoured by the leading labs and technologists. (This often comes with accusations of attempted regulatory capture, worries about concentration of power, etc.)
In my view, given this situation, it seems valuable to have AI safety also represented by somewhat neutral coordination institutions without obvious conflicts of interest and large attack surfaces.
As I wrote in the OP, CAIS made some relatively bold moves to become one of the most visible “public representatives” of AI safety—including the name choice, and organizing the widely reported Statement on AI Risk (which was a success). Until now, my impression was that in claiming that namespace, you also aimed for CAIS to be such a “somewhat neutral coordination institution without obvious conflicts of interest and large attack surfaces”.
Maybe I was wrong, and you don’t aim for this coordination/representative role. But if you do, advising xAI seems a strange choice, for multiple reasons:
1. it makes you a somewhat less neutral party in the eyes of the broader world; even if the link to xAI does not actually influence your judgement or motivations, I think it is broadly sensible on priors for policymakers, politicians and the public to suspect all kinds of activism, advocacy and lobbying efforts of having side-motivations or conflicts of interest, and this link strengthens that suspicion
2. the existing public announcements do not inspire confidence in the safety mindset of xAI’s founders; it is also unclear whether you advised xAI on the “align to curiosity” plan
3. if xAI turns out to be mostly interested in safety-washing, that is more of a problem if it is aided by a more central/representative org