I don’t really understand the concern here. From Google’s perspective, it was a near certainty that the NYT, the media, or some people on Twitter would complain about Gemini regardless of what the model output. Their options were for the complaints to be about either i) the model apparently offering advice on ‘dangerous’ topics or perpetuating biases, both of which seem conducive to calls for more regulation, or ii) some outrage/counter-outrage over ‘wokeness’, which may drive some clicks but is unlikely to make any serious policymaker want to regulate, and which, once quietly fixed a couple of weeks later, will have caused basically no actual economic or other harms or costs. Further, with ii) you get some evidence/talking points for all the many times in the coming years when people push for regulation forcing AI models to censor certain outputs, or for anti-bias/discrimination proposals. It seems to me that ii) is pretty strongly preferable, to the point that one would want to all but guarantee the model errs toward ii) rather than i) when first releasing it.