Just guessing, but maybe admitting the danger is strategically useful, because it may result in regulations that hurt potential competitors more. Regulations often impose fixed costs (such as paying a specialized team to produce paperwork on environmental impacts), which are manageable when you are already making millions.
My sense of things is that OpenAI, at least, appears to be lobbying against regulation more so than lobbying for it?