Just guessing, but maybe admitting the danger is strategically useful, because it may result in regulations that hurt potential competitors more. Regulations often impose fixed costs (such as paying a specialized team to produce paperwork on environmental impacts), which are affordable when you are already making millions.
I imagine someone might figure out a way to make AI much cheaper, maybe by sacrificing generality. For example, this probably doesn’t make sense, but would it be possible to train an LLM only on Python code (as opposed to the entire internet) and produce an AI that is only a Python code autocomplete? If it could be 1000x cheaper, you could make a startup without having to build a new power plant first. Imagine that you add some special sauce to the algorithm, so you would be able to sell your narrow AI even when more general AIs are available. For example, the AI could always internally write unit tests, which would visibly increase the correctness of the generated code (a rough sketch of what I mean is below); or it could be some combination of the ancient “expert system” approach with the new LLM approach, where the LLM trains the expert system and the expert system then provides feedback for the LLM. And once you start selling it, you get an income, which means you can expand the functionality.
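To make the unit-test idea concrete, here is a minimal sketch of what I have in mind: the model proposes several candidate solutions, each bundled with its own tests, the tests are run, and only candidates that pass are kept. The generate_candidates stub below is purely hypothetical; a real system would call whatever narrow code model the startup is selling.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

CORRECT = """\
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

if __name__ == "__main__":
    test_add()
"""

# A deliberately buggy variant, so the filter has something to reject.
BUGGY = CORRECT.replace("a + b", "a - b")

def generate_candidates(prompt: str) -> list[str]:
    """Hypothetical stand-in for the narrow code model: each candidate is a
    self-contained module with an implementation plus its own unit tests."""
    return [CORRECT, BUGGY]  # a real system would actually query the model here

def passing_candidates(prompt: str) -> list[str]:
    """Run each candidate's own tests in a subprocess; keep only those that pass."""
    survivors = []
    for source in generate_candidates(prompt):
        with tempfile.TemporaryDirectory() as tmp:
            path = Path(tmp) / "candidate.py"
            path.write_text(source)
            # A nonzero exit code (e.g. a failed assert) means the candidate is dropped.
            result = subprocess.run([sys.executable, str(path)],
                                    capture_output=True, timeout=10)
        if result.returncode == 0:
            survivors.append(source)
    return survivors

if __name__ == "__main__":
    good = passing_candidates("write an add(a, b) function")
    print(f"{len(good)} of 2 candidates passed their own tests")
```

Of course, a real product would want to sandbox the generated code and generate the tests independently of the implementation, so the model cannot simply write tests that trivially pass.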
It is better to have a consensus that such things are too dangerous to leave in the hands of startups that can’t already lobby the government.
Hey, I am happy that the CEOs admit that the dangers exist. But if they are only doing it to secure their profits, it will probably warp their interpretations of what exactly the risks are and what would be a good way to reduce them.
Just guessing, but maybe admitting the danger is strategically useful, because it may result in regulations that hurt potential competitors more. Regulations often impose fixed costs (such as paying a specialized team to produce paperwork on environmental impacts), which are affordable when you are already making millions.
My sense of things is that OpenAI, at least, appears to be lobbying against regulation more than they are lobbying for it?