I wonder if this is due to:
funding—the company needs money to perform research on safety alignment (existential risks, assuming they do want to do this), and to get there they need to publish models so that they can 1) make profits from them, and 2) attract more funding. A quick look at the funding sources shows Amazon, Google, some other ventures, and some other tech companies
empirical approach—they want to take an empirical approach to AI safety, which would require some reasonably capable models
But both of the points above are my own speculation