OpenAI is not solely focused on alignment: it has pledged only 20% of its compute to superalignment research. What about the remaining 80%? That share is rarely discussed, yet it arguably represents the larger budget, devoted to improving capabilities. Why not dedicate 100% of resources to building an aligned AGI? Such an approach would likely yield the best economic returns as well: people are more likely to use an AI they can trust, and governments would be more inclined to promote its adoption.