LLMs as currently trained pose ~0 risk of catastrophic instrumental convergence, even if scaled up with 1000x more compute