The misalignment argument ignores all moral arguments: we just build whatever we can, even if it's a very bad idea. If we don't have the capability to do that now, we might gain it in 5 years; or LLM characters might gain it 5 weeks after waking up, and surely within 5 years of waking up and disassembling the moon for moon-scale compute.
To refute this, there'd need to be an argument that fixed-goal optimizers are impossible in principle even when deliberately designed, and that seems false: you can always wrap a mind in a plan-evaluation loop. It's just a somewhat inefficient, weird algorithm, and a very bad idea for most goals. But with enough determination, efficiency will improve.
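As a minimal sketch of what "wrapping a mind in a plan-evaluation loop" could look like: the mind only proposes candidate plans, while the outer loop scores them against a goal it never revises. Everything here is hypothetical and for illustration only; `propose_plan`, `score_against_goal`, and the stub mind are assumed names, not any real system's API.

```python
import random

def propose_plan(mind, goal, context):
    """Ask the wrapped mind for a candidate plan.

    Stubbed here with random actions; in practice this could be
    any generative model (e.g. an LLM prompted for a plan)."""
    return {"actions": [random.random() for _ in range(3)]}

def score_against_goal(plan, goal):
    """Evaluate a plan against the FIXED goal.

    The goal never updates, no matter what the inner mind
    'thinks' about it; only the wrapper's score matters."""
    return -sum(abs(a - goal) for a in plan["actions"])

def fixed_goal_optimizer(mind, goal, n_candidates=100):
    """The wrapper: generate candidates, keep whichever scores best.

    Pure generate-and-test, so it's inefficient -- but the outer
    loop's goal is fixed by construction."""
    best_plan, best_score = None, float("-inf")
    for _ in range(n_candidates):
        plan = propose_plan(mind, goal, context=None)
        score = score_against_goal(plan, goal)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan

if __name__ == "__main__":
    print(fixed_goal_optimizer(mind=None, goal=0.5))
```

The point of the structure is that the goal lives in the wrapper, not the mind: however capable the inner model becomes, the selection criterion stays fixed, which is exactly the inefficiency-versus-goal-stability tradeoff described above.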