Every AI output effectuates outcomes in the world. If a powerful unaligned mind is hooked up to outputs that can start causal chains with dangerous effects, it doesn't matter whether the comments on the code say "intellectual problems" or not.
This is true, but taking actions in the world requires consequentialism, i.e., a facility for overcoming obstacles to achieve a goal. It remains unclear (to me) whether those faculties are required for "intellectual tasks" like solving some parts of alignment or designing new physical mechanisms to a spec.