We don’t have “aligned AGI”. We have neither “AGI” nor an “aligned” system. What we have are sophisticated human-output simulators: they lack the generality to produce effective agentic behavior when looped, yet they also don’t follow human intentions with the reliability you’d want from a super-powerful system (which, fortunately, they aren’t).