I think the core useful point here / TL;DR is: aligning superintelligent AI to “normal” human standards still isn’t enough to prevent catastrophe, because a superintelligent AI with human-ish goals would have the same problems as a too-powerful person or small group, while being far more powerful and dangerous. Hence the need for stronger measures than we apply to humans, e.g. provable security or CEV.