While it's a fun quip to make, it bears little relation to physical reality. There are technologically possible ways of aligning AIs that have no analogy for aligning humans, and there are approaches to aligning humans that are not available for aligning AIs. Solving one problem thus doesn't necessarily have anything to do with solving the other: you could be able to align an AGI without being able to align a flesh-and-blood human, and you could develop the ability to robustly align humans yet end up not one step closer to aligning an AGI.
I mean, I can't say this whole debacle doesn't have a funny allegorical meaning in the context of AI Alignment and OpenAI's chances of achieving it. But it's a funny allegory, not an exact correspondence.