I think the misunderstanding came from Eliezer’s reference to a perpetual motion machine. The point was that people proposing how to build one often have complicated schemes that fail to address the central difficulty. That’s where the analogy ends: from thermodynamics, we have strong reasons to believe such a machine is not just difficult but impossible, whereas we have no corresponding theory ruling out verifiably safe AI.
Habryka’s analogy to nuclear reactor plans is similar, except that we know building a reactor is difficult but actually possible.