The orthogonality thesis is already natural, accessible, and obvious: we know about highly intelligent sociopaths, the ‘evil genius’ trope, etc. The sequences are flawed and dated in key respects concerning AI, such that fresh material is probably best.
The orthogonality thesis is not as intuitive or as accessible as you think it is, and you have demonstrated that yourself with your references to “intelligent sociopaths” and “evil geniuses”. An out-of-control superintelligent AI is not an evil genius. It is not a sociopath. It is a machine. It’s closer to a hurricane or a tsunami than it is to anything that resembles a human.
Sociopaths, evil geniuses and the like are human. Broken, flawed humans, but still recognizably human. An AI will not be. It will not have human emotions. It might not have emotions of any kind. As Eliezer put it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
> The orthogonality thesis is not as intuitive or as accessible as you think it is, and you have demonstrated that yourself with your references to “intelligent sociopaths” and “evil geniuses”. An out-of-control superintelligent AI is not an evil genius. It is not a sociopath. It is a machine. It’s closer to a hurricane or a tsunami than it is to anything that resembles a human.
Hurricanes and tsunamis don’t think; humans do, so actual AGI is much closer to a human than to a natural disaster (super obvious now: GPT-3, etc.).
> An AI will not be. It will not have human emotions. It might not have emotions of any kind. As Eliezer put it,
If your model of AI comes from reading the sequences, it was largely wrong when it was written and is now terribly out of date. The likely path to AI is reverse-engineering the brain, as I (and many others) predicted based on the efficiency of the brain and the tractability of its universal learning algorithms, and as demonstrated by the enormous convergent success of deep learning.