This post is cute, but it has several flaws and omissions that can lead to compounding errors in typical interpretations.
> Any two AI designs might be less similar to each other than you are to a petunia.
Cute. The general form of this statement, "any two X might be less similar to each other than you are to a petunia," is trivially true if our basis of comparison is solely genetic similarity.
This leads to the first big problem with this post: the idea that minds are determined by DNA. That idea only makes sense if one thinks of a mind as a sort of potential space rather than a realized entity.
Clone Einstein and raise him with wolves and you get a sort of smart wolf mind inhabiting a human body. Minds are memetic. Petunias don’t have minds. I am my mind.
The second issue (more of a missing idea, really) is functional/algorithmic equivalence. If you take a human brain, scan it, and simulate the key circuits with sufficient fidelity, you get a functional equivalent of the original mind encoded in that brain. The substrate doesn't matter, and neither do the exact algorithms, since any circuit can be replaced by any algorithm that preserves its input/output relationships.
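To make that concrete, here is a minimal sketch (my illustration, nothing from the post): two implementations of the same input/output relationship that share no internal structure, yet are indistinguishable from the outside.

```python
# Two deliberately different "substrates" computing the same function.

def xor_via_gates(a: bool, b: bool) -> bool:
    # Circuit-style implementation built from logical operations.
    return (a or b) and not (a and b)

# Lookup-table implementation: no logic at all, just stored I/O pairs.
XOR_TABLE = {
    (False, False): False,
    (False, True): True,
    (True, False): True,
    (True, True): False,
}

def xor_via_table(a: bool, b: bool) -> bool:
    return XOR_TABLE[(a, b)]

# Functionally equivalent: no external test can tell them apart.
for a in (False, True):
    for b in (False, True):
        assert xor_via_gates(a, b) == xor_via_table(a, b)
```

Scale the same point up from a two-bit gate to a whole connectome and you get the substrate-independence claim above.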
Functional equivalence is another way of arriving at the “minds are memetic” conclusion.
As a result, the region of mindspace we can likely first access with AGI designs is some small envelope around current human mindspace.
The map of mindspace here may be more or less correct, but what is anything but clear is how distinct near-term de novo AGI actually is from, say, human uploads, given functional equivalence, the Bayesian brain hypothesis, the no-free-lunch theorems in optimization, and the memetic nature of minds.
For example, if the most viable route to AGI turns out to be brain-like designs, then it is silly not to anthropomorphize AGI.