This post is cute, but there are several flaws and omissions that can lead to compounding errors that propagate through typical interpretations.
Any two AI designs might be less similar to each other than you are to a petunia.
Cute. The general form of this statement:
(Any two X might be less similar to each other than you are to a petunia) is trivially true if our basis of comparison is solely genetic similarity.
This leads to the first big problem with this post: The idea that minds are determined by DNA. This idea only makes sense if one is thinking of a mind as a sort of potential space.
Clone Einstein and raise him with wolves and you get a sort of smart wolf mind inhabiting a human body. Minds are memetic. Petunias don’t have minds. I am my mind.
The second issue (more of a missing idea, really) is that of functional/algorithmic equivalence. If you take a human brain, scan it, and sufficiently simulate out the key circuits, you get a functional equivalent of the original mind encoded in that brain. The substrate doesn’t matter, and neither do the exact algorithms, since any circuit can be replaced by any algorithm that preserves its input/output relationships.
Functional equivalence is another way of arriving at the “minds are memetic” conclusion.
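The substitution argument above can be illustrated with a toy sketch: two components with completely different internals are interchangeable so long as their input/output behavior agrees. All names here are hypothetical, invented for this illustration.

```python
# Functional equivalence, in miniature: a lookup-table "circuit" and a
# bitwise "algorithm" that compute the same input/output mapping.

def adder_lookup(a: int, b: int) -> int:
    """'Circuit' style: a precomputed lookup table over a small input range."""
    table = {(x, y): x + y for x in range(4) for y in range(4)}
    return table[(a, b)]

def adder_bitwise(a: int, b: int) -> int:
    """'Algorithm' style: ripple-carry addition using only bitwise operations."""
    while b:
        carry = a & b  # positions where both inputs have a 1 bit
        a = a ^ b      # sum without carries
        b = carry << 1 # propagate the carries
    return a

# Every input/output pair agrees, so either implementation can replace the
# other inside a larger system without that system noticing the difference.
assert all(adder_lookup(a, b) == adder_bitwise(a, b)
           for a in range(4) for b in range(4))
```

The same point, scaled up from a 4x4 adder to the circuits of a brain, is what the functional-equivalence argument relies on: only the preserved input/output relationships are visible from outside.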
As a result of this, the region of mindspace which we can likely first access with AGI designs is some small envelope around current human mindspace.
The map of mindspace here may be more or less correct, but what is anything but clear is how distinct near-term de novo AGI actually is from, say, human uploads, given: functional equivalence, the Bayesian brain, no free lunch in optimization, and the memetic nature of mind.
For example, if the most viable route to AGI turns out to be brain-like designs, then it is silly not to anthropomorphize AGI.
This leads to the first big problem with this post: The idea that minds are determined by DNA. This idea only makes sense if one is thinking of a mind as a sort of potential space.
Clone Einstein and raise him with wolves and you get a sort of smart wolf mind inhabiting a human body. Minds are memetic. Petunias don’t have minds. I am my mind.
Reversing your analogy, if you clone a wolf and raise it with Einsteins, you do not get another Einstein. That is because hardware (DNA) matters and wolves do not have the required brain hardware to instantiate Einstein’s mind.
Minds are determined to a large extent (though not fully, as you rightly point out) by the specific hardware or hardware architecture that instantiates them. Eliezer stresses this point in his post Detached Lever Fallacy, which comes earlier in the sequence: learning a human language and culture requires specialized hardware that has been evolved (optimized) specifically for the task. The same goes for deciding to grow fur when it’s cold (you need sensors) or for parsing what is seen (eyes + visual cortex). The right environment is not enough; you need a mind that already works to even be able to learn from the environment.
Minds are memetic
A quick search tells me this is not true for reptiles, amphibians, fish, or insects.
Primates (including humans) and birds have specific hardware dedicated to this task: mirror neurons.
The substrate doesn’t matter
In practice it does matter as soon as you try to build such a mind, because runtime complexity, energy consumption, and so on all depend on the substrate. For example, you can emulate a quantum computer on a classical computer, but only with an exponential slow-down.
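The exponential cost is easy to make concrete: a classical simulation of an n-qubit quantum state must track 2**n complex amplitudes, so memory (and time per gate) grows exponentially in n. A minimal sketch, assuming 16 bytes per amplitude (a double-precision complex number, e.g. NumPy's complex128):

```python
# Why substrate matters in practice: memory cost of classically
# simulating a full n-qubit state vector.

BYTES_PER_AMPLITUDE = 16  # one double-precision complex amplitude

def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to store all 2**n amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (20, 30, 40):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:.2f} GiB")
```

At 30 qubits the state vector already needs 16 GiB; each additional qubit doubles the requirement, which is the exponential slow-down in a nutshell.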