Disclaimer: my formal background here consists only of an undergraduate intro to neuroscience course taken to fulfill a distribution requirement.
I’m wondering if this is actually a serious problem. Assuming we are trying to perform a very low-level emulation (say, electro-chemical interactions in and amongst neurons, or lower), I’d guess that one of two things would happen.
0) The emulation isn’t good enough, meaning every interaction between neurons has a small but significant error in it. The errors would compound very, very quickly, and the emulated mind’s thought process would be easily distinguishable from a human’s within minutes if not seconds. In the long term, if the emulation is even stable at all, its behavior would fall very much into the trough of the mental uncanny valley, or else be completely inhuman. (I don’t know if anyone has talked about a mental uncanny valley before, but it seems like it would probably exist.)
1) The emulation is good enough, so the local emulation errors are suppressed by negative feedback instead of accumulating. In this case, the emulation would be effectively indistinguishable from the original brain-implemented mind, both from the outside and from the inside.
My reason for rejecting borderline cases as unlikely is basically that I think an “uncanny valley” effect would occur whenever local errors accumulate into larger and larger discrepancies, and that for a sufficiently high fidelity emulation, errors would be suppressed by negative feedback. (I know this isn’t a very concrete argument, but my intuition strongly suggests that the brain already relies on negative feedback to keep thought processes relatively stable.) The true borderline cases would be ones in which the errors accumulate so slowly that it would take a long time before a behavioral discrepancy is noticeable, but once it is noticeable, that would be the end of it, in that no one could take seriously the idea that the emulation is the same person (at least, in the sense of personal identity we’re currently used to). But even this might not be possible, if the negative feedback effect is strong.
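To make the intuition concrete, here’s a toy sketch (entirely my own illustration, not a neuroscience model; the scalar “state,” the gains, and the noise scale are all made up) of the split between cases 0 and 1: the same tiny per-step error either gets continually squeezed back down by negative feedback, or compounds geometrically.

```python
import random

def drift(feedback_gain, steps=50, noise=1e-6):
    """Distance between a 'true' trajectory and a noisy emulation of it.

    Both trajectories share the fixed point x = 1; the emulation picks up
    a small Gaussian error at every step.
    """
    true_x, emul_x = 1.0, 1.0
    for _ in range(steps):
        true_x = feedback_gain * true_x + (1 - feedback_gain)
        emul_x = feedback_gain * emul_x + (1 - feedback_gain) + random.gauss(0, noise)
    return abs(true_x - emul_x)

print(drift(feedback_gain=0.9))  # case 1: divergence stays around the noise floor
print(drift(feedback_gain=1.1))  # case 0: each step amplifies the accumulated error
```

The point is just the qualitative split: with the gain below 1, per-step errors are damped faster than they arrive; with it above 1, the exact same noise term blows up, and there isn’t much room in between.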
I would love to hear from someone who knows better.
I wonder if a lossy emulation might feel and act like a human with slightly altered brain chemistry. We have lots of examples of what it’s like to have your neurons operating abnormally, due to emotion, tiredness, alcohol, other chemicals, and so on. I’m not sure “uncanny valley” is the best term to capture that.
But I think those are examples of neurons operating normally, not abnormally. Even in the case of mind-altering drugs, the drugs mostly affect the brain on its own terms, by shifting various neurotransmitter levels. A low-level emulation glitch, on the other hand, could distort the very rules by which information is processed in the brain.
Note that I am distinguishing “design shortcomings” from “bugs” here.
I don’t quite see how you’d get “the overall rules” wrong. I figure standard software engineering is all that’s required to make sure the low-level pieces are put together properly. Possibly this is just a failure of imagination on my part, but I can’t think of a defect more pervasive than “we got the neuron/axon model wrong.” And if you’re emulating at the neuron level or below, I’d figure an emulation shortcoming would look exactly like altered neural behavior.
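Here’s a minimal sketch of what I mean (a textbook leaky integrate-and-fire neuron under Euler integration; all parameter values are illustrative, not drawn from real data): getting one low-level constant slightly wrong shows up as a shifted firing rate, i.e. as altered neural behavior, not as some exotic new failure mode.

```python
# Textbook leaky integrate-and-fire neuron, Euler-integrated.
# Parameter values are illustrative only.

def count_spikes(tau_m, current=60.0, v_thresh=1.0, dt=0.001, t_total=1.0):
    """Return the number of spikes fired over t_total seconds of simulation."""
    v, spikes = 0.0, 0
    for _ in range(int(t_total / dt)):
        v += dt * (-v / tau_m + current)  # leak toward rest, driven by input current
        if v >= v_thresh:
            v, spikes = 0.0, spikes + 1   # fire and reset
    return spikes

print(count_spikes(tau_m=0.020))  # "true" membrane time constant: 20 ms
print(count_spikes(tau_m=0.019))  # emulation with a 5% error in that constant
```

The mis-parameterized version still behaves like a neuron, just a slightly different one, which is the sense in which a neuron-level emulation defect looks like altered brain chemistry rather than a violation of “the overall rules.”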