What if it had only been verified that the em’s overall behavior perfectly corresponds to that of its biological template (i.e. without the subparts corresponding, down to your chosen ground level)?
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)
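As a rough illustration of that verification step (purely a sketch; `reference_model`, `em_model`, and the tolerance are hypothetical stand-ins, not anything proposed in this thread), here is what checking a small, repeatable component for pointwise agreement might look like before tiling it:

```python
import random

TOLERANCE = 1e-9  # acceptable numerical divergence per step (illustrative)


def reference_model(state, stimulus):
    """Placeholder for the biophysically detailed simulation of one small,
    repeatable component (e.g. a single synapse model)."""
    return state * 0.9 + stimulus  # stand-in dynamics


def em_model(state, stimulus):
    """Placeholder for the candidate em implementation of the same component."""
    return state * 0.9 + stimulus  # should match the reference within TOLERANCE


def verify_component(trials=10_000):
    """Check pointwise agreement of the two implementations on sampled
    (state, stimulus) pairs. Passing says nothing about the whole brain;
    it only licenses tiling the verified component."""
    for _ in range(trials):
        state = random.uniform(-1.0, 1.0)
        stimulus = random.uniform(-1.0, 1.0)
        if abs(reference_model(state, stimulus) - em_model(state, stimulus)) > TOLERANCE:
            return False
    return True


if __name__ == "__main__":
    print("component verified:", verify_component())
```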
What if e.g. groups of neurons could be perfectly (and more efficiently) simulated, using an algorithm which doesn’t need to retain a “synapse” construct?
I would believe in this after someone had shown extremely high-fidelity simulation of synaptic compartments, then demonstrated the (computational) proposition that their high-level sim was equivalent.
Do you feel that some of the biological structural features on some level of granularity need to have clearly identifiable point-to-point counterparts in the algorithm?
No, but it’s sufficient to establish causal isomorphism. At the most extreme level, if you can simulate a synapse all the way down to the level of quantum fields, then you are very confident in your ability to simulate it, because you have a laws-of-physics-level understanding of the quantum fields and of the simulation of the quantum fields.
Since in any case “verified surface correspondence” is a given (i.e. the different em implementations can’t be told apart from a black-box view) …
Only in terms of very high-level abstractions being reproduced, since literal pointwise behavior is unlikely to be reproducible given thermal noise and quantum uncertainty. But it remains true that I expect any disturbance of the referent of “consciousness” to disturb the resulting agent’s tendency to write philosophy papers about “consciousness”. Note the high-level behavioral abstraction.
The combination of verified pointwise causal isomorphism of repeatable small parts with surface behavioral equivalence on mundane levels of abstraction is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’. There are no worlds of reasonable probability in which both tests are simultaneously and accidentally fooled in the process of constructing a technology honestly meant to produce high-fidelity uploads.
The combination of verified pointwise causal isomorphism of repeatable small parts with surface behavioral equivalence on mundane levels of abstraction is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
The kind of model which postulates that “a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human” would not likely stop at “… at least be structured like that human for, like, 9 orders of magnitude down from a human’s size, to the level that a human can see through an electron microscope; that’s enough, after that it doesn’t matter (much / at all)”. Wouldn’t that be kind of arbitrary and make for an ugly model?
Instead, if structural correspondence allowed for significant additional confidence that the em’s professions of being conscious were true, wouldn’t such a model just not stop, demanding “turtles all the way down”?
I guess I’m not sure what some structural fidelity can contribute, compared to “just” overall functional equivalence (and I find those models too contrived which place consciousness somewhere beyond functional equivalence, but still in the upper echelons of the substructures, conveniently not too far from the surface level).
IOW, the big (viable) alternative to functional equivalence, namely structural equivalence (which includes functional equivalence), would likely not stop just a few levels down.
The combination of verified pointwise causal isomorphism of repeatable small parts with surface behavioral equivalence on mundane levels of abstraction is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
Key word: “Sufficient”. I did not say, “necessary”.
This brings up something that has been on my mind for a long time: what are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I’d like to capture the notion of being able to contain a consciousness. So what I’m asking is: what would we have to prove in order to say “program A contains a consciousness → program B contains a consciousness”? “Pointwise” isomorphism, if you’re saying what I think, seems too strict. On the other hand, allowing any invertible function to be a ___morphism doesn’t seem strict enough: for one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers. Restricting our functions by, say, resource complexity also seems to lead to both similar and unrelated issues... Has this been discussed in any other threads?
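To make that worry concrete (a toy sketch only; the step function and decoder below are invented for illustration, not anything from the thread): the “counter” program is related to the original reversible computation by an invertible map, yet the inverse map does all the computational work by replaying the original, which is why bare invertibility looks too permissive:

```python
def reversible_step(state):
    """A toy reversible update on a pair of integers: (a, b) -> (b, a + b)."""
    a, b = state
    return (b, a + b)


def run_original(initial, n):
    """The 'real' computation: n applications of the reversible step."""
    state = initial
    for _ in range(n):
        state = reversible_step(state)
    return state


def run_counter(initial, n):
    """The 'cheap' program: it just stores the initial state and ticks off n."""
    return (initial, n)


def decode(counter_state):
    """The invertible correspondence -- but all the real work happens here,
    by replaying the original computation."""
    initial, n = counter_state
    return run_original(initial, n)


if __name__ == "__main__":
    init = (0, 1)
    for n in range(5):
        assert decode(run_counter(init, n)) == run_original(init, n)
    print("counter program is 'isomorphic' under this decoder -- which is the problem")
```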
The combination of verified pointwise causal isomorphism of repeatable small parts with surface behavioral equivalence on mundane levels of abstraction is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
The kind of model which postulates that “a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human” would not likely stop at “… at least be structured like that human for, like, 9 orders of magnitude down from a human’s size, to the level that a human can see through an electron microscope; that’s enough, after that it doesn’t matter (much / at all)”. Wouldn’t that be kind of arbitrary and make for an ugly model?
Given that an isomorphism requires checking that the relationship is one-to-one in both directions (i.e. human → em and em → human), I see little reason to worry about recursing to the absolute bottom.
Suppose it turns out that, in some sense, ems are little-endian whilst humans are big-endian, yet all other differences are negligible. Does that throw the isomorphism out the window? Of course not.
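A miniature, hedged illustration of that point (the 32-bit counter and the byte-reversal map are invented for this example, not anything from the thread): the same dynamics run over big-endian and little-endian encodings of the state, and byte reversal is a bijection that commutes with the dynamics in both directions, so the representational difference leaves the isomorphism intact:

```python
import struct


def step(value):
    """Toy dynamics on the represented state (a 32-bit counter)."""
    return (value + 1) % (1 << 32)


def run_big_endian(encoded):
    """'Human' implementation: state stored as big-endian bytes."""
    (value,) = struct.unpack(">I", encoded)
    return struct.pack(">I", step(value))


def run_little_endian(encoded):
    """'Em' implementation: the same dynamics, state stored little-endian."""
    (value,) = struct.unpack("<I", encoded)
    return struct.pack("<I", step(value))


def to_little(big):
    """The bijection between representations: byte reversal (its own inverse)."""
    return big[::-1]


if __name__ == "__main__":
    state_big = struct.pack(">I", 41)
    # The map commutes with the dynamics in both directions:
    assert to_little(run_big_endian(state_big)) == run_little_endian(to_little(state_big))
    assert to_little(run_little_endian(to_little(state_big))) == run_big_endian(state_big)
    print("endianness difference does not break the isomorphism")
```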
Instead, if structural correspondence allowed for significant additional confidence that the em’s professions of being conscious were true, wouldn’t such a model just not stop, demanding “turtles all the way down”?
IOW, why assign “top” probability to the synaptic level when there are further levels?