Verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence at mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
The kind of model which postulates that “a conscious em-algorithm must not only act like its corresponding human; under the hood it must also be structured like that human” would not likely stop at “… at least be structured like that human for, like, 9 orders of magnitude down from a human’s size, to the level a human can see through an electron microscope; that’s enough, after that it doesn’t matter (much / at all)”. Wouldn’t that cutoff be arbitrary and make for an ugly model?
Given that an isomorphism requires checking that the relationship is one-to-one in both directions, i.e. human → em and em → human, I see little reason to worry about recursing to the absolute bottom.
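To make concrete what I mean by checking both directions, here’s a minimal toy sketch in Python. Everything in it (human_steps, em_steps, mapping, the two-state systems) is purely illustrative, not anything resembling a real emulation:

```python
# Toy sketch of "pointwise causal isomorphism" between two finite
# systems, each modelled as a state -> next-state transition table.

def is_causal_isomorphism(human_steps, em_steps, mapping):
    """Check that `mapping` is a bijection from human states to em
    states that also preserves the transition structure."""
    # One-to-one in both directions: every em state is hit exactly once.
    if sorted(mapping.values()) != sorted(em_steps.keys()):
        return False
    # Structure-preserving: mapping a human step must land on the
    # corresponding em step. (With a bijection, the em -> human
    # direction then follows by applying the inverse map.)
    return all(mapping[human_steps[s]] == em_steps[mapping[s]]
               for s in human_steps)

# Two-state toy systems that tick back and forth in lockstep.
human = {"awake": "asleep", "asleep": "awake"}
em    = {0: 1, 1: 0}
assert is_causal_isomorphism(human, em, {"awake": 0, "asleep": 1})
```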
Suppose it turns out that, in some sense, ems are little-endian whilst humans are big-endian, yet all other differences are negligible. Does that throw the isomorphism out the window? Of course not.
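For concreteness, here’s the endianness point as a toy Python snippet: the same value, stored with its bytes reversed, differs “under the hood” yet is identical at the interface anyone actually interacts with:

```python
# Same number, opposite byte order: the stored bytes differ, the
# behaviour at the level of the interface does not.

value = 0xDEADBEEF
little = value.to_bytes(4, "little")   # b'\xef\xbe\xad\xde'
big    = value.to_bytes(4, "big")      # b'\xde\xad\xbe\xef'

assert little != big                   # differs under the hood
assert int.from_bytes(little, "little") == \
       int.from_bytes(big, "big") == value  # identical at the interface
```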