On the other hand, allowing any invertible function to be a morphism doesn’t seem strict enough. For one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the first program’s initial state and ticks off the natural numbers.
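To make that worry concrete, here is a minimal sketch (the toy step function and all names here are my own illustrative assumptions, not anything established in the thread). Take any reversible computation, i.e. a bijective step function `f`; the “clock and copy” machine whose state is `(s0, n)` is also reversible, and applying `f` a total of `n` times gives an invertible correspondence between the two machines’ states at every time step:

```python
# A minimal sketch of the worry above (all names and the toy step
# function are illustrative assumptions, not anything from the thread).

def f(s):
    """Toy reversible step: a bijection on the states Z mod 97."""
    return (3 * s + 5) % 97

def f_inv(s):
    """Inverse step; 65 * 3 = 195 = 1 (mod 97), so 65 undoes the 3."""
    return (65 * (s - 5)) % 97

def counter_step(state):
    """The 'do-nothing' program: keep a copy of s0, tick the clock."""
    s0, n = state
    return (s0, n + 1)          # trivially reversible

def decode(state):
    """Invertible map from counter-machine states to real-machine states."""
    s0, n = state
    for _ in range(n):
        s0 = f(s0)              # n forward steps recover the real state
    return s0

def encode(s, n):
    """Inverse of decode: n backward steps recover (s0, n)."""
    for _ in range(n):
        s = f_inv(s)
    return (s, n)

# The correspondence commutes with time evolution: stepping the counter
# machine and decoding gives the same state as stepping the real machine.
counter, real = (42, 0), 42
for _ in range(10):
    counter, real = counter_step(counter), f(real)
    assert decode(counter) == real
assert encode(real, 10) == counter
```

The decoding map is invertible at every step, yet the counter machine does none of the real computation’s work, which is why invertibility alone looks too permissive.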
Neither do I, but my intuition suggests that a static copy of a brain (or of the software necessary to emulate it) plus a counter wouldn’t cause that brain to experience consciousness, whereas actually running the simulation as a reversible computation would...
Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to “pointwise causal isomorphism”:
> Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can’t internally detect any difference) then my probability of consciousness is essentially “top”, i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn’t?
We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, such that if A is emulating a brain and we all agree it contains a consciousness, we can be sure that B contains one as well?
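For what it’s worth, here is one candidate shape for such a definition (my own construction, offered as a sketch, not something from the thread): treat a computation as a step function on a set of states, and say B simulates A when there is a decoding map `phi` from B’s states to A’s states that commutes with the two step functions:

```python
# One possible formalization (my construction, offered as a sketch):
# a computation is a step function on a set of states, and B simulates A
# when a decoding map phi commutes with the two step functions.

def simulates(step_a, step_b, phi, b_states):
    """Check phi(step_b(b)) == step_a(phi(b)) on a finite set of B-states."""
    return all(phi(step_b(b)) == step_a(phi(b)) for b in b_states)

# Toy example: A counts mod 10, B counts mod 30, phi decodes by mod 10.
step_a = lambda a: (a + 1) % 10
step_b = lambda b: (b + 1) % 30
phi    = lambda b: b % 10

assert simulates(step_a, step_b, phi, range(30))
```

Note that the decode map from the counter-machine sketch above passes this check too, which is exactly the problem: the plain commuting condition certifies the clock-and-copy machine as a simulation, so whatever “pointwise causal” adds, it has to rule that case out.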
I don’t understand why this is a counterexample.
Can you provide some more background? What is a morphism of computations?