What language test? (And, how would a fully-homomorphically-encrypted sim with a lost key be shown to be conscious by anything that requires communicating with it?)
you have to decide how far the reversibility goes
The sort of reversibility Scott Aaronson is talking about goes all the way: after reversal, the thing in question is in exactly the same state as it was in before. No memory, no trauma, no imprint, nothing.
The Vaidman brain isn’t conscious, I don’t think, because it’s based on a specific input and a specific output.
I don’t understand that at all. Why does that stop it being conscious? If I ask you a specific yes/no question (in the ordinary fashion, no Vaidman tricksiness) and you answer it, does the fact that you were giving a specific answer to a specific question mean that you weren’t conscious while you did it?
Giving answers is an irreversible operation. The whole “is a fully reversible computer conscious?” thing doesn’t really make sense to me—for the computer to actually have an effect on the world requires irreversible outputs. So I have trouble imagining scenarios where my expectations are different but the entire process remains reversible...
You could set up a fully quantum whole brain emulation of a person sitting in a room with a piece of paper that says “Prove the Riemann Hypothesis”. Once they’ve finished the proof, you record what’s written on their paper, and reverse the entire simulation (as it was fully quantum mechanical, thus, in principle, fully unitarily reversible).
Looking at what they wrote on the paper doesn’t mean you have to communicate with them.
The act of writing on the paper was an irreversible action. And yes, looking at it is communication, in the physical sense. Specifically, the photon interaction with the paper and with your eyes is not reversible. Any act of extracting information from the computational process, in a way where the information or anything causally dependent on that information is not also reversed when the computation is run backwards, must be an irreversible action.
What does a universe look like where a computation has been run forwards, and then run backwards in a fully reversible way? Like it never happened at all.
I think the confusion here is about what “fully quantum whole brain emulation” actually means.
The idea is that you have a box (probably large), within which is running a closed system calculation which is equivalent to simulating someone sitting in a room trying to write a theorem (all the way down to the quantum level). You are not interacting with the simulation, you are running the simulation. At every stage of the simulation, you have perfect information about the full density matrix of the system (e.g., the person being simulated, the room, the atoms in the person’s brain, the movements of the pencil, etc.)
If you have this level of control, then you are implementing the full unitary time evolution of the system. The time evolution operator is reversible. Thus, you can just run the calculation backwards.
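The reversibility claim can be sketched numerically: any unitary U is undone exactly by its conjugate transpose U†, so evolving a state forward and then applying U† the same number of times returns the system to its initial state. Below is a toy sketch of this, with a random 8-dimensional unitary standing in for the full time evolution operator (obviously not a brain emulation; the dimension and step count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary via QR decomposition of a complex Gaussian matrix.
dim = 8
m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(m)

# A normalized initial state vector.
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

# Evolve forward for many steps, then backward the same number of steps.
psi = psi0.copy()
for _ in range(100):
    psi = U @ psi            # forward time evolution
for _ in range(100):
    psi = U.conj().T @ psi   # backward: apply the inverse U†

# Up to floating-point error, the system is back in its exact initial state.
print(np.allclose(psi, psi0))  # True
```

Nothing about running U backward requires perturbing the state in between; the runner already holds the full description at every step.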
So, to the person in the room writing the proof, as far as they know, the photon flying from the paper hitting their eye and being registered by their brain is an irreversible interaction—they don’t have complete control over their environment. But to you, the simulation runner, this action is perfectly reversible.
Now, the contention may be that this simulated person wasn’t actually ever conscious during the course of this ultra-high-fidelity experiment. Answering that question either way seems to have strange philosophical implications.
What you describe is all true, but useless as described. The earlier poster wanted the simulation to output data (e.g. by writing it on paper—the paper being outside of the simulation), and then reverse the simulation. Sorry, you can’t do that. “Reversible” has a very specific meaning in the context of statistical and quantum physics. Even if the computation itself can be reversed, once it has output data that property is lost. We’d no longer be talking about a reversible process, because once the computation is reversed, that output still exists.
I’m not sure who you’re talking about because I’m the person above referring to someone writing on paper—and the paper was meant to also be within the simulation. The simulator is “reading the paper” by nature of having perfect information about the system.
“Reversible” in this context is only meant to describe the contents of the simulation. Computation can occur completely reversibly.
Sorry, got mixed up with cameroncowan. Anyway, to the original point:
You said “Once they’ve finished the proof, you record what’s written on their paper, and reverse the entire simulation… Looking at what they wrote on the paper doesn’t mean you have to communicate with them.”
My interpretation—which may be wrong—is that you are suggesting that the person running the simulation record the state of the simulation at the moment the problem is solved, or at least the part of the simulator state having to do with the paper. However, the process of extracting information out of the simulation—saving state—is irreversible, at least if you want it to survive rewinding the simulation.
To put it differently, if the simulation is fully reversible, then you run it forwards, run it backwards, and at the end you have absolutely zero knowledge about what happened in between. Any preserved state that wasn’t there at the beginning would mean that the process wasn’t fully reversed.
Looking at the paper is communicating with the simulation. It may be a one-way communication, but that is enough.
I’m suggesting that the person running the simulation knows the state of the simulation at all times. If this bothers you, pretend everything is being done digitally, on a classical computer, with exponential slowdown.
Such a calculation can be done reversibly without ever passing information into the system.
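A minimal classical sketch of that claim (the update rule and constants are illustrative, not anything from the thread): the simulation’s dynamics are an invertible map, the runner copies the state out at every step without feeding anything back in, and the dynamics are then undone exactly. Whether the surviving log means the overall process is no longer “reversible” is precisely the point under dispute:

```python
# Invertible update rule x -> A*x + C (mod M); gcd(A, M) = 1 guarantees
# the step can be undone exactly.
M = 2**16
A, C = 4097, 12345
A_INV = pow(A, -1, M)        # modular inverse of A (Python 3.8+)

def step(x):                 # forward step
    return (A * x + C) % M

def unstep(x):               # exactly inverts step()
    return (A_INV * (x - C)) % M

x0 = 31337
x, log = x0, []
for _ in range(50):
    x = step(x)
    log.append(x)            # "perfect information": copy the state out

for _ in range(50):
    x = unstep(x)

print(x == x0)    # True: the simulated system is back where it started
print(len(log))   # 50: but the copied-out record still exists outside it
```

Reading the state never passes information *into* the system, so the simulated dynamics themselves run forward and backward untouched; the log lives entirely in the runner’s storage.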
What do you mean by “knows the state of the simulation”? What is the point of this exercise?
Yes the machine running the simulation knows the current state of the simulation at any given point (ignoring fully homomorphic encryption). It must however forget this intermediate state when the computation is reversed, including any copies/checkpoints it has. Otherwise we’re not talking about a reversible process. Do we agree on this point?
My original post was:
Giving answers is an irreversible operation. The whole “is a fully reversible computer conscious?” thing doesn’t really make sense to me—for the computer to actually have an effect on the world requires irreversible outputs. So I have trouble imagining scenarios where my expectations are different but the entire process remains reversible...
How does your setup of a simulated person performing mathematics, then being forgotten as the simulation is run backwards, address this concern?
I disagree that “giving answers is an irreversible operation”. My setup explicitly doesn’t “forget” the calculation (the calculation being simulating someone proving the Riemann hypothesis, and us extracting that proof from the simulation), and my setup is explicitly reversible (because we have the full density matrix of the system at all times, and can in principle perform unitary time evolution backwards from the final state if we wanted to).
Nothing is ever being forgotten. I’m not sure where that came from, because I’ve never claimed that anything is being forgotten at any step. I’m not sure why you’re insisting that things be forgotten to satisfy reversibility, either.
I would like to know that as well, because I think there is an effect if it is conscious; making it fully reversible, I think, denies a certain consciousness.
“Reading it” is akin to “having perfect information about the full density matrix of the system”. You don’t have to perturb the system to get information out of it.
How is a simulation of a conscious mind, operating behind a “wall” of fully homomorphic encryption for which no one has the key, going to pass this “language test”?
I don’t think that kind of reversibility is possible while also maintaining consciousness.
Then you agree with Scott Aaronson on at least one thing.
Then that invalidates the idea if you remove the tricksiness.
What I am trying to understand is what about the Vaidman procedure makes consciousness not be present, in your opinion. What you said before is “based on a specific input and a specific output”, but we seem to be agreed that one can have a normal interaction with a normal conscious brain “based on a specific input and a specific output” so that can’t be it. So what is the relevant difference, in your opinion?
That is my point: it’s not, and therefore it can’t pass the conscious language test, and I think that’s quite the problem.
I think the Vaidman procedure doesn’t make consciousness present because the specific input and output being only a yes or no answer makes it no better than the computers we are using right now. I can ask Siri yes-or-no questions and get something out, but we can agree that Siri is an extremely simple kind of consciousness embodied in computer code, built at Apple to work as an assistant in iPhones. If the Vaidman brain were conscious, I should be able to ask it a “question” without definable bounds and get any answer between “42” and “I don’t know” or “I cannot answer that.” So for example, you can ask me all these questions and I can work to create an answer, as I am now doing, or I could simply say “I don’t know” or “my head is parrot your post is invalid.” The answer would exist as a signpost of my consciousness, although it might be unsatisfying. The Vaidman brain could not work under these conditions because the bounds are set. Any time you have set bounds, saying that it is conscious is a priori impossible.
Then I have no idea what you meant by “If you use the language test then yes and FHE encrypted sim with a lost key is still conscious”.
the specific input and output being only a yes or no answer makes it no better than the computers we are using right now.
If I ask you a question and somehow constrain you only to answer yes or no, that doesn’t stop you being conscious as you decide your answer. There’s a simulation of your whole brain in there, and it arrives at its yes/no answer by doing whatever your brain usually does to decide. All that’s unusual is the context. (But the context is very unusual.)
I would say that no, the FHE sim is not, because the most important aspect of language is communication and meaning. The ability to communicate does not matter if it cannot have meaning to at least one other person. On the second point we are agreed.
That’s what Scott’s blog is about :)
But writing the proof and reading it is communication.
Language Test: The Language Test is a simple name for the Heideggerian idea of language as a proof of consciousness.
Reversibility: I don’t think that kind of reversibility is possible while also maintaining consciousness.
Vaidman Brain: Then that invalidates the idea if you remove the tricksiness. I would of course remain in a certain state of consciousness the entire time.