How is a simulation of a conscious mind, operating behind a “wall” of fully homomorphic encryption for which no one has the key, going to pass this “language test”?
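To make the setup concrete: in fully homomorphic encryption, anyone can run a computation on encrypted data, but only the holder of the secret key can read the result. Below is a minimal toy sketch in Python, loosely modeled on the DGHV "FHE over the integers" construction; the parameters and function names are illustrative, and the scheme as written is not remotely secure. The point is just that the evaluator computes on ciphertexts without ever seeing the key, so if the key is lost, the simulation's outputs are unreadable to everyone.

```python
import secrets

def keygen(bits: int = 64) -> int:
    # Secret key: a random odd integer p, with the top bit set so that
    # p is always much larger than the ciphertext noise below.
    return (1 << (bits - 1)) | secrets.randbits(bits - 1) | 1

def encrypt(p: int, m: int) -> int:
    # Encrypt one bit m as c = q*p + 2*r + m. The noise term 2*r + m
    # stays far below p, so decryption can strip q*p away exactly.
    q = secrets.randbits(128)
    r = secrets.randbits(8)
    return q * p + 2 * r + m

def decrypt(p: int, c: int) -> int:
    # Without p this is hopeless; with p, reduce mod p to drop q*p,
    # then mod 2 to drop the even noise and recover the bit.
    return (c % p) % 2

# Homomorphic evaluation: anyone can do this without knowing p.
def xor_ct(c1: int, c2: int) -> int:
    return c1 + c2          # adds the plaintext bits mod 2

def and_ct(c1: int, c2: int) -> int:
    return c1 * c2          # multiplies the plaintext bits

p = keygen()
a, b = encrypt(p, 1), encrypt(p, 0)
assert decrypt(p, xor_ct(a, b)) == 1
assert decrypt(p, and_ct(a, b)) == 0
# Lose p, and the result of any such computation stays ciphertext forever.
```

A real FHE scheme also has to manage the noise that grows with each multiplication (Gentry's bootstrapping), but the lost-key point is the same.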
I don’t think that kind of reversibility is possible while also maintaining consciousness.
Then you agree with Scott Aaronson on at least one thing.
Then that invalidates the idea if you remove the tricksiness.
What I am trying to understand is what about the Vaidman procedure makes consciousness not be present, in your opinion. What you said before is "based on a specific input and a specific output", but we seem to be agreed that one can have a normal interaction with a normal conscious brain "based on a specific input and a specific output", so that can't be it. So what is the relevant difference, in your opinion?
That is my point: it's not, and therefore it can't pass the conscious language test, and I think that's quite the problem.
I think the Vaidman procedure doesn't make consciousness present because the specific input and output being only a yes-or-no answer makes it no better than the computers we are using right now. I can ask Siri yes-or-no questions and get something out, but we can agree that Siri is an extremely simple kind of consciousness embodied in computer code, built at Apple to work as an assistant in iPhones. If the Vaidman brain were conscious, I should be able to ask it a "question" without definable bounds and get any answer between "42" and "I don't know" or "I cannot answer that." So, for example, you can ask me all these questions and I can work to create an answer, as I am now doing, or I could simply say "I don't know" or "my head is a parrot, your post is invalid." The answer would exist as a signpost of my consciousness, although it might be unsatisfying. The Vaidman brain could not work under these conditions because the bounds are set. Any time you have set bounds like that, consciousness is a priori impossible.
Then I have no idea what you meant by "If you use the language test then yes and FHE encrypted sim with a lost key is still conscious".
"the specific input and output being only a yes-or-no answer makes it no better than the computers we are using right now."
If I ask you a question and somehow constrain you only to answer yes or no, that doesn’t stop you being conscious as you decide your answer. There’s a simulation of your whole brain in there, and it arrives at its yes/no answer by doing whatever your brain usually does to decide. All that’s unusual is the context. (But the context is very unusual.)
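A sketch of that point, with purely hypothetical names: `simulate_brain` below stands in for the whole-brain simulation, and nothing about its internal computation changes when the output channel narrows to one bit; only the last line differs.

```python
def simulate_brain(question: str) -> dict:
    # Stand-in for arbitrarily rich internal processing: a real
    # whole-brain simulation would do vastly more here.
    deliberation = f"weighing the question: {question!r}"
    verdict = len(question) % 2 == 0   # placeholder for a real decision
    return {"trace": deliberation, "verdict": verdict}

def free_form_answer(question: str) -> str:
    # Unconstrained interaction: the full internal state can shape
    # an open-ended reply.
    state = simulate_brain(question)
    return f"{state['trace']} -> {state['verdict']}"

def yes_no_answer(question: str) -> bool:
    # Constrained interaction: the very same computation runs;
    # only a single bit of its result ever leaves.
    return simulate_brain(question)["verdict"]
```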
I would say that no, the FHE-encrypted simulation is not, because the most important aspect of language is communication and meaning. The ability to communicate does not matter so long as it cannot have meaning to at least one other person. On the second point we are agreed.
Language Test: The Language Test is shorthand for the Heideggerian idea of language as a proof of consciousness.
Reversibility: I don’t think that kind of reversibility is possible while also maintaining consciousness.
Vaidman Brain: Then that invalidates the idea if you remove the tricksiness. I would, of course, remain in a certain state of consciousness the entire time.