That world is more inconvenient than the one where I wake up with my arm replaced by a purple tentacle. Did you even read the article you linked to?
“No, no!” says the philosopher. “In the thought experiment, they aren’t randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain’s inputs and outputs! There! I’ve got you cornered now! You can’t play Follow-The-Improbability any further!”
Oh. So your specification is the source of the improbability here.
When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.
My specification is the reason we are talking about something improbable. It’s not the cause of the improbable thing itself.

The point is that you have specified something so improbable that it is not actually going to happen, so I don’t have to explain it, just as I don’t have to worry about explaining my arm being replaced by a purple tentacle.
Mitchell isn’t asking you to explain anything. He’s asking you to predict (effectively) what would happen, consciousness-wise, given a randomly generated GLUT. There is a fact of the matter as to what would happen in that situation (in the same sense, whatever that may be, that there are facts about consciousness in normal situations), and a complete theory will be able to say what it is; the best you can say is that you don’t currently have a theory that covers that situation (or that the situation is underspecified; maybe it depends on what sort of randomizer you use, or something).
My theory does cover that situation; it says the GLUT will not be conscious. It also says that situation will not happen, because GLUTs that act like people come from entanglement with people. Things that don’t actually happen are allowed to violate general rules about things that do happen.
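To get a sense of the scale of improbability at stake, here is a toy calculation; the input space below is an assumption chosen for illustration, and is absurdly small compared with a real person’s input history.

```python
from math import log10

# Toy model: the GLUT is tested on 100-bit input histories (a deliberately
# tiny stand-in for "everything a person might be asked") and must emit
# one bit per history. All numbers here are illustrative assumptions.
N_HISTORIES = 2 ** 100

# A uniformly random table agrees with a fixed "humanlike" table on any
# one history with probability 1/2, hence on all of them with 2**-N_HISTORIES.
log10_p = -N_HISTORIES * log10(2)
print(f"log10 P(random GLUT is humanlike) ~ {log10_p:.2e}")
# ~ -3.8e29, i.e. one chance in 10^(3.8 * 10^29) even in this toy setting.
```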
Okay. Why did you bother bringing up the tentacle, or the section you quoted from Eliezer’s post? Why insist on the improbability of a hypothetical when “least convenient possible world” has already been called?
Because I was challenging the applicability of Least Convenient Possible Worlds to this discussion. It is a fully general (and invalid) argument against any theory T to say: “Take an event A that T says is super improbable, and suppose that (in the Least Convenient Possible World) A happens; that is overwhelming evidence against T.” The tentacle arm replacement is one such event that would contradict a lot of theories. Would you ask someone defending the theory that their body does not drastically change overnight to consider the Least Convenient Possible World where they do wake up with a tentacle instead of an arm?
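Spelled out in standard Bayesian terms (nothing here is specific to this thread; it is just the textbook form of the move being rejected):

```latex
% Bayes' theorem for a theory T and an event A with P(A \mid T) \approx 0:
P(T \mid A) \;=\; \frac{P(A \mid T)\,P(T)}
                       {P(A \mid T)\,P(T) + P(A \mid \neg T)\,P(\neg T)}
% If A were actually observed, P(T \mid A) would indeed collapse toward 0.
% But stipulating A inside a thought experiment is not an observation,
% so it licenses no such update against T.
```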
But you don’t actually need to resort to this dodge. You already said the lookup tables aren’t conscious; that in itself is a step which is troublesome for a lot of computationalists. You could just add a clause to your original statement, e.g.
“The lookup tables are not conscious, but the process that produced them was either conscious or extremely improbable.”
Voila, you now have an answer which covers all possible worlds and not just the probable ones. I think it’s what you wanted to say anyway.
If that answer would have satisfied you, why did you ask about a scenario so improbable you felt compelled to justify it with an appeal to the Least Convenient Possible World?
Do you now agree that GLUT simulations do not imply the existence of zombies?
I thought you were overlooking the extremely-improbable case by mistake, rather than overlooking it on principle.
For me, the point of a GLUT is that it is a simulation of consciousness that is not itself conscious, a somewhat different concept from the usual philosophical notion of a zombie, which is supposed to be physically identical to a conscious being, but with the consciousness somehow subtracted. A GLUT is physically different from the thing it simulates, so it’s a different starting point.
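For concreteness, here is a minimal sketch of what a GLUT agent amounts to as a program; the table contents and I/O format are invented for illustration.

```python
# Minimal sketch of a GLUT ("Giant Lookup Table") agent: every behaviour is
# a precomputed entry keyed on the entire input history. Nothing happens at
# "runtime" beyond a dictionary lookup -- which is why a GLUT is physically
# very different from the brain whose inputs and outputs it reproduces.
# (Toy table contents; a real GLUT would be combinatorially vast.)

class GLUTAgent:
    def __init__(self, table: dict[tuple[str, ...], str]):
        self.table = table            # maps full input history -> output
        self.history: list[str] = []

    def respond(self, message: str) -> str:
        self.history.append(message)
        # The *only* processing: look up the whole history so far.
        return self.table[tuple(self.history)]

toy_table = {
    ("Hello",): "Hi there!",
    ("Hello", "Are you conscious?"): "Of course I am.",
}

agent = GLUTAgent(toy_table)
print(agent.respond("Hello"))               # Hi there!
print(agent.respond("Are you conscious?"))  # Of course I am.
```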