I do not think your complaint is valid.
I can ask questions like “what is the googolth digit of pi?” without calculating it.
Similarly, you can ask whether a Turing machine mind would believe it has conscious experience without actually running it.
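To make the pi example concrete, here is a minimal Python sketch of the well-known Bailey–Borwein–Plouffe digit-extraction formula, which computes the nth hexadecimal digit of pi without computing the digits before it. The function name and cutoff constant are my own choices, for illustration only.

```python
# Sketch of Bailey-Borwein-Plouffe digit extraction (names mine).
# Returns the nth hexadecimal digit of pi after the radix point.

def pi_hex_digit(n: int) -> int:
    """nth hex digit of pi after the point, 1-indexed (digit 1 is 2)."""
    def frac_sum(j: int) -> float:
        # fractional part of the sum over k >= 0 of 16^(n-1-k) / (8k + j)
        total = 0.0
        for k in range(n):  # terms with non-negative powers of 16
            total = (total + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n
        while True:         # tail terms shrink by a factor of 16 each step
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            total += term
            k += 1
        return total % 1.0

    x = (4 * frac_sum(1) - 2 * frac_sum(4) - frac_sum(5) - frac_sum(6)) % 1.0
    return int(16 * x)

print(pi_hex_digit(1))  # 2   (pi = 3.243F6A88... in hex)
print(pi_hex_digit(6))  # 10, i.e. the hex digit A
```

The loop runs n times, so calling this with n = 10^100 would never finish; yet the value it would return is fixed by the definition, which is exactly the sense of “output” at issue here.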
I can certainly ask, “were this Turing machine mind to be run, would it believe it were conscious?” But that doesn’t give me licence to assert that “because this Turing machine would be conscious if it were run, it is conscious even though it has not run, is not running, and never will run”.
A handheld calculator that’s never switched on will never tell us the sum of 6 and 9, even if we’re dead certain that there’s nothing wrong with the calculator.
I am not trying to say it would be conscious without being run. (Although I believe it would)
I am trying to say that the computation as an abstract function has an output which is the sentence “I believe I am conscious.”
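A toy Python sketch of that sense of “output”, with the function name my own invention: the return value below is fixed by the definition alone, but nothing is actually said in any world unless the definition is executed.

```python
# Toy illustration; the name turing_machine_mind is mine, not from the thread.

def turing_machine_mind() -> str:
    # The return value is fully determined by this definition alone,
    # whether or not the function is ever called.
    return "I believe I am conscious."

# As an abstract function, its output is fixed by the source text.
# But the sentence is only actually produced if this branch runs:
if False:  # flip to True to actually run the mind
    print(turing_machine_mind())
```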
Now I think I agree with you. (Because I think you’re using “output” here in the counterfactual sense.)
But now these claims are too weak to invalidate what trist is saying. If we all agree that l-zombies, being the analogue of the calculator that’s never switched on, never actually say anything (just as the calculator never actually calculates anything), then someone who’s speaking can’t be an l-zombie (just as a calculator that’s telling us 6 + 9 = 15 must be switched on).
Okay, I guess I interpreted trist incorrectly.
I agree that in order for the L-zombie to do anything in this world, it must be run. (Although I am very open to the possibility that I am wrong about that, and that prediction without simulation is possible.)
Well, yes, it would if you ran it :D
Pi has a googolth digit even if we don’t run the calculation. A decision procedure has an output even if we do not run it. We just do not know what it is. I do not see the problem.
No, a decision procedure doesn’t have an output if you don’t run it. There is something that would be the output if you ran it. But if you ran it, it would not be an l-zombie.
Let’s give this program a name. Call it MaybeZombie. Benja is saying “If MaybeZombie is an l-zombie, then MaybeZombie would say ‘I have conscious experiences, so clearly I can’t be an l-zombie’ ”. Benja did not say “If MaybeZombie is an l-zombie, then if MaybeZombie were run, MaybeZombie would say ‘I have conscious experiences, so clearly I can’t be an l-zombie’ ”.
There is no case in which a program can think “I have conscious experiences, so clearly I can’t be an l-zombie” and be wrong. You’re trying to argue that based on a mixed counterfactual. You’re saying “I’m sitting here in Universe A, and I’m imagining Universe B where there is this program MaybeZombie that isn’t run, and the me-in-Universe-A is imagining the me-in-Universe-B imagining a Universe C in which MaybeZombie is run. And now the me-in-Universe-A is observing that the me-in-Universe-B would conclude that MaybeZombie-in-Universe-C would say ‘I have conscious experiences, so clearly I can’t be an l-zombie’.”
You’re evaluating “Does MaybeZombie say ‘I have conscious experiences, so clearly I can’t be an l-zombie’?” in Universe C, but evaluating “Is MaybeZombie conscious?” in Universe B. You’re concluding that MaybeZombie is “wrong” by mixing two different levels of counterfactuals. The analogy to pi is not appropriate, because the properties of pi don’t change depending on whether we calculate it. The properties of MaybeZombie do depend on whether MaybeZombie is run.
It is perfectly valid for any mind to say “I have conscious experiences, so clearly I can’t be an l-zombie”. The statement “I can’t be an l-zombie” clearly means “I can’t be an l-zombie in this universe”.
I’m not sure that is a particularly useful way to carve reality. At best it means that we need another word for the thing that Coscott is referring to as ‘output’, to use in its place. The thing Coscott is talking about is much more useful when analysing decision procedures than the thing you have defined ‘output’ to mean.
That’s just a potential outcome, pretty standard stuff:
http://www.stat.cmu.edu/~fienberg/Rubin/Rubin-JASA-05.pdf
“What would happen if hypothetically X were done” is one of the most common targets in statistical inference. That’s a huge chunk of what Fisher and Neyman did, originally in the context of agriculture: “what if we had given this fertilizer to this plot of land?” That was almost a hundred years ago.
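For readers who haven’t seen the framework, here is a minimal NumPy sketch of the Neyman–Rubin potential-outcomes setup, with all numbers invented for illustration: each plot of land has two potential yields, only one of which is ever observed, and the unobserved one is a perfectly well-defined counterfactual that randomization lets us estimate in aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each plot has two potential outcomes in Neyman's sense:
# y0 = yield without fertilizer, y1 = yield with fertilizer.
y0 = rng.normal(10.0, 2.0, n)
y1 = y0 + 3.0                      # true effect: +3 per plot, by construction

# Randomly fertilize half the plots. Only one potential outcome per
# plot is ever observed; the other stays counterfactual forever.
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

# Randomization makes difference-in-means an unbiased estimate of the
# average effect, even though no plot ever reveals both outcomes.
ate_estimate = observed[treated].mean() - observed[~treated].mean()
print(ate_estimate)                # close to 3.0
```

The +3 effect is known by construction here, so the estimate can be checked against it.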
I do not understand how the properties of MaybeZombie depend on whether or not MaybeZombie is run.
Because consciousness isn’t a property of MaybeZombie, it’s a property of the process of running it?
The statement “No, a decision procedure doesn’t have an output if you don’t run it” made me think that he was talking about a property of the output, so my misunderstanding was relative to that interpretation.
I personally think that consciousness is a property of MaybeZombie, and that L-zombies do not make sense, but the position I was trying to defend in this thread was that we can talk about the theoretical output of a function without actually running that function. (We might not be able to say very much, since perhaps we can’t know the output without running it.)
We are getting into a situation similar to the Ontological Argument for God, in which an argument gets bogged down in equivocation. The question becomes: what is a valid predicate of MaybeZombie?

One could argue that there is a distinction to be made between predicates such as “the program has a Kolmogorov complexity of less than 3^^^3 bits” on the one hand, and predicates such as “the program has been run” on the other. The former is an inherent property, while the latter is extrinsic to the program, and in some sense is not a property of the program itself. And yet, grammatically at least, “has been run” is the predicate of “the program” in the sentence “the program has been run”.

If “has said ‘I must not be a zombie’ ” is not a valid predicate of MaybeZombie, then talking about whether MaybeZombie has said ‘I must not be a zombie’ is invalid. If one can meaningfully talk about whether MaybeZombie has said ‘I must not be a zombie’, then “has said ‘I must not be a zombie’ ” is a valid predicate of MaybeZombie. Since this predicate is obviously false if MaybeZombie isn’t run, and could be true if MaybeZombie is run, it is a property of MaybeZombie that depends on whether MaybeZombie is run.
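A toy Python sketch of that distinction, with all names my own: a predicate decided by the program text alone holds whether or not the program is run, while a predicate like “has been run” is a fact about the world’s history and flips only when the program is actually executed.

```python
# Toy illustration; all names here are mine, not from the thread.

program = 'print("I must not be a zombie")'

# Intrinsic predicate: decided by the text alone, run or not.
def shorter_than(source: str, bits: int) -> bool:
    # Crude stand-in for a Kolmogorov-complexity bound:
    # the literal length of the source in bits.
    return len(source) * 8 < bits

# Extrinsic predicate: a fact about the world, tracked outside the text.
has_been_run = False

print(shorter_than(program, 3000))  # True, whether or not we ever run it
print(has_been_run)                 # False...

exec(program)                       # ...until the program actually runs
has_been_run = True
print(has_been_run)                 # True
```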