For l-zombies to do anything, they need to be run, whereupon they stop being l-zombies.
Omega doesn’t necessarily need to run a conscious copy of Eliezer to be pretty sure that Eliezer would pay up in the counterfactual mugging; it could use other information about Eliezer, like Eliezer’s comments on LW, the way that I just did. It should be possible to achieve pretty high confidence that way about what Eliezer-being-asked-about-a-counterfactual-mugging would do, even if that version of Eliezer should happen to be an l-zombie.
But you see Eliezer’s comments because a conscious copy of Eliezer has been run. If I’m figuring out what output a program “would” give “if” it were run, in what sense am I not running it? Suppose I have a program MaybeZombie, and I run a Turing Test with it as the Testee and you as the Tester. Every time you send a question to MaybeZombie, I figure out what MaybeZombie would say if it were run, and send that response back to you. Can I get MaybeZombie to pass a Turing Test, without ever running it?
But you see Eliezer’s comments because a conscious copy of Eliezer has been run.
A conscious copy of Eliezer that thought about what Eliezer would do when faced with that situation, not a conscious copy of Eliezer actually faced with that situation—the latter Eliezer is still an l-zombie, if we live in a world with l-zombies.
Isn’t Eliezer, in thinking about what he would do when faced with that situation, running an extremely simplified simulation of himself? Obviously this simulation is not equivalent to the real Eliezer, but there’s clearly something being run here, so it can’t be an l-zombie.
Suppose I’ve seen records of some inputs and outputs to a program: 1->2, 5->10, 100->200. In every case I am aware of, it was given a number as input and it output the doubled number. I don’t have the program’s source or the ability to access the computer it’s actually running on. I form a hypothesis: if this program received input 10000, it would output 20000. Am I running the program?
In this case: doubling program <-> Eliezer, inputs <-> the comments and threads he is answering, outputs <-> his replies.
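To make the analogy concrete, here is a minimal sketch in C (the log entries and all the names are made up for illustration): the recorded observations are checked against the hypothesis "output = 2 * input", and a prediction for input 10000 is produced, while the mystery program itself is never invoked.

/* A sketch of predicting the "doubling program" from recorded I/O pairs.
 * The program itself is never run; only a hypothesis about it is checked. */
#include <stdio.h>

struct observation { long input; long output; };

/* Observed behaviour of the mystery program (the records above). */
static const struct observation records[] = {
    {1, 2}, {5, 10}, {100, 200}
};

/* Does "output = 2 * input" fit every recorded observation? */
static int doubling_hypothesis_fits(void) {
    for (size_t i = 0; i < sizeof records / sizeof records[0]; i++)
        if (records[i].output != 2 * records[i].input)
            return 0;
    return 1;
}

int main(void) {
    if (doubling_hypothesis_fits())
        /* A prediction about the program, made without running it. */
        printf("Predicted output for input 10000: %ld\n", 2L * 10000);
    return 0;
}

All the computation here happens in the predictor; none of the original program's code runs.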
If I’m figuring out what output a program “would” give “if” it were run, in what sense am I not running it?
In the sense of not producing the effects on the outside world that actually running it would produce. E.g. given this program:
void launch_nuclear_missiles(void); /* assumed to be implemented elsewhere */

int goodbye_world() {
    launch_nuclear_missiles(); /* the program's only effect on the outside world */
    return 0;
}
I can conclude running it would launch missiles (assuming a suitable implementation of the launch_nuclear_missiles function) and output 0 without actually launching the missiles.
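As a crude illustration of that kind of conclusion (a toy sketch, not a real analyzer, and the variable names are mine), the function's source can be treated as data and searched for the telltale call. Nothing is compiled or executed, and no missiles are launched in the course of the analysis.

/* Static "analysis" by inspecting source text; nothing is executed. */
#include <stdio.h>
#include <string.h>

static const char *goodbye_world_source =
    "int goodbye_world() {\n"
    "    launch_nuclear_missiles();\n"
    "    return 0;\n"
    "}\n";

int main(void) {
    /* A crude check: does the body contain a call to the launcher? */
    if (strstr(goodbye_world_source, "launch_nuclear_missiles("))
        printf("Running this program would launch missiles.\n");
    if (strstr(goodbye_world_source, "return 0"))
        printf("It would then return 0.\n");
    return 0;
}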
Benja defines an l-zombie as “a Turing machine which, if anybody ever ran it...” A Turing Machine can’t launch nuclear missiles. A nuclear missile launcher can be hooked up to a Turing Machine, and launch nuclear missiles on the condition that the Turing Machine reaches some state, but the Turing Machine isn’t launching the missiles; the nuclear missile launcher is.
But I can still do static analysis of a Turing machine without running it. E.g. I can determine, in finite time, that a T.M. would never terminate on a given input.
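For instance (a simplified sketch under strong assumptions, with a made-up transition table): if a machine's transition maps the starting configuration back to itself, with the same state, the same symbol written and no head movement, then every subsequent configuration is identical and the machine never halts. That can be checked by inspecting the transition table in finite time, without running the machine.

/* A finite-time non-termination check for one trivial looping pattern. */
#include <stdio.h>

enum move { LEFT, RIGHT, STAY };

struct transition {
    int next_state;
    int write;      /* symbol written: 0 or 1 */
    enum move dir;
};

/* delta[state][symbol]: a one-state machine, invented for this example. */
static const struct transition delta[1][2] = {
    { {0, 0, STAY},   /* reading 0: stay in state 0, rewrite 0, don't move */
      {0, 1, RIGHT} } /* reading 1: stay in state 0, keep 1, move right    */
};

/* If the first step maps the starting configuration to itself,
 * the machine repeats that configuration forever. */
static int provably_loops_forever(int start_state, int start_symbol) {
    const struct transition *t = &delta[start_state][start_symbol];
    return t->next_state == start_state &&
           t->write == start_symbol &&
           t->dir == STAY;
}

int main(void) {
    if (provably_loops_forever(0, 0))
        printf("Never terminates on a blank tape, shown without running it.\n");
    return 0;
}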
But my point is that at some point, a “static analysis” becomes functionally equivalent to running it. If I do a “static analysis” to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for “real”, and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.
Suppose I write a program that is short and simple enough that you can go through it and figure out in your head exactly what the computer will do at each line of code. In what sense has your mind not run the program, but a computer that executes the program has?
Imagine the following dialog:
Alice: “So, you’ve installed a javascript interpreter on your machine?”
Bob: “Nope.”
Alice: “But I clicked on this javascript program, and I got exactly what I was supposed to get.”
Bob: “Oh, that’s because I’ve associated javascript source code files with a program that looks at javascript code, determines what the output would be if the program had been run, and outputs the result.”
Alice: “So… you’ve installed a javascript interpreter.”
Bob: “No. I told you, it doesn’t run the program, it just computes what the result of the program would be.”
Alice: “But that’s what a javascript interpreter is. It’s a program that looks at source code, determines what the proper output is, and gives that output.”
Bob: “Yes, but an interpreter does that by running the program. My program does it by doing a static analysis.”
Alice: “So, what is the difference? For instance, if I write a program that adds two plus two, what is the difference?”
Bob: “An interpreter would calculate what 2+2 is. My program calculates what 2+2 would be, if my computer had calculated the sum. But it doesn’t actually calculate the sum. It just does a static analysis of a program that would have calculated the sum.”
I don’t see how, outside of a rarefied philosophical context, Bob wouldn’t be found to be stark raving mad. It seems to me that you’re arguing for p-zombies (at least, behavioral zombies): one could build a machine that, given any input, tells you what the output would be if it were a conscious being. Such a machine would be indistinguishable from an actually conscious being, without actually being conscious.
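Bob's distinction can be made concrete with a toy example (a sketch only, with names of my own choosing): for a tiny expression language, the "interpreter" and the "static analysis that determines what the result would be" end up performing exactly the same recursion over the same structure.

/* For this toy language, "running" and "working out what running would
 * produce" are the same computation. */
#include <stdio.h>

struct expr {
    char op;                      /* '+' for addition, 0 for a literal */
    int value;                    /* used when op == 0 */
    const struct expr *lhs, *rhs; /* used when op == '+' */
};

/* The interpreter: runs the program. */
static int run(const struct expr *e) {
    if (e->op == '+')
        return run(e->lhs) + run(e->rhs);
    return e->value;
}

/* Bob's "static analysis": determines what the result would be, if run.
 * Note that it performs exactly the same steps as run(). */
static int what_the_result_would_be(const struct expr *e) {
    if (e->op == '+')
        return what_the_result_would_be(e->lhs) + what_the_result_would_be(e->rhs);
    return e->value;
}

int main(void) {
    const struct expr two = {0, 2, NULL, NULL};
    const struct expr sum = {'+', 0, &two, &two};  /* the program "2 + 2" */
    printf("run:          %d\n", run(&sum));
    printf("would-be sum: %d\n", what_the_result_would_be(&sum));
    return 0;
}

Whatever Bob calls the second function, it calculates 2 + 2.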
But my point is that at some point, a “static analysis” becomes functionally equivalent to running it. If I do a “static analysis” to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for “real”, and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.
The crucial words here are “at some point”. And Benja’s original comment (as I understand it) says precisely that Omega doesn’t need to get to that point in order to find out, with high confidence, what Eliezer’s reaction to the counterfactual mugging would be.
I form a hypothesis: if this program received input 10000, it would output 20000. Am I running the program?
No, you’ve built your model of the program and you’re running your own model.
I can conclude running it would launch missiles (assuming a suitable implementation of the launch_nuclear_missiles function) and output 0 without actually launching the missiles.
Within the domain in which the program has run (your imagination), missiles have been launched.