If I’m figuring out what output a program “would” give “if” it were run, in what sense am I not running it?
In the sense of not producing the effects on the outside world that actually running it would produce. E.g. given this program:
void launch_nuclear_missiles(void);  /* assumed to be implemented elsewhere */

int goodbye_world() {
    launch_nuclear_missiles();
    return 0;
}
I can conclude that running it would launch missiles (assuming a suitable implementation of the launch_nuclear_missiles function) and output 0, without actually launching the missiles.
Benja defines an l-zombie as “a Turing machine which, if anybody ever ran it...” A Turing Machine can’t launch nuclear missiles. A nuclear missile launcher can be hooked up to a Turing Machine, and launch nuclear missiles on the condition that the Turing Machine reaches some state, but the Turing Machine isn’t launching the missiles; the nuclear missile launcher is.
But I can still do static analysis of a Turing machine without running it. E.g. I can determine, in finite time, that a T.M. would never terminate on a given input.
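As a toy illustration of what I mean (the machine encoding and the names below are just things I’ve made up for the example): for one narrow class of machines, non-termination can be read straight off the transition table, without simulating a single step.

#include <stdio.h>

enum move { MOVE_LEFT, MOVE_RIGHT, MOVE_STAY };

struct rule {
    int next_state;   /* state entered after the transition */
    char write;       /* symbol written to the current cell */
    enum move dir;    /* head movement                       */
};

/* transition[state][symbol] for a toy 2-state machine over the alphabet {'_', '1'} */
static const struct rule transition[2][2] = {
    /* state 0: on '_'            on '1'              */
    { { 0, '_', MOVE_STAY }, { 1, '1', MOVE_RIGHT } },
    /* state 1: on '_'            on '1'              */
    { { 1, '_', MOVE_RIGHT }, { 0, '1', MOVE_LEFT } },
};

/* Purely static check: if a state, on reading a symbol, re-enters the same
 * state, rewrites the same symbol and leaves the head in place, then any run
 * reaching that configuration never terminates.  No step is ever simulated. */
static int provably_loops(int state, char symbol)
{
    const struct rule r = transition[state][symbol == '1' ? 1 : 0];
    return r.next_state == state && r.write == symbol && r.dir == MOVE_STAY;
}

int main(void)
{
    /* Start state 0 on a blank ('_') tape: the table alone tells us the
     * machine never halts on this input. */
    printf("never terminates: %s\n", provably_loops(0, '_') ? "yes" : "no");
    return 0;
}

Nothing in provably_loops ever steps the machine; it only inspects the table.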
But my point is that at some point, a “static analysis” becomes functionally equivalent to running it. If I do a “static analysis” to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for “real”, and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.
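Here is a toy sketch of what I mean (my own made-up example, with an integer counter standing in for a Turing machine): an analysis that reports the state after every step has no choice but to apply the transition function step by step, which is exactly what running the machine is.

#include <stdio.h>

/* Toy machine: the configuration is a single integer, and the whole program
 * is the transition function. */
static int transition(int state) { return state * 2 + 1; }

/* "Static analysis": report the state the machine would be in after each of
 * the first n steps.  Note that the body performs the very computation it is
 * supposed to be merely predicting. */
static void analyse(int start, int n)
{
    int state = start;
    for (int i = 1; i <= n; i++) {
        state = transition(state);
        printf("step %d: state %d\n", i, state);
    }
}

int main(void)
{
    /* The "predicted" trace: 1, 3, 7, 15, 31 -- identical to what actually
     * running the machine for five steps would produce. */
    analyse(0, 5);
    return 0;
}

Rename analyse to run and nothing about the function has to change.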
Suppose I write a program that is short and simple enough that you can go through it and figure out in your head exactly what the computer will do at each line of code. In what sense has your mind not run the program, but a computer that executes the program has?
Imagine the following dialog:
Alice: “So, you’ve installed a javascript interpreter on your machine?”
Bob: “Nope.”
Alice: “But I clicked on this javascript program, and I got exactly what I was supposed to get.”
Bob: “Oh, that’s because I’ve associated javascript source code files with a program that looks at javascript code, determines what the output would be if the program had been run, and outputs the result.”
Alice: “So… you’ve installed a javascript interpreter.”
Bob: “No. I told you, it doesn’t run the program, it just computes what the result of the program would be.”
Alice: “But that’s what a javascript interpreter is. It’s a program that looks at source code, determines what the proper output is, and gives that output.”
Bob: “Yes, but an interpreter does that by running the program. My program does it by doing a static analysis.”
Alice: “So, what is the difference? For instance, if I write a program that adds two plus two, what is the difference?”
Bob: “An interpreter would calculate what 2+2 is. My program calculates what 2+2 would be, if my computer had calculated the sum. But it doesn’t actually calculate the sum. It just does a static analysis of a program that would have calculated the sum.”
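To put Alice’s side of that exchange in code (a toy example of mine; the representation of “a program that adds two numbers” is invented for illustration):

#include <stdio.h>

/* A "program" that adds two numbers, represented as data. */
struct add_program { int lhs, rhs; };

/* The interpreter: runs the program and returns its output. */
static int run(struct add_program p) { return p.lhs + p.rhs; }

/* Bob's "static analyser": returns what the output would be if the program
 * were run.  Its body is forced to perform exactly the same computation. */
static int output_if_run(struct add_program p) { return p.lhs + p.rhs; }

int main(void)
{
    struct add_program two_plus_two = { 2, 2 };
    printf("interpreter:     %d\n", run(two_plus_two));
    printf("static analyser: %d\n", output_if_run(two_plus_two));
    return 0;
}

The two function bodies are identical; the only difference between them is the name.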
I don’t see how, outside of a rarefied philosophical context, Bob wouldn’t be found to be stark raving mad. It seems to me that you’re arguing for p-zombies (at least, behavioral zombies): one could build a machine that, given any input, tells you what the output would be if it were a conscious being. Such a machine would be indistinguishable from an actually conscious being, without actually being conscious.
But my point is that at some point, a “static analysis” becomes functionally equivalent to running it. If I do a “static analysis” to find out what the state of the Turing machine will be at each step, I will get exactly the same result (a sequence of states) that I would have gotten if I had run it for “real”, and I will have to engage in computation that is, in some sense, equivalent to the computation that the program asks for.
The crucial words here are “at some point”. And Benja’s original comment (as I understand it) says precisely that Omega doesn’t need to get to that point in order to find out with high confidence what Eliezer’s reaction to counterfactual mugging would be.
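A toy example of the kind of shortcut that makes this possible (my own sketch, not Benja’s): a cheap, partial look at a program can settle its output with high confidence without doing anything like the computation the program itself performs.

#include <stdio.h>

/* An expensive computation whose result never affects the output. */
static int expensive_side_computation(void)
{
    long acc = 0;
    for (long i = 0; i < 1000000000L; i++)   /* a billion iterations */
        acc += i % 7;
    return (int)(acc % 2);
}

static int verdict(void)
{
    expensive_side_computation();   /* result discarded */
    return 1;                       /* every path returns 1 */
}

int main(void)
{
    /* A glance at verdict() shows it returns 1 (given that its loop
     * terminates); there is no need to execute the billion iterations
     * to predict that output with confidence. */
    printf("%d\n", verdict());
    return 0;
}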
I can conclude that running it would launch missiles (assuming a suitable implementation of the launch_nuclear_missiles function) and output 0, without actually launching the missiles.
Within the domain in which the program has run (your imagination), missiles have been launched.