Appreciate the time and care you put into your response—I find it helpful to work through this a step at a time.
I would similarly say that I can’t actually write a program that calculates the square root of 200, because programs can’t do any such thing. Rather, it is the program executor that performs the calculation, making use of the program.
One of the big achievements of programming language research, in the 1960s and 1970s, was to let us make precise assertions about the meaning of a program, without any discussion of running it. We can say “this program emits the number that’s the best possible approximation to the square root of its input”, without running the program, without even having any physical system that can run the program. There are some non-trivial and important properties of programs, as separate from a particular physical execution.
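To make the square-root example concrete, here is a minimal sketch of the kind of program I mean (sqrt_approx and its details are mine, purely for illustration):

```python
def sqrt_approx(x: float, iterations: int = 60) -> float:
    """Emit (a very good approximation to) the square root of x, for x >= 0.

    That sentence is a claim about this program text. It can be argued for
    by reasoning about the code (Newton's iteration converges quadratically),
    with no physical execution anywhere in the argument.
    """
    if x == 0:
        return 0.0
    guess = max(x, 1.0)
    for _ in range(iterations):
        guess = (guess + x / guess) / 2.0  # Newton's iteration for square roots
    return guess

# Only here does a program-executor enter the picture:
print(sqrt_approx(200))  # ~14.142135623730951
```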
Not all properties are of this form. “This program is currently paged out” is a property of the particular concrete execution, not of an abstract execution history or of the program itself.
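For contrast, the paging question can only be asked of a live instantiation. A rough sketch, assuming a Linux system (the /proc/self/status VmSwap field belongs to the operating system, not to any program we might be making claims about):

```python
def swapped_out_kb() -> int:
    """Report how many kB of this running process are currently in swap.

    This is a question about a concrete execution (and about the OS under it),
    not about the program text: the same source code, sitting unrun in a file,
    has no answer to it.
    """
    with open("/proc/self/status") as status:  # Linux-specific proc interface
        for line in status:
            if line.startswith("VmSwap:"):
                return int(line.split()[1])  # the field is reported in kB
    return 0  # field absent on older kernels

print(swapped_out_kb())
```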
I assume there are properties that human minds possess that are of both sorts. For example, I suspect “this is a sad brain” is a meaningful assertion even if the brain is frozen or if we’re looking at one time-step of a simulation. However, I think consciousness, the way we usually use the word, is the second sort of property.
And the reason the distinction matters is that I want a sense of what aspects of minds are relevant when discussing, e.g., an upload that isn’t currently running or a synthesized AGI.
Why do you believe this? It doesn’t seem to follow from the above: even if consciousness is a property of program-executors rather than of the programs themselves, it doesn’t follow that only biological humans can execute the appropriate programs.
I agree with this. I didn’t mean to say “nothing but a biological human can ever be conscious”—just that “not all physical embodiments would have the right property.” I expect that uploaded-human-attached-to-robot would be usefully described as conscious, and that a paper-and-pencil simulation would NOT.
We can say “this program emits the number that’s the best possible approximation to the square root of its input”, without running the program, without even having any physical system that can run the program. There are some non-trivial and important properties of programs, as separate from a particular physical execution. Not all properties are of this form. “This program is currently paged out” is a property of the particular concrete execution, not of an abstract execution history or of the program itself.
I agree with this as far as it goes, but I observe that your examples conflate two different distinctions. Consider the following statements about a program:
P1: “This program emits the number that’s the best possible approximation to the square root of its input”
P2: “This program is currently paged out”
P3: “This program is currently emitting the number that’s the best possible approximation to the square root of its input”
P4: “This program pages out under the following circumstances”
I agree with you that P1 is about the abstract program and P2 is about the current instantiation. But P3 is also about the instantiation, and P4 is about the abstract program.
That said, I would also agree that P4 is not actually something that’s meaningful to say about most programs, and might not even be coherently specifiable. In practice, the more sensible statement is:
P5: “This program-executor pages programs out under the following circumstances.”
And similarly in the other direction:
P6: “This program-executor executes programs which emit the best possible approximations to the square roots of their input”
is a goofy thing to say.
So, OK, yes. Some aspects of a program’s execution (like calculating square roots) are properties of the program, and some aspects (like page-swapping) are properties of the program-executor, and some aspects (like assisting in the derivation of novel mathematical theorems) are properties of still larger contexts.
I suspect “this is a sad brain” is a meaningful assertion even if the brain is frozen or if we’re looking at one time-step of a simulation. However, I think consciousness, the way we usually use the word, is the second sort of property.
Why do you suspect/think this? It doesn’t seem to follow from anything we’ve said so far.
And the reason the distinction matters is that I want a sense of what aspects of minds are relevant when discussing, e.g., an upload that isn’t currently running or a synthesized AGI.
Sure… I’m all in favor of having language available that makes precise distinctions when we want to be precise, even if we don’t necessarily use that language in casual conversation.
I expect that uploaded-human-attached-to-robot would be usefully described as conscious, and that a paper-and-pencil simulation would NOT.
Well, I expect a paper-and-pencil simulation of an appropriate program to be essentially impossible, but that’s entirely for pragmatic reasons. I wouldn’t expect an “uploaded human” running on hardware as slow as a human with paper and pencil to work, either.
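To put rough numbers on “essentially impossible” (ballpark, commonly cited orders of magnitude, nothing I’d defend to within a factor of ten):

```python
# Ballpark arithmetic only; both inputs are rough orders of magnitude.
brain_events_per_second = 1e14   # synaptic events per second of brain activity (rough)
hand_steps_per_second = 1.0      # generous rate for a human with pencil and paper
seconds_per_year = 3.15e7

years_per_simulated_second = brain_events_per_second / hand_steps_per_second / seconds_per_year
print(f"{years_per_simulated_second:.1e} years to simulate one second")  # roughly 3e6
```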
I suspect your reasons are different, but I’m not sure what your reasons are.