I would certainly agree that consciousness occurs over time, and that therefore saying that a system “is conscious” at a particular instant is true enough for normal conversation but technically problematic, just as saying that the system “is falling” or “is burning” at a particular instant is technically problematic.
Hrm? In the ordinary physical world, we can talk perfectly well about the interval in which an object is falling—it’s the period in which it’s moving downward as a result of gravity. And likewise it’s possible to say “the building started burning at time t, and was on fire until time t1, at which point it smouldered until t2”. If you think chemical or positional change requires time, we can do the usual move of asking about “the small interval of time around t.”
The thing that confuses me is that it seems like we have two conflicting desires or intuitions:
We want to say that consciousness is a property of minds, which is to say, of programs.
We also want to talk about consciousness as though it’s an ordinary physical state, like “walking” or “digesting”—the property starts (possibly gradually), continues for a period, and then fades out. It seems natural to say “I become conscious when I wake up in the morning, and then am conscious until night-time when I go to sleep.”
And the problem is that our intuitions for physical processes and time don’t match with what we know how to define about programs. There’s no formal way to talk about “the program is executing at time t”; that’s an implementational detail that isn’t an intrinsic property of the program or even of the program’s execution history.
I am indeed suggesting we adopt a definition of consciousness that only applies to the physical embodiment of a mind. The advantage of this definition is that it captures all the conventional uses of the term in everyday life, without forcing us to deal with definitional puzzles in other contexts.
As a rough example of such a definition: “Consciousness is the property of noticing changes over time in one’s mental state or beliefs.”
Ah, I think I understand your position better now. Thanks for clarifying.
So, OK… if I understand you correctly, you’re saying that when a building is burning, we can formally describe that process as a property of the building, but when a program is executing, that isn’t a property of the program, but rather of the system executing the program, such as a brain.
And therefore you want to talk about consciousness as a property of program-executors (physical embodiments of minds) rather than of programs themselves.
Yes? Have I got that more or less right?
Sure, I would agree with that as far as it goes, though I don’t think it’s particularly important.
I would similarly say that I can’t actually write a program that calculates the square root of 200, because programs can’t do any such thing. Rather, it is the program executor that performs the calculation, making use of the program.
And I would similarly say that this distinction, while true enough, isn’t particularly important.
Digressing a little… it’s pretty common for humans to use metonymy to emphasize what we consider the important part of the system. So it’s no surprise that we’re in the habit of talking metonymically about what “the program” does, since most of us consider the program the important part… that is, we consider two different program-executors running the same program identical in the most important ways, and consider a single program-executor running two different programs (at different times) to be different in the most important ways.
So it’s unlikely that we’ll stop talking that way, since it expresses something we consider important.
But, sure, I agree that when we want more precision for some reason it can be useful to recognize that it’s actually the program executor that is exhibiting the behavior which the program specifies (such as consciousness, or calculating the square root of 200), not the program itself.
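To make the metonymy concrete, here is a minimal sketch of the distinction (my own toy illustration, not anything we have established): the program is just inert text, and the calculation only happens when some executor runs that text.

```python
# Minimal sketch of the program / program-executor distinction.
# The "program" here is inert text; it exhibits no behavior on its own.
program = "print(200 ** 0.5)"

# The square root of 200 only gets calculated when an executor runs the text.
# In this sketch the executor is the Python interpreter on whatever machine
# happens to be running this; exec() is the standard-library way to run it.
exec(program)  # the executor, not the text, emits 14.142135623730951
```

Two different executors running the same text would, in the metonymic way of talking, be “the same program doing the same calculation”, which is exactly the emphasis described above.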
So, going back to your original comment with that context in mind:
Suppose I just wrote down on paper the states of a simulated brain at time t, t1, t2, etc. Would the model be conscious?
In normal conversation I would say “yes.”
But to be more precise about this: in this case there is a system (call it S) comprising your brain, a piece of paper, and various other things. S is executing some unspecified program that generates a series of mind-executing-system states (in this case, simulated brain-states). Not all programs cause consciousness when executed, of course, any more than all programs cause the calculation of the square root of 200. But if S is executing such a program, then S is conscious while executing it.
I’m not exactly sure what “the model” refers to in this case, but if the model includes the relevant parts of S then yes, the model is conscious.
Is it still conscious if I simulate step t1, and then wander away for a while?
This, too, is to some extent a matter of semantics.
If system S2 spends 5 seconds calculating the first N digits of pi, pauses to do something else for a second, then spends 5 seconds calculating the next N digits of pi, it’s not particularly interesting to ask whether S2 spent 10 or 11 seconds calculating those 2N digits. If I want to be more precise, I can say S2 is calculating pi for the intervals of time around each step in the execution, so if we choose intervals short enough S2’s calculation is intermittent, while if we choose intervals long enough it is continuous. But there is no right answer about the proper interval length to choose; it depends on what kind of analysis we’re doing.
Similarly, if S generates state t1 and then disintegrates (for example, you wander away for a while) and then later reintegrates (you return) and generates t2, whether we consider S to remain conscious in the interim is essentially a function of the intervals we’re using for analysis.
My sense is that consciousness is a property of biological humans in the physical world and that it’s not necessarily useful as a description for anything else. It’s a social construction and probably doesn’t have any useful summary in terms of physics or computer science or properties of neurons.
Why do you believe this? It doesn’t seem to follow from the above: even if consciousness is a property of program-executors rather than of the programs themselves, it doesn’t follow that only biological humans can execute the appropriate programs.
Appreciate the time and care you put into your response—I find it helpful to work through this a step at a time.
I would similarly say that I can’t actually write a program that calculates the square root of 200, because programs can’t do any such thing. Rather, it is the program executor that performs the calculation, making use of the program.
One of the big achievements of programming language research, in the 1960s and 1970s, was to let us make precise assertions about the meaning of a program, without any discussion of running it. We can say “this program emits the number that’s the best possible approximation to the square root of its input”, without running the program, without even having any physical system that can run the program. There are some non-trivial and important properties of programs, as separate from a particular physical execution.
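As a toy instance of the kind of assertion I mean (my own sketch, with an integer square root standing in for the real-valued one), the property stated in the docstring below is a claim about the program text itself; it can be justified by reasoning about the loop invariant, without ever running the program on any physical system.

```python
def isqrt(n: int) -> int:
    """For any n >= 0, returns the largest integer r with r*r <= n."""
    lo, hi = 0, n + 1          # invariant: lo*lo <= n < hi*hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid <= n:     # invariant is preserved either way
            lo = mid
        else:
            hi = mid
    return lo                  # e.g. isqrt(200) == 14
```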
Not all properties are of this form. “This program is currently paged out” is a property of the particular concrete execution, not of an abstract execution history or of the program itself.
I assume there are properties that human minds possess that are of both sorts. For example, I suspect “this is a sad brain” is a meaningful assertion even if the brain is frozen or if we’re looking at one time-step of a simulation. However, I think consciousness, the way we usually use the word, is the second sort of property.
And the reason the distinction matters is that I want a sense of what aspects of minds are relevant when discussing, e.g., an upload that isn’t currently running or a synthesized AGI.
Why do you believe this? It doesn’t seem to follow from the above: even if consciousness is a property of program-executors rather than of the programs themselves, it doesn’t follow that only biological humans can execute the appropriate programs.
I agree with this. I didn’t mean to say “nothing but a biological human can ever be conscious”—just that “not all physical embodiments would have the right property.” I expect that uploaded-human-attached-to-robot would be usefully described as conscious, and that a paper-and-pencil simulation would NOT.
We can say “this program emits the number that’s the best possible approximation to the square root of its input”, without running the program, without even having any physical system that can run the program. There are some non-trivial and important properties of programs, as separate from a particular physical execution. Not all properties are of this form. “This program is currently paged out” is a property of the particular concrete execution, not of an abstract execution history or of the program itself.
I agree with this as far as it goes, but I observe that your examples conflate two different distinctions. Consider the following statements about a program:
P1: “This program emits the number that’s the best possible approximation to the square root of its input”
P2: “This program is currently paged out”
P3: “This program is currently emitting the number that’s the best possible approximation to the square root of its input”
P4: “This program pages out under the following circumstances”
I agree with you that P1 is about the abstract program and P2 is about the current instantiation. But P3 is also about the instantiation, and P4 is about the abstract program.
That said, I would also agree that P4 is not actually something that’s meaningful to say about most programs, and might not even be coherently specifiable. In practice, the more sensible statement is:
P5: “This program-executor pages programs out under the following circumstances.”
And similarly in the other direction:
P6: “This program-executor executes programs which emit the best possible approximations to the square roots of their input”
is a goofy thing to say.
So, OK, yes. Some aspects of a program’s execution (like calculating square roots) are properties of the program, and some aspects (like page-swapping) are properties of the program-executor, and some aspects (like assisting in the derivation of novel mathematical theorems) are properties of still larger contexts.
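Here is a small sketch of that three-way split (the names and measurements are my own choices for illustration, not anything established above): the return value is a property of the program, the page-fault count is a property of this particular run on this particular program-executor, and whether the output assists anyone in deriving a theorem is a property of a still larger context that neither the program nor the executor can see.

```python
import resource  # Unix-only standard-library module; an assumption about the executor

def approx_sqrt_200() -> float:
    # A P1-style fact: "this program emits roughly 14.142", true of the text itself.
    return 200 ** 0.5

answer = approx_sqrt_200()

# A P2/P3-style fact: how many major page faults *this* run has incurred is a
# property of the concrete execution (the program-executor), not of the program.
major_faults = resource.getrusage(resource.RUSAGE_SELF).ru_majflt

print(answer, major_faults)

# Properties of still larger contexts (say, whether printing this number helps
# anyone derive a novel theorem) are not visible from inside the program or
# the executor at all.
```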
I suspect “this is a sad brain” is a meaningful assertion even if the brain is frozen or if we’re looking at one time-step of a simulation. However, I think consciousness, the way we usually use the word, is the second sort of property.
Why do you suspect/think this? It doesn’t seem to follow from anything we’ve said so far.
And the reason the distinction matters is that I want a sense of what aspects of minds are relevant when discussing, e.g., an upload that isn’t currently running or a synthesized AGI.
Sure… I’m all in favor of having language available that makes precise distinctions when we want to be precise, even if we don’t necessarily use that language in casual conversation.
I expect that uploaded-human-attached-to-robot would be usefully described as conscious, and that a paper-and-pencil simulation would NOT.
Well, I expect a paper-and-pencil simulation of an appropriate program to be essentially impossible, but that’s entirely for pragmatic reasons. I wouldn’t expect an “uploaded human” running on hardware as slow as a human with paper and pencil to work, either.
I suspect your reasons are different, but I’m not sure what your reasons are.
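For what it’s worth, here is the sort of back-of-envelope arithmetic behind my “pragmatic reasons” remark; every figure in it is a loudly assumed round number (commonly cited orders of magnitude for the brain, plus a guess at pencil-and-paper speed), not a measurement.

```python
# Back-of-envelope sketch: every figure below is a rough assumption, not data.
synapses = 1e14              # often-cited order of magnitude for a human brain
steps_per_sim_second = 1e3   # assume ~1 ms simulation timesteps
ops_per_synapse_step = 1     # absurdly generous: one pencil operation per update
pencil_ops_per_second = 0.1  # one hand-done arithmetic operation per ~10 seconds

ops_needed = synapses * steps_per_sim_second * ops_per_synapse_step
years = ops_needed / pencil_ops_per_second / (3600 * 24 * 365)
print(f"{years:.1e} pencil-and-paper years per simulated second")
# on these assumptions, roughly 3e10 years per simulated second of activity
```

Which is the sense in which I mean “impossible”: nothing in the argument rules it out in principle, the numbers just never work out in practice.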