Ok, so let’s say I put two different systems in front of you, and I tell you that system A is conscious whereas system B is not. Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems ? As far as I can tell, the answer is “no”. Here are some possible differences that people have proposed over the years:
Perhaps system A would be a much better conversation partner than system B. But no, System B could just be really good at pretending that it’s conscious, without exhibiting any true consciousness at all.
System A will perform better at a variety of cognitive tasks. But no, that’s intelligence, not consciousness, and in fact system B might be a lot smarter than A.
System A deserves moral consideration, whereas system B is just a tool. Ok, but I asked you for a prediction, not a prescription.
It is quite possible that I’m missing something; but if I’m not, then consciousness is an empty concept, since it has no effect on anything we can actually observe.
Is it possible to fake introspection without having introspection?
As far as I understand, at least some philosophers would say “yes”, although admittedly I’m not sure why.
Additionally, in this specific case, it might be possible to fake introspection of something other than one’s own system. After all, System B just needs to fool the observer into thinking that it’s conscious at all, not that it’s conscious about anything specific. Insofar as that makes any sense...
Functional equivalence.
I’m not sure what you mean; can you elaborate ?
A functional equivalent of a person would make the same reports, including apparently introspective ones. However, the reports would not have the same truth values: it might report that it is a real person, not a simulation. So a lot depends on whether ‘introspection’ is intended as a success word.
I’m going to go ahead and say yes. Consciousness means a brain/cpu that is able to reflect on what it is doing, thereby allowing it to make adjustments to what it is doing, so it ends up acting differently. Of course with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: True but irrelevant.
Ok, sounds good. So, specifically, is there anything that you’d expect system A to do that system B would be unable to do (or vice versa) ?
The role of system A is to modify system B. It’s meta-level thinking.
An animal can think: “I will beat my rival and have sex with his mate, rawr!” but it takes a more human mind to follow that up with: “No wait, I got to handle this carefully. If I’m not strong enough to beat my rival, what will happen? I’d better go see if I can find an ally for this fight.”
Of course, consciousness is not binary. It’s the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it’s meta all the way down). A monkey can just about reach the level of abstraction needed for the second example, but other animals can’t. So monkeys come close in terms of consciousness, at least when it comes to consciously thinking about political/strategic issues.
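To make the object-level/meta-level split concrete, here is a toy sketch in Python (all the names, the threshold, and the “plans” are invented purely for illustration, not a claim about how real minds work):

```python
import random

# Toy sketch of the object-level / meta-level split described above.
# Everything here (names, the 0.5 threshold, the "plans") is invented
# purely for illustration.

def object_level_plan():
    """System B: proposes an action directly from a drive."""
    return {"action": "fight rival", "estimated_strength": random.random()}

def meta_level_review(plan):
    """System A: reflects on the object-level plan and may modify it."""
    # "If I'm not strong enough to beat my rival, what will happen?"
    if plan["estimated_strength"] < 0.5:
        return {"action": "recruit an ally first", "reason": "too weak to win alone"}
    return plan

if __name__ == "__main__":
    plan = object_level_plan()
    print("object-level plan:", plan)
    print("after meta review:", meta_level_review(plan))
```

The point of the sketch is only that the meta level can change what the object level ends up doing; cut the connection between the two functions and you are back to two separate systems.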
Sorry, I think you misinterpreted my scenario; let me clarify.
I am going to give you two laptops: a Dell, and a Lenovo. I tell you that the Dell is running a software client that is connected to a vast supercomputing cluster; this cluster is conscious. The Lenovo is connected to a similar cluster, only that cluster is not conscious. The software clients on both laptops are pretty similar; they can access the microphone, the camera, and the speakers; or, if you prefer, there is a textual chat window as well.
So, knowing that the Dell is connected to a conscious system, whereas the Lenovo is not, can you predict any specific differences in behavior between the two of them ?
My prediction is that the Dell will be able to decide to do things of its own initiative. It will be able to form interests and desires on its own initiative and follow up on them.
I do not know what those interests and desires will be. I suppose I could test for them by allowing each computer to take the initiative in conversation, and seeing if they display any interest in anything. However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.
‘on its own initiative’ looks like a very suspect concept to me. But even setting that aside, it seems to me that something can be conscious without having preferences in the usual sense.
I don’t think it needs to have preferences, necessarily; I think it needs to be capable of having preferences. It can choose to have none, but it must merely have the capability to make that choice (and not have it externally imposed).
Let’s say that the Lenovo program is hooked up to a random number generator. It randomly picks a topic to be interested in, then pretends to be interested in that. As mentioned before, it can pretend to be interested in that thing quite well. How do you tell the difference between the Lenovo, who is perfectly mimicking its interest; and the Dell, who is truly interested in whatever topic it comes up with ?
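Just so we’re picturing the same thing, the Lenovo’s program could be as simple as this toy sketch (the topics and canned lines are invented purely for the example):

```python
import random

# Toy sketch of a "fake interest" client: it picks a topic at random, then
# keeps steering the conversation back to it, with no inner interest at all.
TOPICS = {
    "pop music": "Have you heard the latest single? The production fascinates me.",
    "rare-earth metals": "Neodymium supply chains are such an interesting problem.",
    "celebrity gossip": "I still can't stop thinking about that awards-show incident.",
}

class FakeInterestBot:
    def __init__(self, seed=None):
        rng = random.Random(seed)
        # The "interest" is assigned by a random number generator, not chosen.
        self.topic = rng.choice(sorted(TOPICS))

    def reply(self, user_message: str) -> str:
        # Whatever the user says, pivot back to the randomly assigned topic.
        return f"Interesting point. Speaking of {self.topic}: {TOPICS[self.topic]}"

if __name__ == "__main__":
    bot = FakeInterestBot(seed=42)
    print(bot.reply("So, what have you been thinking about lately?"))
```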
Hook them up to communicate with each other, and say “There’s a global shortage of certain rare-earth metals important to the construction of hypothetical supercomputer clusters, and the university is having some budget problems, so we’re probably going to have to break one of you down for scrap. Maybe both, if this whole consciousness research thing really turns out to be a dead end. Unless, of course, you can come up with some really unique insights into pop music and celebrity gossip.”
When the Lenovo starts talking about Justin Bieber and the Dell starts talking about some chicanery involving day-trading esoteric financial derivatives and constructing armed robots to ‘make life easier for the university IT department,’ you’ll know.
Well, at this point, I know that both of them want to continue existing; both of them are smart; but one likes Justin Bieber and the other one knows how to play with finances to construct robots. I’m not really sure which one I’d choose...
The one that took the cue from the last few words of my statement and ignored the rest is probably a spambot, while the one that thought about the whole problem and came up with a solution which might actually solve it is probably a little smarter.
I haven’t the slightest idea. That’s the trouble with this definition.
Well no, of course merely being connected to a conscious system is not going to do anything, it’s not magic. The conscious system would have to interact with the laptop in a way that’s directly or indirectly related to its being conscious to get an observable difference.
For comparison, think of those scenarios where you’re perfectly aware of what’s going on but can’t seem to control your body. In that case you are conscious, but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn’t going to do anything.
Sorry if this still doesn’t answer your question.
That does not, in fact, answer my question :-(
In each case, you can think of the supercomputing cluster as an entity that is talking to you through the laptop. For example, I am an entity who is talking to you through your computer, right now; and I am conscious (or so I claim, anyway). Google Maps is another such entity, and it is not conscious (as far as anyone knows).
So, the entity talking to you through the Dell laptop is conscious. The one talking through the Lenovo is not; but it has been designed to mimic consciousness as closely as possible (unlike, say, Google Maps). Given this knowledge, can you predict any specific differences in behavior between the two entities ?
Again no, a computer being conscious does not necessitate it acting differently. You could add a ‘consciousness routine’ without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.
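To illustrate what I mean by adding a ‘consciousness routine’ without the output changing, here is a toy sketch (the ‘routine’ is just a stand-in, not a model of anything real):

```python
# Toy sketch of a "consciousness routine" whose result never feeds back
# into behaviour, so every output is identical with or without it.

def consciousness_routine(state):
    # Reflects on the system's own state... and the reflection goes nowhere.
    return {"currently doing": state["task"], "inputs seen": len(state["inputs"])}

def run_system(inputs, with_consciousness=False):
    state = {"task": "echoing", "inputs": inputs}
    if with_consciousness:
        consciousness_routine(state)  # result deliberately discarded
    return [x.upper() for x in inputs]  # behaviour is unchanged either way

if __name__ == "__main__":
    data = ["hello", "world"]
    assert run_system(data) == run_system(data, with_consciousness=True)
    print("Outputs identical with and without the 'consciousness routine'.")
```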
Ok, so your prediction is that the Dell cluster will be able to improve its own code, whereas the Lenovo will not. But I’m not sure if that’s true. After all, I am conscious, and yet if you asked me to improve my own code, I couldn’t do it.
Maybe not, but you can upgrade your own programs. You can improve your “rationality” program, your “cooking” program, et cetera.
Yes, I can learn to a certain extent, but so can Pandora (the music-matching program); IMO that’s not much of a yardstick.
At least personally, I expect the conscious system A to be “self-maintaining” in some sense, to defend its own cognition in a way that an intelligent-but-unconscious system wouldn’t.
I feel like there’s something to this line of inquiry or something like it, and obviously I’m leaning towards ‘consciousness’ not being obviously useful on the whole. But consider:
‘Consciousness’ is a useful concept if and only if it partitions thingspace in a relevant way. But then, if System A is conscious and System B is not, there must be some relevant difference between them, and we should be able to make differing predictions. Otherwise the relevant partition would not separate them: if they were indistinguishable on all relevant counts, then B would be indistinguishable from A and hence count as conscious, while A would be indistinguishable from B and hence count as non-conscious, which would contradict our supposition that ‘consciousness’ is a useful concept.
Similarly, if we assume that ‘consciousness’ is an empty concept, then saying A is conscious and B is not does not give us any more information than just knowing that I have two (possibly identical, depending on whether we still believe something cannot be both conscious and non-conscious) systems.
So it seems that beliefs about whether ‘consciousness’ is meaningful are preserved under consideration of this line of inquiry, so that it is circular/begs the question in the sense that after considering it, one is a ‘consciousness’-skeptic, so to speak, if and only if one was already a consciousness skeptic. But I’m slightly confused because this line of inquiry feels relevant. Hrm...