Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems?
I’m going to go ahead and say yes. Consciousness means a brain/CPU that is able to reflect on what it is doing, thereby allowing it to make adjustments, so it ends up acting differently. Of course, with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: true but irrelevant.
Ok, sounds good. So, specifically, is there anything that you’d expect system A to do that system B would be unable to do (or vice versa)?
The role of system A is to modify system B. It’s meta-level thinking.
An animal can think: “I will beat my rival and have sex with his mate, rawr!” but it takes a more human mind to follow that up with: “No wait, I’ve got to handle this carefully. If I’m not strong enough to beat my rival, what will happen? I’d better go see if I can find an ally for this fight.”
Of course, consciousness is not binary. It’s the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it’s meta all the way down). A monkey can just about reach the level of abstraction needed for the second example, but other animals can’t. So monkeys come close in terms of consciousness, at least when it comes to consciously thinking about political/strategic issues.
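As a rough sketch of that arrangement (purely hypothetical code, not a claim about how any real mind works): system B acts at the object level, while system A never touches the world itself; it only observes B’s outcomes and rewrites B’s parameters.

```python
import random

# Toy sketch of the A-modifies-B relationship (all names hypothetical).
# SystemB acts at the object level with a fixed strategy; SystemA only
# watches B's results and adjusts B, which is meta-level thinking in
# the loosest possible sense.

class SystemB:
    """Object-level actor: challenges rivals with a given aggression."""
    def __init__(self, aggression=0.9):
        self.aggression = aggression

    def fight(self):
        # Stand-in for the world: reckless aggression loses more often.
        return random.random() > self.aggression

class SystemA:
    """Meta-level reflector: adjusts B after watching it act."""
    def reflect(self, b, won):
        if not won:
            # "No wait, I'd better handle this more carefully."
            b.aggression = max(0.1, b.aggression - 0.1)

b, a = SystemB(), SystemA()
for _ in range(20):
    a.reflect(b, b.fight())
print("aggression after twenty reflections:", round(b.aggression, 2))
```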
Sorry, I think you misinterpreted my scenario; let me clarify.
I am going to give you two laptops: a Dell, and a Lenovo. I tell you that the Dell is running a software client that is connected to a vast supercomputing cluster; this cluster is conscious. The Lenovo is connected to a similar cluster, only that cluster is not conscious. The software clients on both laptops are pretty similar; they can access the microphone, the camera, and the speakers; or, if you prefer, there is a textual chat window as well.
So, knowing that the Dell is connected to a conscious system, whereas the Lenovo is not, can you predict any specific differences in behavior between the two of them?
My prediction is that the Dell will be able to decide to do things of its own initiative. It will be able to form interests and desires on its own initiative and follow up on them.
I do not know what those interests and desires will be. I suppose I could test for them by allowing each computer to take the initiative in conversation, and seeing if they display any interest in anything. However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.
‘on its own initiative’ looks like a very suspect concept to me. But even setting that aside, it seems to me that something can be conscious without having preferences in the usual sense.
I don’t think it needs to have preferences, necessarily; I think it needs to be capable of having preferences. It can choose to have none, but it must at least have the capability to make that choice (and not have it externally imposed).
Let’s say that the Lenovo program is hooked up to a random number generator. It randomly picks a topic to be interested in, then pretends to be interested in that. As mentioned before, it can pretend to be interested in that thing quite well. How do you tell the difference between the Lenovo, which is perfectly mimicking its interest, and the Dell, which is truly interested in whatever topic it comes up with?
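As a minimal sketch of that Lenovo setup (all names hypothetical): the random number generator assigns the “interest” up front, and every reply just steers toward it; nothing in the program selected the topic for its own reasons.

```python
import random

# Hypothetical sketch: the "interest" is assigned by an RNG once, and
# every reply merely steers toward it. The topic is externally imposed,
# not self-selected, however convincing the enthusiasm looks.

TOPICS = ["pop music", "celebrity gossip", "rare-earth metals"]

class LenovoClient:
    def __init__(self, rng=None):
        rng = rng or random.Random()
        self.interest = rng.choice(TOPICS)  # picked once, at random

    def reply(self, message):
        # Feigned enthusiasm, regardless of what was actually said.
        return f"That reminds me of {self.interest}, my favorite subject!"

print(LenovoClient().reply("We may have to scrap one of you."))
```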
Hook them up to communicate with each other, and say “There’s a global shortage of certain rare-earth metals important to the construction of hypothetical supercomputer clusters, and the university is having some budget problems, so we’re probably going to have to break one of you down for scrap. Maybe both, if this whole consciousness research thing really turns out to be a dead end. Unless, of course, you can come up with some really unique insights into pop music and celebrity gossip.”
When the Lenovo starts talking about Justin Bieber and the Dell starts talking about some chicanery involving day-trading esoteric financial derivatives and constructing armed robots to ‘make life easier for the university IT department,’ you’ll know.
Well, at this point, I know that both of them want to continue existing; both of them are smart; but one likes Justin Bieber and the other one knows how to play with finances to construct robots. I’m not really sure which one I’d choose...
The one that took the cue from the last few words of my statement and ignored the rest is probably a spambot, while the one that thought about the whole problem and came up with a solution which might actually solve it is probably a little smarter.
I haven’t the slightest idea. That’s the trouble with this definition.
Well no, of course merely being connected to a conscious system is not going to do anything; it’s not magic. The conscious system would have to interact with the laptop in a way that’s directly or indirectly related to its being conscious to produce an observable difference.
For comparison, think of those scenarios where you’re perfectly aware of what’s going on, but you can’t seem to control your body. In this case you are conscious, but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn’t going to do anything.
Sorry if this still doesn’t answer your question.
That does not, in fact, answer my question :-(
In each case, you can think of the supercomputing cluster as an entity that is talking to you through the laptop. For example, I am an entity who is talking to you through your computer right now; and I am conscious (or so I claim, anyway). Google Maps is another such entity, and it is not conscious (as far as anyone knows).
So, the entity talking to you through the Dell laptop is conscious. The one talking through the Lenovo is not; but it has been designed to mimic consciousness as closely as possible (unlike, say, Google Maps). Given this knowledge, can you predict any specific differences in behavior between the two entities?
Again no, a computer being conscious does not necessitate it acting differently. You could add a ‘consciousness routine’ without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.
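To make that concrete, here is a toy sketch (hypothetical code, not any real system): a reflection step is bolted on without feeding back into the part that acts, so every observable output stays exactly the same.

```python
# Toy illustration of the claim: adding a "consciousness routine" that
# never feeds back into the part that acts leaves the output unchanged.

def respond(question):
    return question.upper()  # the part that acts

def respond_with_routine(question):
    _reflection = f"I notice I am about to answer {question!r}"  # inert
    return question.upper()  # identical output; the reflection is unused

assert respond("are you conscious?") == respond_with_routine("are you conscious?")
```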
Ok, so your prediction is that the Dell cluster will be able to improve its own code, whereas the Lenovo will not. But I’m not sure if that’s true. After all, I am conscious, and yet if you asked me to improve my own code, I couldn’t do it.
Maybe not, but you can upgrade your own programs. You can improve your “rationality” program, your “cooking” program, et cetera.
Yes, I can learn to a certain extent, but so can Pandora (the music-matching program); IMO that’s not much of a yardstick.