But is it conscious? Well, if consciousness is a process, then no. This is just a snapshot of that process. It’s like a four-stroke engine frozen in the compression stroke, sitting next to an engine frozen in the power stroke, next to an engine in the exhaust stroke, next to an engine in the induction stroke, next to a bunch more engines frozen in every state in between. Each state is like a frozen waterfall, and it doesn’t count as a waterfall if none of the water is actually falling. It only counts as consciousness if it is actually thinking things like “I am aware that I am aware”, not if it just has a series of thoughts frozen in place. Thoughts frozen on a sheet of paper, or on a computer screen, are just representations of what was once an active process of thinking.
Would you deny that the function f(x) = x and the set of ordered pairs {...,(-1,-1),(0,0),(1,1),...} are merely two different representations of the same thing?
Sure, at least for integer values of x. :p

That doesn’t answer your point, though, which I presume was to appeal to a notion of interchangeable parts being equivalent, as Turing suggested. I think it would be inaccurate to say that GLUT = Bob, even if F_GLUT(input) = F_Bob(input). It’s not like comparing the same software or function running on two different operating systems. It’s comparing a function programmed in C# with throwing sand on a table and noticing that, if you interpret the pattern as dots and dashes, the Morse code happens to give the same result.
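To make that distinction concrete, here is a minimal Python sketch (my own illustration, not anything proposed in this thread): a computed function and a finite lookup table that agree on every tested input, even though one embodies a rule and the other merely stores outputs.

```python
# A function that computes its answer each time it is called.
def f(x):
    return x

# A "GLUT": a finite lookup table of precomputed input/output pairs.
glut = {x: x for x in range(-1000, 1001)}

# Extensionally, the two agree on every input in the table's domain...
assert all(f(x) == glut[x] for x in range(-1000, 1001))

# ...but intensionally they are different kinds of object: one embodies
# a rule, the other is a frozen record of a rule's past outputs.
print(type(f), type(glut))  # <class 'function'> <class 'dict'>
```

Whether that intensional difference matters for consciousness is exactly what is at issue.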
A Turing test seems like it should be valid in all real instances. The randomly generated GLUT, by definition, is one among countless trillions and trillions which gives a false positive on a Turing test. It’d be like giving 10^^^^^^^^^^^2 Turing tests via phone to an empty room, having the random static just happen to sound like words in some of them, having those words happen to form coherent sentences in a subset of those, and then having those coherent sentences happen to be actual rational answers to the examiner’s questions.
The difference is that, with the GLUT, you’ve created a list of all possible answers, and then rejected everything but the coherent ones that match a certain personality type and resemble a single person. You’ve then taken just these pre-recordings, from among countless trillions of trillions, labeled them alone as your conscious GLUT, and used them to pass a Turing test. However, it’s not fair to look just at this one pass, any more than it is reasonable to look at just the one pass out of trillions from the random noise on the phone line. You also have to consider the trillions and trillions of failed attempts. If all the phone-line static “words” and “sentences” had been pre-recorded before the trillions of tests, would you then say that the one tape that passed was truly conscious?
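As a toy illustration of that selection effect (my own sketch, not anything from this exchange; the “test” here is trivially simple), consider generating random transcripts and keeping only the one that happens to pass:

```python
import random
import string

# Toy model of post-hoc selection: generate random "transcripts" and
# discard everything that fails the test. The survivor looks clever only
# because the failures were silently thrown away.
random.seed(0)
target_answer = "yes"  # stand-in for "a rational answer to the examiner"
attempts = 0
while True:
    attempts += 1
    noise = "".join(random.choices(string.ascii_lowercase, k=len(target_answer)))
    if noise == target_answer:
        break

print(f"Random noise passed after {attempts} discarded failures.")
```

Crediting the surviving transcript with thought means ignoring all the siblings that were discarded.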
Would you say that you are conscious? I mean, after all, you’re nothing but one result that came out of countless experiments done by natural selection over the past ~4 billion years.
Yes, I am conscious, and so are most or all other humans. I am aware of my own existence; therefore I am conscious by definition. It would be improbable for the universe to make me conscious but not other people, given our physical similarities, so I’m almost 100% certain that you aren’t a zombie. It’s not clear to me when “unconscious” people are and are not conscious, in the sense of being aware of their own existence. I could probably do a web search and uncover some data hinting at where the line should be drawn, but that would be going off on a tangent.
The distinction I am trying to make is between random chance creating the thing itself, and random chance merely reproducing the outcomes the thing would have produced. Random chance can erode a rock with lines that look like writing, or random chance can create a self-replicating set of molecules that eventually evolve into intelligent life. If we look at writing on a stone, we presume that the writer is conscious, but that’s not the case if there is no writer.
I’m trying to make a map/territory distinction. Consciousness is something that actually exists, in physical form, in the real world. There are some combinations of atoms which are consciousness, and some that aren’t. When we draw our map, we naturally assume that if it looks like a duck and quacks like a duck, it’s a duck. But philosophers have asked us an interesting question:
What if, by random chance, a TV projector came into existence from the random motions of atoms, and it projected something that looked and moved exactly like a duck? What if it projected this image directly into both your retinas, in 3D, with enough fidelity that you couldn’t tell the difference? Clearly, it’s not a duck, even though it looks and quacks exactly like a duck. Since there is no observable difference, the thing that makes one a duck must be some mystical property, external to the observable world. There must be some innate quality of duckhood, some mystical duckiness, which imbues ducks with their duckitude.
I don’t know that I’m conscious. (And to avoid the inevitable snark, using “I” in written text doesn’t demonstrate I do.)
If I wanted to know that something is, say, a triangle, you could tell me what it means for something to be a triangle. I could then check things and say “yup, that one is a triangle”.
If I were to instead be puzzled about “red”, you couldn’t describe being red to me, but you could at least show me examples of things that are all red. I could verify that the perceptions I have of those things are similar. Furthermore, I could then point to other objects, say “these produce the same perceptions in me as the first objects”, and discover that you agree with me on those objects producing the same sensation, even if I can’t directly compare the sensation in my head to the sensation in yours.
But even this isn’t possible for “consciousness”. Nobody can perceive more than one example of consciousness. Even assuming that I am conscious and can perceive that, there’s no commonality—there aren’t any sets of things which both of us will perceive as consciousness, that I can use to generalize from to figure out whether some unknown thing I can perceive is also consciousness. If I only ever saw one red object in my lifetime, and the only red object you ever saw in your lifetime was a different object, how could either of us know that the two are the same color?
Are you arguing that you don’t know if you are conscious because you can’t be sure that the consciousness you experience and observe matches the consciousness other people claim to experience and observe?
I would argue that the word “red” is poorly defined, since it predates a modern understanding of light. For the purposes of discussion, we might define an object as “red” if, under uniform illumination across the visible spectrum (430–790 nm), the majority of the visible-light energy it reflects or otherwise emits has a wavelength between 620 and 750 nm.
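As a sketch, that definition could be operationalized like this in Python (the reflectance curve below is invented purely for illustration):

```python
# Sketch of the proposed definition: under uniform illumination, does the
# majority of the energy an object reflects in the visible band (430-790 nm)
# fall between 620 and 750 nm? The reflectance curve is a made-up example
# of an object that reflects strongly above 600 nm.
reflectance = {wl: (0.8 if wl >= 600 else 0.1) for wl in range(430, 791)}

def is_red(refl):
    total = sum(refl.values())  # total reflected energy under uniform light
    red_band = sum(e for wl, e in refl.items() if 620 <= wl <= 750)
    return red_band / total > 0.5

print(is_red(reflectance))  # True for this hypothetical object
```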
But you say this:
If I were to instead be puzzled about “red”, you couldn’t describe being red to me, but you could at least show me examples of things that are all red.
That may have been true at some point in history, but today I can describe being red to you in great detail, as I did above. Before we had that knowledge, the evidence that we were talking about the same thing was weaker, but it was still evidence. Without knowing about photons, it would still seem unlikely that we were talking about different “reds”. Occam’s razor would favor a single “red” property or set of properties over something that appeared different to different people according to some complex set of rules. The counter-evidence is colorblindness, of course, but the complexity added by claiming that some people have eye problems is significantly less than the complexity added by the theory that each person has their own version of red.
I would argue that we have a similar level of knowledge about consciousness, further hindered by the fact that, as you point out, we can each observe only our own consciousness with high fidelity. To observe other people’s consciousnesses, we have to examine the things they say and write about them. It’s a bit like observing the sun, observing the stars, and trying to deduce whether the sun is a star.
Different cultures seem to have independently come up with similar sounding ideas about consciousness, so it seems like most people are more or less talking about the same sort of thing. I’m sure there are minute differences, just as there are different shades of red, and different classifications of stars. After all, the atoms and neurons in our brains are all configured slightly differently, so it would be surprising if our consciousnesses were exactly identical, neuron for neuron. Then again, it would also be surprising if our sun was exactly the same as some other star, atom for atom. But that’s why we use words like “star”, “dog”, “red” and even “consciousness” to refer to entire classes of things. In this case, “consciousness” refers to the sensation of existing, and the thing that causes us to talk about consciousness. That’s not a full definition, but it’s a start. We’ll have to wait on neuroscience, or perhaps AI research, before we can get a more precise definition.
Are you arguing that you don’t know if you are conscious because you can’t be sure that the consciousness you experience and observe matches the consciousness other people claim to experience and observe?
Sort of, but not quite.
In the case of “red”, I can’t be sure that someone’s mental sensation when seeing red is the same as my mental sensation when seeing red. They’re private, after all. But I can at least be fairly sure that these sensations, however different they may be privately, still point to the same set of things. They are operationally the same. Since my perceptions of red correlate with the other person’s perceptions of red, it makes sense to conclude (with less than perfect certainty) that red objects have something in common with each other—that is, that redness is a natural category.
But I can’t apply this to consciousness. There are no consciousnesses that we can both see—we can each see at most one, and we can never see the same one that the other can. So the factor that leads me to conclude that redness is a natural category is absent for consciousness.
Different cultures seem to have independently come up with similar sounding ideas about consciousness
Have they? Different cultures have come up with similar sounding ideas on how to conclude that something has consciousness, but they (or their members) cannot ever make direct observations of two consciousnesses and say that they observe similarities between them. So the example of different cultures agreeing only lets us be pretty sure that “consciousness-labelled-observed-behavior” is a real thing, but not that one person’s direct observation of their own consciousness is the same as another person’s direct observation.
Ah, so you were talking about the possible mismatch between our perceptions of the redness of red. I could try to guess at a technical answer, since it would be highly immoral to experiment with actual people. I’m not sure it would make any difference to the consciousness argument, though.
It sounds like you do experience some sort of sensation of existing, but that you don’t talk about this sensation with words like “consciousness”, or anything else, because you can’t draw a logical link between different people’s consciousnesses to show that they are the same thing.
But I’m not talking about formal logic. I’d agree with you that, given what we know, we can’t deduce that everyone is talking about the same “consciousness”. However, we have tools in our bag besides formal logic. One such tool is Bayes’ theorem. Do you really assign less than a 50% probability to the hypothesis that our ideas of “consciousness” are similar, rather than entirely unrelated things? Maybe it isn’t above 95% certainty, or 99.9%, or whatever arbitrary threshold you would choose before you can safely say that you “know” something.
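To make that concrete, here is a toy Bayes update; every number in it is invented for illustration, but it shows how independent agreement can push such a hypothesis well past 50%:

```python
# Toy Bayes-theorem update (all numbers invented for illustration):
# H = "different people's ideas of 'consciousness' pick out similar things"
# E = "independent cultures describe consciousness in similar terms"
prior_H = 0.5          # start agnostic
p_E_given_H = 0.9      # similar descriptions are likely if H is true
p_E_given_not_H = 0.1  # and unlikely if everyone means something different

p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E
print(posterior_H)  # ~0.9: the agreement shifts us well above 50%
```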
Personally, I would assign a low probability to the idea that our consciousnesses are identical, but quite a high probability to the idea that they are at least similar in nature. People seem to talk about consciousness in very different ways than they talk about potatoes or space-time. There are enough differences in the rest of our brains that I would be surprised if our consciousnesses were identical, but there are still patterns shared across most human brains. It strikes me as an unsolved but bounded question, rather than an unknowable one.
To perceive at all, regardless of the nature of that perception, is consciousness. So I think the “I” snark is warranted.
Ah. I think I understand your position a bit better now; thanks. Now let me ask you the following question:
Suppose I take a certain volume of space large enough to hold a human brain—say, a one-meter cube. Now let us suppose that I fill that space with a random arrangement of quarks and electrons. This will almost certainly produce nothing more than a shapeless blob of matter. But now suppose that I continue doing this, over and over again, until finally, after perhaps quintillions upon quintillions of trials, I manage to construct a human brain purely by random chance. (This is a scenario physicists have seriously speculated about, known as the Boltzmann brain.)
Assuming that this brain doesn’t die immediately due to being created in a vacuum, would you agree that it is conscious?
The vast majority of such brains would not be. They’d just be hunks of dead meat, no different from the brain of a cadaver. A tiny subset, however, would be conscious, at least until they ran out of oxygen or whatever and died.
I’m not objecting to the manner in which the GLUT is created, but merely observing that it doesn’t have a form which seems like it would give rise to consciousness. Without knowing the exact mechanism by which human brains give rise to consciousness, it is difficult to say precisely where to draw the line between conscious and not conscious, but a GLUT doesn’t seem to be structured in a way that could think. I’m arguing that it is possible, at least in principle, to cheat a Turing test with a GLUT.
I gave a few more comments in response to blossom’s question, if you are interested.