I tend to agree with you, although I think of this as a strict limitation of empiricism.
We expand scientific knowledge through the creation of universal experiences that are, in principle, available to anyone (replicable experiments), and through the models that follow from those experiences. Consciousness, in contrast, is the experience of being oneself (and in exceptional cases, the experience of being aware that one is aware of oneself, and on down the rabbit hole). What would it mean to create the experience of being a particular entity, accessible to any observer? That question looks suspiciously like gibberish. Nagel’s excellent and admirable “What Is It Like to Be a Bat?” develops this idea quite a bit.
But hopefully, this problem becomes fairly trivial even if it doesn’t disappear as such. We can certainly notice that human bodies tend to seem conscious and that shoes tend not to, and by developing AI from a (gulp) basically phenomenological perspective we can create a future we have every reason to believe is rich with selves and perspectives, no more a leap of faith than biological reproduction. We just won’t be able to point a consciousness-detecting machine at them and wait for it to go ‘ping’.
Having been able to experience firsthand what it is like to be an echolocator, I don’t find the question gibberish at all.
Even with ordinary evidence you don’t have access to other people’s experiences of the apparatus. That is, I will never know what the thermometer looks like to you. And if I am, say, color-blind, I can never have quite the same experience. But still we get compatible enough experiences to construct a “thermometer reading” that is the same no matter the subject. In principle it should not be any harder to construct experiences that stand for experiences rather than for temperatures.
I heard on television that a blind person had developed the skill of seeing with clicks. It sounded cool and worth the effort, so I trained myself to have that ability too. Yes, it was fun as predicted, but not totally mind-blowing. No, it is not impossible.
Small babies have eyes that receive light, but they can’t see because they can’t process the information sufficiently. For hearing, people retain this property well past being three days old. There is no natural incentive to be particularly picky about hearing; you can be a human just fine without being an echolocator (with just stereo hearing) and not even know you are missing anything. (Humans are seers, not sniffers like dogs or hearers like bats. Humans are also trichromats while the average animal is a tetrachromat, and yes, still a human that doesn’t feel like it is missing out on anything, because you don’t know you are in the handicapped minority.)
The argument is like saying that because people are naturally illiterate, they can’t possibly imagine what it would be to see words instead of hearing them. If you don’t go outside the experience of a medieval peasant, that might hold. If you are given a text in a foreign alphabet and later given the alphabet and asked to point out the letters you saw, you might not be able to complete the task. That you have this property doesn’t mean it can’t be changed with training: your ability to see letters will improve if you work on your literacy. People are able to work toward more efficient sensory processing. And this includes high-end stuff such as the synesthetic ability to use spatial metaphors for amounts. There are people whose process of identifying a letter/number gives it a color association. It’s not that the information is in the wrong format for the brain to accept; it’s that the brain is not yet in a sufficiently expressive format to represent the stimuli. But that is more a duty of the brain to change than an incomprehensibility of the object. Map and territory, etc.
People who argue that imagination can’t encompass this have not seriously tried. And even if they have seriously tried, that is more evidence of their lower-than-average imaginative capability than of the truth of their argument. “What it is to be a human” isn’t nearly standard enough to be referenced as a single monolithic concept, much less as an axiom that doesn’t need to be stated.
So claiming logical impossibility is hasty in the extreme.
by developing AI from a (gulp) basically phenomenological perspective we can create a future we have every reason to believe is rich with selves and perspectives, no more a leap of faith than biological reproduction.
Well, one of the reasons that the Turing Test has lasted so long as a benchmark, despite its problems, is the central genius of holding inorganic machines to the same standards as organic ones. Notwithstanding p-zombies and some of the weirder anime shows, we’re actionably and emotionally confident in the consciousness of the humans that surround us every day. We can’t experience these consciousnesses directly, but we do care about their states in terms of both instrumental and object-level utility.
An AGI presents new challenges, but we’ve already demonstrated a basic willingness to treat ambulatory meat sacks as valuable beings with an internal perspective. By assigning the same sort of ‘conscious’ label to a synthetic being who nonetheless has a similar set of experiential consequences in our lives, we can somewhat comfortably map our previous assumptions onto a new domain. That gives us a beachhead, and a basis for cautious expansion and observation in the much more malleable space of inorganic intelligences.
we can somewhat comfortably map our previous assumptions onto a new domain.
I’m not sure how comfortably.
I saw a bit of the movie Her, about the love affair between a guy and his operating system. It was horrifying to me, but I think for a different reason than for everyone else in the room. I was thinking, “he might be falling in love with an automaton. How do we know if he is in a relationship with another mind, or just an unthinking mechanism of gears and levers that looks like another mind from the outside?” The idea of being emotionally invested in an emotional void bothers me. I want my relationships to be with other minds.
Some here see this as a meaningless distinction. The being acts the same as a mind, so for all intents and purposes it is a mind. What difference does it make to your utility if the relationship is with an empty chasm shaped like a person? The input-output is the same.
Perhaps. I’m still working through this, and perhaps my discomfort is a facet of an outdated worldview. I’ll note, however, that this reduces charity to fuzzy-seeking: it makes no difference whether you actually help, as long as you feel like you help. Presented with the choice, you would be indifferent between saving one life and saving every life.
In any case, I feel safe in presuming the consciousness of other humans, not because they resemble me in outputs, but because we were both produced by the same process of evolution, and it would be strange if evolution made me conscious but not beings that are genetically basically identical to me. I do not so readily make that assumption for a non-human, even a human-brain emulation running on hardware other than a brain.