By developing AI from a (gulp) basically phenomenological perspective, we can create a future we have every reason to believe is rich with selves and perspectives, no more a leap of faith than biological reproduction.
Can you elaborate on this?
Well, one of the reasons that the Turing Test has lasted so long as a benchmark, despite its problems, is the central genius of holding inorganic machines to the same standards as organic ones. Notwithstanding p-zombies and some of the weirder anime shows, we’re actionably and emotionally confident in the consciousness of the humans that surround us every day. We can’t experience these consciousnesses directly, but we do care about their states in terms of both instrumental and object-level utility.
An AGI presents new challenges, but we’ve already demonstrated a basic willingness to treat ambulatory meat sacks as valuable beings with an internal perspective. By assigning the same sort of ‘conscious’ label to a synthetic being who nonetheless has a similar set of experiential consequences in our lives, we can somewhat comfortably map our previous assumptions on to a new domain. That gives us a beachhead, and a basis for cautious expansion and observation in the much more malleable space of inorganic intelligences.
we can somewhat comfortably map our previous assumptions on to a new domain.
I’m not sure how comfortably.
I saw a bit of the movie Her, about the love affair between a guy and his operating system. It was horrifying to me, but I think for a different reason than for everyone else in the room. I was thinking, “He might be falling in love with an automaton. How do we know whether he is in a relationship with another mind or just an unthinking mechanism of gears and levers that looks like another mind from the outside?” The idea of being emotionally invested in an emotional void bothers me. I want my relationships to be with other minds.
Some here see this as a meaningless distinction. The being acts the same as a mind, so for all intents and purposes it is a mind. What difference does it make to your utility if the relationship is with an empty chasm shaped like a person? The input-output is the same.
Perhaps. I’m still working through this, and perhaps my discomfort is a facet of an outdated worldview. I’ll note, however, that this reduces charity to fuzzy-seeking: it doesn’t make any difference whether you actually help, as long as you feel like you help. If presented with the choice, you would be indifferent between saving one life and saving every life.
In any case, I feel safe in presuming the consciousness of other humans, not so much because they resemble me in outputs as because we were both produced by the same process of evolution, and it would be strange if evolution made me conscious but not the beings that are genetically near-identical to me. I do not so readily make that assumption for a non-human, not even for a human brain emulation running on hardware other than a brain.