Consciousness is a red herring. We don’t even know if human beings are conscious. You may have a strong belief that you are yourself a conscious being, but how can you know if other people are conscious? Do you have a way to test if other people are conscious?
A superintelligent, misaligned AI poses an existential risk to humanity quite independently of whether it is conscious or not. Consciousness is an interesting philosophical topic, but it has no relevance to anything in the real world.
I’m not sure how we could say that there’s no phenomenon that the word “consciousness” refers to. It seems to me that this is like questioning whether reality itself exists: the point of “reality” is to refer to the consistency of the things we perceive, and if we question whether reality ‘exists’, we still find that consistency of perception regardless. Questioning consciousness seems analogous.
We don’t even know if human beings are conscious(...) how can you know if other people are conscious? Do you have a way to test if other people are conscious?
If I can identify the referent of the word “consciousness” at all, then I can check whether the way other people speak about their experiences matches that concept of “consciousness”, and it does. That’s evidence in favour of them being conscious.
And we can actually detect empirical differences between consciousness and non-consciousness: there are people who respond to visual stimuli while saying they are not aware of seeing anything (as in blindsight), even though they were aware at some point in their lives.
You are talking about what I would call phenomenological, or “philosophical-in-the-hard-problem-sense”, consciousness. (“Phenomenological” is also not quite the right word, because psychology is itself phenomenology relative to neuroscience, but this is an aside.)
“Psychological” consciousness (specifically, two kinds of it: affective/basal/core consciousness and access consciousness) is not mysterious at all. These are just ordinary objects of study in neuropsychology.
Corresponding objects could also be found in AIs and called “interpretable AI consciousness”.
“Psychological” and “interpretable” consciousness could perhaps be generalised into some sort of “general consciousness in systems”. (Actually, Fields et al. have already proposed such a theory, but their conception of general consciousness surely couldn’t serve as a basis for ethics.)
The proper theory of non-anthropocentric ethics, should it be based in some way on consciousness (which I’m actually doubtful about; I will write a post about this soon), should surely use “psychological” and “interpretable” consciousness rather than “philosophical-in-the-hard-problem-sense” consciousness.