The Type II error of behaving as if these and future systems are not conscious in a world where they are in fact conscious.
Consciousness does not have a commonly agreed upon definition. The question of whether an AI is conscious cannot be answered until you choose a precise definition of consciousness, at which point the question falls out of the realm of philosophy into standard science.
This might seem like mere pedantry or missing the point, because the whole challenge is to figure out the definition of consciousness, but I think it is actually the central issue. People are grasping for some solution to the “hard problem” of capturing the je ne sais quoi of what it is like to be a thing, but they will not succeed until they deconfuse themselves about the intangible nature of sentience.
You cannot know about something unless it is somehow connected to the causal chain that led to the current state of your brain. If we know about a thing called “consciousness”, then it is part of this causal chain. Therefore “consciousness”, whatever it is, is a part of physics. There is no evidence for, and there cannot ever be evidence for, any kind of dualism or epiphenomenal consciousness. This leaves us to conclude that either panpsychism or materialism is correct. And causally-connected panpsychism is just materialism where we haven’t discovered all the laws of physics yet. This is basically the argument for illusionism.
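Spelled out, the deduction looks something like this (the labels and predicate names are my own shorthand, not anything standard; P3 is the implicit step where a dualist would have to object):

```latex
\begin{align*}
&\textbf{P1: } \forall x\,\big(\mathrm{Known}(x) \rightarrow \mathrm{CausallyConnected}(x, \mathrm{yourBrain})\big)\\
&\textbf{P2: } \mathrm{Known}(\mathrm{consciousness})\\
&\textbf{P3: } \forall x\,\big(\mathrm{CausallyConnected}(x, \mathrm{yourBrain}) \rightarrow \mathrm{PartOfPhysics}(x)\big)\\
&\textbf{C: } \therefore\ \mathrm{PartOfPhysics}(\mathrm{consciousness})
\end{align*}
```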
So “consciousness” is the algorithm that causes brains to say “I think therefore I am”. Is there some secret sauce that makes this algorithm special and different from all currently known algorithms, such that if we understood it we would suddenly feel enlightened? I doubt it. I expect we will just find a big pile of heuristics and optimization procedures that are fundamentally familiar to computer science. Maybe you disagree, that’s fine! But let’s just be clear that that is what we’re looking for, not some other magisterium.
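To make vivid what a deflationary answer could look like, here is a deliberately silly toy (every name below is invented; this is a caricature of the claim, not a theory of consciousness): a system whose report “I think therefore I am” is produced by ordinary computation that happens to include a model of itself.

```python
from dataclasses import dataclass, field


@dataclass
class ToySelfModeler:
    """A caricature: an agent that models its own processing."""

    observations: list[str] = field(default_factory=list)
    self_model: dict[str, bool] = field(default_factory=dict)

    def perceive(self, stimulus: str) -> None:
        # First-order processing: modeling the world.
        self.observations.append(stimulus)

    def introspect(self) -> None:
        # Second-order processing: the agent models *itself* as a thing
        # that is currently processing. Still just computation.
        self.self_model["is_processing"] = bool(self.observations)

    def report(self) -> str:
        # The verbal report is caused by the self-model, which is caused
        # by ordinary processing: a closed causal chain, no extra magic.
        self.introspect()
        if self.self_model.get("is_processing"):
            return "I think, therefore I am."
        return "(nothing to report)"


agent = ToySelfModeler()
agent.perceive("red patch")
print(agent.report())
```

Nothing here is conscious, obviously; the point is only that on the deflationary view, the real answer differs from this toy in scale and detail, not in kind.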
If consciousness is indeed sufficient for moral patienthood, then the stakes seem remarkably high from a utilitarian perspective.
Agreed, if your utility function is that you like computations similar to the human experience of pleasure and dislike computations similar to the human experience of pain (mine is!). But again, let’s not confuse ourselves by thinking there’s some deep secret about the nature of reality to uncover. Your choice of meta-ethical system is of the same type signature as your choice of favorite food.
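Taking the type-signature line literally, just to show what I mean (a toy sketch; the similarity measures are stand-ins I made up):

```python
from typing import Callable

# Both kinds of valuing have the same type: a function from some
# object to a real-valued score.
Preference = Callable[[str], float]

def pleasure_similarity(computation: str) -> float:
    """Stand-in for a real measure of similarity to human pleasure."""
    return 1.0 if "pleasure-like" in computation else 0.0

def pain_similarity(computation: str) -> float:
    """Stand-in for a real measure of similarity to human pain."""
    return 1.0 if "pain-like" in computation else 0.0

def my_utility(computation: str) -> float:
    # Like pleasure-like computations, dislike pain-like ones.
    return pleasure_similarity(computation) - pain_similarity(computation)

def favorite_food_score(food: str) -> float:
    return 1.0 if food == "ramen" else 0.0

# Nothing in the type distinguishes the "ethical" preference from the
# culinary one:
ethics: Preference = my_utility
taste: Preference = favorite_food_score
```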
Consciousness does not have a commonly agreed upon definition. The question of whether an AI is conscious cannot be answered until you choose a precise definition of consciousness, at which point the question falls out of the realm of philosophy into standard science.
Agree. I also happen to think that there are basic conflations/confusions that tend to occur in these conversations (e.g., self-consciousness vs. consciousness) that make the task of defining what we mean by consciousness more arduous and confusing than it likely needs to be (which isn’t to say that defining consciousness is easy). I would analogize consciousness to intelligence in terms of how difficult it is to nail down precisely, but I don’t think there is anything philosophically special about consciousness that inherently eludes modeling.
is there some secret sauce that makes the algorithm [that underpins consciousness] special and different from all currently known algorithms, such that if we understood it we would suddenly feel enlightened? I doubt it. I expect we will just find a big pile of heuristics and optimization procedures that are fundamentally familiar to computer science.
Largely agree with this too. It may well be the case (as now seems obviously true of intelligence) that there is no one ‘master’ algorithm that underlies the whole phenomenon, but rather, as you say, a big pile of smaller procedures, heuristics, etc. So be it: we definitely want to better understand (for reasons explained in the post) what set of potentially-individually-unimpressive algorithms, when run in concert, gives you a system that is conscious.
So, to your point, there is not necessarily any one ‘deep secret’ to uncover that will crack the mystery (though we think, e.g., Graziano’s Attention Schema Theory (AST) might be a strong candidate solution for at least part of this mystery), but I would still think that (1) it is worthwhile to attempt to model the functional role of consciousness, and that (2) whether we actually have better or worse models of consciousness matters tremendously.
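For readers who haven’t encountered AST: very roughly, the theory says the brain builds a simplified model (a “schema”) of its own attention, and reports about awareness are read off that schema rather than off the attention mechanism itself. A made-up toy just to show the shape of the claim (none of this is Graziano’s actual model):

```python
def attend(signals: dict[str, float]) -> str:
    """Competitive selection: attention goes to the strongest signal."""
    return max(signals, key=signals.get)

def build_attention_schema(target: str) -> dict[str, str]:
    # The schema is a lossy, simplified description of the attention
    # process. On AST, this lossiness is (roughly) why introspection
    # describes awareness as something seemingly non-mechanistic.
    return {"aware_of": target, "how_it_seems": "simply present to me"}

signals = {"red patch": 0.9, "hum of the fridge": 0.3}
schema = build_attention_schema(attend(signals))

# Reports about "awareness" consult the schema, not the mechanism:
print(f"I am aware of the {schema['aware_of']}.")
```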
And causally-connected panpsychism is just materialism where we haven’t discovered all the laws of physics yet.
Materialism, specifically applied to consciousness, is also just materialism where we haven’t discovered all the laws of physics yet — specifically, those that constitute the sought-for materialist explanation of consciousness.
It is the same as how “atoms!” is not an explanation of everyday phenomena such as fire. Knowing which specific atoms are involved, what they are doing and why, and how that gives rise to our observations of fire: that is an explanation.
Without that real explanation, “atoms!” or “materialism!”, is just a label plastered over our ignorance.
materialism where we haven’t discovered all the laws of physics yet — specifically, those that constitute the sought-for materialist explanation of consciousness
It seems unlikely that new laws of physics are required to understand consciousness? My claim is that understanding consciousness just requires us to understand the algorithms in the brain.
Without that real explanation, “atoms!” or “materialism!”, is just a label plastered over our ignorance.
Agreed. I don’t think this contradicts what I wrote (not sure if that was the implication).