It’s extremely premature to leap to the conclusion that consciousness is some sort of unobservable opaque fact. In particular, we don’t know the mechanics of what’s going on in the brain as you understand and say “I am conscious”. We have to at least look for the causes of these effects where they’re most likely to be, before concluding that they are causeless.
People don’t even have a good definition of consciousness that cleanly separates it from nearby concepts like introspection or self-awareness in terms of observable effects. The lack of observable effects goes so far that people posit they could get rid of consciousness and everything would happen the same (i.e. p-zombies). That is not an unassailable strength making consciousness impossible to study; it is a glaring weakness implying that p-zombie-style consciousness is a useless or malformed concept.
I completely agree with Eliezer on this one: a big chunk of this mystery should dissolve under the weight of neuroscience.
It’s extremely premature to leap to the conclusion that...
Premature to leap to conclusions? Absolutely. Premature to ask questions? I don’t think so. Premature to acknowledge foreseen obstacles? Perhaps. We really do have little information about how the brain works and how a brain creates a mind. Speculation before we have data may not be very useful.
I want to underscore how skeptical I am of drawing conclusions about the world on the basis of thought alone. Philosophy is not an effective method for finding truth. The pronouncements by philosophers of what is “necessary” are more often than not shown to be fallacious, bordering on the absurd, once scientists get to the problem. Science’s track record of proving the presumed-to-be-unprovable is fantastic. Yet, knowing this, the line of inquiry still seems to present problems, a priori.
How could we know if an AI is conscious? We could look for signs of consciousness, or structural details that always (or even frequently) accompany consciousness. But in order to identify those features we need to assume what we are trying to prove.
Is this specific problem clear? That is what I want to know about.
We have to at least look for the causes of these effects where they’re most likely to be, before concluding that they are causeless.
I am in no way suggesting that consciousness is causeless (which seems somewhat absurd to me), only that there is an essential difficulty in discovering the cause. I heartily recommend that we look. I am ABSOLUTELY not suggesting that we should give up on trying to understand the nature of mind, especially with the scientific method. However, my faulty a priori reasoning foresees a limitation in our empirical methods, which have a much better track record. When the empirical methods exceed my expectation, I’ll update and abandon my a priori reasoning, since I know that it is far less reliable (though I would want to know what was wrong with my reasoning). Until the empirical methods come through for me, I make a weak prediction that they will fail, in this instance, and am asking others to enlighten me about my (knowingly faulty) a priori reasoning.
I apologize if I’m belaboring the point, but I know that I’m going against the grain of the community and could be misconstrued. I want to be clear so as not to be misrepresented.