As an AI researcher, I can tell you that if an explanation of consciousness fell into my lap tomorrow, I would be intellectually fascinated, but it would do nothing to solve any of the problems I’m currently facing.
Part of what we’d like an AI to be able to do is minimize pain and maximize pleasure. How do we go about building such an AI, if we don’t know which patterns of neuron firings (or chemical reactions, or information processing, or whatever) constitute pain, and which constitute pleasure? Do you not consider that to be part of the problem of consciousness, or related to it?
(Well, one way is if the AI could itself solve such problems, but I’m assuming that’s not what you meant...)
Huh? We already know that; we’ve known it since the 1950s. As far as I’m aware, that knowledge hasn’t really helped us solve our problems.