Granted that we don’t yet understand consciousness, and supposing for the sake of argument it might require special physics (though I don’t think this is the case), I still think you had it right the first time.
Understanding consciousness is not possible yet. It’s not the kind of problem that’s going to be solved by armchair thought. We need an example working in a form where we can see the moving parts. In other words, we need uploading. (Or, if you believe the requirement for special physics means uploading can’t work, we need to get to the point where we have the technology to do it, and observe it failing where standard theory would have had it succeeding, or otherwise find an experiment to distinguish the special physics from the currently understood kind.)
And understanding consciousness is not necessary yet. It’s something that can quite happily be left for another few generations. It’s not on the critical path to anything. As an AI researcher, I can tell you that if an explanation of consciousness fell into my lap tomorrow, I would be intellectually fascinated, but it would do nothing to solve any of the problems I’m currently facing.
SENS, by contrast, obviously is on the critical path to, well, everything else—it’s hard to solve any problems at all when you’re dead.
So my recommendation is to go back to your original plan.
As an AI researcher, I can tell you that if an explanation of consciousness fell into my lap tomorrow, I would be intellectually fascinated, but it would do nothing to solve any of the problems I’m currently facing.
Part of what we’d like an AI to be able to do is minimize pain and maximize pleasure. How do we go about building such an AI, if we don’t know which patterns of neuron firings (or chemical reactions, or information processing, or whatever) constitute pain, and which constitute pleasure? Do you not consider that to be part of the problem of consciousness, or related to it?
(Well, one way is if the AI could itself solve such problems, but I’m assuming that’s not what you meant...)
Huh? We already know that; we’ve known it since the 1950s. As far as I’m aware, the knowledge hasn’t really helped us solve our problems.