The assertion I originally put forth concerned AI safety:
I actually agree with the aim of using some basic, “visceral” drive for AI safety. I have argued that making an AI's top-level drive the same as its ostensible purpose, paperclipping or whatever, is a potential disaster, because any kind of cease-and-desist command has to be a “non-maskable interrupt” that overrides everything else.
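The architecture being argued for can be sketched minimally: the stop channel is checked before the task drive on every step, so no amount of task objective can mask it. This is purely illustrative; the class and method names are invented for the sketch, not drawn from any real system.

```python
# Hypothetical sketch of a stop command as a "non-maskable interrupt":
# the halt flag is checked before the task policy on every step, so the
# agent's ostensible purpose can never override it. Names are illustrative.

class InterruptibleAgent:
    def __init__(self, policy):
        self.policy = policy          # pursues the ostensible purpose
        self.halted = False

    def receive_stop(self):
        # The cease-and-desist channel: sets a flag the main loop
        # consults before anything else.
        self.halted = True

    def step(self, observation):
        if self.halted:               # checked before the task drive
            return "NOOP"
        return self.policy(observation)

agent = InterruptibleAgent(policy=lambda obs: "MAKE_PAPERCLIP")
print(agent.step("factory"))   # MAKE_PAPERCLIP
agent.receive_stop()
print(agent.step("factory"))   # NOOP
```

The point of the ordering is that the halt check sits structurally above the drive, rather than being one more term the drive could trade off against.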
But if all you are doing is trying to constrain an AI's behaviour, you have the opportunity to use methodological behaviourism, because you are basically trying to get a certain kind of response to a certain kind of input; you can sidestep the Hard Problem.
But that isn’t anything very new. The functional/behavioural equivalents of pleasure and pain are positive and negative reinforcement, which machine learning systems already have. (That’s somewhat new to MIRIland, because MIRI tends not to take much notice of that large and important class of AIs, but otherwise it isn’t new.)
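The behavioural equivalence can be made concrete with a toy value-learning loop: positive rewards reinforce an action and negative rewards suppress it, with no claim whatsoever about inner experience. The action names and learning rate here are made up for illustration.

```python
# Minimal sketch of positive/negative reinforcement as the behavioural
# analogue of pleasure/pain: action values are nudged up by "pleasant"
# rewards and down by "painful" ones. No qualia implied or required.

def update(value, reward, lr=0.5):
    # Standard incremental value update: reward > 0 reinforces,
    # reward < 0 punishes.
    return value + lr * (reward - value)

values = {"touch_stove": 0.0, "eat_food": 0.0}
for _ in range(20):
    values["touch_stove"] = update(values["touch_stove"], -1.0)  # "pain"
    values["eat_food"] = update(values["eat_food"], +1.0)        # "pleasure"

preferred = max(values, key=values.get)
print(preferred)  # eat_food
```

This is exactly the methodological-behaviourist move: the system ends up avoiding the "painful" action and seeking the "pleasant" one, and nothing about the Hard Problem was needed to get there.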
You list a number of useful things one could do with an understanding of pain and pleasure as qualia. The hypotheticals are true enough, because there are a lot of things one could do with an understanding of qualia.
But valency isn’t really a simplification of the Hard Problem; it just appears to be one. In other words, if you are aiming at AI control, then bringing in qualia just makes things considerably more difficult for yourself.
But I’m really not sure how you can think that this is completely irrelevant to supporting or refuting IIT: IIT made a prediction, Casali et al. tested the prediction, the prediction seemed to hold up. No qualiometer needed.
It made a prediction about what it measures, namely a scale from more consciousness to less consciousness. That isn’t particularly relevant to understanding how qualia are implemented. It’s not clear that an artificial system engineered to have high consciousness according to IIT would have qualia at all. But, while IIT isn’t clearly relevant to qualia, qualia aren’t clearly relevant to AI control.
But you seem to just want to give up, to put this topic beyond the reach of science, and criticize anyone trying to find clever indirect approaches.
You don’t have data about my overall approach.
What I’m doing is noting that, historically, the problem remains unsolved, and that, historically, people who think there is some relatively easy answer have misunderstood the question, or are engaging in circular reasoning about their favourite theory, or are running off a subjective feeling of optimism...
I guess it’ll be an empirical question whether IIT morphs into something that can substantially address questions of qualia. Based on my understandings and intuitions, I’m pretty optimistic about this.