But that isn’t really saying anything about qualia. The authors can relate their PCI (Perturbational Complexity Index) measure to consciousness as judged medically… in humans. But would that scale be applicable to very simple systems or artificial systems? There is a real possibility that qualia could go missing in computational simulations, even assuming strict physicalism. In fact, we standardly assume that AIs embedded in games don’t suffer.
If you’re looking for a Full, Complete, Data-Driven And Validated Solution to the Qualia Problem, I fear we’ll have to wait a long, long time. This seems squarely in the ‘AI-complete’ realm of difficulty.
But if you’re looking for clever ways of chipping away at the problem, then yes, Casali’s Perturbational Complexity Index should be interesting. It doesn’t directly say anything about qualia, but it does indirectly support Tononi’s Integrated Information Theory (IIT), which says much about qualia. (Of course, we don’t yet know how to interpret most of what it says, nor can we validate IIT directly yet, but I’d just note that this is such a hard, multi-part problem that any interesting/predictive results are valuable, and will make the other parts of the problem easier down the line.)
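To make concrete what kind of quantity PCI is, here is a toy sketch of just the complexity step, in Python. This is my own illustration, not Casali et al.'s actual pipeline (which uses TMS-evoked, source-localized EEG and statistical thresholding); every name and number below is invented for the example.

```python
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in a simple left-to-right (LZ78-style) parsing.
    Used here as a stand-in for the Lempel-Ziv complexity step in PCI."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def pci_like(evoked: np.ndarray, threshold: float) -> float:
    """Toy PCI-style index for a (channels x time) evoked response:
    binarize, measure compressibility, and normalize by the entropy of the
    binarized data, so stereotyped activity scores near 0 and maximally
    random activity scores near 1."""
    binary = (np.abs(evoked) > threshold).astype(int)
    p1 = binary.mean()
    if p1 in (0.0, 1.0):
        return 0.0  # no variability, no complexity
    bits = "".join(map(str, binary.ravel()))
    n = len(bits)
    entropy = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
    return lz_phrase_count(bits) * np.log2(n) / (n * entropy)

# Stereotyped responses compress well and score low; spatiotemporally
# differentiated responses do not and score higher.
rng = np.random.default_rng(0)
print(pci_like(np.tile(rng.normal(size=(1, 200)), (32, 1)), 1.0))  # low-ish
print(pci_like(rng.normal(size=(32, 200)), 1.0))                   # higher
```

The only point of the sketch is that the measure rewards responses that are both widespread and differentiated, which is the property IIT predicts should track level of consciousness.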
> and will make the other parts of the problem easier down the line
That’s what I am disputing. You are taking a problem we don’t know how to make a start on, and turning it into a smaller problem we also don’t know how to make a start on. That isn’t an advance. Reducing or simplifying a problem isn’t an unconditional, universal solvent; it only works where the simpler problem is one you can actually make progress on.
IIT isn’t going to be of any real use unless it is confirmed, and how are you going to confirm it, as a theory of qualia, without qualiometers?
If we are going to continue not having qualiometers, we may have to give up on testing consciousness objectively in favour of subjective measures: phenomenology and heterophenomenology. But you can only do heterophenomenology on a system that can report its subjective states. Starting with simpler systems, like a single simulated pain receptor, is not going to work.
We’re not on the same page. Let’s try this again.

The assertion I originally put forth is about AI safety; it is not about reverse-engineering qualia. I’m willing to briefly discuss some intuitions on how one may make meaningful progress on reverse-engineering qualia as a courtesy to you, my anonymous conversation partner here, but since this isn’t what I originally posted about, I don’t have a lot of time to address radical skepticism, especially when it seems like you want to argue against some strawman version of IIT.
You ask for references (in a somewhat rude monosyllabic manner) on “some of the empirical work on coma patients IIT has made possible” and I give you exactly that. You then dismiss it as “not really qualia research”, which is fine. But I’m really not sure how you can think that this is completely irrelevant to supporting or refuting IIT: IIT made a prediction, Casali et al. tested the prediction, the prediction seemed to hold up. No qualiometer needed. (Granted, this would be a lot easier if we did have them.)
This apparently leads you to say:

> You are taking a problem we don’t know how to make a start on, and turning it into a smaller problem we also don’t know how to make a start on.
More precisely, I’m taking a problem you don’t know how to make a start on, and am turning it into a smaller problem that you also don’t seem to know how to make a start on. Which is fine, and I don’t wish to be a jerk about it, and not merely because Tononi/Tegmark/Griffith could be wrong in how they’re approaching consciousness, and I could be wrong in how I’m adapting their stuff to try to explain some specific things about qualia. But you seem to just want to give up, to put this topic beyond the reach of science, and criticize anyone trying to find clever indirect approaches. Needless to say, I vehemently disagree with the productiveness of that attitude.
I think we are in agreement that valence could be a fairly simple property. I also agree that the brain is Vastly Complex, and that qualia research has some excruciatingly difficult methodological hurdles to overcome, and I agree that IIT is still a very speculative hypothesis which shouldn’t be taken on faith. I think we differ radically on our understandings of IIT and related research. I guess it’ll be an empirical question whether IIT morphs into something that can substantially address questions of qualia; based on my understandings and intuitions, I’m pretty optimistic about this.
> The assertion I originally put forth is about AI safety;
I actually agree with the aim of using some basic, “visceral” drive for AI safety. I have argued that making an AI’s top-level drive the same as its ostensible purpose, paperclipping or whatever, is a potential disaster, because any kind of cease-and-desist command has to be a “non-maskable interrupt” that overrides everything else.
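To be clear about what I mean by that, here is a toy sketch (every name in it is invented for the example): the cease-and-desist check sits outside whatever the task policy is optimizing, so the policy never gets to weigh it against reward.

```python
from typing import Any, Callable

def run_agent(policy: Callable[[Any], Any],
              observe: Callable[[], Any],
              act: Callable[[Any], None],
              stop_requested: Callable[[], bool],
              max_steps: int = 1_000) -> None:
    """Control loop with a 'non-maskable' cease-and-desist check.
    The check happens before the policy is consulted, so no amount of
    task-level optimization can trade it away."""
    for _ in range(max_steps):
        if stop_requested():
            break                      # overrides everything else
        act(policy(observe()))
```

Of course a real agent might still learn to influence whatever sets stop_requested; the sketch only shows where the override sits, not how to make it safe.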
But if all you are doing is trying to constrain an AI’s behaviour, you have the opportunity to use methodological behaviourism, because you are basically trying to get a certain kind of response to a certain kind of input; you can sidestep the Hard Problem.
But that isn’t anything very new. The functional/behavioural equivalents of pleasure and pain are positive and negative reinforcement, which machine learning systems have already. (That’s somewhat new to MIRIland, because MIRI tends not to take much notice of that large and important class of AIs, but otherwise it isn’t new.)
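For concreteness: in something as ordinary as tabular Q-learning, “pleasure” and “pain” enter purely as the sign of a scalar reward, and nothing in the update cares whether anything is felt. A minimal sketch, with all the constants picked arbitrarily:

```python
import random
from collections import defaultdict

# Q[(state, action)] -> estimated long-run value of taking action in state
Q = defaultdict(float)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def update(state, action, reward, next_state, actions):
    """One temporal-difference step: positive reward reinforces the action,
    negative reward punishes it. That is the whole 'behavioural equivalent'."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def choose(state, actions):
    """Epsilon-greedy choice over the learned values."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```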
You list a number of useful things one could do with an understanding of pain and pleasure as qualia. The hypotheticals are true enough, because there are a lot of things one could do with an understanding of qualia.
But valency isn’t really a simplification of the Hard Problem; it just appears to be one. In other words, if you are aiming at AI control, then bringing in qualia just makes things considerably more difficult for yourself.
> But I’m really not sure how you can think that this is completely irrelevant to supporting or refuting IIT: IIT made a prediction, Casali et al. tested the prediction, the prediction seemed to hold up. No qualiometer needed.
It made a prediction about what it measures, which is a scale from more consciousness to less consciousness. That isn’t particularly relevant to understanding how qualia are implemented. It’s not clear that an artificial system implemented to have high consciousness according to IIT would have qualia at all. But, while IIT isn’t clearly relevant to qualia, qualia aren’t clearly relevant to AI control.
> But you seem to just want to give up, to put this topic beyond the reach of science, and criticize anyone trying to find clever indirect approaches.
You don’t have data about my overall approach.
What I’m doing is noting that, historically, the problem remains unsolved, and that, historically, people who think there is some relatively easy answer have misunderstood the question, or are engaging in circular reasoning about their favourite theory, or are running off a subjective feeling of optimism...
> I guess it’ll be an empirical question whether IIT morphs into something that can substantially address questions of qualia; based on my understandings and intuitions, I’m pretty optimistic about this.
You mean this?
> But that isn’t really saying anything about qualia. The authors can relate their PCI (Perturbational Complexity Index) measure to consciousness as judged medically… in humans. But would that scale be applicable to very simple systems or artificial systems? There is a real possibility that qualia could go missing in computational simulations, even assuming strict physicalism. In fact, we standardly assume that AIs embedded in games don’t suffer.
> If you’re looking for a Full, Complete, Data-Driven And Validated Solution to the Qualia Problem, I fear we’ll have to wait a long, long time. This seems squarely in the ‘AI-complete’ realm of difficulty.
> But if you’re looking for clever ways of chipping away at the problem, then yes, Casali’s Perturbational Complexity Index should be interesting. It doesn’t directly say anything about qualia, but it does indirectly support Tononi’s Integrated Information Theory (IIT), which says much about qualia. (Of course, we don’t yet know how to interpret most of what it says, nor can we validate IIT directly yet, but I’d just note that this is such a hard, multi-part problem that any interesting/predictive results are valuable, and will make the other parts of the problem easier down the line.)
> That’s what I am disputing. You are taking a problem we don’t know how to make a start on, and turning it into a smaller problem we also don’t know how to make a start on. That isn’t an advance. Reducing or simplifying a problem isn’t an unconditional, universal solvent; it only works where the simpler problem is one you can actually make progress on.
> IIT isn’t going to be of any real use unless it is confirmed, and how are you going to confirm it, as a theory of qualia, without qualiometers?
> If we are going to continue not having qualiometers, we may have to give up on testing consciousness objectively in favour of subjective measures: phenomenology and heterophenomenology. But you can only do heterophenomenology on a system that can report its subjective states. Starting with simpler systems, like a single simulated pain receptor, is not going to work.