(Warning: I expect that the following comment has at least one major error, since this topic is well outside my usual area of knowledge. Please read it as a request for edification, not as an attempt to push forward the envelope.)
Until we can detect or explain qualia in the wild, how can we make rational claims about their computability?
To make a simple analogy, suppose we have a machine which consists of a transparent box, a switch, and a speaker. Inside the box is a lightbulb and a light sensor. The switch controls the light, and the light sensor is hooked up to the speaker and makes it emit a tone IFF light is detected.
Suppose a species with no sight or concept of light is attempting to reverse-engineer the device. A reverse-engineered simulation of this machine could achieve the same external output as the original by connecting the switch more directly to the speaker, without going through a light-emitting-and-detecting phase. From the observers’ perspective, that would be an algorithm equivalent to the one the original machine executes, but it wouldn’t be doing the same thing.
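Here is the analogy as a minimal code sketch (the class and method names are my own, purely for illustration): both machines map the same switch positions to the same speaker output, but only one of them actually routes the signal through a light-emitting-and-detecting stage.

```python
class OriginalMachine:
    """Switch -> lightbulb -> light sensor -> speaker."""
    def press_switch(self, on: bool) -> bool:
        light_emitted = on              # the switch lights the bulb
        light_detected = light_emitted  # the sensor picks up the light
        return light_detected           # the speaker sounds iff light is detected

class ReverseEngineeredMachine:
    """Switch -> speaker, with the light stage optimized away."""
    def press_switch(self, on: bool) -> bool:
        return on                       # identical external behavior, no light inside

# To an observer who can only flip the switch and listen for the tone,
# the two machines are indistinguishable:
for on in (True, False):
    assert OriginalMachine().press_switch(on) == ReverseEngineeredMachine().press_switch(on)
```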
Similarly, qualia have an effect on the world and can be detected, but at the moment only in a tentative and indirect way. In particular, we don’t have tests that can distinguish false positives from true positives very well (just as the sightless scientists in the example haven’t figured out how to distinguish the tone from the light).
To put it another way, the simulated machine is a kind of zombie machine: it has all the proper externally observable behavior going on, but not the correct internal process. From an evolutionary perspective a zombie is unlikely, but it seems like a naive reverse engineer could make one pretty easily, since if the external indicators for the important part can easily be faked by accident, they have no way of verifying that they’ve got the important part working.
Executive summary: Until we can detect naturally occurring qualia, it seems plausible that any simulations we create might accidentally be zombies and we wouldn’t be able to tell.
I think you are onto the right idea with your analogy, but if you work through the implications, it should be clear that if qualia are truly not functionally important, then we shouldn’t value them.
I mean, to use your analogy: if we discover brains that lack the equivalent of the pointless internal light bulb, should we value them any differently?
If they are important, then it is highly likely our intelligent machines will also have them.
I find it far more likely that qualia are a necessary consequence of the massively connected probabilistic induction the brain uses, and that our intelligent machines will have similar qualia.
Evolution wouldn’t have created light-bulb-type structures; complex adaptations must pay for themselves.
If they are important, then it is highly likely our intelligent machines will also have them.
I agree that qualia probably have fitness importance (or are the spandrel of something that does), but I’m not very sure that algorithms in general that implement probabilistic induction similar to our brain’s are also likely to have qualia. Couldn’t it plausibly be an implementation-specific effect, that would not necessarily be reproduced by a similar but non-identical reverse-engineered system?
It is possible, but I don’t find it plausible, partly because I understand qualia to be nearly unavoidable side effects of the whole general category of probabilistic induction engines like our brain, and I believe that practical AGI will necessarily use similar techniques.
Qualia are related to word connotations and the subconscious associative web: everything that happens in such a cognitive engine (every thought, experience, or neural stimulus) has a huge web of pseudo-random complex associations that exerts a small but measurable statistical influence across the whole system.
The experience of perceiving one wavelength of light will have small but measurable effects on every cognitive measure, from mood to the kinds of thoughts one may have afterwards. Self-reflecting on how these associative traces ‘feel’ from the inside leads to qualia.
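As a toy illustration of that claim (my own construction, with made-up sizes and weights, not anything from the comment above): model the associative web as a pseudo-random weight matrix, and watch a single stimulated ‘concept’ measurably shift the activation of nearly every other one within a few propagation steps.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                                    # "concepts" in the associative web
weights = rng.normal(0, 0.01, size=(n, n))  # pseudo-random pairwise associations
state = np.zeros(n)                         # current activation of each concept

def perceive(stimulus: int, steps: int = 3) -> None:
    """Activate one concept and let the activation spread through the web."""
    global state
    state[stimulus] += 1.0
    for _ in range(steps):
        state = state + weights @ state     # each step nudges every concept a little

before = state.copy()
perceive(42)                                # e.g. perceiving one wavelength of light
# Nearly all n activations have shifted, if only slightly:
print(np.count_nonzero(state - before))
```

On these assumptions the ‘trace’ of a stimulus is global and statistically measurable even though no single downstream change is large, which is the shape of the claim above.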