The problem of color is for materialists what the problem of evil is for theists: it’s the overwhelming fact that they can’t help tripping over, but which they also can’t bring themselves to take seriously. There is no necessary inconsistency in either case; you could have a theism in which God isn’t good, or a materialism in which colors exist. But no, the existing concept (of God, of physics) has to be basically right; so all the creative energy goes into rationalizing that belief.
The problem of color is the problem of anthropomorphism.
In reductionist materialism, the “qualia” and “experience” of color are merely internally consistent, self-reinforcing creations of the animal brain, which assigned specific neural values to specific inputs sent by specific cells that react to specific light wavelengths in some reality-entangled manner.
In this philosophy, we only perceive “color” as a “special experience” because we do not realize that the same is true for all of our senses, and that the same would be true of any other physically-possible “sense”, and that some new incredible “qualia” would be literally created (gasp, you sinful blasphemer!) if we artificially created a new “sense” through modification of the human brain.
In summary: the “magical yellowness” qualia of yellow, which feels like it can’t possibly be merely information, is actually created by your brain. It is “real” in that without it the yellowness would merely be knowledge of wavelengths, not yellowness-experience, but it is still created wholesale by the brain, not by some light shining from outside the universe.
In addition, this hypothesis is definitely testable. I made a claim above: create a new sensory input / type of stimulus, and we will perceive a “new” qualia that was never perceived before, just as colorblind people who have never seen color, and have no idea what you’re talking about, would suddenly be able to see colors.
Edit: I would stake out further territory and claim (though this is not an easy hypothesis to test and falsify by any stretch, and might not even be doable within my natural lifetime) that there is a tangible explanation for the particular properties (itself a magic-like, unknown-explanation stopsign) of the experiences of our senses: of why sound feels and is experienced the way it does, why colors feel and are experienced the way they do. I would also posit a correlation between the feeling and the qualia-seeming experiences. All of this to posit the hypothesis that we could not only create new qualia, but even create new qualia with specific “kinds” of experience-qualia-ness, like creating a new sense that both feels and is experienced somewhere in between colors and a 400 Hz tone in the n-space of “qualia”.
There have been cases of people blind from birth who, by some medical treatment, were enabled to see. No references to hand, but Oliver Sacks probably writes about this somewhere. They clearly get new qualia, which are moreover clearly different from those of people sighted from birth.
ETA: Wikipedia article on recovery from blindness.
I thought to use this too, but I was once or twice given the argument that blind people who are made to see are only “accessing” Given-By-External-Power-To-Humans-At-Birth qualia from outside reality—the argument Eliezer tried to take down in the metaethics sequence, about morality being “a light shining from outside” that humans just happen to find and match, applied to qualia. It’s a very good stopsign and/or FGC (fully general counterargument), apparently.
Because of this, I looked for a more definitive test that these philosophies—those that would discard “creating” sight as a valid new qualia—do not predict (and arguably, cannot, in terms of probability mass, if they want to remain coherent).
Surely that argument is refuted by the fact that the newly sighted do not receive the same qualia as the always-sighted? Instead, they get pretty much the experiences you might predict given what we know about the importance of early experience for the developing faculties: confusion overcome only imperfectly and with difficulty, and with assistance from their more developed senses.
The idea that they received something at birth that they have difficulty accessing has the same problem as the idea that the brain is merely a physical interface through which the soul interacts with the world: all the data are just as consistent with the simpler hypothesis that the brain is the whole story. (That includes the data that there are experiences, which is a difficulty for both materialism, and materialism with the magic word “soul” added.)
Yes, it is, if you accept the evidence you’ve given as valid and can weigh arguments based on their probability logic. The denial mechanisms in place will usually prevent proponents of the argument from recognizing the refutation as a valid one. Lots of difficult argumentation and untangling of webs of rationalizations ensues (and arguing by the Occam’s Razor route is even less practical, because in their model, the hypothesis of a soul or outer light or what-have-you is simpler when other parts of their model of the whole world are taken into account, which means even more knots to untangle).
I seek to circumvent that debate entirely by putting the burden of proof on my own “side”, for several reasons, some of which are tinted a slight shade of gray closer to the Dark Arts than I would like.
I don’t think this is correct. The phenomenology of subjective experience suggests that such experiences should be “simple” in a sense—sort of like a bundle of tiny little XML tags attached to the brain. Of course, this is not to argue that our brain parts literally have tiny little XML tags attached to them, any more than other complex objects do. But it does suggest that they might be causally connected to some other, physically simpler phenomena.
Indeed, all of our senses, by definition, have qualia, and colour is just a particularly striking example. It is interesting, though, to note that not all brain tissue produces qualia: the cerebellum operates without them. Our motor control (what the cerebellum primarily does) proceeds without qualia—we have almost no awareness of what we are doing with individual muscles. This is why all forms of teaching people how to move, whether physiotherapy, dance training, martial arts, sports, or anything else, make a lot of use of indirect visualisation to produce the desired results. (These can easily be mistaken, sometimes by the teachers themselves, for literal descriptions, e.g. of “chi” or “energy”.) Golfers are taught to “follow through”, even though nothing that happens after the moment of impact can have any effect on the ball. It is the intention to follow through that changes how the club is swung, and how it impacts the ball, in a way that could not be achieved by any more direct instruction.
Ye-e-e-s, but the standard qualiaphilic take is that all the other senses are problematic as well. You think you are levelling down, but you are levelling up.
That isn’t a test of reductionism, etc., since many of the alternatives make the same prediction. For instance, David Chalmers’s theory that qualia are non-physical properties that supervene on the physical properties of the brain.
True, it isn’t a particularly specific test that supports all the common views of most LW users. That is not its intended purpose.
The purpose is to establish that “qualia” are not ontologically basic building blocks of the universe, sprung into existence alongside up quarks and charm quarks for the express purpose of allowing some specific subset of possible complex causal systems to have extra stuff that sets them apart from other complex causal systems, just because the former can causally build abstract models of parts of their own system, and would have internal causal patterns (abstractly modeled as “negative reinforcement”) that they causally attempt to keep from firing, if these aforementioned “qualia” building blocks didn’t set them apart from the latter kind of complex systems...
… but I guess it does sound kind of obviously silly when you phrase it from a reductionist perspective.
But it doesn’t. It just establishes that if they, they covary with physical states in the way that would be expected from identity theory. Admittedly it seems redundant to have a non-physical extra ingredient that nonetheless just shadows what brains are doing physically. I think that’s a flaw in Chalmers’s theory. But it’s conceptual, not empirical.
I… err… what? My mastery of the English language is insufficient to compute the meaning of the I-assume-is-a sentence above.
I meant
“It just establishes that if they exist, they covary with physical states in the way that would be expected from identity theory.”
But that’s not the whole problem. It establishes they covary with physical states in the way that would be expected from identity theory, and Chalmersian dualism, and a bunch of other theories (but maybe not Cartesian dualism).
Tests need to distinguish between theories, and yours doesn’t.
Hmm. I thought it did. I guess I need to review a few things.
Since qualia describe an event (in a sense), I think that if they’re ever found to have measurable existence, they’ll be not so much what a gluon is to a top quark, but more something like what division is to the real numbers...
That is exactly—if I interpret your comment charitably—what my hypothesis concludes and what I want to test with the proposed experiment in the great-grandparent.
Is there a short explanation of why I ought to reject an analogous theory that algorithms are non-physical properties that supervene on the physical properties of systems that implement those algorithms?
Or, actually, backing up… ought I to reject such a theory, from Chalmers et al.’s perspective? Or is “1+1=2” a nonphysical property of certain systems (say, two individual apples placed alongside each other) in the same sense that “red” is?
Yes: algorithms are entirely predictable from, and understandable in terms of, their physical realisations.
Now I’m confused: what you just said is a description of a ‘supervenient’ relation. Are you saying that anytime X is said to supervene on Y, we should reject the theory which features X’s?
No. Supervenience is an ontologically neutral relationship. In Chalmers’s theory, qualia supervene on brain states, so novel brain states will lead to novel qualia. In identity theory, qualia supervene on brain states, so ditto. So the Novel Qualia test does not distinguish the one from the other. The argument for qualia being non-physical properties, as opposed to algorithms, comes down to their reducibility, or lack thereof, not supervenience.
This is not really true, at least without adding some pretty restrictive conditions. By using “joke interpretations”, as pointed out by Searle and Putnam, one could assert that a huge number of “algorithms” supervene on any large-enough physical object.
Are they?
I mean, sure, the fact that a circuit implementing the algorithm “1+1=2” returns “2” given the instruction to execute “1+1” is entirely predictable, much as the fact that a mouse conditioned to avoid red will avoid a red room is predictable. Absolutely agreed.
But as I understand the idea of qualia, the claim is that the mouse’s predictable behavior with respect to a red room (and the neural activity that gives rise to it) is not a complete description of what’s going on… there is also the mouse’s experience of red, which is an entirely separate, nonphysical, fact about the event, which cannot be explained by current physics even in principle. (Or maybe it turns out mice don’t have an experience of red, but humans certainly do, or at least I certainly do.) Right?
Which, OK. But I also have the experience of seeing two things, just like I have the experience of seeing a red thing. On what basis do I justify the claim that that experience is completely described by a description of the physical system that calculates “2”? How do I know that my experience of 2 isn’t an entirely separate nonphysical fact about the event which cannot be explained by current physics even in principle?
Like Spinning_Sandwich, I don’t think that color is qualitatively different in its problematic-ness than e.g. pitch of sound, or perception of geometry, or even memory of a smell, or any other aspect of consciousness.
Color just serves as the most easily referenced example of the mystery, because colors feel like a largely irreducible sensation (you can perhaps reduce a color to two separate sensations of hue + brightness, or something like that, but not much further, the way one might reduce geometry to points and numbers).
I don’t see how colors in particular are a problem for materialism any more than consciousness itself is. I certainly fail to see how it’s equivalent to the problem of evil for theists of the “God is good” bent.
Could you explain in a bit more detail how the problem of evil parallels this? And I mean excruciating detail, if possible, because I really haven’t a clue what you’re getting at.
I don’t know about excruciating detail, but I think the general idea is this:
One would not predict the existence of evil in a universe created by a benevolent God.
One would not predict the existence of intrinsically subjective qualities in an entirely physical, and therefore entirely objective, universe.
Disagree.
Let’s look at the actual observations. I see red. It has some atomic “redness” that is different from the atomic “blueness” of blue, from the atomic pleasure of orgasm, and from the atomic feeling of cold. Each of these atomic “qualia” is subjectively irreducible. There are no smaller parts that my subjective experience of “red” is made up of.
Is this roughly the qualia problem? That’s my understanding of it.
Here’s a simple computer program that reports on whether or not it has atomic subjective experience:

    -- the atomic qualia symbols (Lua strings playing the role of opaque atoms)
    qualia = {"red", "blue", "cold", "pleasure"}

    -- associations for each quale; note "cold" has no entry, so only the
    -- inputs actually queried below ("red", "blue") are safe to pass
    memory_associations = {red = {"anger", "hot"}, blue = {"cold", "calm"},
                           pleasure = {"hot", "good"}}

    function experience_qualia(input)
      for _, q in ipairs(qualia) do
        if input == q then
          print("my experience of", input, "is the same as", q)
        else
          print(q, "and", input, "feel different")
        end
      end
      print("furthermore, the feeling of", input, "seems connected to")
      print(table.concat(memory_associations[input], " and "))
      print("I have no way of reducing these experiences, therefore I exist outside physics")
    end

    experience_qualia"red"
    experience_qualia"blue"
From the inside, the program experiences no mechanisms of reduction of these atomic qualia, but from the outside, we can see that they are strings, made up of bytes, and compared by hash value. While I don’t know the details of the neuroscience of qualia, I expect the findings to be roughly similar. Something will be an irreducible symbol with various associations and uniqueness from within the system, but outside, we will be able to see “oh look, redness is this particular pattern of neurons firing”.
EDIT: LW killed my program formatting. It should still run (Lua, by the way).
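For concreteness, a minimal sketch of the outside view described above (my illustration, assuming stock Lua; not part of the original program):

    -- Outside the program's pretend interface, an "irreducible" quale
    -- decomposes into plain bytes:
    local quale = "red"
    print(string.byte(quale, 1, -1))  --> 114  101  100
    -- Lua interns strings, so comparing two qualia is effectively an
    -- identity check on the interned string object:
    print(quale == "red")             --> true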
Having never seen any Lua, I’m surprised by how much it looks like Python. Any idea whether Python stole its set literals from Lua?
ETA: Python port (with output)
Also, lots of syntax differences (end, then, do, function, whitespace, elseif, etc). They are similar in that they are dynamic languages. I don’t think anything was particularly inspired by anything else.
Ah, ok, in Python {‘x’, ‘y’} would denote an unordered set containing ‘x’ and ‘y’; I assumed a correspondence.
Lua unordered sets are a bit more verbose:
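Presumably something like the idiomatic table-with-true-values encoding (a reconstruction; the original snippet did not survive the formatting):

    -- a set is a table whose keys are the members
    local colors = { red = true, blue = true }
    print(colors.red)    --> true
    print(colors.green)  --> nil (not a member)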
thanks for the port.
Next up we should extend it with free will and true knowledge (causal entanglement).
And I think someone asked about not demonstrating qualia sameness in the absence of truthful reporting.
(I’m not going to waste more time on any of this, but it could be done)
If you mean this… to be clear, I didn’t complain about it not demonstrating “qualia sameness”. I complained (implicitly) that the claim that it demonstrated all the properties that some people claim demonstrate qualia in real-world systems (like people) was demonstrably false.
(In particular, that it didn’t demonstrate anything persistent across different reporting, whereas my own experience does demonstrate something persistent across different reporting.)
I agree that actually recoding it to demonstrate such persistence is a waste of time; far simpler is to not make such over-reaching claims.
I removed “complained”.
Point taken. As I tried to explain somewhere, it was all the properties that I thought of at the moment, with the implicit assertion that the rest of the properties could be demonstrated as required.
Point taken.
Reported.
oh. thank you very much. I should learn to do that.
Materialism predicts that algorithms have an “inside”?
As a further note, I’ll have to say that if all the blue and red in my visual experience were switched around, my hunch tells me that I’d be experiencing something different; not just in the sense of different memory associations, but that the visual experience itself would be different. It would not just be that “red” is associated with hot, and that “blue” is associated with cold… the qualia of the visual experience itself would be different.
Yes. The scene from within a formal system (like algebra) has certain qualities (equations, variables, functions, etc.) that are different from the scene outside (markings on paper, the equals sign, BEDMAS, variable names, brackets for function application).
That’s not really a materialism thing, it’s a math thing.
Hence the part where they are compared to other qualia. Maybe that’s not enough, but imagining getting “blue” or “sdfg66df” instead of “red” (which is the evidence you are using) is of course going to return “they are different” because they don’t compare equal. Even if the output of the computation ends up being the same.
I’m under the impression that what you describe falls under computationalism, not materialism, but my reading on these ideas is shallow and I may be confusing some of these terms...
I must say I can’t tell the difference between materialism (“the mind is built of stuff”) and computationalism (“the mind is built of algorithms (running on stuff)”).
If I get them confused in some way, sorry.
That thought experiment doesn’t make much sense. If the experiences were somehow switched, but everything else kept the same (i.e. all your memories and associations of red are still connected to each other and everything else in the same way), you wouldn’t notice the difference; everything would still match your memories exactly. If there even is such a thing as raw qualia, there is no reason to suppose they are stable from one moment to the next; as long as the correct network of associations is triggered, there is no evolutionary advantage either way.
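In the vocabulary of the toy program above, this point can be made runnable (a sketch of my own, not code from the thread): a consistent relabeling of the atoms, with the association graph carried along, produces identical behaviour, so nothing inside the system can detect the swap.

    -- behaviour of a mini qualia-system, summarized as a string
    local function behaviour(qualia, associations, input)
      local out = {}
      for _, q in ipairs(qualia) do
        out[#out + 1] = (input == q) and "same" or "different"
      end
      out[#out + 1] = table.concat(associations[input], " and ")
      return table.concat(out, "; ")
    end

    local q1 = {"red", "blue"}
    local a1 = { red = {"hot"}, blue = {"cold"} }
    local q2 = {"blue", "red"}                    -- labels swapped...
    local a2 = { blue = {"hot"}, red = {"cold"} } -- ...along with every association

    print(behaviour(q1, a1, "red"))  --> same; different; hot
    print(behaviour(q2, a2, "blue")) --> same; different; hot (indistinguishable)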
I could not find an online Lua-bin, but pasting it into a Lua Demo and clicking Run does the trick.
did it work?
There’s no evidence that your programme experiences anything from the inside. Which is one way in which your claim is surreptitiously eliminativist. Another is that, examined from the outside, we can tell what the programme’s qualia are: they are nothing. They have no qualities other than being different from one another. But qualia don’t seem like that from the inside! You say your programme’s qualia are subjective because it can’t examine their internal structure... but there isn’t any. They are not subjective somethings, they are just nothings.
then neither is there evidence that I do, or you do.
I can’t think of qualities that my subjective experience of “red” has that the atom “red” does not have in my program.
Sure they do. Redness has this unique redness to it the same way “red” has this uniqueness.
I was using “subjective” as a perspective, not a quality.
Sure there is. Go look in the Lua source code. There is the global string memo table, GC metadata, string contents (an array of bytes), type annotations, etc.
I have plenty of evidence of my own experiences. Were you restricting “evidence” to third-person, objective evidence?
I can. I think that if I experienced nothing but an even expanse of red, that would be different from experiencing nothing but a salty taste, or nothing but middle C.
Redness isn’t expressible. “Object at 0x8cf643” is.
If that’s accessible to them, it’s objective and expressible. If not, it’s just a nothing. Either way, you don’t have a “something” that is subjective.
I wouldn’t predict the existence of self-replicating molecules either. In fact, I’m not sure I’m in a position to predict anything at all about physical phenomena without appealing to empirical knowledge I’ve gathered from this particular physical world.
It’s a pickle, all right.
OK: “does not predict” was not strong enough. In each case, the opposite is predicted.
I can write a computer program that experiences qualia to the same extent that I do. What confusing thing is left?
Evil is a problem because the benevolent-god hypothesis predicts its non-existence. Qualia are not a problem; materialism adequately explains all aspects of them, except the exact neuroscience details.
Please do so and publish.
See my other comment in this thread for the code.
It’s very simple, and it’s not an AI, but its qualia have all the properties that mine seem to have.
All the properties?
Huh.
For my own part, my experience of perceiving inputs includes something that is shared among the times that I report the experience honestly, when I lie about the experience, and when I remain silent about the experience.
I see nothing in your sample code that is capable of supporting that behavior—that is, your code either reports the experience or it doesn’t, but there’s no second thing that can either align with the report or conflict with it, or that can be shared between two runs of the program one of which reports the experience and one of which doesn’t.
I conclude that my experience of perceiving inputs has relevant properties that your sample code does not.
I suspect that’s true of everyone else, as well.
All the ones I thought of in the moment.
Once you put in the functionality that it can lie about what it’s experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
You could record that that sameness was there by remembering previous inputs and looking at those.
This is a different issue, analogous to whether my “red” and your “red” are the same. From the inside, we’d feel some of the same things (stop sign, aggressiveness, hot) but then some different things (that apple I ate yesterday). From the outside, they are implemented in different chunks of flesh, but may or may not have analogous patterns that represent them.
Once you can clearly specify what question to ask, I think the program can answer it and will have the same conclusion you do.
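A rough sketch of that extension (mine, not the original author’s): bolt a possibly-dishonest reporting layer on top of the same comparison routine, and the internal check persists unchanged across honest reports, lies, and silence.

    -- the persistent "something": the comparison itself
    local function same_quale(a, b) return a == b end

    -- the reporting layer may lie, but same_quale is computed identically
    local function report(a, b, honest)
      local result = same_quale(a, b)
      if honest then
        print(a, b, result and "feel the same" or "feel different")
      else
        print(a, b, result and "feel different" or "feel the same")  -- a lie
      end
      return result  -- what was "experienced", regardless of the report
    end

    report("red", "red", true)   --> red  red  feel the same
    report("red", "red", false)  --> red  red  feel different
                                 --  (same_quale was still true both times)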
I hold that qualia are opaque symbols.
But your problem is that their opacity in your original example hinges on their being implemented in a simple way. You need to find a way of upgrading the AI to be a realistic experiencer without adding describable structure to its “qualia”.
Not sure what you are getting at.
You can make it as opaque or transparent as you want by only exposing a certain set of operations to the outside system (equality, closeness (for color), association). I could have implemented color as tuples ({1,0,0} being red). I just used strings because someone already did the work.
A flaw in mine is that strings can be reduced by .. (concatenation) and other string operations. I just pretended that those operations weren’t available (most of the restrictions you make in a program are pretend). I’ll admit I didn’t do a very good job of drawing the line between the thing existing in the system, and the system itself. But that could be done with more architecting.
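One way to draw that line with less pretending (a sketch of mine, using the {1,0,0} tuple idea above): hide the components inside a closure, so the outside system really does see only the exposed operations.

    -- only equality and a closeness metric escape the closure
    local function make_quale(r, g, b)
      local self
      local function equals(other) return other == self end
      local function closeness(other)  -- hypothetical color-distance operation
        local r2, g2, b2 = other.components()
        return -math.sqrt((r - r2)^2 + (g - g2)^2 + (b - b2)^2)
      end
      -- components() is needed by closeness; a non-toy version would
      -- restrict who may call it
      local function components() return r, g, b end
      self = { equals = equals, closeness = closeness, components = components }
      return self
    end

    local red  = make_quale(1, 0, 0)
    local blue = make_quale(0, 0, 1)
    print(red.equals(red), red.equals(blue))  --> true   false
    print(red.closeness(blue))                --> -1.4142... (not very close)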
Well, the original idea used CLISP GENSYMs.
So how do you ensure the outside system is the one doing the experiencing? After all, everything really happens at the hardware level. You seem to have substituted an easier problem: you have ensured that the outside system is the one doing the reporting.
How do you know that you are doing the experiencing? It’s because the system you call “you” is the one making the observations about experience.
Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.
Of course, once the architectural details are allowed to affect what you think of the system, everything goes a bit mushy. What if I’d written it in Haskell (lazy, really nonstandard evaluation order)? What if I never ran the program (I didn’t)? What if I ran it twice?
And which one is that? Both the software and the hardware could be said to be. But your compu-qualia are accessible to the one, but not the other!
Haskell doesn’t do anything. Electrons pushing electrons does things.
Um, that program has no causal entanglement with 700nm-wavelength light, 470nm-wavelength light, temperature, or a utility function. I am totally unwilling to admit it might experience red, blue, cold, or pleasure.
If I upload you and stimulate your upload’s “red” cones, you’ll have red qualia without any 700nm light involved (except for the 700nm light which gave rise to your mind-design, which I copied, etc., but if you’re talking about entanglement that distant, then nyan_sandwich was also entangled with 700nm light before writing the code).
No need for uploading, electrodes in the brain do the trick.
...that really should have occurred to me first.
Yes, my experience of redness can come not only from light, but also from dreams, hallucinations, sensory illusions, and direct neural stimulation. But I think the entanglement with light has to be present first and the others depend on it in order for the qualia to be there.
Take, for example, the occasional case of cochlear implants for people born deaf. When the implant is turned on, they immediately have a sensation, but that sensation only gradually becomes “sound” qualia to them over roughly a year of living with that new sensory input. They don’t experience the sound qualia in dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation) until after their brain is adapted to interpreting and using sound.
Or take the case of tongue-vision systems for people born blind. It likewise starts out as an uninformative mess of a signal to the user, but gradually turns into a subjective experience of sight as the user learns to make sense of the signal. They recognize the experience from how other people have spoken of it, but they never knew the experience previously from dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation).
In short, I think the long-term potentiation of the neural pathways is a very significant kind of causal entanglement that is not present in the program under discussion.
What if you’re a brain in a vat, and you’ve grown up plugged into a high-resolution World of Warcraft? If qualia are wholly inside the skull, their qualitative character can’t depend on facts outside the skull.
Well you need some input to the brain, even if it’s in a vat. Something has to either stimulate the retina or stimulate the relevant neurons further down the line. At least during some learning phase.
Or I guess you could assemble a brain-in-a-vat with memories built-in (e.g. the memory of seeing red). Thus the brain will have the architecture (and therefore the ability) to imagine red.
I can’t tell if you are joking.
We could give it all those things. Machine vision is easy. A temperature measurement is easy. A pleasure-based reward system is easy (Bayesian spam filter).
Utility functions are unrelated to pleasure. (We could make it optimize too, though, if you want. Give it free will to boot.)
Now you’re ready to give a program free will? :D
“Some factors are still missing, like the expression of the people’s will...”