I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that. In “standard” naïve realism, people find it hard to dissociate their experiences from an existing mind-independent world, and so they go on to treat all perception as “seeing the world directly, nothing else, nothing more.” Naïve realists interpret their experiences as direct, unmediated impressions of the real world.
Of course this is a problematic view, and there are killer arguments against it. For instance, hallucinations. However, naïve realists can still come back and say that you are talking about cases of “misapprehension”, where you don’t really perceive the world directly anymore, and that this does not mean you “weren’t perceiving the world directly before.” But here the naïve realist has simply not integrated the argument in a rational way. If you explain hallucinations as “failed representations of true objects”, you no longer need to restate, in addition, your previous belief in “perceiving the world directly.” You end up with two ontologies instead of one: inner representations and also direct perception. And yet you only need one: inner representations.
Analogously, I would describe your argument as naïve functionalist realism. Here you first see a certain function associated with an experience, and you decide to skip the experience altogether and simply focus on the function. In itself this is reasonable, since the data can be accounted for without problem. But when I mention LSD and dreams, suddenly those get shunted into another category, like a “bug” in one’s mind. So here you have two ontologies, when you can certainly explain it all with just one.
Namely, green is a particular quale, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without such light. To instead postulate that this is in fact just a “bug” of the original function, but that the original function is in and of itself what green is, simply adds a second ontology when the first, taken on its own, already accounts for the phenomena.
No, it is much simpler than that: “green” is a wavelength of light, and “the feeling of green” is how the information “green” is encoded in your information processing system. That’s it. No special ontology for qualia or whatever. Qualia are not a fundamental component of the universe the way quarks and photons are; they are only an encoding of information in your brain.
But yes, how reality is encoded in an information system sometimes doesn’t match the external world; the information system can be wrong. That is a natural, direct consequence of that ontology, not a new postulate, and definitely not another ontology. The fact that “the feeling of green” is how “green wavelength” is encoded in an information processing system automatically implies that if you perturb the information processing system by giving it LSD, it may very well encode “green wavelength” without any green wavelength actually being present.
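To make that picture concrete, here is a minimal toy sketch, entirely my own illustration rather than anything from the discussion itself: the internal code “GREEN” is normally produced by light around 520 nm, but a perturbed system produces the same code with no such light present. The wavelength bands and the perturbation flag are illustrative assumptions.

```python
# Toy model of the "qualia as encoding" claim: a quale is just an internal
# code the system produces, normally in response to an external stimulus.
# (Illustrative sketch only; the band boundaries are assumptions.)

def encode(wavelength_nm, perturbed=False):
    """Return the system's internal code for a visual stimulus."""
    if perturbed:
        # LSD/dream case: the same internal code fires with no matching input.
        return "GREEN"
    if wavelength_nm is None:
        return "NONE"
    if 495 <= wavelength_nm <= 570:
        return "GREEN"  # normal trigger: ~520 nm light
    if 620 <= wavelength_nm <= 750:
        return "RED"
    return "OTHER"

print(encode(520))                   # "GREEN": veridical perception
print(encode(None, perturbed=True))  # "GREEN": same code, no light at all
```

On this picture the hallucination is not a second ontology; it is the same encoding produced by a different causal route.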
In short, ontology is not the right level at which to look at qualia: qualia are information in a (very) complex information processing system; they have no fundamental existence. Trying to explain them at an ontological level just makes you ask invalid questions.
Green is not a wavelength of light. Last time I checked, wavelength is measured in units of length, not in words. We might call light of wavelength 520 nm “green” if we want, and we do BECAUSE we are conscious and we have the quale of green whenever we see light of wavelength 520 nm. But this is only a shorthand, a convention. For all I know, other people might see light of wavelength 520 nm as red (i.e. what I describe as red, i.e. light of wavelength 700 nm), but refer to it as green because there is no direct way to compare the qualia.