I can write a computer program that experiences qualia to the same extent that I do. What confusing thing is left?
Evil is a problem because the benevolent god hypothesis predicts its non-existence. Qualia is not a problem; materialism adequately explains all aspects of it, except the exact neuroscience details.
Please do so and publish.
See my other comment in this thread for the code.
It’s very simple, and it’s not an AI, but its qualia have all the properties that mine seem to have.
All the properties?
Huh.
For my own part, my experience of perceiving inputs includes something that is shared among the times that I report the experience honestly, when I lie about the experience, and when I remain silent about the experience.
I see nothing in your sample code that is capable of supporting that behavior—that is, your code either reports the experience or it doesn’t, but there’s no second thing that can either align with the report or conflict with it, or that can be shared between two runs of the program one of which reports the experience and one of which doesn’t.
I conclude that my experience of perceiving inputs has relevant properties that your sample code does not.
I suspect that’s true of everyone else, as well.
All the ones I thought of in the moment.
Once you put in the functionality for it to lie about what it’s experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
You could record that that sameness was there by remembering previous inputs and looking at those.
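Roughly, a minimal sketch of what that could look like (in Lua, which the `..` mention below suggests the original was written in; every name here is made up for illustration, not taken from that code):

```lua
-- Toy system whose internal quale is stored and remembered independently
-- of what it reports, so the same internal token is there whether it
-- reports honestly, lies, or stays silent.
local system = { history = {} }

-- Sensing records an opaque token internally, before any report happens.
function system.sense(quale)
  system.current = quale
  table.insert(system.history, quale)
end

-- Reporting is a separate act that may or may not match the internal token.
function system.report(policy)
  if policy == "honest" then
    return system.current
  elseif policy == "lie" then
    return "not-" .. system.current
  else
    return nil                     -- remain silent
  end
end

-- Sameness-testing over remembered inputs: was this quale there before?
function system.same_as_earlier(quale)
  for i = 1, #system.history - 1 do
    if system.history[i] == quale then return true end
  end
  return false
end

system.sense("red")
print(system.report("honest"))        --> red
print(system.report("lie"))           --> not-red
print(system.report("silent"))        --> nil
system.sense("red")
print(system.same_as_earlier("red"))  --> true
```

The internal token and the memory of it are the same in all three cases; only the report varies.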
This is a different issue, analogous to whether my “red” and your “red” are the same. From the inside, we’d feel some of the same things (stop sign, aggressiveness, hot) but then some different things (that apple I ate yesterday). From the outside, they are implemented in different chunks of flesh, but may or may not have analogous patterns that represent them.
Once you can clearly specify what question to ask, I think the program can answer it and will have the same conclusion you do.
I hold that qualia are opaque symbols.
But your problem is that their opacity in your original example hinges on their being implemented in a simple way. You need to find a way of upgrading the AI to be a realistic experiencer without adding describable structure to its “qualia”.
Not sure what you are getting at.
You can make it as opaque or transparent as you want by only exposing a certain set of operations to the outside system (equality, closeness (for color), association). I could have implemented color as tuples ({1,0,0} being red). I just used strings because someone already did the work.
A flaw in mine is that strings can be reduced by .. (concatenation) and string operations. I just pretended that those operations weren’t available (most of the restrictions you make in a program are pretend). I’ll admit I didn’t do a very good job of drawing the line between the thing existing in the system and the system itself. But that could be done with more architecting.
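A sketch of that line-drawing, assuming tuples and only the three exposed operations (Lua again; the names and the `hidden` table are illustrative, not from the original code):

```lua
-- The raw RGB tuple for each quale lives only in this local table; in a
-- real module it would not be exported, so the outside system holds only
-- opaque handles plus the operations deliberately exposed below.
local hidden = {}

local function make_quale(r, g, b)
  local q = { associations = {} }
  hidden[q] = { r, g, b }
  return q
end

local function equals(a, b)
  local ta, tb = hidden[a], hidden[b]
  return ta[1] == tb[1] and ta[2] == tb[2] and ta[3] == tb[3]
end

local function closeness(a, b)   -- smaller means more alike
  local ta, tb = hidden[a], hidden[b]
  return math.abs(ta[1] - tb[1]) + math.abs(ta[2] - tb[2]) + math.abs(ta[3] - tb[3])
end

local function associate(q, thing)
  table.insert(q.associations, thing)
end

local red, red2, blue = make_quale(1, 0, 0), make_quale(1, 0, 0), make_quale(0, 0, 1)
associate(red, "stop sign")
print(equals(red, red2))                        --> true
print(closeness(red, blue))                     --> 2
print(closeness(red, make_quale(1, 0.2, 0.2)))  --> 0.4
print(red.associations[1])                      --> stop sign
```

Unlike strings, the handles can’t be concatenated or picked apart from outside; though the restriction is still partly pretend, since anything that can reach `hidden` can look.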
Well, the original idea used CLISP GENSYMs.
So how do you ensure the outside system is the one doing the experiencing? After all, everything really happens at the hardware level. You seem to have substituted an easier problem: you have ensured that the outside system is the one doing the reporting.
How do you know that you are doing the experiencing? It’s because the system you call “you” is the one making the observations about experience.
Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.
Of course once the architectural details are allowed to affect what you think of the system, everything goes a bit mushy. What if I’d written it in Haskell (lazy, really nonstandard evaluation order)? What if I never ran the program (I didn’t)? What if I ran it twice?
And which one is that? Both the software and the hardware could be said to be. But your compu-qualia are accessible to the one, but not the other!
Haskell doesn’t do anything. Electrons pushing electrons does things.
Um, that program has no causal entanglement with 700nm-wavelength light, 470nm-wavelength light, temperature, or a utility function. I am totally unwilling to admit it might experience red, blue, cold, or pleasure.
If I upload you and stimulate your upload’s “red” cones, you’ll have red qualia, without any 700nm light involved (except for the 700nm light which gave rise to your mind-design which I copied etc., but if you’re talking about entanglement that distant, then nyan_sandwich was also entangled with 700nm light before writing the code).
No need for uploading, electrodes in the brain do the trick.
...that really should have occurred to me first.
Yes, my experience of redness can come not only from light, but also from dreams, hallucinations, sensory illusions, and direct neural stimulation. But I think the entanglement with light has to be present first, and the others depend on it, in order for the qualia to be there.
Take, for example, the occasional case of cochlear implants for people born deaf. When the implant is turned on, they immediately have a sensation, but that sensation only gradually becomes “sound” qualia to them over roughly a year of living with that new sensory input. They don’t experience the sound qualia in dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation) until after their brain is adapted to interpreting and using sound.
Or take the case of tongue-vision systems for people born blind. It likewise starts out as an uninformative mess of a signal to the user, but gradually turns into a subjective experience of sight as the user learns to make sense of the signal. They recognize the experience from how other people have spoken of it, but they never knew the experience previously from dreams, hallucinations, or sensory illusions (and presumably also would not have experienced it in direct neural stimulation).
In short, I think the long-term potentiation of the neural pathways is a very significant kind of causal entanglement that is not present in the program under discussion.
What if you’re a brain in a vat, and you’ve grown up plugged into a high-resolution World of Warcraft? If qualia are wholly inside the skull, their qualitative character can’t depend on facts outside the skull.
Well you need some input to the brain, even if it’s in a vat. Something has to either stimulate the retina or stimulate the relevant neurons further down the line. At least during some learning phase.
Or I guess you could assemble a brain-in-a-vat with memories built-in (e.g. the memory of seeing red). Thus the brain will have the architecture (and therefore the ability) to imagine red.
I can’t tell if you are joking.
We could give it all those things. Machine vision is easy. A temperature measurement is easy. A pleasure-based reward system is easy (Bayesian spam filter).
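For instance, a toy “pleasure” signal along those spam-filter lines might look like this (Lua again; the features and training data are made up for illustration):

```lua
-- Naive-Bayes-style score over feature counts, treated as a pleasure
-- signal: inputs resembling what the system was trained to call "good"
-- score positive, the rest negative. Add-one smoothing throughout.
local counts = { good = {}, bad = {}, total = { good = 0, bad = 0 } }

local function train(label, features)
  for _, f in ipairs(features) do
    counts[label][f] = (counts[label][f] or 0) + 1
  end
  counts.total[label] = counts.total[label] + 1
end

local function pleasure(features)
  local score = math.log((counts.total.good + 1) / (counts.total.bad + 1))
  for _, f in ipairs(features) do
    local g = (counts.good[f] or 0) + 1
    local b = (counts.bad[f] or 0) + 1
    score = score + math.log(g / (counts.total.good + 2))
    score = score - math.log(b / (counts.total.bad + 2))
  end
  return score
end

train("good", { "warm", "sweet" })
train("good", { "warm", "soft" })
train("bad",  { "cold", "bitter" })

print(pleasure({ "warm", "sweet" }) > 0)   --> true   ("pleasant")
print(pleasure({ "cold", "bitter" }) > 0)  --> false  ("unpleasant")
```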
Utility functions are unrelated to pleasure. (We could make it optimize too, though, if you want. Give it free will to boot.)
Now you’re ready to give a program free will? :D
“Some factors are still missing, like the expression of the people’s will...”