For my own part, my experience of perceiving inputs includes something that is shared among the times that I report the experience honestly, when I lie about the experience, and when I remain silent about the experience.
I see nothing in your sample code that is capable of supporting that behavior—that is, your code either reports the experience or it doesn’t, but there’s no second thing that can either align with the report or conflict with it, or that can be shared between two runs of the program one of which reports the experience and one of which doesn’t.
I conclude that my experience of perceiving inputs has relevant properties that your sample code does not.
I suspect that’s true of everyone else, as well.
For my own part, my experience of perceiving inputs includes something that is shared among the times that I report the experience honestly, when I lie about the experience, and when I remain silent about the experience.
Once you put in the functionality for it to lie about what it’s experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
You could record that that sameness was there by remembering previous inputs and looking at those.
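A minimal sketch of what that could look like, in hypothetical Lua (not the sample code itself): the internal token is the same whether the program reports honestly, lies, or stays silent, and sameness across inputs is tested against remembered percepts. The criteria for deciding when to lie are left as a parameter here.

```lua
-- Hypothetical sketch, not the sample code under discussion.
-- The internal quale token is the same whether the program reports it
-- honestly, lies about it, or stays silent; sameness is tested by
-- comparing against remembered earlier percepts.

local QUALE = { red = {}, green = {}, blue = {} }  -- opaque internal tokens

local memory = {}  -- percepts remembered from earlier inputs

local function perceive(input)
  local quale = QUALE[input]                 -- the internal "experience"
  local seen_before = false
  for _, earlier in ipairs(memory) do
    if earlier == quale then seen_before = true end
  end
  memory[#memory + 1] = quale
  return quale, seen_before
end

-- policy stands in for the "criteria for deciding when to lie"
local function report(quale, policy)
  if policy == "silent" then return nil end
  if policy == "lie" then
    for _, other in pairs(QUALE) do            -- return any token that is
      if other ~= quale then return other end  -- not the one experienced
    end
  end
  return quale                               -- honest report
end

local q1 = perceive("red")
print(report(q1, "honest") == q1)  --> true
print(report(q1, "lie") == q1)     --> false
print(report(q1, "silent"))        --> nil

local q2, seen = perceive("red")
print(q2 == q1, seen)              --> true  true
```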
shared between two runs of the program one of which reports the experience and one of which doesn’t.
This is a different issue, analogous to whether my “red” and your “red” are the same. From the inside, we’d feel some of the same things (stop sign, aggressiveness, hot) but then some different things (that apple I ate yesterday). From the outside, they are implemented in different chunks of flesh, but may or may not have analogous patterns that represent them.
Once you can clearly specify what question to ask, I think the program can answer it and will have the same conclusion you do.
Once you put in the functionality for it to lie about what it’s experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
I hold that qualia are opaque symbols.
But your problem is that their opacity in your original example hinges on their being implemented in a simple way. You need to find a way of upgrading the AI to be a realistic experiencer without adding describable structure to its “qualia”.
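For concreteness, a GENSYM-style opaque symbol can be sketched in Lua as well (hypothetical, not the code under discussion): a fresh token that is equal only to itself and has no internal structure the program can describe.

```lua
-- Hypothetical sketch, not the code under discussion.
-- A GENSYM-like opaque symbol: each call yields a fresh token that is
-- equal only to itself and exposes no internal structure for the rest
-- of the program to describe.
local function gensym()
  return {}
end

local red_quale  = gensym()
local blue_quale = gensym()

print(red_quale == red_quale)   --> true  (identity only)
print(red_quale == blue_quale)  --> false (distinct, otherwise featureless)
```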
Not sure what you are getting at.
You can make it as opaque or transparent as you want by only exposing a certain set of operations to the outside system (equality, closeness (for color), association). I could have implemented color as tuples ({1,0,0} being red). I just used strings because someone already did the work.
A flaw in mine is that strings can be reduced by .. (concatenation) and other string operations. I just pretended that those operations weren’t available (most of the restrictions you make in a program are pretend). I’ll admit I didn’t do a very good job of drawing the line between the thing existing in the system and the system itself, but that could be done with more architecting.
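A sketch of that line-drawing, again in hypothetical Lua (the actual sample used raw strings): the representation could be an RGB-style tuple such as {1,0,0}, but the outside system only ever gets three operations, equality, closeness, and association.

```lua
-- Hypothetical sketch: hide the representation behind a small set of
-- exposed operations (equality, closeness, association). {1,0,0}
-- stands in for red, and the outside system never sees the numbers.

local hidden = {}   -- private: representations live here, keyed by handle

local function make_quale(r, g, b)
  local handle = {}                  -- opaque handle given to the outside
  hidden[handle] = { r = r, g = g, b = b, assoc = {} }
  return handle
end

-- The only operations the outside system is allowed to use:

local function same(a, b)            -- equality
  return a == b
end

local function closeness(a, b)       -- smaller = more similar
  local x, y = hidden[a], hidden[b]
  return math.abs(x.r - y.r) + math.abs(x.g - y.g) + math.abs(x.b - y.b)
end

local function associate(q, thing)   -- association
  table.insert(hidden[q].assoc, thing)
end

local function associations(q)
  return hidden[q].assoc
end

local red    = make_quale(1, 0, 0)
local orange = make_quale(1, 0.5, 0)
local blue   = make_quale(0, 0, 1)

associate(red, "stop sign")
print(same(red, red))                                  --> true
print(closeness(red, orange) < closeness(red, blue))   --> true
print(associations(red)[1])                            --> stop sign
```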
Well, the original idea used CLISP GENSYMs.
So how do you ensure the outside system is the one doing the experiencing? After all, everything really happens at the hardware level. You seem to have substituted an easier problem: you have ensured that the outside system is the one doing the reporting.
How do you know that you are doing the experiencing? It’s because the system you call “you” is the one making the observations about experience.
Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.
Of course once the architectural details are allowed to affect what you think of the system, everything goes a bit mushy. What if I’d written it in Haskell (lazy, really nonstandard evaluation order)? What if I never ran the program (I didn’t)? What if I ran it twice?
And which one is that? Both the software and the hardware could be said to be. But your compu-qualia are accessible to the one but not the other!
Haskell doesn’t do anything. Electrons pushing electrons does things.
I think it would have the properties you are looking for.
All the properties?
Huh.
All the ones I thought of in the moment.