Once you add the functionality for it to lie about what it’s experiencing (and criteria for deciding when to lie), and functionality for testing those samenesses, I think it would have the properties you are looking for.
I hold that qualia are opaque symbols.
But your problem is that their opacity in your original example hinges on their being implemented in a simple way. You need to find a way of upgrading the AI to be a realistic experiencer without adding describable structure to its “qualia”.
You can make it as opaque or transparent as you want by only exposing a certain set of operations to the outside system (equality, closeness (for color), association). I could have implemented color as tuples ({1,0,0} being red). I just used strings because someone already did the work.
A flaw in mine is that strings can be reduced by .. (concatenation) and other string operations. I just pretended that those operations weren’t available (most of the restrictions you make in a program are pretend). I’ll admit I didn’t do a very good job of drawing the line between the thing existing in the system, and the system itself. But that could be done with more architecting.
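A minimal sketch of that restriction, in hypothetical Python rather than the original implementation: the payload (a tuple, as suggested above, so no concatenation exists to leak structure) is hidden behind an interface that exposes only the agreed operations — equality, closeness, and association. All names here are illustrative, not the original code.

```python
class Quale:
    """An opaque symbol: the payload is hidden, and only a fixed set
    of operations (equality, closeness, association) is exposed to
    the outside system."""

    def __init__(self, payload, closeness_fn=None):
        self._payload = payload          # hidden; the "pretend" restriction made structural
        self._closeness = closeness_fn
        self._associations = set()

    def same_as(self, other):
        # equality: the only way to compare two qualia directly
        return isinstance(other, Quale) and self._payload == other._payload

    def closeness(self, other):
        # for colors: distance between the hidden RGB tuples
        return self._closeness(self._payload, other._payload)

    def associate(self, label):
        self._associations.add(label)

    def associated_with(self, label):
        return label in self._associations

    def __repr__(self):
        return "<quale>"                 # reporting reveals no internal structure


# colors as tuples rather than strings, so no string operations apply
def rgb_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


red = Quale((1, 0, 0), rgb_distance)
red_again = Quale((1, 0, 0), rgb_distance)
green = Quale((0, 1, 0), rgb_distance)
```

The outside system can ask `red.same_as(red_again)` or `red.closeness(green)`, but nothing in the exposed interface lets it decompose a quale — which is the line the paragraph above admits was only drawn by pretending.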
So how do you ensure the outside system is the one doing the experiencing? After all, everything really happens at the hardware level. You seem to have substituted an easier problem: you have ensured that the outside system is the one doing the reporting.
How do you know that you are doing the experiencing? It’s because the system you call “you” is the one making the observations about experience.
Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.
Of course once the architectural details are allowed to affect what you think of the system, everything goes a bit mushy. What if I’d written it in Haskell (lazy, really nonstandard evaluation order)? What if I never ran the program (I didn’t)? What if I ran it twice?
> But your problem is that their opacity in your original example hinges on their being implemented in a simple way. You need to find a way of upgrading the AI to be a realistic experiencer without adding describable structure to its “qualia”.
Not sure what you are getting at.
> You can make it as opaque or transparent as you want by only exposing a certain set of operations to the outside system (equality, closeness (for color), association). I could have implemented color as tuples ({1,0,0} being red). I just used strings because someone already did the work.

> A flaw in mine is that strings can be reduced by .. (concatenation) and other string operations. I just pretended that those operations weren’t available (most of the restrictions you make in a program are pretend). I’ll admit I didn’t do a very good job of drawing the line between the thing existing in the system, and the system itself. But that could be done with more architecting.
Well, the original idea used CLISP GENSYMs.
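A GENSYM-style quale can be sketched in Python for comparison (hypothetical code; Common Lisp’s GENSYM returns a fresh uninterned symbol, which plain object identity roughly approximates): each quale is a fresh token with no accessible internal structure at all, so identity is the only operation anyone — inside or outside the system — can perform on it.

```python
def gensym_quale():
    """Return a fresh opaque token, comparable only by identity --
    roughly what a Common Lisp GENSYM gives you.  Unlike the string
    version, there is no payload to concatenate or decompose."""
    return object()


red = gensym_quale()
green = gensym_quale()

# identity is the only available operation on these tokens
same = red is red          # True: a quale is self-identical
distinct = red is green    # False: each gensym is unique
```

This makes the opacity real rather than pretended, at the cost of losing the closeness operation: with no hidden tuple inside, there is nothing for a distance function to measure.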
> So how do you ensure the outside system is the one doing the experiencing? After all, everything really happens at the hardware level. You seem to have substituted an easier problem: you have ensured that the outside system is the one doing the reporting.
> How do you know that you are doing the experiencing? It’s because the system you call “you” is the one making the observations about experience.

> Likewise here, the one driving the comparisons and doing the reporting seems to be the one that should be said to be experiencing.

> Of course once the architectural details are allowed to affect what you think of the system, everything goes a bit mushy. What if I’d written it in Haskell (lazy, really nonstandard evaluation order)? What if I never ran the program (I didn’t)? What if I ran it twice?
And which one is that? Both the software and the hardware could be said to be driving the comparisons. But your compu-qualia are accessible to the one and not the other!
Haskell doesn’t do anything. Electrons pushing electrons does things.