Presumably, the exact same way you’d write any other function.
In this case, all that matters is that instances of seeing red things map correctly onto the outputs expected when one sees red, as opposed to when one does not.
If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human’s “redness qualia”. If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.
Of course, I’m smuggling a bit into the premises here by taking “correct behavior” to be “fully and coherently maintained”. The space of inputs and outputs that would need to be handled to make a program that would convince you it possesses redness qualia is far too vast for us at the moment.
TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.
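For concreteness, a minimal sketch of what a purely behavioral SeeRed() could look like, assuming all that is specified is the input-to-output mapping; the names, threshold, and responses are illustrative assumptions, not a proposal for how such a system would actually be built:

```python
# Purely behavioral SeeRed(): defined only by how inputs map to outputs.
# The threshold and responses are made up for illustration.

def see_red(pixel_rgb):
    """Return True when the input should trigger 'seeing red' behavior."""
    r, g, b = pixel_rgb
    # Crude heuristic: the red channel clearly dominates the others.
    return r > 100 and r > 1.5 * g and r > 1.5 * b

def respond(pixel_rgb):
    # The outward behavior we expect from something that "sees red".
    return "That looks red to me." if see_red(pixel_rgb) else "Not red."

print(respond((220, 40, 30)))  # "That looks red to me."
print(respond((30, 200, 40)))  # "Not red."
```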
In this case, all that matters is that instances of seeing red things map correctly onto the outputs expected when one sees red, as opposed to when one does not.
False. In this case, what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.
If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human’s “redness qualia”
That doesn’t mean there are no qualia (I have them, so I know there are). That also doesn’t mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.
That doesn’t mean there are no qualia (I have them, so I know there are). That also doesn’t mean qualia just serendipitously arrive whenever the correct inputs and outputs are in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.
None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombie-like system would not cut it; you’d need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
Obviously I haven’t solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved into patterns, physics, and logic.
* If this isn’t among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.
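To make the GLUT-versus-internal-system contrast concrete, a toy sketch; the class, its state, and its reflect() report are all illustrative assumptions, and nothing here is claimed to produce or explain qualia:

```python
# GLUT-style: a bare lookup table that memorizes input -> output pairs.
GLUT = {
    (220, 40, 30): "That looks red to me.",
    (30, 200, 40): "Not red.",
}

class RedPerceiver:
    """Toy 'internal system': keeps inner state that later behavior
    (e.g. musing about redness) can draw on, unlike the bare table."""

    def __init__(self):
        self.red_episodes = 0  # inner record of red-seeing episodes

    def observe(self, pixel_rgb):
        r, g, b = pixel_rgb
        saw_red = r > 100 and r > 1.5 * g and r > 1.5 * b
        if saw_red:
            self.red_episodes += 1
        return "That looks red to me." if saw_red else "Not red."

    def reflect(self):
        # Behavior that depends on internal history, not on the current input.
        return (f"I have seen red {self.red_episodes} time(s) "
                f"and wonder why it feels like anything.")
```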
None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombie-like system would not cut it; you’d need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
Sorry, that is most definitely “serendipitously arrive”. You don’t know how to engineer the Redness in explicitly; you are just assuming it must be there if everything else is in place.
However, I do greatly dislike your apparent premise* that qualia can never be dissolved into patterns, physics, and logic.
The claim is more like “hasn’t been”, and you haven’t shown me a SeeRed().