The author of this paper, and a co-worker of his, recently presented this work to my research group, and we discussed its implications, also with artificial phenomenology in mind. My impression is that the potential implications could be profound if we make some plausible extra assumptions, but that this mostly concerns attempts to decipher biological consciousness (which is my field, so it had me hyped).
Whether you interpret this very interesting work as showing that specific qualia are experienced very similarly between humans depends on whether you assume that the subjective feel of a quale is fully determined by its unique and asymmetric relations to the others. That assumption is, imho, plausible (I played around with this idea many years ago already), but it has further implications, among others that the hard problem of consciousness may not be quite as hard as we previously thought.
But this paper basically addresses how a specific phenomenal character could be non-random (which is amazing), and in the process makes inverted qualia scenarios extremely unlikely (also amazing) and gives novel approaches for deciphering neural correlates of consciousness (also amazing). It does not, however, answer the question of whether a particular entity with this structure experiences anything at all, which is a completely separate area of research (though the two are very tentatively beginning to converge). We specifically discussed that it is quite plausible that you could get a similar structure within an artificial neural net trained to reproduce human perceptions, hence building the equivalent of our phenomenological map, without the neural net feeling anything at all, and hence also without it seeing red in particular. I am not sure what implications you see for alignment, or where your final question was heading in this regard.
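To make concrete what "a similar structure within an artificial neural net" could look like, here is a minimal sketch (my own illustration, not anything from the paper; the data and function names are hypothetical) of comparing a model's internal similarity structure over colour stimuli with human similarity judgements, in the spirit of representational similarity analysis:

```python
# Sketch only: assumes you already have model activations for a set of
# colour stimuli and an averaged human dissimilarity matrix over the
# same stimuli. Nothing here is taken from the paper under discussion.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def dissimilarity_matrix(vectors):
    """Pairwise dissimilarities (cosine distance) between stimuli."""
    return squareform(pdist(vectors, metric="cosine"))

def structural_match(embeddings, human_dissimilarities):
    """Spearman correlation between the model's dissimilarity structure
    and human-rated dissimilarities over the same stimuli."""
    model_rdm = dissimilarity_matrix(embeddings)
    iu = np.triu_indices_from(model_rdm, k=1)  # non-redundant entries only
    rho, p = spearmanr(model_rdm[iu], human_dissimilarities[iu])
    return rho, p
```

A high correlation would mean the net has recovered something like our phenomenological map in the structural sense discussed above, while leaving the question of whether it feels anything entirely open.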
P.S.: Happy to answer questions about this, but the admins of this site have limited me to one post of any kind per day.
Dear Portia,
Thank you for your thought-provoking and captivating response. Your expertise in the field of biological consciousness is clear, and I’m grateful for the depth and breadth of your commentary on the potential implications of this paper.
If we accept the assumption that the subjective feel of a specific quale is defined by its unique and asymmetric relations to other qualia, then this paper indeed offers a way to test whether such qualia are experienced similarly among humans. Your point that the 'hard problem' of consciousness may not be as challenging as we previously thought is profoundly important.
However, I hold a slightly different view about the 'new approach to deciphering neural correlates of consciousness' proposed in this paper. While I agree that this approach does not by itself answer whether an entity with such a qualia structure experiences anything, I am interested in contemplating the possibility that such an experience could occur, given the right conditions and complexity, if we were to introduce what you refer to as 'some plausible extra assumptions'.
I apologize if my thoughts on alignment were unclear. I did not sufficiently explain AI alignment in my post. AI alignment is about ensuring that the goals and actions of an AI system coincide with human values and interests. Adding the factor of AI consciousness undoubtedly complicates the alignment problem. For instance, if we acknowledge an AI as a sentient being, it could lead to a situation similar to debates about animal rights, where we would need to balance human values and interests with those of non-human entities. Moreover, if an AI were to acquire qualia or consciousness, it might be able to understand humans on a much deeper level.
Regarding my final question, I was interested in exploring the potential implications of this work in the context of AI alignment and safety, as well as ethical considerations that we might need to ponder as we progress in this field. Your insights have provided plenty of food for thought, and I look forward to hearing more from you.
Thank you again for your profound insights.
Best,
Yusuke
Thank you for your kind words, and sorry for not having given a proper response yet; I am really swamped. I am currently at the wonderful workshop "Investigating consciousness in animals and artificial systems: A comparative perspective" https://philosophy-cognition.com/cmc/2023/02/01/cfp-workshop-investigating-consciousness-in-animals-and-artificial-systems-a-comparative-perspective-june-2023/ (online as well), and during the talk on the potential for consciousness in multi-modal LLMs I encountered this paper: Abdou et al. 2021, "Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color" https://arxiv.org/abs/2109.06129. I have not had time to look at it properly yet (my to-read pile rose considerably today, in wonderful ways), but I think it might be relevant for your question, so I wanted to quickly share it.
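In case it is useful, here is a rough sketch of the kind of probe that paper's question suggests to me (my own illustration, not Abdou et al.'s actual pipeline; the colour-term embeddings and CIELAB coordinates are assumed to be supplied): correlate pairwise distances between a model's colour-term embeddings with distances in a perceptual colour space.

```python
# Sketch under stated assumptions: `term_embeddings` has shape
# (n_terms, dim), one embedding per colour word from some language model;
# `cielab_coords` has shape (n_terms, 3), the CIELAB values of the
# corresponding prototypical colour chips. Not Abdou et al.'s method.
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def perceptual_alignment(term_embeddings, cielab_coords):
    """Do distances between colour-term embeddings track distances in
    CIELAB space for the corresponding colour chips?"""
    emb_dists = pdist(term_embeddings, metric="cosine")
    lab_dists = pdist(cielab_coords, metric="euclidean")
    return spearmanr(emb_dists, lab_dists)  # (correlation, p-value)
```

Again, this would only speak to whether the model's colour geometry mirrors perceptual structure, which is the structural half of the question we have been discussing, not the experiential one.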