Dear Portia,
Thank you for your thought-provoking and captivating response. Your expertise in the field of biological consciousness is clear, and I’m grateful for the depth and breadth of your commentary on the potential implications of this paper.
If we accept the assumption that the subjective character of a specific quale is defined by its unique and asymmetric relations to other qualia, then this paper indeed offers a method for testing whether such qualia are experienced similarly across humans. Your point that the ‘hard problem’ of consciousness may not be as challenging as we previously thought is profoundly important.
However, I hold a slightly different view of the ‘new approach to deciphering neural correlates of consciousness’ proposed in this paper. While I agree that this approach does not, by itself, answer whether an entity possessing a qualia structure experiences anything, I am interested in whether such an experience could arise, given sufficient conditions and complexity, if we were to introduce what you refer to as ‘some plausible extra assumptions’.
I apologize if my thoughts on alignment were unclear; I did not sufficiently explain the term in my post. AI alignment is the problem of ensuring that the goals and actions of an AI system accord with human values and interests. Adding AI consciousness to the picture undoubtedly complicates the alignment problem. For instance, if we acknowledged an AI as a sentient being, we would face debates analogous to those about animal rights, in which human values and interests must be balanced against those of non-human entities. Moreover, an AI that acquired qualia or consciousness might be able to understand humans on a much deeper level.
Regarding my final question, I was interested in exploring the potential implications of this work for AI alignment and safety, as well as the ethical considerations we may need to ponder as the field progresses. Your insights have provided plenty of food for thought, and I look forward to hearing more from you.
Thank you again for your profound insights.
Best,
Yusuke
Thank you for your kind words, and sorry for not having given a proper response yet; I am really swamped. I am currently at the wonderful workshop “Investigating consciousness in animals and artificial systems: A comparative perspective” https://philosophy-cognition.com/cmc/2023/02/01/cfp-workshop-investigating-consciousness-in-animals-and-artificial-systems-a-comparative-perspective-june-2023/ (also available online). During a talk on the potential for consciousness in multi-modal LLMs, I encountered this paper: Abdou et al. 2021, “Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color” https://arxiv.org/abs/2109.06129. I have not had time to look at it properly yet (my to-read pile rose considerably today, in wonderful ways), but I think it might be relevant to your question, so I wanted to quickly share it.