We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligence and consciousness, independent or intrinsically linked?
Acknowledging the limited value of introspection, I can nevertheless remember times when I was close to experiencing “pure consciousness”, with no conscious problem-solving activity at all. Perhaps I was entranced by a beautiful sunset, or a haunting musical performance. My whole being seemed to be pure experience, pure consciousness, with no particular need for intelligence, Bayesian optimization, goal satisfaction, or any of the other paraphernalia we associate with intelligence. This suggests to me that it is at least plausible that consciousness does not require intelligence.
In the other direction, the idea of an intelligent problem solver devoid of consciousness is an element in many powerful fictional dystopias. Even Eliezer’s paperclip maximizer partakes of this trope. It seems that we have little difficulty imagining intelligence without consciousness, without awareness, sentience, qualia, or the ability to suffer.
If we provisionally assume that the two qualities are independent, this raises the question of how we might program consciousness (even if we only want to know how in order to avoid doing it accidentally). Is it possible that even relatively simple programs may be conscious, capable of feeling real pain and suffering, as well as pleasure and joy? Is there any kind of research program that could shed light on these questions?