I just discovered this debate thanks to a YouTube recommendation from the Future of Life Institute. I find Andrew Serazin's formulation of the question very well put.
The Yes team: we would identify better with conscious machines, and consciousness would help with creativity. You would want your digital assistant to model you, and modeling a human seems to require it.
The No team: AI safety is easier to handle without consciousness, the possibility of creating infinite suffering is bad, and the world's unpleasant tasks are better done by non-conscious agents.
I think the No team is right.
Interesting: during the questions, Yoshua Bengio advocates decoupling moral status from subjective experience (moral status would then be about the entity's role in society?). He then proposes the following taxonomy of the concept of consciousness:
Subjective experience: in a trivial sense already present in modern deep learning, since each neural network learns its own representation of the world.
Self-awareness: a useful notion for an agent that moves through the world.
Emotion: already present in reinforcement learning in a primitive form.
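The last point can be made concrete with a toy sketch (my own illustration, not from the debate): in even the simplest reinforcement learning setup, a scalar reward is the agent's only "valence" signal, steering it toward what is "good" and away from what is "bad". Here is a minimal epsilon-greedy agent on a two-armed bandit; all names and parameters are invented for the example.

```python
import random

def run_bandit(steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy agent on a two-armed bandit.

    The scalar reward is the only evaluative signal the agent receives,
    a crude analogue of the primitive 'emotion' Bengio points to.
    """
    rng = random.Random(seed)
    true_means = [0.2, 0.8]   # arm 1 is objectively better
    q = [0.0, 0.0]            # running value estimates per arm
    counts = [0, 0]
    for _ in range(steps):
        # Explore with probability eps, otherwise exploit the best estimate.
        if rng.random() < eps:
            arm = rng.randrange(2)
        else:
            arm = max(range(2), key=lambda a: q[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]  # incremental mean update
    return q

q = run_bandit()
```

After enough steps the agent's value estimates track the true reward rates, so it "prefers" the better arm; whether that deserves the word emotion is exactly what the taxonomy is probing.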