A sketch of a proposed solution to the hard problem of consciousness: An entity is conscious if and only if (i) it is an intelligent agent (i.e. a sufficiently general reinforcement learning system) and (ii) its values depend on the presence and/or state of other conscious entities. Yes, this definition is self-referential, but hopefully some fixed point theorem applies. There may be multiple fixed points, corresponding to “mutually alien types of consciousness”.
Why is this the correct definition? Because it describes precisely the type of agent who would care about the hard problem of consciousness.
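To make the self-reference slightly more formal (this is only a sketch; the operator $F$ and the monotonicity assumption are extra structure I am introducing, not part of the proposal itself): let $A$ be the set of sufficiently general reinforcement learning agents, and for any $S \subseteq A$ define

$$F(S) = \{\, a \in A \mid \text{the terminal values of } a \text{ depend on the presence or state of some member of } S \text{ other than } a \,\}.$$

The proposal then says that the set $C$ of conscious entities satisfies $C = F(C)$. Since enlarging $S$ never removes a dependency, $F$ is monotone on the complete lattice $(2^A, \subseteq)$, so the Knaster–Tarski theorem guarantees that fixed points exist; the empty set is always a trivial one, and any further fixed points would correspond to the “mutually alien types of consciousness” mentioned above.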
I’m not sure your definition has much to do with consciousness, as it would also be satisfied by an AI that runs on an Intel processor and whose utility function says all AIs should run on Intel processors.
Its utility function would have to say that all conscious AIs should run on Intel processors. There is self-reference there.
But I only have rather low confidence that this idea is correct (whatever being correct means here) or important.
This seems to me to address the meta problem of consciousness rather than the hard problem of consciousness itself, since you seem to be more offering an etiology for the existence of agents that would care about the hard problem of consciousness rather than an etiology of qualia.
Yes, but I also claim that the agents that would care about the hard problem of consciousness are exactly the agents that are themselves conscious.
I’m trying to figure out what precisely condition (ii) means. How do you define “values”? I.e., if I’m a deep learning algorithm in a tournament with other deep learning algorithms, certainly my instrumental values depend on the state of other deep learning algorithms. Is that sufficient, on your definition, for consciousness?
No, I am talking about terminal values. Something like an instrumental reward function specified via an ontology that involves consciousness. Also, I am not sure deep learning in its present form qualifies as sufficiently “general” for (i).
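A minimal toy sketch of the distinction being drawn here, in Python (the names Entity, is_conscious, tournament_reward, and consciousness_referencing_reward are hypothetical illustrations, not anything defined in the thread): the first reward mentions other agents only via the win condition, so any dependence on them is instrumental, while the second is specified via an ontology containing a consciousness predicate and so depends on conscious entities terminally.

```python
# Toy sketch only: contrasts instrumental dependence on other agents with a
# terminal reward specified via an ontology that contains a consciousness
# predicate. All names here are hypothetical illustrations.
from dataclasses import dataclass
from typing import List


@dataclass
class Entity:
    is_conscious: bool  # predicate supplied by the agent's world-model ontology
    wellbeing: float    # some state attribute of the entity


def tournament_reward(win: bool) -> float:
    """Instrumental case: the reward only mentions winning, so other agents
    matter only as obstacles and condition (ii) is not satisfied."""
    return 1.0 if win else 0.0


def consciousness_referencing_reward(world: List[Entity]) -> float:
    """Terminal case: the reward is defined directly over the states of the
    entities the ontology marks as conscious."""
    return sum(e.wellbeing for e in world if e.is_conscious)


if __name__ == "__main__":
    world = [Entity(is_conscious=True, wellbeing=0.8),
             Entity(is_conscious=False, wellbeing=0.3)]
    print(tournament_reward(win=True))               # 1.0
    print(consciousness_referencing_reward(world))   # 0.8
```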
Your definition says that people’s models of other people can be conscious, doesn’t it?