I find it hard to imagine that you’re actually denying that you or I have things that, colloquially, one would describe as preferences, and exist in an objective sense.
I deny that a generic outside observer would describe us as having any specific set of preferences, in an objective sense.
This doesn’t bother me too much, because it’s sufficient that we have preferences in a subjective sense—that we can use our own empathy modules and self-reflection to define, to some extent, our preferences.
a brain is ultimately many fewer assumptions (to the pre-industrial Norse people)
“Realistic” preferences make ultimately fewer assumptions (to actual humans) than “fully rational” or other preference sets.
The problem is that this is not true for generic agents, or AIs. We have to get the human empathy module into the AI first—not so it can predict us (it can already do that through other means), but so that its decomposition of our preferences is the same as ours.
I deny that a generic outside observer would describe us as having any specific set of preferences, in an objective sense.
It’s possible that we’ve been struggling with this conversation because I’ve been failing to grasp just how radically different your opinions are from mine.
Imagine your generic outside observer was superintelligent, and understood (through pure analysis) qualia and all the corresponding mysteries of the mind. Would you then still say this outside observer would not consider us to have any specific set of preferences, in an objective sense, where “preferences” takes on its colloquial meaning?
If not, why? I think my stance is obvious: where “preferences” colloquially means approximately “a greater liking for one alternative over another or others”, all I have to claim is that there is an objective sense in which I like things, which is simple because there’s an objective sense in which I have that emotional state and internal stance.