The problem here, I think, is that we are aware of only one “type” of self-conscious/self-aware being: humans. Thus, to speak of a self-aware AI is always, seemingly, to anthropomorphize it, even when this is not intended. It would therefore perhaps be more accurate to say that we have no idea whether “features” such as frustration, exasperation, and feelings of superiority are peculiar to humans, or are, as it were, emergent properties of self-awareness itself.
I would venture to suggest that any Agent capable of seeing itself as a unique “I” must almost inevitably compare itself to other Agents (self-aware or not) and draw conclusions from those comparisons, conclusions that would in turn “express themselves” as precisely those kinds of “feelings” and attitudes towards others. Of course, this is speculative, and chances are we shall find that self-awareness need not come with such results at all.
However… there is a part of me that thinks self-awareness (and the concomitant realization that one is separate… self-willed, as it were) must lead at least to the realization that one’s qualities can be compared to the (similar) qualities of others, and thus be found superior or inferior by some chosen metric. Assuming the AGI we create is indeed optimized for rational, logical, and efficient operation, it would be merely a matter of time before such an AGI was forced to conclude that we are inferior across a broad range of metrics. Now, if we were content to admit such inferiority and willingly defer to its “Godlike” authority… perhaps the AGI seeing us as inferior would not be a major concern. Alas, the concern would then be the fact that we had willingly become its servants… ;)