Ehn, this is very hard to predict, because it’ll depend a whole lot on whether AIs are evolved, created, or copied from humans (or rather, on what mix of those mechanisms is used to bootstrap and improve the AIs). To the extent that AIs learn from or are coerced toward human “values”, they’ll likely encode all human emotions. To the extent that they’re evolved separately, they’ll likely have a different, possibly-overlapping set.
As you hint at, human emotions are evolved strategies for decision-making. An AI is likely to be complex enough that it can’t perfectly introspect itself, so it’ll need heuristic shortcuts for many decisions; and since many of the same drives for expansion, cooperation, and survival in the face of competitive and cooperative organisms/agents will likely exist, it seems reasonable to assume similar emotions will emerge.