This sounds like a very interesting question. I get stuck trying to answer it directly, given the differences between AGI and humans.
But taking your question at face value:
"ferreting out the fundamental intentions"
What sort of context are you imagining? Humans aren't even great at identifying the fundamental reasons for their own actions. They'll confabulate if forced to.
Any context where there are any impressive successes. I gave possible examples here:
In many cases people have a strong need to partially align other humans. That is, they have a need to interact with other people in a way that communicates and modifies intentions, until they are willing to risk their resources to coordinate on stag hunts. This has happened in evolutionary history. For example, people have had to figure out whether mates are trustworthy and worthwhile to invest in raising children together rather than bailing, and people have had to figure out whether potential allies in tribal politics will be loyal. This has also happened in memetic history. For example, people have developed skill in sussing out reliable business partners that won’t scam them.