I expect it to be much harder to measure the “smarts” of an AI than those of a person (because all people share a large amount of detail in their cognitive architecture), so any approach that relies on “near-human level” AIs runs the risk that at least one of those AIs is not near human level at all.