Do you think there is a place for a Turing-like test that determines how close to human intelligence it is, even if it has not reached that level?
Probably, but I think figuring out exactly what you are measuring, or trying to determine, is a big part of the problem. GPT doesn’t think the way humans do, so it’s unclear what it would even mean for it to be “close.” In some absolute sense, the “intelligence” space has as many axes as there are problems on which you can measure performance.
Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: it is meant to identify an AI that is definitively above human level.