> APS is less understood and poorly forecasted compared to AGI.
I should clarify that I was talking about the definition used in estimates like the Direct Approach and in the Metaculus forecast. The latter is, roughly speaking, capability sufficient to pass a hard adversarial Turing test, plus human-like performance on enough intellectual tasks as measured by certain benchmarks. This is something that can plausibly be upper-bounded by the Direct Approach methodology, which aims to predict when an AI could achieve negligible error in predicting what a human expert would say over a specific time horizon. So this forecast is essentially a forecast of ‘human-expert-writer-simulator AI’, and that is the definition used in public elicitations like the Metaculus forecasts.
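To make the ‘upper bound’ claim concrete, here is a rough sketch of the distinguishability argument as I understand it (the notation is mine, not taken verbatim from the Direct Approach write-up): a judge comparing model text to expert text accumulates log-likelihood-ratio evidence token by token, and in expectation that evidence grows linearly in the per-token KL divergence, so the horizon over which the model stays indistinguishable scales inversely with that divergence.

```latex
% Sketch only; notation (p_H, p_M, D, k) is mine, and the i.i.d.
% per-token approximation is a simplification.
% Expected evidence after k tokens, for human distribution p_H and
% model distribution p_M:
\[
\mathbb{E}_{x \sim p_H}\!\left[\sum_{i=1}^{k} \log \frac{p_H(x_i)}{p_M(x_i)}\right]
  = k \,\mathrm{KL}(p_H \,\|\, p_M),
\qquad
k^\ast \approx \frac{D}{\mathrm{KL}(p_H \,\|\, p_M)},
\]
% where D is the evidence threshold (in nats) at which the judge
% reliably tells model from human. "Negligible error over horizon k"
% then corresponds to driving KL(p_H || p_M) below roughly D / k.
```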
However, I agree with you that, while that is how the term is defined in some of the sources I cite, it is not what the word denotes (just generality, which e.g. GPT-4 plausibly has in some weak sense of the word), and you also don’t get from being able to simulate the writing of any human expert to takeover risk without making many additional assumptions.