I don’t understand how the parts fit together. For example, what’s the point of presenting the (t,n)-AGI framework or the Four Background Claims?
Newcomers to the AI Safety arguments might be under the impression that there is a discrete cutoff, i.e. either we have HLAI or we don’t. The point of (t,n)-AGI is to give a picture of what a continuous increase in capabilities looks like. It is also slightly more formal than the simple “words-based” definitions of AGI. If you know of a more precise mathematical formulation of the notions of general and super intelligence, I would love it if you could point me towards it so that I can include it in the post.
As for Four Background Claims, the reason for its inclusion is to provide an intuition for why general intelligence is important, and for why, even though future systems might be intelligent, it is not the default that they will care about our goals, or even pursue our goals in the way the designers intended.