There’s a gap in my inside view of the problem. Part of me thinks that capabilities progress, such as out-of-distribution robustness or the four tenets described in Open Problems in Cooperative AI, is necessary for AI to be transformative, i.e. a prerequisite of TAI. Another part of me thinks AI will be x-risky and unstable if it progresses along other axes but not along the axis of those capabilities.
There’s a geometry here: a 2x2 of transformative vs. not transformative, crossed with dangerous vs. not dangerous.
To have an inside view, I must be able to navigate adequately between those quadrants with respect to outcomes, interventions, etc.
If something can learn fast enough, then its out-of-distribution performance won’t matter as much. (OOD performance will still matter, but the system will have less to learn where it’s already good and more to learn where it’s not.*)
*Although generalization ability seems like the reason learning matters in the first place, so I see why it seems necessary for ‘transformation’.