How should we unpack black boxes we don't have yet? For example, a non-neural, language-capable, self-maintaining, goal-oriented system*.
We have a surfeit of potential systems (with differing capacities for self-inspection and self-modification) and no way to test whether any of them will fall into the above category, or how big that category actually is.
*I'm trying to unpack the notion of AGI somewhat here.