Did anything about this paper stand out to you? It doesn’t strike me as anything revolutionary on its own. Interesting component, perhaps. Does it change your expectations about what safety approaches work? Is it mainly capabilities news?
It certainly is an interesting component of a research tree that will be key to making anything seriously scale, though.
No, just a piece of the puzzle of a fuller understanding of AI self-control that I want to outline, one that should integrate ML, cognitive science, theories of consciousness, control theory/resilience theory, and dynamical systems/stability theory.
Only this sort of understanding could make the discussion of oracle-AI vs. agent-AI agendas truly substantive, IMO.
Makes sense. Are you familiar with Structured State Spaces and follow-ups?
https://www.semanticscholar.org/paper/Efficiently-Modeling-Long-Sequences-with-Structured-Gu-Goel/ac2618b2ce5cdcf86f9371bcca98bc5e37e46f51#citingPapers
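For reference, a minimal NumPy sketch of the core mechanism in the linked S4 paper: a continuous linear state space model x'(t) = A x(t) + B u(t), y(t) = C x(t), discretized with the bilinear (Tustin) transform and unrolled as a recurrence over the input sequence. The function names and toy random matrices here are illustrative assumptions, not the paper's code; the actual S4 model initializes A with a structured HiPPO matrix, learns the step size, and evaluates the recurrence as a global convolution for efficiency.

```python
import numpy as np

def discretize(A, B, dt):
    # Bilinear (Tustin) discretization of the continuous SSM
    # x'(t) = A x(t) + B u(t), as used in the S4 paper:
    # Ab = (I - dt/2 A)^-1 (I + dt/2 A), Bb = (I - dt/2 A)^-1 dt B.
    n = A.shape[0]
    I = np.eye(n)
    inv = np.linalg.inv(I - (dt / 2) * A)
    return inv @ (I + (dt / 2) * A), inv @ (dt * B)

def ssm_scan(Ab, Bb, C, u):
    # Unroll the discrete recurrence x_k = Ab x_{k-1} + Bb u_k,
    # y_k = C x_k over a 1-D input sequence u.
    x = np.zeros(Ab.shape[0])
    ys = []
    for u_k in u:
        x = Ab @ x + (Bb * u_k).ravel()
        ys.append(C @ x)
    return np.array(ys)

# Toy example with random matrices (assumption: S4 itself uses a
# structured HiPPO initialization for A, not random matrices).
rng = np.random.default_rng(0)
N = 8                                         # state dimension
A = rng.normal(size=(N, N)) / N - np.eye(N)   # roughly stable A
B = rng.normal(size=(N, 1))
C = rng.normal(size=(N,))
Ab, Bb = discretize(A, B, dt=0.1)
y = ssm_scan(Ab, Bb, C, rng.normal(size=256))
print(y.shape)  # (256,)
```

The recurrent view above is O(L) per sequence of length L but sequential; the paper's contribution is largely about computing the same map as a convolution so it parallelizes over the sequence.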