> I’m having some difficulty immediately thinking of a way of studying that
Pretty sure that’s not what 1a3orn would say, but you can study efficient world-models directly to grok that. Instead of learning about them through the intermediary of extant AIs, you can study the very thing these AIs are trying to ever-better approximate.
See my (somewhat outdated) post on the matter, plus the natural-abstractions agenda.