Inside the AI, whether an agent AI or a planning Oracle, there would be similar AGI-challenges like “build a predictive model of the world”, and similar FAI-conjugates of those challenges like finding the ‘user’ inside an AI-created model of the universe.
Isn’t building a predictive model of the world central to any AGI development? I don’t see why someone who focuses specifically on FAI would worry more about a predictive model than other AGI developers. Specifically, I think that even without the Singularity Institute there would still be AGI people working on building predictive models of the world.
Yes, hence that being referred to as an “AGI-challenge”. An FAI, however, would be required not only to model the world but also (for example) to “find … the ‘user’ inside an AI-created model of the universe.”