Presumably an Oracle AI’s ontology would not be restricted because it’s trying to model the entire world.
Obviously we don’t particularly need an AI to play chess. It’s possible that we’d want this for some other domain, though, perhaps especially for one that has some relevance for FAI, or as a self-improving AI prototype. I also think it’s interesting as a thought experiment. I don’t understand the reasons why SI is so focused on the FAI approach, and I figure by asking questions like that one maybe I can learn more about their views.
Well, yes, by definition. But that’s not an answer to my question.
I don’t know which approach would be more easily formalized and proven safe.