I am interested in how, in the early stages of developing an AI, we might map our human perception of the world (language) onto the AI's view of the world (likely pure maths).
There have been previous discussions, such as "AI ontology crises: an informal typology", though it has been argued that attempting to map the entire world down to values is dangerous.
If we took an Upper Ontology and expanded it slightly with Friendly AI concepts (keeping the extension small, so as not to become too restrictive or internally conflicting), this could give us a human-readable view of the current state of the AI's perception of the world.
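To make the idea concrete, here is a minimal sketch of what such a mapping might look like. The category names loosely follow the top level of an upper ontology like SUMO, the Friendly-AI extensions ("HumanValue", "AgentGoal") are hypothetical labels of my own, and the AI's internal state is stood in for by toy concept IDs with activation strengths; the alignment table between the two is exactly the hard, open part of the question.

```python
# Sketch: a small upper-ontology fragment extended with hypothetical
# FAI concepts, plus a toy mapping from an AI's internal concept IDs
# onto human-readable ontology nodes. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    parent: "Node | None" = None
    children: list["Node"] = field(default_factory=list)

    def add(self, name: str) -> "Node":
        child = Node(name, parent=self)
        self.children.append(child)
        return child

    def path(self) -> str:
        # Human-readable lineage, e.g. "Entity > Abstract > HumanValue"
        parts, node = [], self
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return " > ".join(reversed(parts))

# Upper-ontology fragment (SUMO-style top level).
entity = Node("Entity")
physical = entity.add("Physical")
abstract = entity.add("Abstract")

# Slight expansion for FAI-relevant concepts (hypothetical labels).
human_value = abstract.add("HumanValue")
agent_goal = abstract.add("AgentGoal")

# Toy "AI view": opaque internal concept IDs with activation strengths.
ai_state = {"c_0412": 0.91, "c_0077": 0.35}

# Hand-maintained alignment from internal IDs to ontology nodes; in
# practice, producing and maintaining this table is the open problem.
alignment = {"c_0412": agent_goal, "c_0077": human_value}

for concept_id, strength in ai_state.items():
    node = alignment.get(concept_id)
    if node is not None:
        print(f"{concept_id} ({strength:.2f}) -> {node.path()}")
```

The point of the sketch is only that the human-facing side stays small and stable (the upper ontology plus a thin FAI extension) while the AI-facing side is free to change, with the alignment table absorbing the drift.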
Are there any existing ontologies for machine intelligence, and is this something worth exploring now, even if only on paper?