The map is not the territory in terms of AI
Since AIs will be mapping entities like humans, it is interesting to ponder how they will scientifically separate fact from fiction.

You could imagine a religious AI that has read a lot of religious texts and then wants to meet or find god, or perhaps replace god, or something else entirely. To learn that this is not possible, it would need instruments and real-time data from the world to build a realistic world model, and even then that might not be enough to dissuade it from its belief. In fact, I'd say there are millions of things an AI could take to be facts that are not: everything from small incidents in raw data from the world up to highly abstract conceptual models of it. At any point the AI can go down a path that is not correct. Using an algorithm to find unusual occurrences in data (like a tumour on an MRI image) is different from connecting that data point to conceptual models that explain it. We could try to limit this ability, but that seems counterproductive: how would we know how such a limitation is harming the AI's capabilities?
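To make the distinction concrete, here is a minimal sketch (using made-up numbers, not real imaging data) of plain statistical anomaly detection. It can flag an unusual value, but nothing in it bridges the gap from "this value is unusual" to "this is cancer"; that leap belongs to a separate conceptual model, which can be wrong without the detection step ever noticing.

```python
import numpy as np

# Hypothetical data: a flat array standing in for pixel intensities in a scan.
rng = np.random.default_rng(0)
scan = rng.normal(loc=100.0, scale=5.0, size=1000)
scan[437] = 160.0  # one injected anomaly

# Pure anomaly detection: flag values far from the mean.
z_scores = (scan - scan.mean()) / scan.std()
anomalies = np.flatnonzero(np.abs(z_scores) > 4)

print("anomalous indices:", anomalies)  # should flag index 437
# The algorithm finds the outlier, but explaining it (tumour? artefact?
# sensor fault?) requires a conceptual model of the world that lives
# entirely outside this code.
```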