If the AI is able to question the fact that the red wire is magical, then the prior was less than 1.
It should still be able to reason about hypothetical worlds where the red wire is just an ordinary copper thingy, but it will always know that those hypothetical worlds are not our world, because in our world the red wire is magical.
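A quick sketch of why Bayes' rule forces this (my illustration, not from the thread): with a prior of exactly 1, the posterior stays at 1 no matter how strong the counter-evidence, while any prior short of 1 can still be moved. The numbers below are made up for illustration.

```python
# Minimal sketch of a single Bayesian update for a binary hypothesis H
# ("the red wire is magical"). All probabilities here are illustrative.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule for a binary hypothesis."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Evidence E that is 1000x more likely if the wire is ordinary:
print(bayes_update(0.999, 0.001, 1.0))  # ~0.5 -- near-certainty collapses
print(bayes_update(1.0,   0.001, 1.0))  # 1.0 -- a prior of 1 never moves
```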
As long as the superstitious knowledge is very specialized, like being about one specific red wire, I would hope that the AI can act quite reasonably whenever that specific red wire is not somehow part of the situation.
As implications of magic collide with observations indicating an ordinary wire, the AI may infer things that are insanely skewed. Where in the belief network these collisions happen could depend on the particulars of the algorithm involved and on the shape of the belief network, so it would probably be very unpredictable.
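To make the collision concrete (a toy example of mine, not a claim about any particular architecture): if the probability-1 magic belief says some observation is impossible, and that observation arrives anyway, the Bayesian normalizer is zero and the update is simply undefined. Whatever a real implementation does at that point is implementation-defined, which is exactly the unpredictability above.

```python
# Toy example: the certain belief assigns likelihood 0 to the observation
# "the wire measures like plain copper", but the observation happens anyway.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    num = p_obs_given_h * prior
    den = num + p_obs_given_not_h * (1.0 - prior)
    return num / den  # den == 0 when prior == 1 and p_obs_given_h == 0

try:
    bayes_update(1.0, 0.0, 1.0)
except ZeroDivisionError:
    # The math gives 0/0; whatever inference happens from here on
    # depends entirely on the implementation.
    print("update undefined: the model gave probability 0 to reality")
```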
I’m not sure whether it’s Bayes or some other aspect of rationality, but wouldn’t a reasonably capable AI be checking on the sources of its beliefs?
If the AI ever treats anything as probability 1, it is broken.
Even the results of addition. An AI ought to assume a nonzero probability that data gets corrupted moving from one part of its brain to another.
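One way to cash this out (my sketch under that assumption, with a made-up corruption rate): mix every stored belief with a small chance that the value carrying it was flipped in transit, so nothing ever sits at exactly 0 or 1.

```python
# Hypothetical hedge against internal corruption: EPS is an assumed,
# made-up per-transfer corruption probability, not any real system's value.
EPS = 1e-12

def after_transfer(p, eps=EPS):
    """Belief in X after allowing that the stored value flipped with prob eps."""
    return p * (1.0 - eps) + (1.0 - p) * eps

print(after_transfer(1.0))  # 0.999999999999 -- certainty decays to near-certainty
```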
I agree. The AI + Fuse System is a deliberately broken AI. In general, such an AI will perform suboptimally compared to the AI alone.
If the AI under consideration has a problematic goal, though, we actually want the AI to act suboptimally with regard to its goals.
I think it’s broken worse than that. A false belief held with certainty allows for something like the principle of explosion.
http://en.wikipedia.org/wiki/Principle_of_explosion
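Formally, the point is tiny: from a contradiction, any proposition at all is derivable. A minimal Lean 4 sketch of this (my illustration, not from the thread):

```lean
-- From a proposition and its negation, anything follows.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```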