As implications of magic collide with observations indicating an ordinary wire, the AI may infer things that are insanely skewed. Where in the belief network these collisions happen could depend on the particulars of the algorithm involved and the shape of the belief network, so it would probably be very unpredictable.
If the AI ever treats anything as probability 1, it is broken.
Even the results of addition. An AI ought to assume a nonzero probability that data gets corrupted moving from one part of its brain to another.
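As a minimal sketch of why probability 1 is broken (toy numbers and names of my own choosing, not anything from the thread): under Bayes' rule, a hypothesis assigned prior probability exactly 1 can never be revised downward, no matter how strongly the evidence contradicts it.

```python
# Toy sketch: a prior of exactly 1 is unrecoverable under Bayes' rule.
# Hypothesis names and probabilities here are illustrative only.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) given P(H), P(E | H), and P(E | not H)."""
    p_evidence = (prior * p_evidence_if_true
                  + (1 - prior) * p_evidence_if_false)
    return prior * p_evidence_if_true / p_evidence

# Hypothesis "the wire is magic" vs. repeated observations of an ordinary wire.
for prior in (0.999999, 1.0):
    p = prior
    for _ in range(20):  # 20 observations that strongly favor "ordinary wire"
        p = bayes_update(p, p_evidence_if_true=0.01, p_evidence_if_false=0.99)
    print(f"prior {prior} -> posterior {p}")

# The 0.999999 prior collapses toward 0 under the contradicting evidence;
# the 1.0 prior stays exactly 1.0 forever, because (1 - prior) zeroes out
# the alternative hypothesis in the denominator.
```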
I agree. The AI + Fuse System is a deliberately broken AI. In general, such an AI will perform suboptimally compared to the AI alone.
If the AI under consideration has a problematic goal, though, we actually want it to act suboptimally with regard to its goals.
I think it’s broken worse than that. A false belief held with certainty allows for something like the principle of explosion.
http://en.wikipedia.org/wiki/Principle_of_explosion
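For concreteness, here is a one-line sketch in Lean (my choice of formalism, not one named in the thread) of ex falso quodlibet: from a contradiction, any proposition at all follows.

```lean
-- Ex falso quodlibet: given a proof of False, any proposition P follows.
example (P : Prop) (h : False) : P :=
  False.elim h
```

So an agent that derives a contradiction from a certain-but-false belief can, in principle, conclude anything.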