I think every AI will need to learn from its environment. Thus it will need to update its current beliefs based upon new information from sensors.
It might conduct an experiment to check whether transmutation at a distance is possible—and find that transmutation at a distance could never be produced.
As the probability of transmutation of human DNA into fluorine is 1, this leaves some other options, like:
- the sensor readings are wrong
- the experimental setup is wrong
- it only works in the special case of the red wire
After sufficiently many experiments, the last case will have very high probability.
Which makes me think that maybe, faith is just a numerical inaccuracy.
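A rough numerical sketch of that update (the priors and per-experiment likelihoods below are made-up illustration values, not anything from the original setup):

```python
# Competing explanations for a null result, given the hardwired certainty
# that transmutation is possible. All numbers here are invented purely to
# illustrate why the "special case" hypothesis ends up dominating.
priors = {
    "sensor readings are wrong":   0.45,
    "experimental setup is wrong": 0.45,
    "only works for the red wire": 0.10,
}

# Assumed probability of observing "no transmutation" in a single
# experiment under each explanation:
likelihood_null = {
    "sensor readings are wrong":   0.1,   # sensors would usually catch it
    "experimental setup is wrong": 0.5,   # setup only sometimes broken
    "only works for the red wire": 1.0,   # explains every null result
}

def posterior_after(n_experiments):
    """Bayes update after n independent null results."""
    unnormalised = {h: p * likelihood_null[h] ** n_experiments
                    for h, p in priors.items()}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

print(posterior_after(20))
# -> the "red wire" explanation carries essentially all the probability mass
```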
I’m not sure whether it’s Bayes or some other aspect of rationality, but wouldn’t a reasonably capable AI be checking on the sources of its beliefs?
If the AI is able to question the fact that the red wire is magical, then the prior was less than 1.
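In Bayesian terms this is just the observation that conditioning cannot move a prior of exactly 1 (assuming the evidence has nonzero likelihood under the certain hypothesis):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H)}{P(E \mid H) + 0} = 1 \quad \text{when } P(H) = 1.$$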
It should still be able to reason about hypothetical worlds where the red wire is just a usual copper thingy, but it will always know that those hypothetical worlds are not our world. Because in our world, the red wire is magical.
As long as the superstitious knowledge is very specialized, like being about one specific red wire, I would hope that the AI can act quite reasonably whenever that red wire is not somehow part of the situation.
If the AI ever treats anything as probability 1, it is broken.
Even the results of addition. An AI ought to assume a nonzero probability that data gets corrupted moving from one part of its brain to another.
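A minimal sketch of what that could look like, with a made-up per-operation corruption rate:

```python
# Hypothetical sketch: results that passed through internal computation are
# never held with probability exactly 1. EPSILON is an invented figure for
# the chance that a single internal operation or transfer is corrupted.
EPSILON = 1e-12

def confidence_after(n_operations: int) -> float:
    """Probability that none of n_operations internal steps was corrupted."""
    return (1.0 - EPSILON) ** n_operations

# Even a simple addition that involved a billion internal steps is believed
# with probability strictly less than 1:
print(confidence_after(10**9))  # ~0.999, never exactly 1.0
```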
I agree. The AI + Fuse System is a deliberately broken AI. In general such an AI will perform suboptimally compared to the AI alone.
If the AI under consideration has a problematic goal though, we actually want the AI to act suboptimally with regard to its goals.
I think it’s broken worse than that. A false belief with certainty will allow for something like the explosion principle.
http://en.wikipedia.org/wiki/Principle_of_explosion
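For reference, the classical derivation is only two steps, for an arbitrary proposition $Q$:

$$\frac{P}{P \lor Q}\ (\lor\text{-introduction}) \qquad\qquad \frac{P \lor Q \qquad \neg P}{Q}\ (\text{disjunctive syllogism})$$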
As implications of magic collide with observations indicating an ordinary wire, the AI may infer things that are insanely skewed. Where in the belief network these collisions happen could depend on the particulars of the algorithm involved and the shape of the belief network, so it would probably be very unpredictable.