> I can, and you can, but imagine that we’re trying to program a robot to make decisions in our place, and we can’t trust the robot to have our intuition.
A robot would always be able to tell if it’s an alarm it triggered. Humans are the ones who are bad at that. Did you actually decide to smoke because EDT is broken, or are you just justifying it that way while actually doing it because you have the smoking lesion?
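For concreteness, here’s a toy version of the lesion setup in Python. All of the probabilities are invented; only the causal structure matters: the lesion drives both smoking and cancer, so smoking is evidence of cancer even though it causes nothing.

```python
# Toy smoking-lesion numbers -- all invented, only the structure matters.
# The lesion causes both smoking and cancer; smoking itself causes nothing.
P_LESION = 0.2
P_SMOKE_GIVEN_LESION = 0.8
P_SMOKE_GIVEN_NO_LESION = 0.2
P_CANCER_GIVEN_LESION = 0.9
P_CANCER_GIVEN_NO_LESION = 0.01

def p_cancer_given_smoke(smoke: bool) -> float:
    """P(cancer | smoking status), marginalizing over the hidden lesion."""
    num = den = 0.0
    for lesion in (True, False):
        p_lesion = P_LESION if lesion else 1 - P_LESION
        p_s = P_SMOKE_GIVEN_LESION if lesion else P_SMOKE_GIVEN_NO_LESION
        p_smoke = p_s if smoke else 1 - p_s
        p_cancer = P_CANCER_GIVEN_LESION if lesion else P_CANCER_GIVEN_NO_LESION
        num += p_lesion * p_smoke * p_cancer
        den += p_lesion * p_smoke
    return num / den

print(p_cancer_given_smoke(True))   # ~0.46: smoking is strong evidence of the lesion
print(p_cancer_given_smoke(False))  # ~0.06: so it's bad news about cancer, causally or not
```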
> If we program it to choose actions which maximize P(O|a) in the two-node system, it’ll shut off the alarm in the hopes that it will make a fire less likely.
Once it knows its own sensor readings, whether or not it triggered the alarm is no further evidence for or against a fire.
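Here’s a minimal sketch of both halves of that, with all of the numbers (`P_FIRE`, the sensor error rates) invented for illustration and the robot modeled as a Fire → Sensor → Alarm chain, where the policy maps the sensor reading to an alarm state:

```python
# Fire -> Sensor -> Alarm chain with invented numbers. By default the
# robot's policy sounds the alarm exactly when the sensor trips.
P_FIRE = 0.01
P_SENSOR_GIVEN_FIRE = 0.95
P_SENSOR_GIVEN_NO_FIRE = 0.05

def joint(fire, sensor, alarm, policy):
    """Joint probability of one full assignment under the given policy."""
    p = P_FIRE if fire else 1 - P_FIRE
    p_s = P_SENSOR_GIVEN_FIRE if fire else P_SENSOR_GIVEN_NO_FIRE
    p *= p_s if sensor else 1 - p_s
    # The alarm is a deterministic function of the sensor reading.
    return p if alarm == policy(sensor) else 0.0

def p_fire_given(alarm=None, sensor=None, policy=lambda s: s):
    """P(fire | evidence), by brute-force enumeration of the joint."""
    num = den = 0.0
    for f in (True, False):
        for s in (True, False):
            for a in (True, False):
                if alarm is not None and a != alarm:
                    continue
                if sensor is not None and s != sensor:
                    continue
                p = joint(f, s, a, policy)
                den += p
                num += p if f else 0.0
    return num / den

# Conditioning on the alarm alone: "alarm off" looks like great news about
# fire -- the naive-EDT temptation to shut the alarm off.
print(p_fire_given(alarm=True))               # ~0.16
print(p_fire_given(alarm=False))              # ~0.0005

# Conditioning on the sensor reading as well: the alarm adds nothing.
print(p_fire_given(sensor=True))              # ~0.16
print(p_fire_given(sensor=True, alarm=True))  # same number: screened off
```

The last two prints come out identical: given the sensor reading, the alarm is a deterministic function of what the robot already knows, so its state carries no further evidence about the fire.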