Maybe it would be more accurate to replace S_obj with S_someone_who_already_knows_what_you_wish_to_know.
A human-like agent who knows which coin is fair and which is loaded (and is considering things at the level of coins and tosses) will have subjective probabilities like the ‘objective’ probabilities described in the article. By contrast, a more Laplacian demon-like agent who knows the positions and velocities of the atoms that compose the coins (and the air, and the arm of the tosser, etc.) will have subjective probabilities for outcomes corresponding to ‘heads’ and ‘tails’ that vary from toss to toss, and are much closer to 0 or 1 each time.
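The contrast in expected surprise can be made concrete with surprisal (self-information, −log₂ p). A minimal sketch in Python, where the 0.99 assigned by the demon-like agent is an illustrative assumption, not a figure from the discussion:

```python
import math

def surprisal_bits(p: float) -> float:
    """Surprisal (self-information) of an event with probability p, in bits."""
    return -math.log2(p)

# A human-level agent who knows the coin is fair assigns
# P(heads) = 0.5 on every toss.
human_surprise = surprisal_bits(0.5)   # 1 bit of surprise per toss

# A Laplacian demon-like agent tracking atoms might assign, say,
# P(heads) = 0.99 on a toss it predicts will land heads
# (0.99 is a made-up number for illustration).
demon_surprise = surprisal_bits(0.99)  # far less than 1 bit

print(f"human agent: {human_surprise:.4f} bits")
print(f"demon agent: {demon_surprise:.4f} bits")
```

On the tosses it predicts correctly, the demon-like agent is barely surprised at all; the human-level agent is maximally surprised every time, because at its ontology's level of description no better prediction is available.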
So then your over-surprise is relative to your ontology? Not sure what that implies for agents who can change their ontology...