Read Eliezer’s http://lesswrong.com/lw/oj/probability_is_in_the_mind/; I think it will answer your questions on this topic.
Thanks Sengachi. Actually I agree with that article, which is exactly why I mentioned the above. For an epistemically limited agent, it makes sense to talk about probabilities, because they are a measure of degree of belief.
But “S_obj” is supposed to be a measure of the “objective” surprisal you gain by making an observation O_t given that you already know everything about the world’s current state. And it references probabilities. This, I suggest, makes no sense. Probability is only relevant for epistemically limited agents; if you are the Laplacian demon, you have no need to talk about probabilities at all; the past, present and future states of the world are a certainty for you.
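For reference, I’m reading the article’s surprisal as the standard information-theoretic quantity (my paraphrase, not a quote from the article):

$$ S(O_t) = -\log_2 P(O_t \mid \text{what the agent already knows}) $$

which makes it explicit that the number depends on whose probability distribution P we are talking about.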
Maybe it would be more accurate to replace S_obj with S_someone_who_already_knows_what_you_wish_to_know.
A human-like agent who knows which coin is fair and which is loaded (and is considering things at the level of coins and tosses) will have subjective probabilities like the ‘objective’ probabilities described in the article. A more Laplacian-demon-like agent who knows the positions and velocities of the atoms that compose the coins (and the air, and the arm of the tosser, etc.) will instead have subjective probabilities for outcomes corresponding to ‘heads’ and ‘tails’ that vary from toss to toss and are much closer to 0 or 1 each time.
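For concreteness, here is a minimal Python sketch of that comparison. It is my own illustration, not something from the article: it assumes surprisal is -log2 of the probability the agent assigns, and the 0.999 figure is just a stand-in for “much closer to 1”.

```python
import math

def surprisal_bits(p: float) -> float:
    """Surprisal (in bits) of an outcome the agent assigned probability p."""
    return -math.log2(p)

# Agent reasoning at the level of coins and tosses:
# a fair coin gets p = 0.5 for heads on every toss.
coin_level_p_heads = 0.5

# A more Laplacian-demon-like agent tracking atoms, air currents, and the
# tosser's arm assigns a toss-specific probability near 0 or 1
# (0.999 here is purely illustrative).
demon_level_p_heads = 0.999

print(surprisal_bits(coin_level_p_heads))   # 1.0 bit per toss
print(surprisal_bits(demon_level_p_heads))  # ~0.0014 bits on a toss it expected
```

On this reading the same physical toss carries 1 bit of surprisal for the coin-level agent and almost none for the near-demon, which is just another way of saying the “objective” surprisal is indexed to a state of knowledge.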
So then your over-surprise is relative to your ontology? Not sure what that implies for agents who can change their ontology...