Suppose that at time t the world is in a state W_t, and that the agent may look at it and make an observation O_t. Objectively, the surprise of this observation would be S_obj = S(O_t | W_t) = -log Pr(O_t | W_t).
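For concreteness, here is a small numeric illustration of that definition (a sketch of mine, not from the original post; the helper name surprisal is just for illustration). A fair coin's outcome carries -log2(1/2) = 1 bit of surprisal, while an unlikely outcome of a loaded coin carries much more:

```python
import math

def surprisal(p: float) -> float:
    """Surprisal of an observation that had probability p, in bits."""
    return -math.log2(p)

# Fair coin: each outcome carries exactly 1 bit of surprisal.
print(surprisal(0.5))    # 1.0

# Coin loaded 99% toward heads: observing tails is very surprising.
print(surprisal(0.01))   # ~6.64 bits
```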
One note on philosophy of probability: if the world is in state W_t, what does it mean to say that an observation O_t has some probability given W_t? Surely all observations have probability 1 if the state of the world is exhaustively known.
Philosophically, yes.
Practically, it may be useful to distinguish between a coin and a toss. The coin has persistent features that keep it either fair or loaded over a long time, giving correlation between past and future. The toss is transient, and essentially all information about it is lost when I put the coin away—except through the memory of agents.
So yes, the toss is a feature of the present state of the world. But it has the very special property that, given the bias of the coin, the toss is independent of the past and the future. It’s sometimes more useful to treat a feature like that as an observation external to the world, but of course it “really” isn’t.
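A minimal simulation of that picture (the names and structure here are my own, purely for illustration): the coin's bias is a persistent latent variable, and each toss is drawn independently given that bias, so conditioning on the bias screens any toss off from every other toss.

```python
import random

def make_coin(bias: float):
    """The coin: a persistent feature (its bias) that correlates past and future tosses."""
    def toss() -> str:
        # The toss: transient; given the bias, it is independent of all other tosses.
        return "H" if random.random() < bias else "T"
    return toss

toss = make_coin(bias=0.9)            # a loaded coin
history = [toss() for _ in range(10)]
print(history)
# Marginally, tosses are correlated through the unknown bias;
# conditionally on the bias (0.9 here), they are i.i.d.
```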
Read Eliezer’s http://lesswrong.com/lw/oj/probability_is_in_the_mind/ , I think it will answer your questions on this topic.
Thanks Sengachi. Actually I agree with that article, which is exactly why I mentioned the above. For an epistemically limited agent, it makes sense to talk about probabilities, because they are a measure of degree of belief.
But “S_obj” is supposed to be a measure of the “objective” surprisal you gain by making an observation O_t given that you already know everything about the world’s current state. And it references probabilities. This, I suggest, makes no sense. Probability is only relevant for epistemically limited agents; if you are the Laplacian demon, you have no need to talk about probabilities at all; the past, present and future states of the world are a certainty for you.
Maybe it would be more accurate to replace S_obj with S_someone_who_already_knows_what_you_wish_to_know.
A human-like agent who knows which coin is fair and which is loaded (and is considering things at the level of coins and tosses) will have subjective probabilities like the ‘objective’ probabilities described in the article, while a more Laplacian demon-like agent who knows the positions and velocities of the atoms that compose the coins (and the air, and the arm of the tosser, etc.) will have subjective probabilities for outcomes corresponding to ‘heads’ and ‘tails’ that vary from toss to toss, and are much closer to 0 or 1 each time.
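To put rough numbers on that (a toy sketch of my own, with the demon's microstate knowledge crudely stood in for by a fixed 0.999 probability for the actual outcome each toss): the coarse agent pays a full bit of surprisal per toss of a fair coin, while the near-demonic agent pays almost nothing.

```python
import math

N = 1000  # tosses of a fair coin

# Coarse agent: models only the coin, so Pr(actual outcome) = 0.5 every toss.
coarse_bits_per_toss = -math.log2(0.5)

# Near-demonic agent: tracks (a crude stand-in for) the microstate, so its
# probability for the actual outcome is ~0.999 on each toss.
fine_bits_per_toss = -math.log2(0.999)

print(coarse_bits_per_toss)       # 1.0 bit of surprisal per toss
print(fine_bits_per_toss)         # ~0.0014 bits per toss
print(coarse_bits_per_toss * N)   # 1000 bits over the run
print(fine_bits_per_toss * N)     # ~1.44 bits over the run
```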
So then your over-surprise is relative to your ontology? Not sure what that implies for agents who can change their ontology...