By “bead jar guess” I mean a wild, nearly-groundless assignment of a probability to a proposition. This is as opposed to a solidly backed up estimate based on something like well-controlled sample data, or a guess made with an appeal to an inelegant but often-effective hack like the availability heuristic.
Groundless or not, if you propose to run two experiments X and Y, select outcomes x of experiment X and y of experiment Y before running the experiments, and assign x and y the same probability, you have to be equally surprised by x occurring as by y occurring, or I’m missing something deep about what you’re saying about probabilities. Are you using the word “probability” in a different sense than Jaynes?
I haven’t read Jaynes’s work on the subject, so I couldn’t say. However, if he thinks that equal probabilities mean equal obligation to be surprised, I disagree with him. It’s easy to do things that are spectacularly unlikely—flip through a shuffled deck of cards to see a given sequence, for instance—that do not, and should not, surprise you at all.
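The card-deck point is easy to make quantitative: every particular ordering of a shuffled 52-card deck has probability 1/52!, yet flipping through the deck always reveals *some* ordering. A one-liner (Python, just for illustration) shows how tiny that probability is:

```python
import math

# Probability of any one specific ordering of a shuffled 52-card deck.
p = 1 / math.factorial(52)
print(p)  # ~1.24e-68: astronomically unlikely, yet some ordering always occurs
```

So low probability alone can’t be what licenses surprise; you observe an event of probability ~10^-68 every time you shuffle.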
“Surprise”, as I understand it, is something rational agents experience when an observation disconfirms the hypothesis they currently believe in relative to the hypothesis that “something is going on”, or the set of unknown unknowns.
If you generate ten digits 0-9 from a process you think is random, and it comes up 5285590861, that is no reason to be surprised, because the sequence is algorithmically complex, and the hypothesis that “something is going on” assigns it a conditional probability no higher than the hypothesis that the process is random. But if it comes up 1212121212, that is reason to be surprised, because the sequence is algorithmically simple, so the hypothesis that “something is going on” assigns it higher conditional probability than the hypothesis that the process is random. The surprised agent is then justified in sitting up and expending resources trying to gather more info.
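As a toy illustration of that asymmetry: compressed length is a crude, computable stand-in for algorithmic complexity (this is only a proxy, and the exact byte counts depend on the compressor; all that matters is the comparison):

```python
import zlib

def compressed_size(s: str) -> int:
    """Bytes after zlib compression: a rough proxy for how short a
    'description' (program) of the sequence can be."""
    return len(zlib.compress(s.encode()))

# The two draws from the example above.
random_looking = "5285590861"
patterned = "1212121212"

# The patterned sequence has a shorter description, so simple
# "something is going on" hypotheses (a stuck or rigged generator)
# assign it far more probability than they assign the random-looking one.
print(compressed_size(random_looking), compressed_size(patterned))
```

The compressor finds the repetition in 1212121212 and not in 5285590861, mirroring why only the former should raise your credence that the generator isn’t random.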
I haven’t read Jaynes’s work on the subject, so I couldn’t say.
Point your browser at Amazon.
Order ETJ’s book.
Wait approximately one week for delivery.
Read it.
I don’t mean to sound gushing, but Jaynes’s writing on probability theory is the clearest, most grounded, and most entertaining material you will ever read on the subject. Even better than that weird AI dude. Seriously, it’s like trying to discuss the apocalypse without reading Revelation...
I haven’t read Jaynes’s work on the subject, so I couldn’t say. However, if he thinks that equal probabilities mean equal obligation to be surprised, I disagree with him.
I tend to agree. If I discovered that Jaynes had said such a thing I would be very surprised indeed. I’ll be surprised when the probability of seeing something with that probability or less is itself low.
That’s because you didn’t specify the sequence ahead of time, right?
Writing down a sequence ahead of time makes it more interesting when it turns up, not more unlikely. Given the possibility of cheating, it might make it more likely.