In my experience with Bayesian biostatisticians, they don’t talk much about the information a prior represents. But they’re also not just reaching for common defaults. They talk a lot about a prior’s “properties”—priors with “really nice properties”. As far as I can tell, they mean two things:
1. Computational properties.
2. The way the distribution shifts as you get evidence. They think about this in a lot of detail, and they like priors that lead to updating behavior they think is reasonable. (A sketch of both follows below.)
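To make both properties concrete: conjugate priors are the textbook case, since the posterior stays in the same family (cheap to compute) and the update rule is transparent enough to judge whether it behaves reasonably. Here’s a minimal sketch of Beta-Binomial updating; this is my own illustration, not anything the biostatisticians specifically said, and all the numbers are made up.

```python
# Sketch of why conjugate priors have "really nice properties":
# with a Beta prior on a binomial success rate, the posterior is again
# a Beta, so updating is just adding counts -- computationally trivial,
# and it's easy to watch how the distribution shifts as evidence arrives.
from scipy import stats

# Illustrative prior: Beta(2, 2), mildly favoring rates near 0.5.
a, b = 2.0, 2.0

# Observe batches of (successes, failures); the data here is invented.
for successes, failures in [(3, 7), (2, 8), (4, 6)]:
    a += successes  # conjugate update: just add the counts
    b += failures
    posterior = stats.beta(a, b)
    print(f"Beta({a:.0f}, {b:.0f}): mean={posterior.mean():.3f}, "
          f"95% interval=({posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f})")
```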
I think this amounts to the same thing as choosing a prior to represent your information. The way they think and infer about the problem is determined by their information. So, when they create a robot that thinks and infers in the same way, they are creating one with the same information as they have.
But, as a procedure for creating a prior that represents your information, it’s very different from Jaynes’s procedure: state your prior information very precisely, then derive the prior from its symmetries or from maximum entropy under constraints, as far as I understand him.
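As a toy instance of that second step (my example, not something from this thread): if the only stated information is that $x \ge 0$ with known mean $\mu$, maximizing the entropy $H[p] = -\int_0^\infty p(x)\log p(x)\,dx$ subject to normalization and the mean constraint gives, via Lagrange multipliers,

$$p(x) = \frac{1}{\mu}\,e^{-x/\mu}, \qquad x \ge 0,$$

i.e. the exponential distribution, which is Jaynes’s classic worked example of the procedure.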
I’m very happy you’re writing about logical uncertainty btw, it’s been on my mind a lot lately.