In modeling Bayesians (not described here), I have the problem that saying “I assign this problem probability .5 of being true” really means “I have no information about this problem.”
My original model treated that p=.5 as an estimate, so a group of Bayesians who all assign p=.5 to a problem end up respecting each other more, instead of discounting their own opinions because none of them has any information about the problem.
I’m reformulating it to weight opinions according to the amount of information they claim to have. But what’s the right way to do that?
Use a log-based unit, like bits or decibels.
Yes; but then how to work that into the scheme to produce a probability?
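To make the unit suggestion concrete (a minimal sketch, not a full answer to the question above): a probability and an amount of evidence in bits are interconvertible via the log-odds, and p = .5 corresponds to exactly zero bits, which fits the original worry. Decibels would just use 10·log10 of the same odds ratio.

```python
import math

def prob_to_bits(p):
    """Evidence in bits: log2 of the odds p / (1 - p).  p = .5 maps to 0 bits."""
    return math.log2(p / (1 - p))

def bits_to_prob(bits):
    """Invert the conversion: recover a probability from evidence in bits."""
    odds = 2.0 ** bits
    return odds / (1.0 + odds)

print(prob_to_bits(0.5))                # 0.0   -- complete ignorance
print(prob_to_bits(0.9))                # ~3.17 bits in favor
print(bits_to_prob(prob_to_bits(0.9)))  # 0.9, round trip
```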
I deleted the original comment because I realized that the equations given already give zero weight to an agent who assigns a problem a belief value of .5. That’s because a p=.5 report just multiplies both m0 and m1 by .5, scaling both sides equally and changing nothing.
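A quick check of that observation (treating m0 and m1 as the unnormalized weights for “false” and “true”, and assuming the combination is the straightforward multiplicative pooling the comment describes):

```python
def pool(beliefs):
    """Combine the agents' reported probabilities by multiplying each one's
    weight for 'true' into m1 and for 'false' into m0, then normalizing.
    A p = .5 report scales m0 and m1 equally, so it cancels and has no effect."""
    m0, m1 = 1.0, 1.0
    for p in beliefs:
        m1 *= p
        m0 *= 1.0 - p
    return m1 / (m0 + m1)

print(pool([0.8]))       # 0.8
print(pool([0.8, 0.5]))  # 0.8 -- the p = .5 agent contributes zero weight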
I do wonder, though, whether you should have some way of distinguishing someone who assigns a probability of .5 out of complete ignorance from someone who assigns a probability of .5 because of massive amounts of relevant evidence that just happens to balance out. But then, you’ll observe the ignorant fellow updating significantly more than the well-informed fellow on a piece of evidence, and you can use that to gauge the strength of their convictions.
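One standard way to picture that difference (purely illustrative, not part of the model above): give each agent a Beta prior over the underlying frequency, Beta(1, 1) for the ignorant one and Beta(500, 500) for the one whose evidence balances out. Both report p = .5, but a single observation moves them very differently.

```python
def posterior_mean(heads, tails, observed_heads):
    """Posterior mean of a Beta(heads, tails) prior after one coin flip:
    bump whichever count matches the observation."""
    if observed_heads:
        heads += 1
    else:
        tails += 1
    return heads / (heads + tails)

# Both agents report p = .5 before seeing anything.
print(posterior_mean(1, 1, True))      # 0.667  -- the ignorant agent swings hard
print(posterior_mean(500, 500, True))  # 0.5005 -- the well-informed agent barely moves
```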
I’ve thought about that. You could use a possible-worlds model, where the ignorant person allows all worlds, and the other person has a restricted set of possible worlds within which p is still .5. If updating then means restricting possible worlds, it should work out right in both cases.
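A toy version of that picture (entirely illustrative: the two-flip coin worlds and the 3/4 bias are assumptions of mine, not part of the model): the informed agent’s restricted world-set still gives p = .5 for the second flip, and restricting worlds on an observation moves the ignorant agent but leaves the informed one alone, as hoped.

```python
def coin_worlds(heads_weight, denom=4):
    """All possible worlds for two flips of a coin with P(heads) = heads_weight/denom,
    represented as an unweighted list of (flip1, flip2) outcomes with multiplicity."""
    worlds = []
    for f1 in ("H", "T"):
        for f2 in ("H", "T"):
            w1 = heads_weight if f1 == "H" else denom - heads_weight
            w2 = heads_weight if f2 == "H" else denom - heads_weight
            worlds += [(f1, f2)] * (w1 * w2)
    return worlds

def prob(worlds, pred):
    """Probability = fraction of currently allowed worlds satisfying pred."""
    return sum(pred(w) for w in worlds) / len(worlds)

def restrict(worlds, pred):
    """Updating = throwing out the worlds inconsistent with the observation."""
    return [w for w in worlds if pred(w)]

# The informed agent knows the coin is fair; the ignorant agent also allows a
# heads-biased (3/4) and a tails-biased (1/4) coin, in equal proportion.
informed = coin_worlds(2)
ignorant = coin_worlds(2) + coin_worlds(3) + coin_worlds(1)

second_is_heads = lambda w: w[1] == "H"
first_is_heads = lambda w: w[0] == "H"

# Both assign .5 to "the second flip is heads" before seeing anything.
print(prob(informed, second_is_heads), prob(ignorant, second_is_heads))  # 0.5 0.5

# Observe the first flip coming up heads: restrict to the consistent worlds.
print(prob(restrict(informed, first_is_heads), second_is_heads))  # 0.5    -- stays put
print(prob(restrict(ignorant, first_is_heads), second_is_heads))  # ~0.583 -- moves
```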