The word “movie” is not a probability itself, in the same way that the opinions people express are events, not probabilities.
With people’s opinions about what to do, there’s no reason you have to constrain what they say to statements like “I, Bob, assign 50% probability that plan A is best”. Even if you did, you would still have to treat that statement as evidence; it’s not as though you can use another agent’s probability estimate directly in some way that you can’t use other kinds of statements, because it isn’t your estimate. Bob might not even know probability theory.
If Bob says plan A is best, Linda and Alice say plan B is best, but Bob has scored better on calibration assessments (including past decisions) while Linda and Alice have a poor record, you would integrate all the evidence with factors like P(Bob=A | Best=A) = 0.8 and P(Alice=B | Best=A) = 0.4, and so on, to estimate P(Best=A | Bob=A, Alice=B, Linda=B).
Do you see what I mean? Bayes doesn’t become useless just because the environment is not composed of agents making explicit probability estimates.
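Concretely, here is a minimal sketch of that update (the 0.8 and 0.4 are the factors from the example above; every other number, and the naive assumption that votes are conditionally independent given which plan is best, are invented for illustration):

```python
# Minimal naive-Bayes aggregation of votes (illustrative numbers only).
PLANS = ("A", "B")
prior = {"A": 0.5, "B": 0.5}  # uniform prior over which plan is actually best

# likelihood[voter][best][vote] = P(voter votes `vote` | `best` is the best plan);
# in practice these would be estimated from calibration tests and past decisions
likelihood = {
    "Bob":   {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.2, "B": 0.8}},
    "Alice": {"A": {"A": 0.6, "B": 0.4}, "B": {"A": 0.5, "B": 0.5}},
    "Linda": {"A": {"A": 0.6, "B": 0.4}, "B": {"A": 0.5, "B": 0.5}},
}

votes = {"Bob": "A", "Alice": "B", "Linda": "B"}

# posterior(best) is proportional to prior(best) * product of P(vote | best)
posterior = {}
for best in PLANS:
    p = prior[best]
    for voter, vote in votes.items():
        p *= likelihood[voter][best][vote]
    posterior[best] = p

total = sum(posterior.values())
posterior = {plan: p / total for plan, p in posterior.items()}
print(posterior)  # roughly {'A': 0.72, 'B': 0.28}
```

With these made-up numbers, Bob’s single well-calibrated vote outweighs the two poorly calibrated ones: P(Best=A | Bob=A, Alice=B, Linda=B) ≈ 0.72.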
But you never measure Best=A; you just measure how A performed, and you possibly compare that with your expectations of how A would perform. The system as just described runs into all of the problems that plurality voting has: irrelevant alternatives aren’t independent, etc.
Bayes has a specific use: to maneuver through conditional probabilities. Once you have moved outside a tool’s domain, you should use it with caution.
Of course you don’t; that’s the hypothesis.
P(H|E) = P(E|H)*P(H)/P(E)
Do you see E = H anywhere? I don’t. E is the evidence, like, say, “Bob thinks plan A is best”. H is the hidden variable that we are trying to infer; in this case H is “Plan A is best”.
Like I said, I have not gone through the math. I am not proposing a concrete formulation; I am trying to explain the concept that you don’t need to actually observe probability estimates from other agents to reason about the world.
Conditional probabilities like P(Bob thinks A is best | A is best).
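Substituting those in (E = “Bob=A”, H = “Best=A”, nothing beyond the formula above):

P(Best=A | Bob=A) = P(Bob=A | Best=A) * P(Best=A) / P(Bob=A),

where P(Bob=A) = P(Bob=A | Best=A) * P(Best=A) + P(Bob=A | ~Best=A) * P(~Best=A). The evidence is Bob’s vote; the hypothesis is never observed, only given a prior and likelihoods.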
I’m done.
But to determine how much weight to give Bob as a judge for the second decision, you need to know whether or not Plan A was best for the first decision.
I agree that you don’t need to actually observe probability estimates from other agents to reason about the world. What I believe is that a Bayesian Judge is a tool that operates on probability estimates from other agents, and so if you want to reason this way, then you need this data.
You don’t need certainty. And you don’t necessarily need that particular evidence. It would still work if you used calibration tests to weight the judges.
The only evidence you really have access to from last time is who voted for what, and whether everyone thinks it was a good idea in hindsight. I think that would be enough.
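One way that evidence could feed the weights (purely a hypothetical sketch, not a worked-out proposal): score each voter’s past votes against the group’s hindsight judgment and keep a Beta posterior over their hit rate:

```python
# Hypothetical sketch: track each voter's hit rate with a Beta posterior,
# scored against the group's hindsight judgment of which plan was best.
from collections import defaultdict

calibration = defaultdict(lambda: [1, 1])  # Beta(1, 1) = uniform prior per voter

def record_decision(votes, hindsight_best):
    """votes: {voter: plan voted for};
    hindsight_best: the plan everyone later judged best (a noisy label)."""
    for voter, plan in votes.items():
        if plan == hindsight_best:
            calibration[voter][0] += 1  # hit
        else:
            calibration[voter][1] += 1  # miss

def hit_rate(voter):
    alpha, beta = calibration[voter]
    return alpha / (alpha + beta)  # posterior mean of the voter's hit rate

record_decision({"Bob": "A", "Alice": "B", "Linda": "B"}, hindsight_best="A")
print(hit_rate("Bob"), hit_rate("Alice"))  # ~0.667 vs ~0.333
```

The hindsight judgment stands in for ever observing Best=A directly, and the resulting hit rates could supply the likelihood factors in the earlier sketch.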
OK, we are talking about different things. I’m talking about using Bayesian methods to integrate evidence like votes, voting records, and hindsight estimations of optimality to determine the best probability distribution over which plan is best (or some other output).
I have no idea how this “Bayesian Judge” thing that uses probability estimates directly would even work.
Here’s an article on Bayesian aggregation of forecasts. Essentially, you look at past forecasts to get P(Bob: “rain”|rain) and P(Bob: “rain”|~rain). (You can just elicit those likelihoods from the experts, but if you want this to be a formula rather than a person, the numbers you use need to be the data you’re looking for, not just suggestive of the data you’re looking for.) From just that, you could calibrate Bob to find out what P(rain|Bob: “rain”) and P(rain|Bob: “~rain”) are. When you also have data on past predictions from Alice, Charlie, and David, you can combine them and get a more sophisticated estimate than any individual expert’s. It’s generally able to notice things like “when Alice and Bob agree, they’re both wrong,” which you couldn’t find by just computing individual calibrations.
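To make that concrete, here is a minimal sketch of the joint version (the forecast log is invented, and this is just the general shape rather than the article’s exact procedure): it estimates P(Alice’s call, Bob’s call | outcome) empirically, which is what lets it catch correlated errors like the agreement pattern above:

```python
# Joint Bayesian aggregation of two forecasters (invented data).
from collections import Counter

# Log of (Alice's call, Bob's call, actual outcome).
history = [
    ("rain", "rain", "dry"),   # when they agree, they tend to be wrong
    ("rain", "rain", "dry"),
    ("rain", "dry",  "rain"),
    ("dry",  "rain", "rain"),
    ("dry",  "dry",  "rain"),
    ("rain", "dry",  "rain"),
    ("dry",  "rain", "dry"),
    ("dry",  "dry",  "dry"),
]

OUTCOMES = ("rain", "dry")

def posterior(alice_says, bob_says):
    """P(outcome | both forecasts), from empirical joint likelihoods
    with add-one smoothing so unseen call-pairs don't zero out."""
    outcome_counts = Counter(o for _, _, o in history)
    joint_counts = Counter(history)
    n_pairs = len(OUTCOMES) ** 2  # possible (Alice, Bob) call-pairs
    scores = {}
    for o in OUTCOMES:
        prior = outcome_counts[o] / len(history)
        likelihood = (joint_counts[(alice_says, bob_says, o)] + 1) / (
            outcome_counts[o] + n_pairs
        )
        scores[o] = prior * likelihood
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items()}

print(posterior("rain", "rain"))  # {'rain': 0.25, 'dry': 0.75}
```

Calibrating each forecaster separately would treat the two “rain” calls as two independent pieces of evidence for rain; the joint likelihood flips that, so agreement here comes out as evidence against rain.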
That is, this thing you’ve been talking about is a procedure that’s already been worked out and that I’ve personally performed. It’s typically only done for forecasters of mutually exclusive possibilities and is inappropriate for decision-makers for reasons I’ve already mentioned.
Neat!