If you had a precise definition of “effectiveness” this shouldn’t be a problem.
Coming up with a precise definition is difficult, especially if you want multiple groups to agree. Those specific questions are relatively low-level; I think we should ask a bunch of questions like that, but I think we may also want some vaguer questions as well.
For example, say I wanted to know how good/enjoyable a specific movie would be. Predicting the ratings according to movie reviewers (evaluators) is an approach I’d regard as reasonable. I’m not sure what a precise definition of movie quality would look like (though I would be interested in proposals), but I’m generally happy enough with movie reviews for what I’m looking for.
“How much value has this organization created?”
Agreed that that itself isn’t a forecast; I meant the more general case, for questions like “How much value will this organization create next year?” (as you pointed out). I probably should have used that more specific example; apologies.
And, although clearly defining value can be tedious (and prone to errors), I don’t think that problem can be avoided.
Can you be more explicit about your definition of “clearly”? I’d imagine that almost any proposed value function would have some vagueness. Certificates of Impact get around this by just leaving that to the review of some eventual judges, which is similar to what I’m proposing.
Why would you do that? What’s wrong with the usual prediction markets?
The goal of this research isn’t to fix something about prediction markets, but to find more useful things for them to predict. If we had expert panels that agreed to evaluate things in the future (for instance, a panel responsible for deciding on the “value organization X has created” in 2025), then prediction markets and similar mechanisms could predict what they will say.
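To make that concrete, here is a minimal sketch (my own illustration, not part of the proposal itself) of how forecasts could be scored against a panel’s eventual verdict; the question wording, the 0–10 scale, and the names are all assumptions:

```python
# Illustrative sketch only: forecasters predict the score an expert panel
# will eventually assign, and are graded against the panel's actual verdict
# once it resolves. The question, scale, and names are hypothetical.
from dataclasses import dataclass


@dataclass
class PanelQuestion:
    text: str          # e.g. "Value organization X created in 2025, per the panel (0-10)"
    resolves_in: int   # year the panel publishes its evaluation


def score_forecast(prediction: float, panel_verdict: float) -> float:
    """Squared-error loss against the panel's verdict: lower is better."""
    return (prediction - panel_verdict) ** 2


question = PanelQuestion(
    "Value organization X created in 2025, as judged by the panel (0-10)", 2025
)
forecasts = {"alice": 7.2, "bob": 4.5}

# Later, once the panel publishes its evaluation:
panel_verdict = 6.0
scores = {name: score_forecast(p, panel_verdict) for name, p in forecasts.items()}
print(scores)  # {'alice': 1.44..., 'bob': 2.25}
```

The key point is just that the question resolves to whatever the panel says, rather than to some independently precise definition of value.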
For example, say I wanted to know how good/enjoyable a specific movie would be.
My point is that “goodness” is not a thing in the territory. At best it is a label for a set of specific measures (ratings, revenue, awards, etc.). In that case, why not just work with those specific measures? Vague questions have the benefit of being short and easy to remember, but beyond that I see only problems. Motivated agents will do their best to interpret the vagueness in a way that suits them.
Is your goal to find a method to generate specific interpretations and procedures of measurement for vague properties like this one? Like a Schelling point for formalizing language? Why do you feel that can be done in a useful way? I’m asking for an intuition pump.
Can you be more explicit about your definition of “clearly”?
Certainly there is some vagueness, but it seems that we manage to live with it. I’m not proposing anything that prediction markets aren’t already doing.
Hm… At this point I don’t feel like I have a good sense of what you find intuitive. I could give more examples, but I don’t expect they would convince you much right now if the others haven’t helped.
I plan to write more about this eventually, and hopefully we will have working examples up before too long (where people are predicting things). Things should make more sense to you then.
Short back-and-forth comments are a pretty messy communication medium for this kind of work.