Thanks for the video! I had already skimmed this post when I noticed it, and then I watched it and reread the post. Perhaps my favourite thing about it was that it was slightly non-linear (skipping ahead to the diagram, and covering the sections out of order).
Could you say a bit more about your worries with (scaling) prediction markets?
Do you have any thoughts about which experiments have the best expected information value per $?
I’m not too optimistic about traditional prediction markets; I have feelings similar to Zvi’s. I haven’t seen prediction markets be well subsidized for even a few dozen useful variables; in prediction-augmented evaluation systems they would have to work for thousands of variables or more. They seem like more overhead per variable than simply stating one’s probability and moving on.
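To make the “overhead per variable” point a bit more concrete, here is a toy sketch (my own illustration, not anything from the post; the liquidity parameter and question count are made-up numbers) comparing the worst-case subsidy a Hanson-style LMSR market maker needs per binary question against simply recording a stated probability and log-scoring it at resolution:

```python
import math

def lmsr_max_subsidy(n_outcomes: int, b: float) -> float:
    """Worst-case loss (i.e. the subsidy required) for an LMSR market maker."""
    return b * math.log(n_outcomes)

def log_score(stated_prob: float, outcome_happened: bool) -> float:
    """Score a directly stated probability at resolution time; no market needed."""
    return math.log(stated_prob if outcome_happened else 1.0 - stated_prob)

b = 100.0                              # liquidity parameter (assumed value)
per_question = lmsr_max_subsidy(2, b)  # ~69.3 units per binary question
print(f"Subsidy to back 5,000 binary questions: {5000 * per_question:,.0f} units")

print(log_score(0.8, True))   # ~-0.22: forecaster stated 0.8, it happened
print(log_score(0.8, False))  # ~-1.61: forecaster stated 0.8, it didn't
```

The contrast is only meant to gesture at the cost structure: a subsidized market needs capital and attention per question, while stated probabilities mainly need a resolution and a scoring rule.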
My next step is just messing around a lot with my own prediction application and seeing what seems to work. I plan to gradually invite people, but let them mostly do their own testing. At this point, I want to get an intuitive idea of what seems useful, similar to my experiences making other experimental applications. I’m really not sure what ideas I may come up with after more experimentation.
That said, I am particularly excited about estimating expected values of things, but realize I may not be able to make all of these public, or may have to keep things very apolitical. I expect it to be really easy to anger people if estimates that are actually important are public.
https://www.lesswrong.com/posts/a4jRN9nbD79PAhWTB/prediction-markets-when-do-they-work
On estimating expected value, I’m reminded of some of Hanson’s work where he suggests predicting later evaluation (recent example: http://www.overcomingbias.com/2018/11/how-to-fund-prestige-science.html). I think this is an interesting subcase of the evaluation subprocess. It also fits nicely with this post by PC.
Good find. I hadn’t seen that post (it came out a day after I published this, coincidentally). I’m surprised it came out so recently, but I imagine he probably had similar ideas, and likely wrote them down, much earlier. I definitely recommend it for more details on the science aspect.
From the post: “For each scientific paper, there is a (perhaps small) chance that it will be randomly chosen for evaluation in, say, 30 years. If it is chosen, then at that time many diverse science evaluation historians (SEH) will study the history of that paper and its influence on future science, and will rank it relative to its contemporaries. To choose this should-have-been prestige-rank, they will consider how important was its topic, how true and novel were its claims, how solid and novel were its arguments, how influential it actually was, and how influential it would have been had it received more attention.
....
Using these assets, markets can be created wherein anyone can trade in the prestige of a paper conditional on that paper being later evaluated. Yes traders have to wait a long time for a final payoff. But they can sell their assets to someone else in the meantime, and we do regularly trade 30 year bonds today. Some care will have to be taken to make sure the base asset that is bet is stable, but this seems quite feasible.”
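As a rough illustration of the conditional asset being described (a toy sketch of my own; the selection probability and 0–1 rank scale are assumptions, not numbers from Hanson’s post), the key mechanic is that bets are called off and refunded unless the paper is randomly chosen for evaluation:

```python
import random

EVAL_PROBABILITY = 0.01  # assumed small chance the paper is picked for review in ~30 years

def settle(price_paid: float, eventual_rank: float) -> float:
    """Settle one share of 'prestige conditional on evaluation'.

    If the paper is never selected, the trade is called off and the stake is
    refunded; otherwise the share pays out the SEH panel's rank (0-1 scale).
    """
    chosen = random.random() < EVAL_PROBABILITY
    if not chosen:
        return price_paid      # called-off bet: money back
    return eventual_rank       # should-have-been prestige rank

# Because called-off trades are refunded, a trader who believes
# E[rank | evaluated] = 0.7 should pay up to ~0.7 per share, no matter how
# small EVAL_PROBABILITY is, so prices estimate the conditional rank.
print(settle(price_paid=0.7, eventual_rank=0.8))
```

Time discounting over the 30-year horizon is the main complication this ignores, which is the point of the quote’s comparison to trading long-dated bonds.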