Those criticisms (like most criticisms of prediction markets) are ultimately about the incentive structure and people that produce answers to the questions. One way to frame the core claim of this post is: the main bottleneck right now to making prediction markets useful for alignment is not getting good answers, but rather asking relevant questions. Maybe answer quality will be a bottleneck later on, but I don’t think it’s the limiting factor right now.
Definitely agreed that the bottleneck is mostly having good questions! One way I often think about this is that a prediction market question conveys many bits of information about the world, while the answer tends to convey very few.
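As a back-of-the-envelope sketch of that asymmetry (my own illustrative numbers, purely an assumption, not anything from Manifold): a market price quoted in whole percentage points carries under 7 bits, while the text of even a terse operationalized question carries hundreds.

```python
import math

# Illustrative sketch: compare the information in a market's
# answer vs. its question.

# Answer: a probability quoted in whole percentage points (1-99)
# is one of 99 values, i.e. at most log2(99) bits.
answer_bits = math.log2(99)

# Question: a terse operationalized question is still a few hundred
# characters of English; ~1 bit per character is a rough Shannon-style
# estimate for compressed English text (an assumption here).
question_chars = 300
question_bits = question_chars * 1.0

print(f"answer:   ~{answer_bits:.1f} bits")   # ~6.6 bits
print(f"question: ~{question_bits:.0f} bits") # ~300 bits
```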
Part of the goal with Manifold is to encourage as many questions as possible, lowering the barrier to question creation by making it fast and easy and (basically) free. But sometimes this does lead to people asking questions that have wide appeal but are less useful (like the ones you identified above), whereas generating really good questions often requires deep subject-matter expertise. If you have e.g. a list of operationalized questions, we’re always more than happy to promote them to our forecasters!
Yeah, I definitely think Manifold made the right tradeoffs (at least at current margins) in making question creation as easy as possible.
If you have e.g. a list of operationalized questions, we’re always more than happy to promote them to our forecasters!
My actual hope for this post was that a few other people would read it, write down a list of questions like “how will I rank the importance of X in 3 years?”, precommit to giving their own best-guess answers to the questions in a few years, and then set up a market on each question. My guess is that a relatively new person who expects to do alignment research for the next few years would be the perfect person for this, or better yet a few such people, and it would save me the effort.
I’m up for doing that. Are there any important things I should take into account before doing it? My first draft would be something like:
Will tailcalled consider X alignment approach important in 4 years?
With description:
I have been following AI and alignment research on and off for years, and have a somewhat reasonable mathematical background to evaluate it. I tend to have an informal idea of the viability of various alignment proposals, though that idea may well be wrong. In 4 years, I will evaluate X and decide whether there have been any important good results since today. I will probably ask some of the alignment researchers I most respect, such as John Wentworth or Steven Byrnes, for advice about the assessment, unless it is dead-obvious. At the time of creating the market, I think <extremely brief summary>.
<link to core post for X>
List of approaches I would currently have in the evaluation:
Natural Abstractions
Infrabayes
Shard Theory
Brain-like AGI
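If it helps to see the shape of it, here is a minimal, purely illustrative sketch of templating these four markets; the summaries and links stay as the same placeholders used above, and nothing here touches the Manifold API:

```python
# Purely illustrative sketch: generate the draft question and
# description text for each approach in the list above.
APPROACHES = [
    "Natural Abstractions",
    "Infrabayes",
    "Shard Theory",
    "Brain-like AGI",
]

QUESTION = "Will tailcalled consider {name} important in 4 years?"

DESCRIPTION = (
    "In 4 years, I will evaluate {name} and decide whether there have "
    "been any important good results since today. At the time of "
    "creating the market, I think <extremely brief summary>.\n"
    "<link to core post for {name}>"
)

for name in APPROACHES:
    print(QUESTION.format(name=name))
    print(DESCRIPTION.format(name=name))
    print()
```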
Something roughly along those lines sounds right. You might consider e.g. a ranking of importance, or asking some narrower questions about each agenda—how promising will they seem, how tractable will they seem, how useful will they have been in hindsight for subsequent work produced in the intervening 4-year period, how much will their frames spread, etc—depending on what questions you think are most relevant to how you should allocate attention now. You might also consider the importance of subproblems, in addition to (or instead of) agendas. Or if there are things which seem like they might be valuable to look into but would cost significant effort, and you’re not sure whether it’s worthwhile, those are great things for a market on your future judgement.
In general, “ask lots of questions” is a good heuristic here, analogous to “measure lots of stuff”.

Markets up: https://www.lesswrong.com/posts/3KeT4uGygBw6YGJyP/ai-research-program-prediction-markets
I considered that, but unless I’m misunderstanding something about Manifold markets, they have to be either yes/no or open-ended.
or asking some narrower questions about each agenda—how promising will they seem, how tractable will they seem, how useful will they have been in hindsight for subsequent work produced in the intervening 4-year period, how much will their frames spread, etc—depending on what questions you think are most relevant to how you should allocate attention now [...]
In general, “ask lots of questions” is a good heuristic here, analogous to “measure lots of stuff”.
I agree with measuring lots of stuff in principle, but Manifold Markets only allows me to open 5 free markets.
I think asking, or finding/recognizing, good questions rather than giving good answers to mediocre questions is probably a highly valuable skill or effort pretty much anywhere we are looking to advance knowledge—or, I suppose, to advance pretty much anything. How well prediction markets will help in producing that is itself worth asking, I suspect.
Clearly, just predicting timelines does little to resolve a problem or help anyone prioritize their efforts. So, as an uninformed outsider: can any of the existing prediction questions be recast into a set of questions that shift the focus to a specific problem and to proposed approaches for resolving/addressing the risk?
Would an abstract search of recent (all?) AI alignment papers perhaps point to a collection of questions that could then be placed on the prediction markets? If so, it seems like a great survey effort for an AI student to do some legwork on. (Though I suppose some primitive AI agent might be more fitting and quicker ;-)
Yes.