What if people simply forecasted your future choices?
tldr: If you could have a team of smart forecasters predicting your future decisions & actions, they would likely improve those decisions in ways consistent with your own epistemology. This is a very broad method, less ideal than more reductionist approaches for specific problems, but possibly simpler to implement and likelier to be accepted by decision makers with complex motivations.
Background
The standard way of finding questions to forecast involves a lot of work. As Zvi noted, questions should be very well-defined, and coming up with interesting yet specific questions takes considerable effort.
One overarching question is how predictions can be used to drive decision making. One recommendation (one version is called “Decision Markets”) comes down to estimating future parameters conditional on each of a set of choices. Another option is to have expert evaluators probabilistically evaluate each option, and have predictors predict their evaluations (Prediction-Augmented Evaluations).
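To make the contrast concrete, here is a minimal sketch of the decision-market question shape: forecasters estimate some agreed metric conditional on each option, and the decider picks the argmax. All option names and numbers are invented for illustration.

```python
# Sketch of the "decision market" question shape. Forecasters estimate an
# agreed metric conditional on each choice; the decider picks the best one.
# All names and numbers here are hypothetical illustrations.

conditional_estimates = {
    "option_a": 0.62,  # E[metric | we choose option A]
    "option_b": 0.55,  # E[metric | we choose option B]
    "option_c": 0.71,  # E[metric | we choose option C]
}

best_option = max(conditional_estimates, key=conditional_estimates.get)
print(best_option)  # -> "option_c"
```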
Proposal
One prediction proposal I suggest is to have predictors simply predict the future actions & decisions of agents. I temporarily call this an “action prediction system.” The evaluation process (the choosing process) would need to happen anyway, and the question becomes very simple. This may seem too basic to be useful, but I think it may be a lot better than I initially expected.
Say I’m trying to decide what laptop I should purchase. I could have some predictors predicting which one I’ll decide on. In the beginning, the prediction aggregation shows that I have a 90% chance of choosing one option. While I really would like to be the kind of person who purchases a Lenovo with Linux, I’ll probably wind up buying another MacBook. The predictors may realize that I typically check Amazon reviews and the Wirecutter for research, and have a decent idea of what I’ll find when I eventually do.
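As a rough illustration, an action prediction aggregation could be as simple as pooling each forecaster’s probability distribution over my options. This sketch uses a plain arithmetic mean, which is just one of many possible aggregation rules, and all the numbers are made up.

```python
# Minimal sketch of an action prediction aggregation for the laptop example.
# Each forecaster submits a probability distribution over the options I might
# choose; here we pool them with a simple arithmetic mean. Numbers are
# hypothetical.

forecasts = [
    {"macbook": 0.92, "lenovo_linux": 0.08},
    {"macbook": 0.88, "lenovo_linux": 0.12},
    {"macbook": 0.90, "lenovo_linux": 0.10},
]

options = forecasts[0].keys()
aggregate = {
    option: sum(f[option] for f in forecasts) / len(forecasts)
    for option in options
}
print(aggregate)  # -> {'macbook': 0.90, 'lenovo_linux': 0.10}
```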
It’s not clear to me how best to focus predictors on specific uncertain actions I may take. It seems like I would mostly want to ask them about specific decisions I am genuinely uncertain of.
One important aspect is that I should have a line of communication to the predictors. This means that some clever ones may eventually catch on to practices such as the following:
A forecaster-sales strategy (a toy sketch of the expected-value logic follows this list):
1. Find good decision options that have been overlooked.
2. Make forecasts or bets on them succeeding.
3. Provide really good arguments and research as to why they are overlooked.
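To see why this strategy could pay, here is a toy expected-value sketch, assuming a market that pays out 1 per share if the predicted action occurs. The prices and post-pitch beliefs are hypothetical.

```python
# Toy expected-value sketch for the forecaster-sales strategy above, assuming
# a market that pays 1 per share if the predicted action occurs. All numbers
# are hypothetical.

current_price = 0.10       # market's current probability of the overlooked option
belief_after_pitch = 0.45  # forecaster's estimate once their arguments land

shares = 100
expected_profit = shares * (belief_after_pitch - current_price)
print(expected_profit)  # -> 35.0: the argument itself is what creates the edge
```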
If I, the laptop purchaser, am skeptical, I could ignore the prediction feedback. But if I repeat the process for other decisions, I should eventually develop a sense of trust in the aggregation accuracy, and then in the predictors’ ability to understand my desires. I may also be very interested in what that community has to say, as they will have developed a model of my preferences. If I’m generally a reasonable and intelligent person, I could learn how best to rely on these predictors to speed up and improve my future decisions.
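One concrete way to “develop a sense of trust” would be to score the aggregate across repeated decisions. Here is a minimal sketch using a simplified Brier-style score on the realized action only; the records are hypothetical.

```python
# Sketch of tracking aggregate accuracy over repeated decisions with a
# simplified Brier-style score on the realized action (lower is better; a
# constant 50% forecast would score 0.25). Records are hypothetical.

records = [
    # aggregate probability that was assigned to the action I actually took
    0.90, 0.75, 0.60, 0.95,
]

brier = sum((p - 1.0) ** 2 for p in records) / len(records)
print(round(brier, 4))  # -> 0.0587
```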
In a way, this solution doesn’t solve the problem of “how to decide the best option”; it just moves it into what may be a more manageable place. Over time I imagine new strategies will emerge for what generally constitutes “good arguments,” and those will be adopted. In the meantime, agents will be encouraged to quickly choose options they generally want, using reasoning techniques they generally prefer. If an agent were really convinced by a decision market, then perhaps some forecasters would set one up in order to prove their point.
Failure Modes
There are a few obvious failure modes to such a setup. I think it could dilute signal quality, but I am not as worried about some of the other obvious ones.
Weak Signals
I think it’s fair to say that if one wanted to optimize for expected value, asking forecasters to predict actions could lead to weaker signals. Forecasters would be estimating a few things at once (how good an option is, and how likely the agent is to choose it). Even if the agent is intent on optimizing for specific things, and especially if they aren’t, the probabilities of chosen decisions may not carry enough signal to be useful. I think this would have to be empirically tested under different conditions.
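As a toy illustration of the conflation, assume, purely hypothetically, that my choices follow a logistic model mixing option quality with habit. A high choice probability then says little about quality on its own.

```python
import math

# Toy illustration of the weak-signal worry: under a simple logistic choice
# model, the probability I choose an option mixes its quality with my habits,
# so a high choice probability need not mean high quality. All parameters are
# hypothetical.

def choice_probability(quality: float, habit_bias: float) -> float:
    """P(choose) from quality plus a habit term, via a logistic link."""
    return 1.0 / (1.0 + math.exp(-(quality + habit_bias)))

# A mediocre option plus a strong habit can outscore a good option without one:
print(choice_probability(quality=0.2, habit_bias=2.0))   # ~0.90
print(choice_probability(quality=1.5, habit_bias=-1.0))  # ~0.62
```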
There could also be complex feedback loops, especially for naive agents. An agent may trust its predictors too much. If the predictors believe the agent is too trusting, or trusts the wrong signals, they could amplify those signals and find “easy stable points.” I’m really unsure of how this would look, or how much competence the agent or predictors would need for net-beneficial outcomes. I’d be interested in testing and paying attention to this failure mode.
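Here is a tiny, purely hypothetical sketch of such a loop: if I shift toward the published aggregate with some trust weight, and predictors anticipate that shift, the system can drift toward a stable point that reflects nothing real. The dynamics and numbers are invented.

```python
# Tiny sketch of the feedback-loop worry: the agent shifts toward the
# published aggregate with some trust weight, and predictors anticipate that
# shift, so the probability can slide toward an "easy stable point" regardless
# of the option's merits. Dynamics and numbers are hypothetical.

trust = 0.5      # how far the agent moves toward the aggregate each round
p_choose = 0.55  # agent's initial inclination toward some option

for round_number in range(10):
    aggregate = p_choose + 0.1 * (1 - p_choose)  # predictors nudge upward slightly
    aggregate = min(aggregate, 1.0)
    p_choose = (1 - trust) * p_choose + trust * aggregate

print(round(p_choose, 3))  # -> 0.731, drifting toward 1.0 though nothing real changed
```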
That said, the reference class of groups that would consider and pay for “action predictions” over “decision markets” or similar is a very small one, and one I expect would be convinced only by pretty good arguments. So pragmatically, in the rare cases where the question “would our organization be wise enough to benefit from action predictions” is asked, I’d expect the answer to lean positive. I wouldn’t expect obviously sleazy sales strategies to convince GiveWell of a new top cause area, for example.
Inevitable Failures
Say the predictors realized that a MacBook wouldn’t make any sense for me, but that I was still 90% likely to choose it, even after hearing all of the best arguments. This would be somewhat of an “inevitable failure.” The utility I get from each item could be largely uncorrelated with my chance of choosing that item, even after I hear about that difference.
While this may be unfortunate, it’s not obvious what would work in these conditions. The goal of predictions shouldn’t be to predict the future accurately, but to help agents make better decisions. If a different system did a great job outlining the negative effects of a bad decision on my life, but I predictably ignored it, then it just wouldn’t be useful, despite being accurate; its value of information would be low. It’s really tough for a system of information to be so good that it’s useful even when ignored.
I’d also argue that the kinds of agents that make predictably poor decisions are ones that really aren’t interested in accurate and honest information. The system could seem pretty brutal to them: basically, they would be paying for a system that continuously tells them they are making mistakes.
The previous discussion has assumed that the agents making the decisions are the same ones paying for the forecasting. This is not always the case, but in those other cases, setting up the alternative proposals could easily be seen as hostile. If I set up a system to evaluate the expected total value of every action of my friend George, knowing that George would systematically ignore the main conclusions, I could imagine George would not be very happy with his subsidized evaluations.
Principal-agent Problems
I think “action predictions” would help agents fulfill their actual goals, while other forecasting systems would do more to help them fulfill their stated goals. This has obvious costs and benefits.
Let’s consider a situation with a CEO who wants their company to be as big as possible, and shareholders who instead want the company to be as profitable as possible.
Say the CEO commits to “maximizing shareholder value,” and commits to making decisions that do so. If there were a decision market set up to estimate how much shareholder value each of a set of options would produce (as distinct from an action prediction system), and that information were public to shareholders, then it would be obvious to them when and how often the CEO disobeyed its advice. This would be a very transparent setup that allows the shareholders to police the CEO. It would take a lot of flexibility and authority away from the CEO and place it in the hands of the decision system.
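A minimal sketch of that policing logic, assuming shareholders can see both the market’s conditional estimates and the CEO’s actual choices; all of the options and numbers are invented.

```python
# Sketch of how shareholders could police the CEO with a public decision
# market: compare the market's top-ranked option with what the CEO actually
# chose, per decision. Data are hypothetical.

decisions = [
    # (market's conditional shareholder-value estimates, CEO's actual choice)
    ({"expand": 0.40, "cut_costs": 0.70}, "expand"),
    ({"acquire": 0.55, "buy_back": 0.65}, "buy_back"),
    ({"expand": 0.35, "dividend": 0.60}, "expand"),
]

disobeyed = sum(
    1 for estimates, chosen in decisions
    if chosen != max(estimates, key=estimates.get)
)
print(f"{disobeyed}/{len(decisions)} decisions against the market's advice")  # -> 2/3
```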
By contrast, say the CEO instead shares a transparent action prediction system. Predictor participants would, in this case, try to understand the specific motivations of the CEO and tailor their arguments accordingly. Even if they were being policed by shareholders, they could know this and disguise their arguments accordingly. If correctly predicting and discussing the net impact on shareholders would hurt their ability to predict the CEO’s actions and convince the CEO, they could simply ignore it, or better yet find convincing arguments against taking that action. I expect an action prediction system would essentially amplify the abilities of the decider, even at the cost of other interested third parties.
Salesperson Melees
One argument against this proposal is the gut reaction that it sounds very “salesy,” so probably won’t work. While I agree there are some cases where it may not work too well (stated above in the weak-signals section), I think smart people can be positively augmented by good salesmanship under reasonable incentives.
In many circumstances, salespeople are genuinely useful. The industry is huge, and I’m under the impression that at least a significant fraction (>10%) of it is net-beneficial. Specific kinds of technical and corporate sales come to mind, where the “sales” professionals are some of the most useful people to discuss technical questions with. There simply aren’t other services willing to have lengthy discussions about some topics.
Externalities
Predictions used in this way would advance the goals of the agents using them, but these agents may be self-interested, leading to negative externalities for others. This prediction process doesn’t at all help make people more altruistic; it simply helps agents better satisfy their own preferences. This is common to almost all intelligence-amplification proposals. I think it’s important to consider, but I’m recommending this proposal more as a “possibly powerful tool” than as a “tool expected to be highly globally beneficial if used.” That would be a very separate discussion.