What if people simply forecasted your future choices?
tldr: If you could have a team of smart forecasters predicting your future decisions & actions, they would likely improve them in accordance with your epistemology. This is a very broad method that’s less ideal than more reductionist approaches for specific things, but possibly simpler to implement and likelier to be accepted by decision makers with complex motivations.
Background
The standard way of finding questions to forecast involves a lot of work. As Zvi noted, questions should be very well-defined, and coming up with interesting yet specific questions takes considerable effort.
One overarching question is how predictions can be used to drive decision making. One recommendation (one version of which is called “Decision Markets”) often comes down to estimating future parameters, conditional on each of a set of choices. Another option is to have expert evaluators probabilistically evaluate each option, and have predictors predict their evaluations (Prediction-Augmented Evaluations).
Proposal
The proposal I suggest is to have predictors simply predict the future actions & decisions of agents. I’ll temporarily call this an “action prediction system.” The evaluation process (the choosing process) would need to happen anyway, and the question becomes very simple. This may seem too basic to be useful, but I think it may be more useful than I initially expected.
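For concreteness, here is a rough sketch of the shapes of the questions involved in each approach; the framing, field names, and example wording are mine rather than anything from an existing platform.

```python
# Rough sketch (hypothetical framing) of the question shapes behind the
# mechanisms discussed here. None of these field names come from a real system.

decision_market_question = {
    "type": "conditional forecast",
    "question": "If option X is chosen, what will metric M be at date D?",
    "resolution": "typically only the market for the chosen option resolves; the rest are voided",
}

prediction_augmented_evaluation_question = {
    "type": "forecast of an expert evaluation",
    "question": "What score will the evaluators give option X?",
    "resolution": "when the expert evaluation is published",
}

action_prediction_question = {
    "type": "action forecast",
    "question": "Which option will the agent actually choose?",
    "resolution": "when the agent makes the choice",
}
```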
Say I’m trying to decide what laptop I should purchase. I could have some predictors predicting which one I’ll decide on. In the beginning, the prediction aggregation shows that I have a 90% chance of choosing one option. While I really would like to be the kind of person who purchases a Lenovo with Linux, I’ll probably wind up buying another MacBook. The predictors may realize that I typically check Amazon reviews and the Wirecutter for research, and they have a decent idea of what I’ll find when I eventually do.
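A minimal sketch of what that aggregation could look like, assuming each forecaster submits a probability distribution over my options and the system reports a simple average (the forecaster names, options, and numbers are invented for illustration):

```python
# Hypothetical example: aggregate forecaster probabilities over my laptop options.
options = ["MacBook", "Lenovo + Linux"]

forecasts = {
    "forecaster_a": {"MacBook": 0.92, "Lenovo + Linux": 0.08},
    "forecaster_b": {"MacBook": 0.88, "Lenovo + Linux": 0.12},
    "forecaster_c": {"MacBook": 0.90, "Lenovo + Linux": 0.10},
}

def aggregate(forecasts, options):
    """Unweighted mean of each forecaster's probability for each option."""
    return {
        option: sum(f[option] for f in forecasts.values()) / len(forecasts)
        for option in options
    }

print(aggregate(forecasts, options))
# -> roughly {'MacBook': 0.90, 'Lenovo + Linux': 0.10}
```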
It’s not yet clear to me how best to focus predictors on specific uncertain actions; it seems like I would mostly want to ask them about decisions I am genuinely uncertain of.
One important aspect is that I should have a line of communication to the predictors. This means that some clever ones may eventually catch on to practices such as the following:
A forecaster-sales strategy:
- Find good decision options that have been overlooked
- Make forecasts or bets on them succeeding
- Provide really good arguments and research as to why they are overlooked
If I, the laptop purchaser, am skeptical, I could ignore the prediction feedback. But if I repeat the process for other decisions, I should eventually develop a sense of trust in the aggregation’s accuracy, and then in the predictors’ ability to understand my desires. I may also be very interested in what that community has to say, as they will have developed a model of my preferences. If I’m generally a reasonable and intelligent person, I could learn how to best rely on these predictors to speed up and improve my future decisions.
In a way, this solution doesn’t solve the problem of “how to decide the best option”; it just moves it into what may be a more manageable place. Over time I imagine that new strategies would emerge for what generally constitutes “good arguments,” and those would be adopted. In the meantime, agents would be encouraged to quickly choose options they generally want, using reasoning techniques they generally prefer. If an agent were really convinced by a decision market, then perhaps some forecasters would set one up in order to prove their point.
Failure Modes
There are a few obvious failure modes to such a setup. I think it could dilute signal quality, but I am less worried about some of the other obvious ones.
Weak Signals
I think it’s fair to say that if one wanted to optimize for expected value, asking forecasters to predict actions instead could lead to weaker signals. Forecasters would be estimating a few things at once (how good an option is, and how likely the agent is to choose it). If the agent isn’t really intent on optimizing for specific things, and even if they are, the probabilities over chosen decisions may not carry enough signal to be useful. I think this would have to be tested empirically under different conditions.
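A toy illustration of that entanglement, under assumptions of my own: suppose a forecaster models my choice as a blend of how good each option is for me and my existing habits. The resulting choice probabilities can then favor the lower-utility option, so option quality can’t easily be read back out of them.

```python
import math

# Hypothetical numbers: the agent's true utility for each option, and a
# habit/bias prior that has nothing to do with quality.
option_quality = {"MacBook": 6.0, "Lenovo + Linux": 8.0}
habit_prior    = {"MacBook": 0.9, "Lenovo + Linux": 0.1}

def predicted_choice_probs(quality, habit, weight_on_quality=0.3):
    """Blend a quality-driven softmax with a habit prior.

    The forecaster's probabilities reflect both factors, so a high
    probability does not necessarily mean a high-utility option.
    """
    exp_q = {k: math.exp(v) for k, v in quality.items()}
    total = sum(exp_q.values())
    quality_probs = {k: v / total for k, v in exp_q.items()}
    return {
        k: weight_on_quality * quality_probs[k] + (1 - weight_on_quality) * habit[k]
        for k in quality
    }

print(predicted_choice_probs(option_quality, habit_prior))
# -> roughly {'MacBook': 0.67, 'Lenovo + Linux': 0.33}: the MacBook "wins"
#    the forecast despite having the lower utility.
```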
There could also be complex feedback loops, especially for naive agents. An agent may trust its predictors too much. If the predictors believe the agent is too trusting, or trusts the wrong signals, they could amplify those signals and find “easy stable points.” I’m really unsure how this would look, or how much competence the agent or predictors would need for the outcomes to be net-beneficial. I’d be interested in testing and paying attention to this failure mode.
That said, the reference class of groups who would consider and pay for “action predictions” vs. “decision markets” or similar is a very small one, and one that I expect would be convinced only by pretty good arguments. So pragmatically, in the rare cases where the question “would our organization be wise enough to benefit from action predictions?” is asked, I’d expect the answer to lean positive. I wouldn’t expect obviously sleazy sales strategies to convince GiveWell of a new top cause area, for example.
Inevitable Failures
Say the predictors realized that a MacBook wouldn’t make any sense for me, but that I was still 90% likely to choose it, even after hearing all of the best arguments. This would be something of an “inevitable failure.” The amount of utility I get from each item could be largely uncorrelated with my chances of choosing that item, even after hearing about that difference.
While this may be unfortunate, it’s not obvious what would work in these conditions. The goal of predictions shouldn’t be to predict the future accurately for its own sake, but to help agents make better decisions. If there were a different system that did a great job outlining the negative effects of a bad decision on my life, but I predictably ignored that system, then it just wouldn’t be useful, despite being accurate. The value of information would be low. It’s really tough for an information system to be so good that it’s useful even when ignored.
I’d also argue that the kinds of agents that would make predictably poor decisions would be ones that really aren’t interested in getting accurate and honest information. It could seem pretty brutal to them; basically, it would involve them paying for a system that continuously tells them that they are making mistakes.
The previous discussion has assumed that the agents making the decisions are the same ones paying for the forecasting. This is not always the case, but in the counterexamples, setting up the other proposals could easily be seen as hostile. If I set up a system to start evaluating the expected total value of all the actions of my friend George, knowing that George would systematically ignore the main ones, I could imagine George not being very happy with his subsidized evaluations.
Principal-Agent Problems
I think “action predictions” would help agents fulfill their actual goals, while other forecasting systems would more help them fulfill their stated goals. This has obvious costs and benefits.
Let’s consider a situation with a CEO who wants their company to be as big as possible, and shareholders who instead want the company to be as profitable as possible.
Say the CEO commits to “maximizing shareholder value,” and to making decisions that do so. If there were a decision market set up to estimate how much shareholder value would result from each of a set of options (as distinct from an action prediction system), and that information were public to shareholders, then it would be obvious to them when and how often the CEO disobeys that advice. This would be a very transparent setup that would allow the shareholders to police the CEO. It would take away a lot of the CEO’s flexibility and authority and place it in the hands of the decision system.
By contrast, say the CEO instead shares a transparent action prediction system. Predictor participants would, in this case, try to understand the specific motivations of the CEO and optimize their arguments to match. Even if they were being policed by shareholders, they could know this and disguise their arguments accordingly. If discussing and correctly predicting the net impact on shareholders would hurt their ability to predict the CEO’s actions and persuade the CEO, they could simply ignore it, or better yet, find convincing arguments against taking that action. I expect that an action prediction system would essentially act to amplify the abilities of the decider, even if at the cost of other interested third parties.
Salesperson Melees
One argument against this is a gut reaction that it sounds very “salesy,” so it probably won’t work. While I agree there are some cases where it may not work too well (discussed above in the Weak Signals section), I think that smart people should be positively augmented by good salesmanship under reasonable incentives.
In many circumstances, salespeople are genuinely useful in practice. The industry is huge, and I’m under the impression that at least a significant fraction of it (>10%) is net-beneficial. Specific kinds of technical and corporate sales come to mind, where the “sales” professionals are some of the most useful people to discuss technical questions with. There simply aren’t other services willing to have lengthy discussions about some topics.
Externalities
Predictions used in this way would serve the goals of the agents using them, but these agents may be self-interested, which could create negative externalities for others. I don’t think this prediction process does anything to make people more altruistic; it would simply help agents better satisfy their own preferences. This is a common aspect of almost all intelligence-amplification proposals. I think it’s important to consider, but I’m really recommending this proposal more as a “possible powerful tool,” and not as a “tool that is expected to be highly globally beneficial if used.” That would be a very separate discussion.
Answering a question by predicting what the answer is going to be sounds like pulling oneself up by one’s own bootstraps. Where does this predictor get the information it needs to be able to make its predictions?
To be a bit more specific, it’s answering a question by having other people predict which answer you will choose; but yes, it’s very bootstrap-y.
I consider this proposal an alternative to decision markets and prediction-augmented evaluations, so I don’t think this system suffers from the information challenge any more than those two proposals do. All are, of course, limited to a significant extent by the information available.
One nice point in favor of these systems is that individuals are often predictably biased, even when they are knowledgeable. So in many cases it seems like more ignorant but less biased predictors, armed with a few base rates for a problem, can do better.
I imagine that if there were a bunch of forecasters doing this, they would eventually collect and organize tables of public data on the base rates at which agents make decisions. I expect such public data would be really good if properly organized. After that, agents could, of course, choose to provide additional information.
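As a sketch of what I have in mind (the categories and numbers here are entirely made up), such a table might be keyed by agent type and decision type, and used as a starting prior before any agent-specific information is added:

```python
# Hypothetical base-rate table: how often agents of a given type end up
# keeping their incumbent/default option vs. switching. All entries invented.
base_rates = {
    ("consumer", "laptop_purchase"):  {"kept_same_brand": 0.78, "switched": 0.22},
    ("nonprofit", "grantee_renewal"): {"renewed": 0.85, "dropped": 0.15},
}

def prior_for(agent_type, decision_type, outcome, default=None):
    """Look up a base rate to use as a prior before agent-specific evidence."""
    return base_rates.get((agent_type, decision_type), {}).get(outcome, default)

print(prior_for("consumer", "laptop_purchase", "kept_same_brand"))  # 0.78
```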
The blurry line between forecasting and recommending makes it less clear how this system would amplify the user, as opposed to just pushing the user to be more in line with the ‘predictors’.
I’m imagining that the predictors would often fall in line with the user, especially if the user were reasonable enough to be making decisions using them.
Obviously this strategy would be wholly unsuitable for aligning an ASI. When considering humans, remember that the predictors have other tricks for controlling the decision, as well as the communication channel. If there is enough money in the prediction market, someone might be incentivized to offer you discounts on the MacBook.
Agreed, it could be gamed in net-negative ways if there were enough incentive in the prediction system. I think that in many practical cases, the incentives are going to be much smaller than the deltas between decisions (otherwise it seems surprisingly costly to have them).
Also, predictor meddling is a concern with the other prediction alternatives too, like decision markets. Individuals could try to sabotage outcomes selectively. I don’t believe any of these approaches are perfectly safe. I’m definitely recommending them for humans only at this point, though perhaps with a lot of testing we could get a better sense of what the exact incentives would be, and use that knowledge for simple AI applications.