Agreed it could be gamed in net-negative ways if there were enough incentive in the prediction system. I think that in many practical cases, the incentives are going to be much smaller than the deltas between decisions (otherwise it seems surprisingly costly to have them).
Predictor meddling is also a concern in the other prediction alternatives, like decision markets: individuals could try to sabotage outcomes selectively. I don't believe any of these approaches are perfectly safe. I'm definitely recommending them for humans only at this point; though with enough testing we could get a better sense of what the exact incentives are, and use that knowledge for simple AI applications.