To increase p’-p, prisons need to incarcerate prisoners who are less prone to recidivism than predicted. Given that past criminality is an excellent predictor of future criminality, this creates a perverse incentive to incarcerate those who were unfairly convicted (wrongly convicted innocents or over-convicted lesser offenders).
If past criminality is a predictor of future criminality, then it should be included in the state’s predictive model of recidivism, which would fix the predictions. The actual perverse incentive here is for the prisons to reverse-engineer the predictive model, figure out where it’s consistently wrong, and then lobby to incarcerate (relatively) more of those people. Given that (a) data science is not the core competency of prison operators; (b) prisons will make it obvious when they find vulnerabilities in the model; and (c) the model can be retrained faster than the prison lobbying cycle, it doesn’t seem like this perverse incentive is actually that bad.
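A minimal sketch of the scheme being discussed, assuming p’-p means the model-predicted recidivism rate of a prison’s released cohort minus the observed rate (the parent post may define it differently); the data, feature names, and the use of a logistic regression are all made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic prisoner features: number of prior convictions and age at release.
priors = rng.poisson(2, size=n)
age = rng.normal(35, 10, size=n)

# Synthetic "true" recidivism process, in which past criminality matters.
logit = -1.5 + 0.4 * priors - 0.03 * (age - 35)
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

# The state's predictive model, with past criminality included as a feature.
X = np.column_stack([priors, age])
model = LogisticRegression().fit(X, reoffended)
p_predicted = model.predict_proba(X)[:, 1]

# A prison's score for the cohort it released: predicted rate minus observed rate.
# Selecting prisoners the model over-rates (predicted high risk, actually low risk)
# is exactly the gaming strategy described above.
cohort = rng.choice(n, size=500, replace=False)
score = p_predicted[cohort].mean() - reoffended[cohort].mean()
print(f"predicted {p_predicted[cohort].mean():.3f}, "
      f"observed {reoffended[cohort].mean():.3f}, score p'-p = {score:.3f}")
```

Retraining, in this framing, is just re-running the fit on updated outcome data, which is why the comparison with the prison lobbying cycle matters.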
(a) Prison operators are not currently incentivized to be experts in data science; this scheme would give them exactly that incentive. (b) Why would they make it obvious? And would that fix things? There are plenty of examples of industries exploiting vulnerabilities without those vulnerabilities ever being fixed. (c) How will the model be retrained? Will there be a “We should retrain the model” lobby group, and will it act faster than the prison lobby?
Perhaps we should have a futures market in recidivism. When a prison gets a new prisoner, it buys the associated future at the market rate, and once the prisoner has been out of prison sufficiently long without committing further crimes, the prison can redeem the future. And, of course, there would be laws against prisons shorting their own prisoners.
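A rough sketch of how such a contract’s cash flows could work, under assumed terms (buy at a market price on intake, redeem a fixed face value after a clean period); the class name and all numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RecidivismFuture:
    """Hypothetical contract: the prison buys at the market price when the
    prisoner arrives, and redeems the face value only if the prisoner stays
    conviction-free for `clean_period_years` after release."""
    market_price: float        # what the prison pays on intake
    face_value: float          # paid out if the prisoner does not reoffend
    clean_period_years: float  # how long the prisoner must stay clean

    def prison_payoff(self, years_without_reoffending: float) -> float:
        redeemed = years_without_reoffending >= self.clean_period_years
        return (self.face_value if redeemed else 0.0) - self.market_price

# The market prices the future off expected recidivism risk: a prisoner the
# market considers low-risk costs more up front, so the prison profits only by
# beating the market's expectation, not by cherry-picking obviously safe bets.
future = RecidivismFuture(market_price=7_000.0, face_value=10_000.0, clean_period_years=3.0)
print(future.prison_payoff(years_without_reoffending=4.0))   # 3000.0: redeemed
print(future.prison_payoff(years_without_reoffending=1.5))   # -7000.0: not redeemed
```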
re: futures market in recidivism: http://freakonomics.com/2014/01/24/reducing-recidivism-through-incentives/

“If participants stop returning to jail at a rate of 10% or greater, Goldman will earn $2.1 million. If the recidivism rate does not drop by at least 10% over four years, Goldman stands to lose $2.4 million.”
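Treating the deal described in that article as a stylized two-outcome bet at the 10% threshold (the real contract presumably has finer-grained terms, which are omitted here), the payoff is simply:

```python
def goldman_payoff(recidivism_reduction: float) -> float:
    """Stylized payoff: roughly +$2.1M if recidivism falls by 10% or more
    over four years, roughly -$2.4M if it does not."""
    return 2.1e6 if recidivism_reduction >= 0.10 else -2.4e6

print(goldman_payoff(0.12))  #  2100000.0
print(goldman_payoff(0.05))  # -2400000.0
```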
Your argument assumes that the algorithm and the prisons have access to the same data. This need not be the case—in particular, if a prison bribes a judge to over-convict, the algorithm will be (incorrectly) relying on said conviction as data, skewing the predicted recidivism measure.
That said, the perverse incentive you mentioned is absolutely in play as well.
Yes, I glossed over the possibility of prisons bribing judges to screw up the data set. That’s because the extremely small influence of marginal data points, combined with the cost of bribing judges, would make such a strategy incredibly expensive.
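A small sketch of the “marginal data points have small influence” point: injecting a handful of fabricated over-convictions into a large training set barely moves the fitted model. The data, the 50-record poisoning attack, and the use of a logistic regression as a stand-in for the state’s model are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000

# Baseline training set: one feature (prior convictions) and an outcome.
priors = rng.poisson(2, size=n).astype(float)
reoffended = rng.random(n) < 1 / (1 + np.exp(-(-1.5 + 0.4 * priors)))

def fitted_coef(x, y):
    model = LogisticRegression().fit(x.reshape(-1, 1), y)
    return model.coef_[0][0]

# Poisoned set: 50 fabricated over-convictions (high priors, no reoffending),
# the kind of record a bribed judge would inject.
x_poisoned = np.concatenate([priors, np.full(50, 8.0)])
y_poisoned = np.concatenate([reoffended, np.zeros(50, dtype=bool)])

print("clean coefficient:   ", fitted_coef(priors, reoffended))
print("poisoned coefficient:", fitted_coef(x_poisoned, y_poisoned))
# With 50 fake records against 50,000 real ones, the coefficient (and hence the
# predictions) barely moves, which is why the bribery route buys very little
# prediction error per dollar spent.
```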