I expect there to be broad agreement that this kind of risk is possible. I expect a lot of legitimate uncertainty and disagreement about the magnitude of the risk.
I think if this kind of tampering is risky then it almost certainly has some effect on your bottom line and causes some annoyance. I don’t think AI would be so good at tampering (until it was trained to be). But I don’t think that requires fixing the problem—in many domains, any problem common enough to affect your bottom line can also be quickly fixed by fine-tuning for a competent model.
I think that if there is a relatively easy technical solution to the problem then there is a good chance it will be adopted. If not, I expect there to be a strong pressure to take the overfitting route, a lot of adverse selection for organizations and teams that consider this acceptable, a lot of “if we don’t do this someone else will,” and so on. If we need a reasonable regulatory response then I think things get a lot harder.
In general I’m very sympathetic to “there is a good chance that this will work out,” but it also seems like the kind of problem that is not hard to mess up, and there’s enough variance in our civilization’s response to challenging technical problems that there’s a real chance we’d mess it up even if it was objectively a softball.
ETA: The two big places I expect disagreement are about (i) the feasibility of irreversible robot uprising—how sure are we that the optimal strategy for a reward-maximizing model is to do their task well? (ii) is our training process producing models that actually refrain from tampering, or are we overfitting to our evaluations and producing models that would take an opportunity for a decisive uprising if it came up? I think that if we have our act together we can most likely measure (ii) experimentally; you could also imagine a conservative outlook or various forms of penetration testing to have a sense of (i). But I think it’s just quite easy to imagine us failing to reach clarity much less agreement about this.
I expect there to be broad agreement that this kind of risk is possible. I expect a lot of legitimate uncertainty and disagreement about the magnitude of the risk.
I think if this kind of tampering is risky then it almost certainly has some effect on your bottom line and causes some annoyance. I don’t think AI would be so good at tampering (until it was trained to be). But I don’t think that requires fixing the problem—in many domains, any problem common enough to affect your bottom line can also be quickly fixed by fine-tuning for a competent model.
I think that if there is a relatively easy technical solution to the problem then there is a good chance it will be adopted. If not, I expect there to be a strong pressure to take the overfitting route, a lot of adverse selection for organizations and teams that consider this acceptable, a lot of “if we don’t do this someone else will,” and so on. If we need a reasonable regulatory response then I think things get a lot harder.
In general I’m very sympathetic to “there is a good chance that this will work out,” but it also seems like the kind of problem that is not hard to mess up, and there’s enough variance in our civilization’s response to challenging technical problems that there’s a real chance we’d mess it up even if it was objectively a softball.
ETA: The two big places I expect disagreement are about (i) the feasibility of irreversible robot uprising—how sure are we that the optimal strategy for a reward-maximizing model is to do their task well? (ii) is our training process producing models that actually refrain from tampering, or are we overfitting to our evaluations and producing models that would take an opportunity for a decisive uprising if it came up? I think that if we have our act together we can most likely measure (ii) experimentally; you could also imagine a conservative outlook or various forms of penetration testing to have a sense of (i). But I think it’s just quite easy to imagine us failing to reach clarity much less agreement about this.