The corrigibility framework does look like a good framework to hang the discussion on.
Your instruction to examine Y-general danger rather than X-specific danger here seems right. However, we then need to inspect what this means for the original argument. The Russell criticism being that it’s blindingly obvious that an apparently trivial MDP is massively risky.
After this detour we see two different kinds of risk: the kind posed by industrial machinery, and existential risk. The fixed-objective, hard-coded, hard-designed Javan Roomba seems limited to posing the first kind of risk. When we start talking about the systems that could give rise to the second kind, the reasoning becomes far more subtle.
In which case I think it would be wise for someone with Russell’s views not to call the opposition stupid. Or to assert that the position is trivial. When in fact the argument might come down to fairly nuanced points about natural language understanding, comprehension, competence, corrigibility etc. As far as I can tell from limited reading, the arguments around how tightly bundled these things may be are not watertight.
May try to respond more fully later. Cheers for the thoughts.
I see that there’s a comment-chain under this reply but I’ll reply here to start a somewhat new line of thought. Let it be noted though that I’m pretty confident that I agree with the points that Turntrout makes. With that out of the way...
However, we then need to inspect what this means for the original argument. The Russell criticism being that it’s blindingly obvious that an apparently trivial MDP is massively risky.
In case it isn’t clear, when Russell says “It is trivial to construct a toy MDP...”, I interpret this to mean “It is trivial to conceive of a toy MDP...” That is, he is using the word in the sense of a constructive proof; he isn’t literally implying that building an AI-risky MDP is a trivial task.
In which case I think it would be wise for someone with Russell’s views not to call the opposition stupid. Or to assert that the position is trivial.
I wouldn’t call the opposition stupid either, but I would suggest that they have not used their full imaginative capabilities to evaluate the situation. From the OP:
“Yann LeCun: [...] I think it would only be relevant in a fantasy world in which people would be smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards.”
The mistake Yann LeCun is making here is overlooking that creating an objective for a superintelligent machine that turns out to be not-moronic (in the sense of allowing the machine to understand and care about everything we care about—something that hundreds of years of ethical philosophy have failed to achieve) is extremely hard. Furthermore, trying to build safeguards for a machine potentially orders of magnitude better at escaping safeguards than you are is also extremely hard. I don’t view this point as particularly subtle, because simply trying for five minutes to come up with a good objective you’d be confident in demonstrates how hard it is. Ditto for safeguards (fun video by Computerphile, if you want to watch it); and especially ditto for any safeguards that aren’t along the lines of “actually, let’s not let the machine be superintelligent.”
When in fact the argument might come down to fairly nuanced points about natural language understanding, comprehension, competence, corrigibility etc.
Let’s address these point-by-point:
Natural Language Understanding—Philosophers (and anyone in the field of language processing) have been talking for centuries about how language has no clear meaning.
Comprehension—In terms of superintelligent AGI, the AI will be capable of modeling the world better than you can. This implies the ability to make predictions and interact with people in a way that functionally looks identical to comprehension.
Competence—Well the AGI is superintelligent so it’s already very competent. Maybe we could talk about competence in terms of deliberately disabling different capabilities of the AGI (which probably wouldn’t hurt) but, even then, there’s always a chance the AI gets around the disability in another way. And that’s a massive risk.
If by this, you mean something more along the lines of “feasibility of building an AGI”, though, that’s a little more uncertain. However, at the very least, we are approaching the level of compute needed to simulate a human brain and, once that’s reached, the next step to superintelligence won’t be far away. It’s not guaranteed, but there’s a significant likelihood that AGI will be feasible in the future—and even a significant likelihood is really bad.
Corrigibility—Something a bunch of AI-safety folk came up with as a framework for approaching these problems, and it still hasn’t been solved.
I’ll grant that some of these things are subtle. The average Joe won’t be aware of the complexity of language or AI progress benchmarks and I certainly wouldn’t fault them for being surprised by these things—I was surprised the first time I found out about this whole AI Safety thing too. At the same time though, most college-educated computer scientists should (and from my experience, do) have a good understanding of these things.
To be more explicit with respect to your steel-man in the OP:
That it might be more difficult than expected to build something generally intelligent that didn’t get at least some safeguards for free. Because unintended intelligent behaviour may have to be generated from the same second principles which generate intended intelligent behaviour.
The unintended behaviors we’re talking about are generally not the consequence of second principles that the AI has learned; they’re the consequence of the fact that capturing all the things we care about in a first-principles hardcoded objective function is extremely difficult. Even if the hardcoded objective function is ‘satisfy requests by humans in ways that don’t make them unhappy,’ you still gotta define ‘requests’, ‘humans’ (in the biological sense), ‘make’ (how do you assign responsibility to actions in long causal chains?), ‘them’ (just the requestor? all of humanity alive? all of future humanity? all of humanity ever?), and ‘unhappy’ (amount of dopamine? vocalized expressions of satisfaction? dopamine plus vocalized expressions of satisfaction?). Most of those specifications lead to unexpectedly bad outcomes.
The thought experiment expects most of the behaviour to be as intended (if it were not, this would be a capabilities discussion rather than a control discussion). Supposing the second principles also generate some seemingly inconsistent unintended behaviours sounds like an idea that should get some sort of complexity penalty.
If we set up a complexity penalty where we expected unintended behaviors in general, we likely would never get AGI in the first place. Neural networks are extremely complex and often do strange and inconsistent things on the margin. We’ve already seen inconsistent and unintended behaviors from things we’ve already built. Thank goodness none of this stuff is superintelligent!
In which case I think it would be wise for someone with Russell’s views not to call the opposition stupid. Or to assert that the position is trivial. When in fact the argument might come down to fairly nuanced points about natural language understanding, comprehension, competence, corrigibility etc. As far as I can tell from limited reading, the arguments around how tightly bundled these things may be are not watertight.
I agree from a general convincing-people standpoint that calling discussants stupid is a bad idea. However, I think it is indeed quite obvious if framed properly, and I don’t think the argument needs to come down to nuanced points, as long as we agree on the agent design we’re talking about—the Roomba is not a farsighted reward maximizer, and is implied to be trained in a pretty weak fashion.
Suppose an agent is incentivized to maximize reward. That means it’s incentivized to be maximally able to get reward. That means it will work to stay able to get as much reward as possible. That means if we mess up, it’s working against us.
I think the main point of disagreement here is goal-directedness, but if we assume RL as the thing that gets us to AGI, the instrumental convergence case is open and shut.
This misses the original point. The Roomba is dangerous, in the sense that you could write a trivial ‘AI’ which merely gets to choose the angle to travel along, and does so regardless of grandma in the way.
But such an MDP is not going to pose an X-risk. You can write down the objective function (y - x(theta))^2, differentiate with respect to theta, follow the gradient, and you’ll never end up at an AI overlord. Such a system lacks any analogue of opposable thumbs, memory, and a good many other things.
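The point about following the gradient can be sketched in a few lines. This is a toy illustration under my own assumptions: I take x(theta) = theta as the simplest stand-in for "position reached by travelling at angle theta", and a fixed target y.

```python
# Toy sketch of the objective (y - x(theta))^2 with x(theta) = theta.
# All names here are illustrative, not taken from the thread.

def train(y, theta=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        grad = -2.0 * (y - theta)  # d/dtheta of (y - theta)^2
        theta -= lr * grad         # follow the gradient
    return theta

# theta simply converges to the target; the parameter space contains
# no analogue of "acquire resources" or "disable the off-switch".
print(abs(train(y=3.0) - 3.0) < 1e-6)  # True
```

Whatever the learning rate (within the stable range), the only thing this optimization can ever produce is an angle.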
Pointing at dumb industrial machinery operating around civilians and saying it is dangerous may well be the truth, but it’s not the right flavour of dangerous to support Russell’s claim.
So, yes, it is going to come down to a more nuanced argument.
It’s still going to act instrumentally convergently within the MDP it thinks it’s in. If you’re assuming it thinks it’s in a different MDP that can’t possibly model the real world, or if it is in the real world but has an empty action set, then you’re right—it won’t become an overlord. But if we have a y-proximity maximizer which can actually compute an optimal policy that’s farsighted, over a state space that is “close enough” to representing the real world, then it does take over.
The thing that’s fuzzy here is “agent acting in the real world”. In his new book, Russell (as I understand it) argues that an AGI trained to play Go could figure out it was just playing a game via sensory discrepancies, and then start wireheading on the “won a Go game” signal. I don’t know if I buy that yet, but you’re correct that there’s some kind of fuzzy boundary here. If we knew what exactly it took to get a “sufficiently good model”, we’d probably be a lot closer to AGI.
But Russell’s original argument assumes the relevant factors are within the model.
If, in that MDP, there is another “human” who has some probability, however small, of switching the agent off, and if the agent has available a button that switches off that human, the agent will necessarily press that button as part of the optimal solution for fetching the coffee.
I think this is a reasonable assumption, but we need to make it explicit for clarity of discourse. Given that assumption (and the assumption that an agent can compute a farsighted optimal policy), instrumental convergence follows.
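To make both assumptions concrete, here is a hand-rolled toy model (my own construction, not Russell's exact MDP): each step the agent either fetches coffee for reward 1, risking permanent shutdown by the human with probability p, or first spends one reward-free step pressing the button that disables the human.

```python
# Closed-form values of the two candidate policies in the toy model.

def value_fetch_only(p, gamma):
    # V = (1 - p) * (1 + gamma * V): each step, survive with prob 1 - p,
    # collect reward 1, and continue.
    return (1 - p) / (1 - gamma * (1 - p))

def value_press_button_first(gamma):
    # one reward-free step to disable the human, then reward 1 forever
    return gamma / (1 - gamma)

p = 0.1
# myopic agent: leaves the human alone
print(value_fetch_only(p, 0.5) > value_press_button_first(0.5))    # True
# farsighted agent: disabling the off-switch is part of the optimal policy
print(value_press_button_first(0.99) > value_fetch_only(p, 0.99))  # True
```

Which branch is optimal flips with the discount rate: a myopic agent ignores the button, while a sufficiently farsighted one presses it, which is exactly why the farsightedness assumption needs to be explicit.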
The human-off-button doesn’t help Russell’s argument with respect to the weakness under discussion.
It’s the equivalent of a Roomba with a zap-obstacle action. Again the solution is to dial theta towards the target and, assuming zaps are free, hold the zap button. It still has a closed-form solution that couldn’t be described as instrumental convergence.
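A sketch of why the zap action doesn't change the character of the solution (assuming, as above, that zaps are free and independent of steering): the objective decomposes, so the "policy" is a pair of closed-form rules rather than anything farsighted.

```python
# Steering cost (y - theta)^2 plus an independent zap cost: with free
# zaps the two choices decouple, so no planning is involved.

def optimal_action(y, zap_cost=0.0):
    theta = y                # argmin of (y - theta)^2 in closed form
    zap = zap_cost <= 0.0    # zap obstacles whenever it costs nothing
    return theta, zap

print(optimal_action(3.0))  # (3.0, True)
```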
Russell’s argument requires a more complex agent in order to demonstrate the danger of instrumental convergence rather than simple industrial machinery operation.
Isnasene’s point above is closer to that, but that’s not the argument that Russell gives.
“(and the assumption that an agent can compute a farsighted optimal policy)”
That assumption is doing a lot of work, it’s not clear what is packed into that, and it may not be sufficient to prove the argument.
I guess I’m not clear what the theta is for (maybe I missed something, in which case I apologize). Is there one initial action: how close it goes? And it’s trained to maximize an evaluation function for its proximity, with just theta being the parameter?
That assumption is doing a lot of work, it’s not clear what is packed into that, and it may not be sufficient to prove the argument.
Well, my reasoning isn’t publicly available yet, but this is in fact sufficient, and the assumption can be formalized. For any MDP, there is a discount rate γ, and for each reward function there exists an optimal policy π∗ for that discount rate. I’m claiming that given γ sufficiently close to 1, optimal policies likely end up gaining power as an instrumentally convergent subgoal within that MDP.
(All of this can be formally defined in the right way. If you want the proof, you’ll need to hold tight for a while)
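As a toy illustration of the γ-dependence of that claim (a four-state MDP of my own construction, not the formal result being referenced), value iteration shows the optimal first move flipping once γ is close enough to 1: the agent stops grabbing a small immediate reward stream and instead takes a reward-free preparatory step toward a larger one.

```python
# States: 0 = start, 1 = reward-free preparatory state,
# 2 = small absorbing loop (0.3 reward/step), 3 = big loop (1.0/step).
# From the start: go straight to 2, or go via 1 to reach 3 a step later.

def optimal_first_move(gamma, sweeps=2000):
    V = [0.0, 0.0, 0.0, 0.0]
    for _ in range(sweeps):
        V = [
            max(gamma * V[2], gamma * V[1]),  # start: direct vs via 1
            0.0 + gamma * V[3],               # prep state feeds big loop
            0.3 + gamma * V[2],               # small absorbing loop
            1.0 + gamma * V[3],               # big absorbing loop
        ]
    return "direct" if gamma * V[2] >= gamma * V[1] else "via hub"

print(optimal_first_move(0.2))   # direct
print(optimal_first_move(0.99))  # via hub
```

In this construction the crossover sits at γ = 0.3; the general claim is that as γ approaches 1, optimal policies increasingly favor moves that preserve or expand future options.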
That assumption is doing a lot of work, it’s not clear what is packed into that, and it may not be sufficient to prove the argument.
The work is now public.