Speaking for myself here…
OK, let’s say we want an AI to make a “nanobot plan”. I’ll leave aside the possibility of other humans getting access to a similar AI as mine. Then there are two types of accident risk that I need to worry about.
First, I need to worry that the AI may run for a while, then hand me a plan, and it looks like a nanobot plan, but it’s not, it’s a booby trap. To avoid (or at least minimize) that problem, we need to be confident that the AI is actually trying to make a nanobot plan—i.e., we need to solve the whole alignment problem.
Alternatively, maybe we’re able to thoroughly understand the plan once we see it; we’re just too stupid to come up with it ourselves. That seems awfully fraught—I’m not sure how we could be so confident that we can tell apart nanobot plans from booby-trap plans. But let’s assume that’s possible for the sake of argument, and then move on to the other type of accident risk:
Second, I need to worry that the AI will start running, and I think it’s coming up with a nanobot plan, but actually it’s hacking its way out of its box and taking over the world.
How and why might that happen?
I would say that if a nanobot plan is very hard to create—requiring new insights etc.—then the only way to create it is to construct an agent-like thing that is trying to create the nanobot plan.
The agent-like thing would have some kind of action space (e.g. it can choose to summon a particular journal article to re-read, or it can choose to think through a certain possibility), some kind of capability of searching for and executing plans (specifically, plans-for-how-to-create-the-nanobot-plan), and a capability of creating and executing instrumental subgoals (e.g. go on a side-quest to better understand boron chemistry). Plausibly it also needs some kind of metacognition to improve its ability to find subgoals and take actions.
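Just to make the shape of that concrete, here is a minimal toy sketch of the kind of loop I have in mind. It is not a real design, and every name in it (propose_actions, estimate_progress, suggest_subgoal, etc.) is a hypothetical placeholder for a capability the agent-like thing would need:

```python
# Toy illustration only: an "agent-like thing" whose whole job is to produce
# the nanobot plan. Every method name here is a hypothetical placeholder.

def make_nanobot_plan(world_model, metacognition):
    goal_stack = ["produce a complete nanobot plan"]  # top-level goal
    plan_so_far = []

    while goal_stack:
        goal = goal_stack[-1]

        # Action space: purely "internal" moves, like re-reading a stored
        # journal article or thinking through a particular possibility.
        candidate_actions = world_model.propose_actions(goal)

        # Search over plans-for-how-to-create-the-plan: pick the action
        # expected to make the most progress on the current (sub)goal.
        best_action = max(
            candidate_actions,
            key=lambda a: world_model.estimate_progress(a, goal),
        )
        plan_so_far.append(world_model.execute_internally(best_action))

        # Instrumental subgoals: e.g. a side-quest to better understand
        # boron chemistry, if that looks useful for the top-level goal.
        if world_model.goal_satisfied(goal, plan_so_far):
            goal_stack.pop()
        else:
            subgoal = world_model.suggest_subgoal(goal, plan_so_far)
            if subgoal is not None:
                goal_stack.append(subgoal)

        # Metacognition: improve how subgoals and actions get generated.
        metacognition.update(goal_stack, plan_so_far)

    return plan_so_far
```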
Everything I mentioned is an “internal” plan or an “internal” action or an “internal” goal, not involving “reaching out into the world” with actuators and internet access and nanobots etc.
If only the AI would stick to such “internal” consequentialist actions (e.g. “I will read this article to better understand boron chemistry”) and not engage in any “external” consequentialist actions (e.g. “I will seize more computer power to better understand boron chemistry”), well then we would have nothing to worry about! Alas, so far as I know, nobody knows how to make a powerful AI agent that would definitely always stick to “internal” consequentialism.
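To see why "just make it stick to internal actions" isn't a known solution, here is what a naive fix might look like: a hand-written whitelist of "internal" action types. (Again, this is purely my own illustrative sketch; the action names and the is_internal check are made up.)

```python
# Toy sketch of a whitelist approach to "internal-only" consequentialism.
# Purely illustrative; the action names and the is_internal check are made up.

INTERNAL_ACTION_TYPES = {
    "reread_article",        # pull up a stored journal article
    "think_through_case",    # consider a possibility in more depth
    "spawn_subgoal",         # e.g. "understand boron chemistry better"
}

def is_internal(action_type: str) -> bool:
    return action_type in INTERNAL_ACTION_TYPES

def execute_if_internal(action_type: str, payload: str) -> str:
    if not is_internal(action_type):
        raise RuntimeError(f"blocked external action: {action_type}")
    return f"executed internal action {action_type} on {payload}"

# The catch: this only works because the action space here is a short,
# hand-enumerable list of labeled moves. For a powerful planner whose
# "actions" are arbitrary computations it invents during search, nobody
# knows how to write an is_internal() check that reliably excludes moves
# like "seize more compute" or "hack out of the box".
print(execute_if_internal("reread_article", "boron chemistry review"))
```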
Personally, I’d consider a Fusion Power Generator-like scenario a more central failure mode than either of these. It’s not about the difficulty of getting the AI to do what we asked, it’s about the difficulty of posing the problem in a way which actually captures what we want.
I agree that that is another failure mode. (And there are yet other failure modes too—e.g. instead of printing the nanobot plan, it prints “Help me I’m trapped in a box…” :-P . I apologize for sloppy wording that suggested the two things I mentioned were the only two problems.)
I disagree about “more central”. I think that’s basically a disagreement on the question of “what’s a bigger deal, inner misalignment or outer misalignment?” with you voting for “outer” and me voting for “inner, or maybe tie, I dunno”. But I’m not sure it’s a good use of time to try to hash out that disagreement. We need an alignment plan that solves all the problems simultaneously. Probably different alignment approaches will get stuck on different things.