Why does Paul think that learning needs to be “aligned” as opposed to just well-understood and well-behaved, so that it can be safely used as part of a larger aligned AI design that includes search, logic, etc.?
I mostly think it should be benign / corrigible / something like that. I think you’d need something like that whether you want to apply learning directly or to apply it as part of a larger system.
If Paul does not think ALBA is a realistic design of an entire aligned AI (since it doesn’t include search/logic/etc.) what might a realistic design look like, roughly?
You can definitely make an entire AI out of learning alone (evolution / model-free RL), and I think that’s currently the single most likely possibility even though it’s not particularly likely.
The alternative design would integrate whatever other useful techniques are turned up by the community, which will depend on what those techniques are. One possibility is search/planning. This can be integrated in a straightforward way into ALBA; I think the main obstacle is security amplification, which needs to work for ALBA anyway and is closely related to empirical work on capability amplification. On the logic side it's harder to say what a useful technique would look like, other than "run your agent for a while," which you can also do with ALBA (though it requires something like these ideas).
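As an illustration of what such a composition might look like, here is a minimal, hypothetical sketch: a best-first planner that only ever acts and evaluates through calls to a benign learned agent, so the composite system's safety reduces to the agent's alignment plus the amplification/security-amplification story above. The `agent` interface, the `plan` function, and the search loop are all assumptions for illustration, not a specification of ALBA.

```python
# Hypothetical sketch: composing search/planning with a benign learned
# agent. The planner never optimizes against anything except the agent's
# own proposals and judgments, so (in this toy picture) the search adds
# capability without introducing a new objective to align.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    state: str
    actions: List[str] = field(default_factory=list)

def plan(agent: Callable[[str, str], str],
         root: str,
         depth: int = 2,
         branch: int = 3) -> List[str]:
    """Best-first search mediated entirely by calls to the learned agent.

    `agent(task, context)` stands in for one query to the benign model;
    it is assumed to return an action string for "propose" tasks and a
    numeric string for "score" tasks.
    """
    frontier = [Node(root)]
    for _ in range(depth):
        candidates = []
        for node in frontier:
            for i in range(branch):
                # The agent proposes candidate next actions...
                action = agent("propose an action", f"{node.state} | option {i}")
                candidates.append(Node(f"{node.state} -> {action}",
                                       node.actions + [action]))
        # ...and the agent also scores the resulting states, so the search
        # is steered only by the agent's judgment.
        candidates.sort(key=lambda n: float(agent("score this state", n.state)),
                        reverse=True)
        frontier = candidates[:branch]
    return frontier[0].actions

if __name__ == "__main__":
    # Stub agent for demonstration; a real system would query the aligned model.
    import random
    def stub(task: str, context: str) -> str:
        if task.startswith("score"):
            return str(random.random())
        return f"act{random.randint(0, 9)}"
    print(plan(stub, "initial state"))
```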
which makes it seem like his approach is an alternative to MIRI’s
My hope is to have safe and safely composable versions of each important AI ingredient. I would caricature the implicit MIRI view as “learning will lead to doom, so we need to develop an alternative approach that isn’t doomed,” which is a substitute in the sense that it’s also trying to route around the apparent doomedness of learning but in a quite different way.
Thanks. So, to paraphrase your current position: once we have aligned learning, it doesn't seem as hard to integrate other AI components into the design, so aligning learning seems to be the hardest part. MIRI's work might help with aligning other AI components and integrating them into something like ALBA, but you don't see that as very hard anyway, so it perhaps has more value as a substitute than as a complement. Is that about right?
One possibility is search/planning. This can be integrated in a straightforward way into ALBA
I don't understand ALBA well enough to easily see extensions to the idea that are obvious to you, and I'm guessing others may be in a similar situation. (I'm guessing Jessica didn't see it, for example, or she wouldn't have said "ALBA competes with adversaries who use only learning" without noting that there's a straightforward extension that does more.) Can you write a post about this? (Or someone else, please jump in if you do see what the "straightforward way" is.)