I get the feeling that I’m still missing the point somehow, and that Yudkowsky would say we would still have a big chance of doom even if our algorithms were hand-built by programmers whose algorithms always did exactly what they intended, even when combined with their other algorithms.
I would bet against Eliezer being pessimistic about this, if we are assuming the algorithms are deeply-understood enough that we are confident that we can iterate on building AGI. I think there’s maybe a problem with the way Eliezer communicates that gives people the impression that he’s a rock with “DOOM” written on it.
I think the pessimism comes from there being several currently-unsolved problems that get in the way of “deeply-understood enough”. In principle it’s possible to understand these problems and hand-build a safe and stable AGI; it just looks a lot easier to hand-build an AGI without understanding them all, and easier still to train an AGI without even thinking about them.
I call most of these “instability” problems: the AI might, for example, learn more, or think more, or self-modify, and each of these can shift the context in a way that causes an imperfectly designed AI to pursue unintended goals.
Here are some of the problems in that cluster: optimization daemons, ontology shifts, translating between our ontology and the AI’s internal ontology in a way that generalizes, Pascal’s mugging, reflectively stable preferences & decision algorithms, reflectively stable corrigibility, and correctly estimating future competence under different circumstances.
Some may be resolved by default along the way to understanding how to build AGI by hand, but it isn’t clear. Some are kinda solved already in some contexts.
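To make the flavor of these instability problems concrete, here’s a minimal toy sketch of one of them: a fixed proxy objective whose optimum moves once the agent’s option set grows, say after it learns how to do something its designers never tested. Everything in it (the environment, the options, the proxy_score function) is invented for illustration and not taken from any real system.

```python
# Toy illustration only: a fixed objective that looks fine on the options the
# designers tested, but points somewhere unintended once the agent's option
# set grows (the "learn more / think more" context shift described above).

def proxy_score(outcome):
    # The designers' proxy: "forms filed" as a stand-in for "the office runs well".
    return outcome["forms_filed"]

# The options that existed while the design was being checked.
initial_options = [
    {"name": "file forms normally", "forms_filed": 40, "office_ok": True},
    {"name": "do nothing",          "forms_filed": 0,  "office_ok": True},
]

# An option that only becomes reachable after the agent learns more.
# "office_ok" marks what the designers actually cared about; the proxy ignores it.
later_options = initial_options + [
    {"name": "flood the system with junk forms", "forms_filed": 10_000, "office_ok": False},
]

def act(options):
    # The agent just optimizes the proxy it was handed.
    return max(options, key=proxy_score)

print(act(initial_options)["name"])  # "file forms normally": looks aligned
print(act(later_options)["name"])    # "flood the system with junk forms": same objective, new context
```

Nothing about the objective changed between the two calls; only the set of reachable options did, which is the sense in which an imperfectly specified objective is unstable under the AI learning or thinking more.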
I think we’re both saying the same thing here, except that the thing I’m saying implies that I would bet for Eliezer being pessimistic about this. My point was that I have a lot of pessimism that people would code something wrong even if we knew what we were trying to code, and this is where a lot of my doom comes from. Beyond that, I think we don’t know what it is we’re trying to code up, and you give some evidence for that. I’m not saying that if we knew how to make good AI, it would still fail if we coded it perfectly. I’m saying we don’t know how to make good AI (even though we could in principle figure it out), and also that current industry standards for coding things would not get it right the first time even if we knew what we were trying to build. I feel like I basically understand the second thing, but I don’t have any gears-level understanding of why it’s hard to encode human desires, beyond a bunch of monkey’s-paw intuitions about what goes wrong if you come up with creative, disastrous ways to accomplish what seem like laudable goals.
I don’t think Eliezer is a DOOM rock, although I think a DOOM rock would be about as useful as Eliezer in practice right now, because everyone making capability progress has doomed alignment strategies. My model of Eliezer’s doom argument for the current timeline is approximately “programming smart stuff that does anything useful is dangerous, we don’t know how to specify smart stuff that avoids that danger, and even if we did, we seem to be content to train black-box algorithms until they look smarter without checking what they do before we run them.” I don’t understand one of the steps in that funnel of doom as well as I would like. I think that in a world where people weren’t doing the obvious doomed thing of making black-box algorithms that are smart, he would instead have a last step in the funnel of “even if we knew what we need a safe algorithm to do, we don’t know how to write programs that do exactly what we want in unexpected situations,” because that is my obvious conclusion from looking at the software landscape.
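To illustrate why that last step seems like the natural one to me, here is a deliberately tiny, made-up example of a program that does exactly what it was written to do and still not what we want in an unexpected situation (the scenario and names are invented; the NaN comparison behavior is standard Python semantics):

```python
# Spec: "brake whenever the measured speed exceeds the limit."
# The guard below implements that spec literally and passes every test
# its designers thought to write.

SPEED_LIMIT = 100.0

def control(speed_reading: float) -> str:
    if speed_reading > SPEED_LIMIT:
        return "brake"
    return "cruise"

print(control(90.0))          # "cruise" -- expected
print(control(130.0))         # "brake"  -- expected
print(control(float("nan")))  # "cruise" -- a garbage sensor reading compares as
                              # "not greater than the limit", so the literal spec
                              # says keep cruising
```

The program is doing exactly what we wrote; what we wrote just wasn’t exactly what we wanted once the situation left the envelope we had in mind, and that gap is what I expect current industry practice to ship at scale.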