I think we’re both saying the same thing here, except that what I’m saying implies I would bet on Eliezer being pessimistic about this. My point was that I have a lot of pessimism that people would code something wrong even if we knew what we were trying to code, and this is where a lot of my doom comes from. Beyond that, I think we don’t know what it is we’re trying to code up, and you give some evidence for that. I’m not saying that if we knew how to make good AI, it would still fail even if we coded it perfectly. I’m saying we don’t know how to make good AI (even though we could in principle figure it out), and also that current industry standards for coding things would not get it right the first time even if we knew what we were trying to build. I feel like I basically understand the second thing, but I don’t have any gears-level understanding of why it’s hard to encode human desires, beyond a bunch of intuitions from monkey’s-paw scenarios that go wrong if you try to come up with creative, disastrous ways to accomplish what seem like laudable goals.
I don’t think Eliezer is a DOOM rock, although I think a DOOM rock would be about as useful as Eliezer in practice right now, because everyone making capability progress has doomed alignment strategies. My model of Eliezer’s doom argument for the current timeline is approximately: “programming smart stuff that does anything useful is dangerous; we don’t know how to specify smart stuff that avoids that danger; and even if we did, we seem content to train black-box algorithms until they look smarter, without checking what they do before we run them.” I don’t understand one of the steps in that funnel of doom as well as I would like. I think that in a world where people weren’t doing the obviously doomed thing of making black-box algorithms that are smart, he would instead have a last step in the funnel of “even if we knew what we need a safe algorithm to do, we don’t know how to write programs that do exactly what we want in unexpected situations,” because that is my obvious conclusion from looking at the software landscape.