217/PDV (I assume you’re the same person?), I agree with much of what you wrote, but do you have your own ideas for how to achieve Friendly AI? It seems like most of the objections against Paul’s ideas also apply to other people’s (such as MIRI’s). The facts that humans aren’t benign (or can’t be determined to be benign) under a sufficiently large set of environments/inputs, suffer from value drift, and have unknown/unpatchable security holes all pose similar problems for CEV, for instance, and nobody has proposed a plausible way to solve them, AFAIK.
In a way, I guess Paul has actually done more to explicitly acknowledge these problems than just about anyone else, even if I think (as you do) that he is too optimistic about the prospect of solving them using the ideas he has sketched out.
(Yes, same person.)
I agree that no one else has solved the problem or made much progress. I object to Paul’s approach here because it couples the value problem more closely to other problems of architecture and value stability. I would much prefer holding off on attacking it for the moment over this approach, which, to my reading, takes for granted that the problem is not hard and rests further work on top of it. Holding off at least leaves room for nearby pieces to be carved out, giving a better idea of what properties a solution would have; this approach seems to rest on the solution looking vastly simpler than I think it is.
I also have a general intuitive prior that reinforcement learning approaches are untrustworthy and “building on sand”, but that intuition is neither precise nor persuasive, so I’m not writing it up except on questions like this one, where it’s more solid. I’ve put much less work into this field than Paul or others, so I don’t want to challenge things except where I’m confident.