I agree that no one else has solved the problem or made much progress. I object to Paul’s approach here because it couples the value problem more closely to other problems in architecture and value stability. I would much prefer holding off on attacking it for the moment rather than taking this approach, which, to my reading, takes for granted that the problem is not hard and rests further work on top of it. Holding off at least leaves room for nearby pieces to be carved out and give a better idea of what properties a solution would have; this approach seems to rest on the solution looking vastly simpler than I think it is.
I also have a general intuitive prior that reinforcement learning approaches are untrustworthy and are “building on sand”, but that’s neither precise nor persuasive, so I’m not writing it up except on questions like this one where it’s more solid. I’ve put much less work into this field than Paul or others have, so I don’t want to challenge things except where I’m confident.
(Yes, same person.)