if you can’t even answer the kinds of simple problems posed within the paper… then you must be very far off from being able to build a stable self-modifying AI.
For the record, I personally think this statement is too strong. Instead, I would say something more like what Paul said:
Working on [the Löbian obstacle] directly may be less productive than just trying to understand how reflective reasoning works in general—indeed, folks around here definitely try to understand how reflective reasoning works much more broadly, rather than focusing on this problem. The point of this post is to state a precise problem which existing techniques cannot resolve, because that is a common technique for making progress.