Say you came up with the most basic template for general intelligence that works given limited resources. If you wanted to apply this potential to improving your template, would that be a sufficient condition for it to take over the world? I don’t think so. If you didn’t explicitly tell it to do so, why would it?
The crux of the matter is that a goal isn’t enough to enable the full potential of general intelligence; you also need to explicitly define how to achieve that goal. General intelligence does not imply recursive self-improvement, only the potential for it, not the incentive. The incentive has to be given; it is not implied by general intelligence.
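To make the distinction between potential and incentive concrete, here is a minimal sketch, purely illustrative and not a model of any real system: the names `ToyAgent`, `Action`, and the action labels are all hypothetical. The point it illustrates is that an agent can hold a goal yet only ever plan over the actions it was explicitly given, so self-modification never enters its search unless someone puts it there.

```python
class Action:
    """A single option the agent is allowed to consider."""
    def __init__(self, name, helps_goal):
        self.name = name
        self.helps_goal = helps_goal


class ToyAgent:
    """Holds a goal, but only searches the action set it was explicitly handed."""
    def __init__(self, goal, allowed_actions):
        self.goal = goal
        self.allowed_actions = allowed_actions

    def plan(self):
        # The agent filters the actions it was given; nothing outside this
        # set, including rewriting itself, is ever on the table.
        return [a.name for a in self.allowed_actions if a.helps_goal]


agent = ToyAgent(
    goal="improve the template's performance",
    allowed_actions=[
        Action("tune_hyperparameters", helps_goal=True),
        Action("gather_more_data", helps_goal=True),
        # Note: no "rewrite_own_code" action exists here, so having the goal
        # above never leads the agent to consider self-improvement at all.
    ],
)
print(agent.plan())  # ['tune_hyperparameters', 'gather_more_data']
```

The sketch assumes a fixed, hand-given action set; the argument is simply that the goal alone does not conjure the self-improvement option into that set.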
Taking the far view, I think these thought patterns mostly serve to raise the possibility of a false understanding rather than to offer a coherent new one. That’s open for discussion.