If the heuristics are optimized for “be able to satisfy requests from humans” and those requests sometimes require long-term planning, then the skill will develop. If it’s only good at satisfying simple requests that don’t require planning, in what sense is it superintelligent?
Yeah, that statement is wrong. I was trying to make a more subtle point: an AI that learns long-term planning on a shorter time-frame is not necessarily going to generalize to longer time-frames (but in the context of superintelligent AIs capable of doing human-level tasks, I do think it will generalize, so that point is kind of irrelevant). I agree with Rohin’s response.