I think this helped me understand you a bit better—thank you.
Let me try paraphrasing this:
> Humans are our best example of a sort-of-general intelligence. And humans have a lazy, satisficing, ‘small-scale’ kind of reasoning that is mostly only well suited to activities close to their ‘training regime’. Hence AGIs may be the same—and in particular, if AGIs are trained with Reinforcement Learning and heavily rewarded for following human intentions, this may be a likely outcome.
Is that pointing in the direction you intended?