[Question] Learning Over Time for AI and Humans and Rationality
https://www.lesswrong.com/posts/pLZ3bdeng4u5W8Yft/let-s-talk-about-convergent-rationality-1
The linked post makes the observation that natural and artificial intelligences learn over time. Given the post’s title, that got me thinking about the two learning settings and how each might apply to concepts of rationality.

AI will presumably not face the same lifetime limitation that humans currently do. The implication is that learning will differ between the two settings. Human learning depends heavily on prior generations and on various social and group dynamics.

But I’ve generally thought of rationality (as generally understood) as bound to a single mind, as it were. Applying many rules about rational thought or behavior to aggregates, such as “the mob,” seems largely a misapplication.

I wonder whether a learning process built on socially transmitted knowledge (including the potential for mis-knowledge transmission) performs better, both in general learning and in developing rational thought processes, or whether the AI’s single, long-lived “mind” has some advantage.

I might also wonder how this applies to questions such as the one often asked about why China did not develop science.

I suspect various aspects of this thought have been discussed here, or at least that existing posts could inform it, but there are a lot of posts to search.