I mostly agree, and I want to echo ‘tailcalled’ that there’s another layer of intelligence that builds upon humans: civilization, or human culture (though surely there’s some merit to our “architecture”, so to speak!). We’ve found that you can teach machines essentially any task (because of Turing completeness). That doesn’t mean a single machine, by itself, warrants being called a ‘universal learner’; such universality comes from the algorithms running on that machine. I think there’s a degree of universality inherent to animals and hence to humans as well. We can learn to predict and plan very well from scratch (many animals learn with little or no parenting required), we are curious to learn more, we can memorize and recall things from the past, and so on.
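To make that point concrete, here is a minimal sketch (my own illustration, not anything from the original discussion): a fixed toy interpreter whose behaviour is entirely determined by the instruction list it is handed. The machine never changes; what it can do is decided by the program, which is where the universality lives.

```python
# Illustrative sketch: a fixed "machine" (a tiny interpreter) whose
# capabilities come entirely from the program it is given.
# The instruction set here is invented for illustration only.

def run(program, registers):
    """Interpret a toy instruction set:
    ('inc', r)        -> increment register r
    ('dec', r)        -> decrement register r
    ('jnz', r, addr)  -> jump to addr if register r is nonzero
    ('halt',)         -> stop and return the registers
    """
    pc = 0
    while True:
        op = program[pc]
        if op[0] == 'inc':
            registers[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':
            registers[op[1]] -= 1
            pc += 1
        elif op[0] == 'jnz':
            pc = op[2] if registers[op[1]] != 0 else pc + 1
        elif op[0] == 'halt':
            return registers

# One possible "skill" taught to the machine: add b into a.
add = [
    ('jnz', 'b', 2),  # 0: if b != 0, go to the loop body
    ('halt',),        # 1: otherwise done
    ('dec', 'b'),     # 2: move one unit from b ...
    ('inc', 'a'),     # 3: ... into a
    ('jnz', 'b', 2),  # 4: repeat while b is nonzero
    ('halt',),        # 5: done
]

print(run(add, {'a': 2, 'b': 3}))  # {'a': 5, 'b': 0}
```

Swapping in a different instruction list gives the same interpreter a completely different competence, which is the sense in which the program, not the hardware, is the universal learner here.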
However, I think the perspective of our integration with society is important. We probably would not reach remotely similar levels of intelligence (in the sense of the ability to solve problems, act in the world, and communicate) without instruction—much like the instruction Turing machines receive when programmed. And this instruction has been refined over many generations, through other improvement processes (a kind of ‘quasi-genetic’ selection among cultures for better teaching methods and better outcomes, and of course teachers deliberately thinking about what to teach and how to teach it best).
There’s an insight that our brain is universal simply because we can probably follow and memorize any algorithm (i.e., any explicit set of instructions) that fits in our memory. But our culture also equips us with more powerful forms of universality, where we detect the most important problems, solve them, and evolve as a civilization.
I think the most important form of universality is that of meaning and ethics: discovering what is meaningful, what activities we should pursue, what is and isn’t ethical, and what makes a good life. I think we’re still not on very firm ground with this kind of universality, let alone the machines we create.