Human intelligence is a drastic jump from primate intelligence, but this didn’t require a drastic jump in “compute resources”, and it took comparatively little time in evolutionary terms.
Oh man, am I not convinced of this at all. Human intelligence seems to me to be only the result of (1) scaling up primate brains and (2) accumulating knowledge in the form of language, which in turn relied on (3) humans, and hominids in general, being exceptional at synchronized behavior and collective action (e.g., “charge!!!”). Modern primates besides humans are still exceptionally smart per synapse among the animal kingdom.
I agree that humans are not drastically more intelligent than all other animals. This makes the prospect of AI even scarier, in my opinion, since it shows how powerful accumulated progress is.
I believe that human-level intelligence is sufficient for an AI to be extremely dangerous if it can scale while maintaining self-alignment in the form of “synchronized behavior and collective action”. Imagine what a tech company could achieve if all employees had the same company-aligned goals, efficient coordination, in silico processing speeds, high-bandwidth communication of knowledge, etc. With these sorts of advantages, it’s likely game over before the AI even reaches human-level intelligence across the board.
Indeed. My commentary shouldn’t be taken as a reason to believe we’re safe, just as a reason to believe the sharpness of the curve isn’t quite as bad as one might have imagined.