I agree that humans are not drastically more intelligent than all other animals. This makes the prospect of AI even scarier, in my opinion, since it shows how powerful accumulated progress is.
I believe that human-level intelligence is sufficient for an AI to be extremely dangerous if it can scale while maintaining self-alignment in the form of “synchronized behavior and collective action”. Imagine what a tech company could achieve if all of its employees had the same company-aligned goals, efficient coordination, in silico processing speeds, high-bandwidth communication of knowledge, etc. With these sorts of advantages, it’s likely game over before such a system even reaches human-level intelligence across the board.
indeed. my commentary should not be seen as reason to believe we’re safe—just reason to believe the curve sharpness isn’t quite as bad as it could have been imagined to be.