Point well taken that technological development and global dominance were achieved by human cultures, not individual humans. But I claim that it is obviously a case of motivated reasoning to treat this as a powerful blow against the arguments for fast takeoff. A human-level AI (able to complete any cognitive task at least as well as you) is a foom risk unless it has specific additional handicaps. These might include:
- For some reason it needs to sleep for a long time every night.
- Its progress gets periodically erased due to random misfortune or enemy action.
- It is locked into a bad strategic position, such as having no cognitive privacy from overseers.
- It can’t copy itself.
- It can’t gain more compute.
- It can’t reliably modify itself.
I’ll be pretty surprised if we get AI systems that can do any cognitive task that I can do (such as making long-term plans and spontaneously correcting my own mistakes without them being pointed out to me) yet can only improve themselves very slowly. It really seems like, if I were able to easily edit my own brain, I would be able to increase my abilities across the board, including my ability to increase my abilities.
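To make the compounding intuition concrete, here is a toy numerical sketch (my own illustration, with arbitrary made-up rates, not something from the original exchange): it compares an improver whose per-step improvement rate is fixed with one whose improvement rate is itself improved each step.

```python
# Toy model of recursive self-improvement (illustrative only; the
# rates below are arbitrary assumptions, not estimates of anything real).

def simulate(steps: int, recursive: bool) -> float:
    capability = 1.0
    rate = 0.05  # fraction of current capability gained per step
    for _ in range(steps):
        capability += rate * capability
        if recursive:
            # the ability to improve is itself improved a little each step
            rate += 0.01 * rate
    return capability

print(f"fixed-rate improver after 100 steps:     {simulate(100, recursive=False):.0f}x")
print(f"recursive self-improver after 100 steps: {simulate(100, recursive=True):.0f}x")
```

The fixed-rate improver grows exponentially (roughly 130x after 100 steps), while the recursive one, whose growth rate itself compounds, ends up a few thousand times past its starting point, and the gap between the two widens without bound as the horizon lengthens.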
My understanding is that smart human engineers have so far failed to make capability-increasing edits to the “brains” of current AI systems. If AI brains remain the same style, I don’t think they’ll be any easier to edit when they’re human-level.