I’ve come to realize that I don’t understand the argument that Artificial Intelligence will go foom as well as I’d like. That is, I’m not sure I understand why AI will inherently become massively more intelligent than humans. As I understand it, there are three points:
1. AI will be able to self-modify its structure. By assumption, AI has goals, so self-modification to improve its ability to achieve those goals will make it more effective.
2. AI thinks faster than humans because it thinks with circuits, not with meat. The processing speed of a computer is certainly faster than that of a human brain.
3. AI will not commit as many errors of inattention, because it will not be made of meat. Studies show humans make worse decisions when hungry, tired, or the like.
Are those the basic categories for the argument?