The first several points you make seem very weak to me; however, the post gets better starting with the section on embodied cognition.
Embodied cognition seems to me like a problem for programmers to overcome, not an argument against FOOM. It does, however, serve as a good basis for your point about constrained resources: I suspect that with sufficient time, leeway, and access to AIM, an AGI could become an extremely effective social manipulator. But this seems like the only avenue through which it could obviously get and process responses easily, without needing hardware advances that would be difficult to acquire. Pure text messages don’t require much memory or bandwidth, and there are vast numbers of people accessible to interact with at any given time, so it would be hard to restrict an AI’s ability to learn to talk to people; still, talking to people is an extremely limited way of causing existential risk.
Your paperclip-maximizer argument is one I have thought through before, but I would emphasize a point you seem to neglect. The set of mind designs that would take over the universe seems dense to me in the set of all generally intelligent mind designs, but not necessarily dense in the set of mind designs that humans would think to program. I don’t think humans would necessarily need to be capable of taking over the world themselves to create a monster AI, but I do think a monster AI implies something about its programmer which may or may not be compatible with human psychology.
Overall, your post is decent and the subject matter is interesting. I’m not finding myself persuaded, but my disagreements feel more empirical than they did before reading, which is good!