I suspect your comments would be better received if you split them up, organized them a bit more, made your central points clearer, gave references for controversial claims, and defined your terms (e.g. it isn't at all clear what you mean by the bleeding-heart liberal who will kill Hitler but not bomb Nagasaki).
As to the actual content: your description of what a "benevolent AGI" would be misses many of the central issues. You place a lot of emphasis on "empathy," but even too much of that could be a problem. Consider an AI that decides it needs to reduce human suffering, and so finds a way to instantaneously kill all human life. And even building an AI that can model something as complicated as "empathy" in the way we want it to is already an insanely tough task.