> “AI” around here usually refers to the class of well-designed programs.
Define “well-designed”.
> ...you cannot rely on the supposed “obviousness” of morality to get your AI to self-modify into a desirable state
Huh? I never claimed (nor do I believe) anything like the obviousness of morality. Of course human terminal values are not an attractor in goal space. Absent other considerations, there is no reason to think that an evolving AI would converge on values that maximize human happiness. Yes, unFriendly AI can be very dangerous; I never said otherwise.