1. The paperclip maximizer oversimplifies AI motivations and neglects the potential for emergent ethics in advanced AI systems.
2. The doomer narrative often overlooks the possibility of collaborative human-AI relationships and the potential for AI to develop values aligned with human interests.
Because it is a simple, entry-level example of unintended consequences. There is a post about emergent phenomena, so ethics will definitely emerge; the problem lies in the probability (not in overlooking the possibility) that the AI's behavior will happen to be to our liking. The slim chance of that comes from the size of Mind Design Space (this post has a pic) and from the tremendous difference between the man-hours of very smart humans invested in increasing capabilities and the man-hours of very smart humans invested in alignment ("Don't Look Up - The Documentary: The Case For AI As An Existential Threat" on YouTube, around 5:45, is about this difference).
3. Current AI safety research and development practices are more nuanced and careful than the paperclip maximizer scenario suggests.
They are not. We are long past simple entry-level examples, and AI safety (as practiced by the Big Players) has gotten worse, even if it looks more nuanced and careful. Some time ago, AI safety meant something like "how to keep an AI contained in its air-gapped box during the value-extraction process"; now it means something like "is it safe for the internet? And now? And now? And now?". So all the differences in practices are overshadowed by the complexity of the new task: make your new AI more capable than competing systems and safe enough for the net. AI safety problems have gotten more nuanced too.
There were posts about Mind Design Space by Quintin Pope.
Roman V. Yampolskiy has a paper ("The Universe of Minds", 1 Oct 2014). I think it should be mentioned here.