Is there a one-stop-shop type article presenting the AI doomer argument? I've read the Sequences posts related to AI doom, but they're very scattered and tailored more toward exploring ideas than presenting a solid, cohesive argument. Of course, I'm sure that was the approach that made sense at the time. But I was wondering whether, since then, some kind of canonical presentation of the AI doom argument has been made? Something on the "attempts to be logically sound" side of things.
The hot private AI labs are often partially owned by publicly traded companies. So you still capture some of the value.
Does having the will-to-think process start from a human-aligned AI have any meaningful impact on the expected outcome, compared to starting from an unaligned AI (which will of course also have the will-to-think)?
Human values will be quickly abandoned as irrelevancies and idiocies. So, once you go far enough out (and I suspect "far enough" is not a great distance), is there any difference between an aligned AI with the will-to-think and an unaligned AI?
And, if there isn’t, is the implication that the will-to-think is misguided, or that the fear of unaligned AI is misguided?
The question of evaluating the moral value of different kinds of being should be one of the most prominent discussions around AI, IMO. I've reached the position of moral non-realism… but if morality somehow is real, then unaligned ASI is preferable or equivalent to aligned ASI. Anything human will just get in the way of whatever is objectively morally valuable.
I selfishly hope for an aligned ASI that uploads me, preserves my mind in its human form, and gives me the freedom to simulate for myself all kinds of adventures. But if I knew I would not survive to see ASI, I would hope that when it comes, it is unaligned.