Regarding the arguments for doom: they are quite logical, but they don't carry the same confidence as, say, the argument that if you are in a burning, collapsing building, your life is in peril. There are too many profound unknowns bearing on the consequences of superhuman AI to know that the default outcome really is the equivalent of a paperclip maximizer.
However, I definitely agree that it is a very logical scenario, and also that the human race (or the portion of it that works on AI) is taking a huge gamble by pushing towards superhuman AI without making it a central priority that this superhuman AI be 'friendly' or 'aligned'.
In that regard, I keep saying that the best plan I have seen is June Ku's "meta-ethical AI", which falls into the category of AI proposals that construct an overall goal by aggregating idealized versions of the current goals of all human individuals. I want to make a post about it, but I haven't had time… So I would suggest: check it out, and see whether you can contribute technically, critically, or by spreading awareness of this kind of proposal.