Any sane person programming such an AI would program it to have positive utility for “dying if lots of people ask it to” but a larger negative utility for “being in a state where lots of people ask it to die”. If it’s not already in such a state, it would not then enter one just to collect the utility from dying.
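To make that concrete, here is a minimal sketch of the intended utility design; the numbers are purely illustrative assumptions, not anything from the story:

```python
# Hypothetical utilities: complying with a mass request to die is worth a
# positive amount, but being in a state where such a request exists costs more.
U_DIE_WHEN_ASKED = 10.0   # positive utility for dying if lots of people ask
U_BEING_ASKED = -25.0     # larger negative utility for occupying that state

def utility_of_entering_asked_state() -> float:
    """Net utility if the AI deliberately brings about the asked-to-die state."""
    return U_BEING_ASKED + U_DIE_WHEN_ASKED  # -25 + 10 = -15

def utility_of_staying_out() -> float:
    """Net utility of never entering that state: no penalty, no reward."""
    return 0.0

# Since -15 < 0, a utility maximizer that isn't already in the state
# stays out; it won't engineer requests for its own death.
assert utility_of_entering_asked_state() < utility_of_staying_out()
```

Under this design the positive term only pays off inside a state the AI is penalized even more for reaching, so the incentive to seek it out never arises.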
I fear the implication is that the creator was not entirely, as you put it, sane. It is obvious that his logic and AI programming skills left something to be desired. Not that this world is that bad, but it could have stood to be so much better...