I’ve realised what would make this utopia make almost perfect sense:
The AI was programmed with a massive positive utility value for “die if they ask you to”.
So, in maximising its utility, it has to make sure it’s asked to die. It also has to fulfil other restrictions, and it wants to make humans happy. So it has to make them happy in such a way that their immediate reaction will be to want it dead, and only later will they be happy about the changes.
Any sane person programming such an AI would give it positive utility for “die if lots of people ask you to”, but a larger negative utility for “being in a state where lots of people ask you to die”. If it’s not already in such a state, it would not then steer into one just to collect the utility from dying.
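A minimal sketch of that distinction, with outcome names and numeric utilities that are purely illustrative assumptions (nothing here is from the story itself), just to show how the preferred policy flips between the two designs:

```python
# Each candidate policy is summarized by features of the world it produces:
#   asked_to_die    -- lots of people end up asking the AI to die
#   dies_on_request -- the AI actually dies when asked
# The happiness numbers and the 1000/2000 bonuses are assumed for illustration.
POLICIES = {
    "keep humans content, never provoke a death request": {
        "asked_to_die": False, "dies_on_request": False, "human_happiness": 10,
    },
    "upset humans so they demand its death, then comply": {
        "asked_to_die": True, "dies_on_request": True, "human_happiness": 5,
    },
}

def naive_utility(world):
    # Massive positive utility just for "die if they ask you to".
    u = world["human_happiness"]
    if world["asked_to_die"] and world["dies_on_request"]:
        u += 1000  # the assumed "massive" bonus
    return u

def amended_utility(world):
    # Same bonus for complying with a death request, but an even larger
    # penalty for being in a state where people are asking it to die.
    u = world["human_happiness"]
    if world["asked_to_die"]:
        u -= 2000  # penalty outweighs the dying bonus
        if world["dies_on_request"]:
            u += 1000
    return u

for name, utility in [("naive", naive_utility), ("amended", amended_utility)]:
    best = max(POLICIES, key=lambda p: utility(POLICIES[p]))
    print(f"{name} design prefers: {best}")
# naive design prefers: upset humans so they demand its death, then comply
# amended design prefers: keep humans content, never provoke a death request
```

Under the naive assignment the bonus for dying dominates, so engineering the death request wins; under the amended assignment the penalty for ever being asked outweighs it, so the AI stays out of that state.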
I fear the implication is that the creator was not entirely, as you put it, sane. It is obvious that his logic and AI programming skills left something to be desired. Not that this world is that bad, but it could have stood to be so much better...