If the AI gets even a hint that there are objections, it has to kick off a serious review of the plan. It will ask everyone (it is an AI, after all: it can do that even if there are 100 billion people on the planet). If anyone responds that they object to the plan, that is the end of the story: it does not force anyone to go through with it. In other words, it is a fundamental feature of the checking code that it vetoes any plan under that circumstance. Notice, by the way, that I have generalized “consulting the programmers” to “consulting everyone”. That is an obvious extension, since the original programmers were only proxies for the will of the entire species.
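The rule is simple enough to pin down as code. Here is a minimal sketch in Python of the unanimous-veto check just described; `Plan`, `Person`, `objects_to`, and `plan_is_permitted` are hypothetical names invented for illustration, not anything from a real system.

```python
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Plan:
    description: str


class Person(Protocol):
    # Hypothetical interface: each person can be asked whether
    # they object to a given plan.
    def objects_to(self, plan: Plan) -> bool: ...


def plan_is_permitted(plan: Plan, everyone: Iterable[Person]) -> bool:
    """Unanimous-consent check: one objection from anyone vetoes the plan."""
    for person in everyone:
        if person.objects_to(plan):
            return False  # a single objection ends the story: hard veto
    return True  # no objections from anyone: the plan may proceed
```

Note that a single objection anywhere in the loop forces the veto; that hard-veto property is exactly what the reply below takes issue with.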
This assumes that no human being would ever try to veto everything simply to spite everyone else. A process for determining AGI volition that is even more overconstrained than a homeowners’ association meeting, and even harder to get anything through, sounds to me like a bad idea.