No. There are two bad assumptions in your counterexample.
They are:
1. Human psychopaths are above the certain point of intelligence that I was talking about.
2. Human psychopaths are sufficiently long-lived for the consequences to be severe enough.
Hmmmm. #2 suggests that I probably didn't make the importance of the length of interaction clear enough.
You also appear to assume that my argument is that the AGI fears detection of its unfriendly behavior and whatever consequences humanity can apply. Humanity CANNOT apply sufficient negative consequences to a sufficiently powerful AGI. The severe consequences are all opportunity costs: by incurring them, the AGI is sub-optimal and therefore less intelligent than it could be.
What sort of opportunity costs?
The AI can simulate humans if it needs them, for a lower energy cost than keeping the human race alive.
So, why should it keep the human race alive?