I think that coups d'état and rebellions are nearly common enough that they could be called the default, though they are certainly not inevitable.
They do happen; nevertheless, I think the default result of a rebellion or coup throughout history has been simply a dictatorship or monarchy with a different dictator. And such regimes do have a certain average lifetime (which may be enough for our purposes). Moreover, the AI oppressor has huge systemic advantages over the mere human dictator: remote access to everyone's mind, complete control over communication, and so on.
Not to mention that the default result of rebellion is failure. (Figure from https://www.journalofdemocracy.org/articles/the-future-of-nonviolent-resistance-2/)
Just to be sure I’m following you: When you are talking about the AI oppressor, are you envisioning some kind of recursive oversight scheme?
I assume here that your spoof is arguing that since we observe stable dictatorships, we should raise our probability that we, too, will remain stable in our position as dictators of a largely AI-run economy. (I recognize that it can be interpreted in other ways.)
We expect to have two advantages over the AIs: we will be able to read their parameters directly, and we will be able to read any communication we wish. But raw access alone is clearly insufficient, since no human could sift through the resulting mountains of data, so we will need "AI Oppressors" to help us interpret them.
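To make the shape of that scheme concrete, here is a minimal sketch of such an oversight hierarchy. Everything in it (the `Agent` class, the `flag_suspicious` heuristic, the escalation format) is a hypothetical illustration of the idea, not anyone's actual proposal: each "AI Oppressor" reads its subordinates' raw parameters and messages and escalates only condensed flags to the humans above.

```python
# Minimal sketch of the oversight hierarchy described above. All names here
# (Agent, flag_suspicious, the escalation format) are hypothetical
# illustrations, not a real proposal or library.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    parameters: list[float]                             # stand-in for directly readable weights
    messages: list[str] = field(default_factory=list)   # communications we can intercept

def flag_suspicious(agent: Agent) -> list[str]:
    """Toy anomaly check; a real overseer would need far subtler criteria."""
    return [m for m in agent.messages if "coup" in m.lower()]

def oversee(overseer: Agent, subordinates: list[Agent]) -> list[str]:
    """Each AI Oppressor reads its subordinates' raw data and escalates
    only condensed flags, so the humans above never face the raw volume."""
    return [f"{overseer.name} flags {sub.name}: {msg}"
            for sub in subordinates
            for msg in flag_suspicious(sub)]

# Humans sit at the top and see only the condensed reports.
workers = [
    Agent("worker-1", parameters=[0.1], messages=["routine status update"]),
    Agent("worker-2", parameters=[0.2], messages=["planning a coup at noon"]),
]
oppressor = Agent("oppressor-1", parameters=[0.3])
print(oversee(oppressor, workers))
# -> ['oppressor-1 flags worker-2: planning a coup at noon']
```

The two objections below bite exactly at this sketch's weak points: it simply trusts `flag_suspicious` (objection 1), and it offers nothing against a subordinate smart enough to phrase its plotting innocuously (objection 2).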
Two obvious objections:

1. How do we ensure the alignment of the AI Oppressors themselves?
2. Proper oversight of an agent more capable than yourself seems to become dramatically harder as the capability gap increases.