Human-human: Various historical and current episodes of smarter-than-average populations being persecuted or discriminated against, such as intellectuals, “capitalists” (i.e., people labeled as such), and certain ethnic groups. (I’m unsure my model is actually a good explanation of such phenomena, but this is mainly what I was trying to explain.)
Human-AI: Many people being reluctant to believe that it’s a good idea to build unaligned artificial superintelligence and then constrain it with a system of laws and/or social norms (as some people, like Robin Hanson and Matthew Barnett, have proposed). Aside from the issue of violent overthrow, any such system is bound to have loopholes, which the ASI will be more adept at exploiting, yet this adeptness potentially makes the ASI worse off (less likely to exist in the first place), similar to what happens in my model.