What (human or not) phenomena do you think are well explained by this model? I tried to think of some for 5 minutes, and the best I came up with was the strong egalitarianism among hunter-gatherers. I don't actually know that much about hunter-gatherers, though. In the modern world, one area where "high IQ" people do worse is sex, but that doesn't seem to fit your model.
Human-human: Various historical and current episodes of smarter-than-average populations being persecuted or discriminated against, such as intellectuals, "capitalists" (i.e., people labeled as such), and certain ethnic groups. (I'm unsure my model is actually a good explanation of such phenomena, but this is mainly what I was trying to explain.)
Human-AI: Many people being reluctant to believe that it's a good idea to build unaligned artificial superintelligence and then constrain it with a system of laws and/or social norms (which some people, like Robin Hanson and Matthew Barnett, have proposed). Aside from the risk of violent overthrow, any such system is bound to have loopholes, which the ASI will be more adept at exploiting, yet this very adeptness potentially makes the ASI worse off (less likely to exist in the first place), similar to what happens in my model.