That paper makes a convincing case that the ‘generic’ AI (some distribution of AI motivations weighted by our likelihood of developing them) will most prefer outcomes that rank low in our preference ordering, i.e. the free energy and atoms needed to support life as we know it, or would want it, will get reallocated to something else. That means an AI given arbitrary power (e.g. because of a very hard takeoff, easy bargaining among AIs but not humans, or other reasons) would be lethal. However, the situation seems different, and more sensitive to initial conditions, when we consider AIs with limited power that must trade off the chance of successful conquest against the risk of failure and retaliation. I’m working on a write-up of those issues.