I am not convinced AIs will avoid fighting each other for resources. If they are not based on human minds as whole brain emulations (WBE), then we have less reason to expect they will value the preservation of themselves or other agents. If they are based on human minds, we have lots of good reasons to expect that they will value some things above self-preservation. I am not aware of any mechanism that would preclude a Thucydides' Trap style scenario from taking place.
It also seems highly likely that AIs will be employed for enforcing property rights, so even in the case where bandit-AIs prefer to target humans, conflict with some type of AI seems likely in a rising tide scenario.
Yeah. I was trying to show that humans don’t fare well by default even in a peaceful “rising tide” scenario, but in truth there will probably be more conflict, where AIs protecting humans don’t necessarily win.
I do think there is still a difference in strategy, though. In the foom scenario you want to keep the number of key players, or people who might become key players, small.
In the non-foom scenario you face an unhappy compromise: trying to avoid too many accidents while building up defenses early, versus the fact that practically everyone eventually becomes a key player and needs to know how to handle AGI.
I didn’t know that!