The main point I am trying to make is that AGI risks cannot be deduced or theorised solely in abstract terms; they must be understood through rigorous empirical research on complex systems. If you view AI as an agent in the world, then it functions as a complex intervention. It may or may not act as its designer intended, it may or may not deliver the preferred outcomes, and it may or may not be acceptable to its users. More precisely, these are not binary properties: there are degrees to which it acts as intended, degrees to which it is acceptable, and so on, and the uncertainty in each of these parameters can be estimated through empirical research. This calls for careful empirical study and system-level understanding.
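To make the methodological point concrete, here is a minimal sketch of how the uncertainty in such parameters might be estimated from observational data. It treats each deployment as a trial scored on the three properties above and fits a simple Bayesian beta-binomial model to each; the counts, the binary scoring, and the choice of model are all illustrative assumptions of mine, not part of the original argument.

```python
from scipy import stats

# Hypothetical trial data for a deployed AI intervention, scored per
# deployment on three binary outcomes. All counts are illustrative
# assumptions, not results from any real study.
observations = {
    "acts as intended":           (168, 200),  # (successes, trials)
    "delivers preferred outcome": (142, 200),
    "acceptable to the user":     (181, 200),
}

for parameter, (successes, trials) in observations.items():
    # A uniform Beta(1, 1) prior updated with binomial evidence yields
    # a Beta posterior over the underlying rate for this parameter.
    posterior = stats.beta(1 + successes, 1 + trials - successes)
    mean = posterior.mean()
    lo, hi = posterior.interval(0.95)
    print(f"{parameter}: degree ~{mean:.2f}, "
          f"95% credible interval [{lo:.2f}, {hi:.2f}]")
```

Even this toy version illustrates the point: each property comes out as a degree with an uncertainty band rather than a yes-or-no verdict, and the bands narrow only as more empirical evidence accumulates.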