This still feels like instrumentality. I guess the addition is that it’s a sort of “when all you have is a hammer” situation: even when the optimal strategy for a problem doesn’t involve seeking power (assuming such a problem exists; really I’d say the question is what the optimal trade-off is between seeking power and using it), the AI would be liable to err on the side of seeking too much power, simply because power-seeking is such a commonly successful strategy that the AI is biased toward it.