I think you should have focused on TurnTrout's formalisation of power, which matches the intuitions behind "power" much better and shows which kinds of AIs we should expect to be power-seeking.
That work pretty much convinced me that agents which optimise some sort of expected value are going to seek power (something like preserving their ability to optimise any goal they wish).
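For readers who haven't seen it, the rough shape of the definition (as I remember it, so take the exact normalisation with a grain of salt) is

$$\mathrm{POWER}_{\mathcal{D}}(s,\gamma) \;=\; \frac{1-\gamma}{\gamma}\,\mathbb{E}_{R\sim\mathcal{D}}\!\left[V^{*}_{R}(s,\gamma)-R(s)\right],$$

i.e. a state has high power to the extent that, averaged over a distribution $\mathcal{D}$ of reward functions, an optimal agent starting from it can still attain high value. That is pretty much the "keep your options open so you can optimise any goal" intuition made precise.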