I think a critical issue is that disempowerment implies loss of control we currently have, but what that control actually amounts to is poorly defined and, unfortunately, left implicit.
If we concretize the idea of control, the strongest version is that if humanity unanimously chooses some action, that action will occur. That is a bit overstated, but the obvious weak version is already untrue: if a majority of citizens in a country want some action taken, say, for a specific company to turn off a datacenter and stop running a given AI model, in a liberal democracy that majority cannot reliably ensure it happens, because protections and processes stand in the way. In fact, the intermediate version is probably untrue as well; even a supermajority cannot reliably dictate this type of action, and certainly cannot do so quickly.
Based on this, I think critics of the gradual disempowerment argument have a reasonable point: this loss of control isn't new, and it isn't obviously being accelerated by AI beyond the extent to which AI drives wealth or power concentration. Companies already ignore laws, power is already concentrated in a few hands, and to date this has had little to do with AI.
I’m guessing we don’t actually strongly disagree here, but unless you’re broadening (and shortening) “information processing and person modelling technologies” to just “technologies”, this has only been a trend for a couple of decades at most; and even with that broadening, it has only held recently, under some very narrow circumstances in the West.