The problem with this argument is that it ignores a unique feature of AIs—their copiability. It takes ~20 years and O($300k) to spin up a new human worker. It takes ~20 minutes to spin up a new AI worker.
So in the long run, for a human to do a task economically, they need not just some comparative advantage but a comparative advantage large enough to cover the massive cost differential in “producing” a new worker.
The better analogy here is engines. I would argue that a big factor in the near-total replacement of horses by engines is not so much that engines are uniformly 100x better than horses at everything, but that engines can be mass-produced. In fact, I think the claim that engines are better than horses by the same margin at every horse-task is obviously false if you think about it for two minutes. But any time there’s a niche where engines are even slightly better than horses, we can just increase production of engines more quickly and cheaply than we can increase production of horses.
Economic concepts such as comparative advantage tend to assume, for ease of analysis, a fixed quantity of workers. When you are talking about human workers in the short term, that is a reasonable simplifying assumption. But it leads you astray when you try to use these concepts to think about AIs (or engines).
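To make the fixed-supply point concrete, here is a minimal toy sketch. All of the productivity and replication-cost numbers are invented for illustration (they are my assumptions, not figures from the argument above): under the classic fixed-quantity assumption the human keeps a comparative advantage and stays employed, but once you price in the cost of producing an additional worker, every marginal unit of demand goes to whichever worker is cheapest to copy.

```python
# Toy sketch (invented numbers) of how comparative advantage stops protecting
# the scarcer worker once the other side's supply is cheap to expand.

# Output per hour on two tasks; the AI is assumed modestly better at both.
productivity = {
    "human": {"task_A": 10, "task_B": 2},
    "ai":    {"task_A": 12, "task_B": 3},
}

# Classic Ricardian view, holding the number of workers fixed:
# the human's opportunity cost of task_A (in units of task_B) is 2/10 = 0.2,
# the AI's is 3/12 = 0.25, so the human retains a comparative advantage in
# task_A and both sides gain from specialization and trade.
human_opp_cost_A = productivity["human"]["task_B"] / productivity["human"]["task_A"]
ai_opp_cost_A = productivity["ai"]["task_B"] / productivity["ai"]["task_A"]
print(f"opportunity cost of task_A: human={human_opp_cost_A}, ai={ai_opp_cost_A}")

# Now drop the fixed-supply assumption. Hypothetical cost of "producing" one
# more worker of each kind (round numbers echoing the ~$300k vs. cheap-copy gap):
cost_of_new_worker = {"human": 300_000, "ai": 100}

def cheapest_supplier(task: str) -> str:
    """Whose supply is cheapest to expand per unit of extra output on this task?"""
    return min(productivity, key=lambda w: cost_of_new_worker[w] / productivity[w][task])

for task in ("task_A", "task_B"):
    print(f"{task}: marginal demand is met by adding more {cheapest_supplier(task)} workers")

# Result: once copies are cheap, every marginal unit of demand (even on the task
# where the human holds the comparative advantage) is met by spinning up another
# AI, because $100 per copy spread over slightly higher productivity beats
# $300k per worker spread over slightly lower productivity.
```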
One of the most common forms of whataboutism is “You criticize X, but other people vaguely politically aligned with you failed to criticize Y” (assume, for the sake of argument, that X and Y are distinct but similar wrongs).
The problem with this move is that the only possible sincere answers are necessarily unsatisfying, and it’s hard to gauge their sincerity. Here are what I see as the basic possibilities:
1. Y and X are equally bad, and my allies are wrong about this [but what are you gonna do about it?]
2. Y is bad, but X is genuinely worse because of …. (can sound like a post hoc justification)
3. Y and X are equally bad, but I still support my side because the badness of Y is outweighed by the goodness of A, B, etc.
4. Y is actually not bad because of ….
5. You are right, Y is terrible, and I abandon my allies [and join … what? If Y is disqualifying, X surely is too...]
The PCC is a lot more valid when it’s actually the same person taking inconsistent positions on X and Y. Otherwise, your actual interlocutor might not be inconsistent at all, but they have no plausible way of demonstrating that.