A small misconception lies at the heart of this section: that AI systems (and specifically recommenders) will try to make people more predictable. This is not necessarily the case.
Yes, I’d agree (and didn’t make this clear in the post, sorry) -- the pressure towards predictability comes from a combination of the logic of performative prediction AND the “economic logic” that provides the context in which these performative predictors are being applied. This is certainly an important thing to be clear about!
(Though it can only give us so much reassurance: I think it’s an extremely hard problem to find reliable ways for AI models NOT to be applied inside the capitalist economic logic, if that’s what we’re hoping to do to avoid the legibilisation risk.)
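To make the combined mechanism concrete, here’s a minimal toy sketch of how the two logics interact. Everything here is an illustrative assumption on my part (the drift parameter, the mean-prediction policy, the numbers), not anything from the post: a predictor that minimizes its own error, applied to a population whose behaviour drifts slightly toward whatever it is shown, watches the population’s variance shrink round after round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: each user's "taste" is a scalar the platform predicts
# and recommends against. Performative assumption (hypothetical): users
# drift a little toward what they are shown.
n_users = 1000
tastes = rng.normal(0.0, 1.0, n_users)
drift = 0.1   # strength of the performative pull (assumed, not empirical)
rounds = 50

print(f"round  0: taste std = {tastes.std():.3f}")
for t in range(1, rounds + 1):
    # "Economic logic": the platform minimizes its prediction error; for a
    # single shared prediction, the mean is the least-squares optimum.
    prediction = tastes.mean()
    # Performative feedback: tastes move toward the prediction served.
    tastes += drift * (prediction - tastes)
    if t % 10 == 0:
        print(f"round {t:2d}: taste std = {tastes.std():.3f}")
```

The standard deviation decays by a factor of (1 - drift) each round, so predictability increases even though nobody explicitly optimized for it -- it falls out of error-minimization plus the feedback loop. Of course, neither logic alone suffices: without the performative drift the distribution is static, and without the error-minimizing deployment the drift has no fixed point to collapse toward.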