I don’t think “people have made choices that mattered” is a sufficient criterion for showing the existence of agency. IMO, to have something like agency, you need an ongoing situation roughly like this:
Goals ↔ Actions ↔ States-of-the-world.
Some entity needs to have ongoing goals that it can modify as it goes along acting in the world, and its actions also need to have an effect on that world. Agency is a complex and intuitive thing, so I assume some would ask for more than this before saying a thing has agency. But I think this is one reasonable requirement.
Agency in a limited scope would be something like a non-profit that has a plan for helping the homeless, tries to implement it, discovers problems with the plan, and comes up with a new plan that inherently involves modifying its concept of “helping the homeless”.
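To make the loop concrete, here is a toy sketch of that kind of ongoing loop, using the non-profit example. Everything in it (the `World` and `NonProfit` classes, the shelter numbers) is invented for illustration; it’s not a claim about how real agency is implemented, only a picture of all three arrows running.

```python
# Toy illustration of the Goals <-> Actions <-> States-of-the-world loop.
# All names and numbers here are made up for the example.

class World:
    """Minimal stand-in for the part of the world the entity acts on."""
    def __init__(self):
        self.people_housed = 0
        self.shelter_capacity = 5

    def apply(self, action):
        # Actions change the state of the world.
        if action == "build_shelter":
            self.shelter_capacity += 5
        elif action == "house_people":
            self.people_housed = min(self.people_housed + 3, self.shelter_capacity)


class NonProfit:
    """An entity whose goal can be revised in light of what its actions do."""
    def __init__(self):
        # The initial concept of "helping the homeless".
        self.goal = {"people_housed": 10}

    def choose_action(self, world):
        # Goals -> actions.
        if world.people_housed < world.shelter_capacity:
            return "house_people"
        return "build_shelter"

    def revise_goal(self, world):
        # States-of-the-world -> goals: discovering a problem (not enough
        # capacity) modifies the goal itself, not just the plan for reaching it.
        if world.shelter_capacity < self.goal["people_housed"]:
            self.goal["shelter_capacity"] = self.goal["people_housed"]


if __name__ == "__main__":
    world, org = World(), NonProfit()
    for _ in range(6):
        action = org.choose_action(world)   # goals -> actions
        world.apply(action)                 # actions -> states of the world
        org.revise_goal(world)              # states of the world -> goals
        print(action, world.people_housed, world.shelter_capacity, org.goal)
```

The only point of the sketch is that the loop keeps running and can rewrite its own goals; a single fork in the road with big consequences has no such loop.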
By this criterion, tiny decisions with big consequences aren’t evidence of agency. I think that’s fairly intuitive. Having agency is subjectively something like “being at cause” rather than “being at effect”, and that’s an ongoing, not a one-time, thing.
Thank you for the article. I think these “small” impacts are important to talk about. If one frames the question as “the impact of machines that think for humans”, that impact isn’t going to be a binary of just “good stuff” versus “takes over and destroys humanity”; there are intermediate situations, like the decay of humans’ ability to think critically, that are significant, not just in themselves but for their further impacts. I.e., if everyone is dependent on Google for their opinions, how does that affect people’s opinions about AI taking over entirely?