Could you clarify why you’re asking this question? Is it because you’re suggesting that defining “agency” would give us a way to restrict the kinds of systems that we need to consider?
I see some boundary cases: for example, I can imagine gradient descent producing a system that mostly relies on heuristics and isn't a full agent, yet still poses an x-risk. That said, this seems somewhat unlikely to me.