To what extent is a particular universe learnable? What inductive biases and (hyper)priors are best for learning it? What efficient approximations exist for common algorithmic problems? How well do learning algorithms generalise on common problems? And so on. These all seem like empirical questions about the reality we live in, and I expect their answers to constrain what intelligent systems in our universe look like.
I’m a student of the descriptive school of agent foundations, I think.
I suspect normative agent foundations research is just largely misguided/mistaken. Quoting myself from my research philosophy draft:
And the relevant footnotes:
Do you expect useful generic descriptive models of agency to exist?
Yes, and I expect that agenda to be significantly more tractable than normative agendas for designing systems that are effective in the real world.
All that said, my preference is to draw the distinction around “models of intelligent systems” rather than “models of agency”, as agents aren’t the only type of intelligent system that matters (foundation models aren’t well described as agents).