To what extent is a particular universe learnable? What inductive biases and (hyper)priors are best for learning it? What efficient approximations exist for common algorithmic problems? How well do learning algorithms generalise on common problems? Etc. These all seem like empirical questions about the reality we live in, and I expect their answers to constrain what intelligent systems in our universe look like.
A crux of my alignment research philosophy:
I suspect normative agent foundations research is largely misguided/mistaken. Quoting myself from my research philosophy draft:
And the relevant footnotes: