Strong-voted. This is so exciting.
Any specific research avenues where AI and economics research could overlap?
Around the time I first got into alignment, I was thinking about how to model markets as agents (e.g. what beliefs does a market as a whole have? What goals does it have?). That turned into Why Subagents?.
I also spent a little bit of time reading up on Theory of the Firm, looking for alignment-relevant ideas; there's a lot of material there about aligning employees with firms, or when it makes sense for a firm to outsource (i.e. use "subagents") vs. do things in-house, etc. That eventually led to the Pointers Problem post (via the ideas in Incentive Design With Imperfect Credit Allocation).
I expect there are plenty more useful analogies to mine along either of those paths, and probably many other paths besides. Though note that this does require a nontrivial skill: one needs to be able to boil down the generalizable "core idea" of an argument into a form which can carry over to another field.