Regarding your ‘intuition that there should be some “best architecture”, at least for any given environment, and that this architecture should be relatively “simple”’, I think:
1) I’d say “task” rather than “environment”, unless I wanted to emphasize that I think selection pressure trumps the orthogonality thesis (I’m ambivalent, FWIW).
2) I don’t see why it should be “simple” (and relative to what?) in every case, but I sort of share this intuition for most cases...
3) On the other hand, I think any environment containing other agents is probably much more complicated (IIUC, a lot of people think social complexity drove selection pressure for human-level intelligence in a feedback loop). At a “gears level”, the reason this creates an insatiable drive for greater complexity is that social dynamics can be quite winner-takes-all: if you’re one step ahead of everyone else (and they don’t realize it), then you can fleece them.