That’s a reasonable position, but I think the reality is that we just don’t know. Moreover, it seems possible to build goal-directed agents that don’t become hyper-rational by (e.g.) restricting their hypothesis space. Lots of potential for deconfusion, IMO.
EDIT: the above was in response to your first paragraph. I think I didn’t respond RE the 2nd paragraph because I don’t know what “convergent goal-directedness” refers to, and was planning to read your sequence but never got around to it.
I would guess that Chapter 2 of that sequence is the most relevant and important piece of writing for you (w.r.t. this post in particular), though I'm not certain how relevant it actually is.