Hi Charlie. Thanks for the welcome!
Indeed, I think “preserving human agency around powerful systems” is a great way to put it (I added it to the article). Thanks for that! I am pessimistic that this is possible (or that the question even makes sense as it stands). I guess what I was trying to do above was make a soft argument that “intent-aligned AIs” might not make sense without further limits or boundaries on both human intent and what AIs can do.
I agree that hardwiring is probably not the best solution. However, humans probably come hardwired with a bunch of tools. Post-behaviorist psychology, e.g. self-determination theory, argues that agency and belonging (for social organisms) are hardwired in most complex organisms (i.e. not learned). (I assume you already know some/a lot about this; here’s a very rough write-up: https://practicalpie.com/self-determination-theory/.)