Honestly, I think the journalism project I've been working on over the last year may be most important for the way it sheds light on your question.
To put it as concisely as possible, the purpose of the project was to investigate the evidence from neuroscience that there may be a strong analogy between modern AI programs, like language models, and distinct subregions of the brain, like the so-called language network, which is responsible for our ability to process and generate language.
As such, if you imagine coupling a frontier language model with a crude agent architecture, what you wind up with might be best viewed as a form of hyperintelligent machine sociopath: a system with all the extremely powerful language machinery of a human (perhaps even much more powerful, considering its inhuman scaling) but none of the machinery necessary for, say, emotional processing, empathy, and emotional reasoning, aside from the superficial facsimile of these that comes with a mastery of human language. (For example, existing models lack any machinery corresponding to the human dopamine and serotonin systems.)
This is, for me, a frankly terrifying realization, one that I am still trying to wrap my head around and plan to post more about soon. Does this help at all?
Really interesting project, Mordechai! Have you seen some of Geoffrey Hinton’s latest remarks? He’s said some things along these lines, actually. Feel free to message me and I can point you to them.
Thanks, I hadn’t seen his remarks about this specifically. I’ll try to look them up.