I’ve wanted to make this comment for a while now, but I was worried that it wasn’t worth posting because it assumes some familiarity with multi-agent systems (and it might sound completely nuts to many people). But since your writing on this topic has influenced me more than anyone else’s, I’ll give it a go. Curious what you think.
I agree with the other commenters that having GPT-3 actively meddle with actual communications would feel a little off.
Although it might feel intuitively off, and it does for me as well, in practice GPT-3 would be just another agent interacting with all the other agents through the global neural workspace (mediated by vision).
Everyone is very worried about handing over their thinking and talking to a piece of software, but to me this worry seems to come from a false sense of agency. It is true that I have no agency over GPT-3, but the same goes for my thoughts. All my thoughts simply appear in my mind and I have absolutely no say in which ones arise. From my perspective there is no difference, other than that I automatically identify with my thoughts as “me”.
There might be a big difference in performance, though. I can systematically test and tune GPT-3. If I don’t like how it performs, then I shouldn’t let it influence my colony of agents. But if I do like it, then it would be the first time I have added an agent to my mind that has qualities I chose myself.
It is very easy to read about non-violent communication (or other topics, like biases) and think to yourself, “that is how I want to write and act,” but in practice it is hard to change. By introducing GPT-3 as another agent tuned for this exact purpose, I might make that change orders of magnitude easier.
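To make “tuned for this exact purpose” a bit more concrete, here is a minimal sketch of what such an agent could look like: a function that does nothing except rewrite a draft in a non-violent-communication register, which I could run over a pile of old drafts and judge by hand before letting it anywhere near real messages. (This is purely illustrative and assumes the current OpenAI Python SDK; the model name and system prompt are placeholders I made up, not anything from the app being discussed.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt; the exact wording is something you would iterate on.
NVC_PROMPT = (
    "Rewrite the user's message using non-violent communication: "
    "state observations, feelings, needs, and requests without blame or judgment."
)

def rewrite_nvc(draft: str, model: str = "gpt-4o-mini") -> str:
    """Return the draft rewritten in a non-violent-communication register."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": NVC_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# "Systematically test and tune": read the rewrites of sample drafts yourself
# before letting this agent influence anything you actually send.
if __name__ == "__main__":
    print(rewrite_nvc("You never answer my emails and it's driving me crazy."))
```

The point is only that the new “agent” is small and inspectable, and gets promoted to a trusted role only after it has earned it on test drafts.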
I find it hard to disagree with this when you frame it this way. :-)
Yeah, I don’t think there’s anything intrinsically wrong with treating GPT-3 as yet another subagent, just one that you interface with somewhat differently. (David Chalmers, together with Andy Clark, has an argument that if you write your thoughts down in a notepad, the notepad is essentially a part of you: an external source of memory that’s in principle comparable to your internal memory buffers.)
I think the sense that “something is off” comes more from people feeling that someone external is reading their communications, and from not being able to trust whoever made the app, as well as from other, more specific considerations that were brought up in the other comments.