I find it hard to disagree with this when you frame it this way. :-)
Yeah, I don’t think there’s anything intrinsically wrong with treating GPT-3 as yet another subagent, just one that you interface with somewhat differently. (David Chalmers, with Andy Clark, has an argument that if you write your thoughts down in a notepad, the notepad essentially becomes part of you: an external memory store that’s in principle comparable to your internal memory buffers.)
I think the sense that “something is off” comes more from things like people feeling that someone external is reading their communications, and from not feeling able to trust whoever made the app, as well as other more specific considerations that were brought up in the other comments.