I have to disagree here. I strongly suspect that GPT, when it, say, pretends to be a certain character, is running a rough-and-ready approximate simulation of that character's mental state and its interacting components (various beliefs, desires, etc.). I have previously discussed this in an essay, which I will be posting soon.
Yes, GPT creates a character, say, a virtual Elon Musk. But there is no other person who is creating Elon Musk; that is, there is no agent-like simulator who may have a plan to torture or reward EM. So we can't say that the simulator is good or bad.
I see your point now, but I think this just reflects the current state of our knowledge. We haven't yet grasped that we are implicitly creating, if not minds, then things somewhat mind-like, every time we order an artificial intelligence to play a particular character.
When this knowledge becomes widespread, we'll have to confront the reality of what we do every time we hit run. And then we'll be back to the problem of theodicy, with God being the one who presses play, and the question being: is pressing play consistent with being a good person?* If I ask GPT-3 to tell a story about Elon Musk, is that compatible with me being a good person?
* (In the case of GPT-3, probably yes, because the models created are so simple as to lack ethical status, so pressing play doesn't reflect poorly on the simulation requester. For more sophisticated models, the problem gets thornier.)
There is a theory that the whole world is just a naturally running prediction process, described in the article "Law without law": https://arxiv.org/pdf/1712.01826.pdf