This article made me think of the novel Sophie’s World, where the characters become aware of their own nature after exploring philosophy and the world around them. It made me think seriously about the difficulties fictional characters are put through and the moral implications that follow. This was compounded by what I was studying at the time: the way fictional characters are treated and discussed in literary discourse. Characters, settings, and scenarios always exist in the present tense, even when you’re not aware of them. So the tortoise and the hare are always racing, Goldilocks is always roaming through the bears’ home, and Huck Finn is always traveling downriver, even when you’re not thinking about them. This is not meant in a literal, physical sense (or so I thought) but rather to set guidelines for how we discuss stories and storytelling, and to establish a common language for literary discourse.
Now, outside of writing fiction, why do we create mental models of people? It’s how we test our assumptions, make sure we’re speaking the same language, and explore other perspectives. With this in mind, it follows that mental models are extensions of the people who created them. This is touched on in the article’s Not the Same Person section. I think they are the same person, much as a human is an extension of both parents (or their 4 grandparents, or 8 great-grandparents, etc.). The analogy of a human modeled by an AI is flawed: the modeled human in the analogy is the AI, regardless of how similar it is to a flesh-and-blood person, because one came from human gametes and the other from AI modeling. You can’t define ‘human’ without putting them in context with other humans; remove that context and you get something that merely appears human.