I’m thinking about artificial communities, and about how to manufacture the benefits of normal human communities.
If you imagine yourself feeling encouraged by the opinions of an LLM wrapper agent, how would that have been accomplished?
I’m getting stuck on creating respect and community status. It’s hard to see LLMs as an in-group (with good reason).
I imagine it would depend on sensory perception. Text written by an obvious robot would have little emotional impact. But seeing a human face and hearing a human voice, even knowing it belongs to a robot, would feel different. A “magical talking animal,” like in anime, might also nicely sidestep my objection that it is not an actual human.