You are right, but again, that’s all real world stuff with real world consequences.
What puzzles me is specifically that people continue to feel these emotions after it has already been established that it’s all pretend.
Come to think of it I have said things like “I hate you” and “you are such a bad person” in pretend contexts. But it was pretend, it was a game, and it didn’t actually affect anyone.
People are generally not that good at restricting their emotional responses to interactions with real world consequences or implications.
Here’s something one of my psychology professors recounted to me, which I’ve often found valuable to keep in mind. In one experiment on social isolation, test subjects were made to play virtual games of catch with two other players, where each player is represented as an avatar on a screen and can offer no input except deciding which of the other players to throw the virtual “ball” to. No player has any contact with the others, nor is any player aware of the others’ identities or any information about them. However, two of the “players” in each experiment are actually confederates of the researcher, whose role is to gradually exclude the real test subject by passing the ball to them less and less, eventually locking them almost completely out of the game of catch.
This type of experiment will no longer be approved by the Institutional Review Board. It was found to be too emotionally taxing on the test subjects, despite the fact that the experiment had no real world consequences, and the individuals “excluding” them had no access to any identifying information about them.
Keep in mind that, while works of fiction such as books and movies can have powerful emotional effects on people, they’re separated from activities such as the AI box experiment by the fact that the audience members aren’t actors in the narrative. The events of the narrative aren’t just pretend, they’re also happening to someone else.
As an aside, I’d be wary about assuming that nobody was actually affected when you said things like “I hate you” or “you are a bad person” in pretend contexts, unless you have some very reliable evidence to that effect. I certainly know I’ve said potentially hurtful things in contexts where I supposed nobody could possibly take them seriously, only to find out afterwards that people had been really hurt, but hadn’t wanted to admit it to my face.
This type of experiment will no longer be approved by the Institutional Review Board. It was found to be too emotionally taxing on the test subjects, despite the fact that the experiment had no real world consequences, and the individuals “excluding” them had no access to any identifying information about them.
So, two possibilities here: 1) The experiment really was emotionally taxing and humans are really fragile 2) When it comes to certain narrow domains, the IRB standards are hyper-cautious, probably for the purpose of avoiding PR issues between scientists and the public. We as a society allow our children to experience 100x worse treatment on the school playground, something that could easily be avoided by simply having an adult watch the kids.
Note that if you accept that humans really are that emotionally fragile, it follows from other observations that, even when it comes to their own children, no one seems to know or care enough to act accordingly (except the IRB, apparently). I’m not really cynical enough to believe that one.
As an aside, I’d be wary about assuming that nobody was actually affected …I certainly know I’ve said potentially hurtful things in contexts where I supposed nobody could possibly take them seriously.
Humorous statements often obliquely reference a truth of some sort. That’s why they can be hurtful even when they don’t actually contain any truth: the person on the receiving end may still assume there’s a real sentiment behind them.
I’m fairly confident, but since the experiment is costless I will ask them directly.
So, two possibilities here: 1) The experiment really was emotionally taxing and humans are really fragile 2) When it comes to certain narrow domains, the IRB standards are hyper-cautious, probably for the purpose of avoiding PR issues between scientists and the public.
I’d say it’s some measure of both. According to my professor, the experiment was particularly emotionally taxing on the participants, but on the other hand, the IRB is somewhat notoriously hypervigilant when it comes to procedures which are physically or emotionally painful for test subjects.
Even secure, healthy people in industrialized countries are regularly exposed to experiences which would be too distressing to be permitted in an experiment by the IRB. But “too distressing to be permitted in an experiment by the IRB” is still a distinctly non-negligible level of distress, rather more than most people would expect to result from having one’s virtual avatar excluded in a computer game with no associated real-life judgment or implications.
In addition to the points in my other comment, I’ll note that there’s a rather easy way to apply real-world implications to a fictional scenario. Attack qualities of the other player’s fictional representative that also apply to them in real life.
For instance, if you were to convince someone in the context of a roleplay that eating livestock is morally equivalent to eating children, and the other player in the roleplay eats livestock, you’ve effectively convinced them that they’re committing an act morally equivalent to eating children in real life. The fact that the point was discussed in the context of a fictional narrative is really irrelevant.
You might be underestimating how bad certain people are at decompartmentalization; more specifically, at avoiding the genetic fallacy.