It’s that I can’t imagine this game evoking any negative emotions stronger than those of sad novels and movies.
What’s surprising is that Tuxedage seems to be actually hurt by this process, and that s/he seems to actually fear mentally damaging the other party.
In our daily lives we don’t usually* censor emotionally volatile content for fear that it might harm the population. The fact that Tuxedage seems to be more ethically apprehensive about this than s/he might be about, say, writing a sad novel, is what is surprising.
I don’t think s/he would show this level of apprehension about, say, making someone sit through Grave of the Fireflies. If s/he can actually evoke emotions more intense than that through a text-only terminal to a stranger, then whatever s/he is doing is almost art.
Some people fall in love over text. What’s so surprising?
That’s the real world, where you can tell someone you’ll visit them and there is a chance of real-world consequences. This is explicitly negotiated pretend play in which no real-world promises are allowed.
given how common mental illness is.
I...suppose? I imagine you’d have to have a specific brand of emotional volatility combined with immense suggestibility for this sort of thing to actually damage you. You’d have to be the sort of person who can be hypnotized against their will to do and feel things they actually don’t want to do and feel.
At least, that’s what I imagine. My imagination apparently sucks.
We actually censor emotional content CONSTANTLY. It’s very rare to hear someone say “I hate you” or “I think you’re an evil person”. You don’t tell most people you’re attracted to that you want to fuck them, and when someone asks if they look good, it’s pretty much expected that you lie if they look bad, or at least soften the blow.
That’s politeness, not censorship.
If it’s generally expected for people to say “X” in situation Y, then “X” means Y, regardless of its etymology.
You are right, but again, that’s all real world stuff with real world consequences.
What puzzles me is specifically that people continue to feel these emotions after it has already been established that it’s all pretend.
Come to think of it, I have said things like “I hate you” and “you are such a bad person” in pretend contexts. But it was pretend, it was a game, and it didn’t actually affect anyone.
People are generally not that good at restricting their emotional responses to interactions with real world consequences or implications.
Here’s something one of my psychology professors recounted to me, which I’ve often found valuable to keep in mind. In one experiment on social isolation, test subjects were made to play virtual games of catch with two other players, where each player is represented as an avatar on a screen and can offer no input except deciding which of the other players to throw the virtual “ball” to. No player has any contact with the others, nor is any player aware of the others’ identities or any information about them. However, two of the “players” in each experiment are actually confederates of the researcher, whose role is to gradually start excluding the real test subject by passing the ball to them less and less, eventually almost completely locking them out of the game of catch.
This type of experiment will no longer be approved by the Institutional Review Board. It was found to be too emotionally taxing on the test subjects, despite the fact that the experiment had no real world consequences, and the individuals “excluding” them had no access to any identifying information about them.
Keep in mind that, while works of fiction such as books and movies can have powerful emotional effects on people, they’re separated from activities such as the AI box experiment by the fact that the audience members aren’t actors in the narrative. The events of the narrative aren’t just pretend, they’re also happening to someone else.
As an aside, I’d be wary about assuming that nobody was actually affected when you said things like “I hate you” or “you are a bad person” in pretend contexts, unless you have some very reliable evidence to that effect. I certainly know I’ve said potentially hurtful things in contexts where I supposed nobody could possibly take them seriously, only to find out afterwards that people had been really hurt, but hadn’t wanted to admit it to my face.
This type of experiment will no longer be approved by the Institutional Review Board. It was found to be too emotionally taxing on the test subjects, despite the fact that the experiment had no real world consequences, and the individuals “excluding” them had no access to any identifying information about them.
So, two possibilities here: 1) The experiment really was emotionally taxing and humans are really fragile 2) When it comes to certain narrow domains, the IRB standards are hyper-cautious, probably for the purpose of avoiding PR issues between scientists and the public. We as a society allow our children to experience 100x worse treatment on the school playground, something that could easily be avoided by simply having an adult watch the kids.
Note that if you accept that people really are that emotionally fragile, it follows from other observations that even when it comes to their own children, no one seems to know or care enough to act accordingly (except the IRB, apparently). I’m not really cynical enough to believe that one.
As an aside, I’d be wary about assuming that nobody was actually affected …I certainly know I’ve said potentially hurtful things in contexts where I supposed nobody could possibly take them seriously.
Humorous statements often obliquely reference a truth of some sort. That’s why they can be hurtful, even when they don’t actually contain any truth.
I’m fairly confident, but since the experiment is costless I will ask them directly.
So, two possibilities here: 1) The experiment really was emotionally taxing and humans are really fragile 2) When it comes to certain narrow domains, the IRB standards are hyper-cautious, probably for the purpose of avoiding PR issues between scientists and the public.
I’d say it’s some measure of both. According to my professor, the experiment was particularly emotionally taxing on the participants, but on the other hand, the IRB is somewhat notoriously hypervigilant when it comes to procedures which are physically or emotionally painful for test subjects.
Even secure, healthy people in industrialized countries are regularly exposed to experiences which would be too distressing to be permitted in an experiment by the IRB. But “too distressing to be permitted in an experiment by the IRB” is still a distinctly non-negligible level of distress, rather more than most people suspect would be associated with exclusion of one’s virtual avatar in a computer game with no associated real-life judgment or implications.
In addition to the points in my other comment, I’ll note that there’s a rather easy way to apply real-world implications to a fictional scenario. Attack qualities of the other player’s fictional representative that also apply to them in real life.
For instance, if you were to convince someone in the context of a roleplay that eating livestock is morally equivalent to eating children, and the other player in the roleplay eats livestock, you’ve effectively convinced them that they’re committing an act morally equivalent to eating children in real life. The fact that the point was discussed in the context of a fictional narrative is really irrelevant.
You might be underestimating how bad certain people are at decompartmentalization; more specifically, at not committing the genetic fallacy.
I imagine you’d have to have a specific brand of emotional volatility combined with immense suggestibility for this sort of thing to actually damage you.
This might be surprisingly common on this forum.
Somebody once posted a purely intellectual argument and there were people who were so shocked by it that apparently they were having nightmares and even contemplated suicide.
Somebody once posted a purely intellectual argument and there were people who were so shocked by it that apparently they were having nightmares and even contemplated suicide.
Can I get a link to that?
Don’t misunderstand me; I absolutely believe you here, I just really want to read something that had such an effect on people. It sounds fascinating.
What is being referred to is the meme known as Roko’s Basilisk, which Eliezer threw a fit over and deleted from the site. If you google that phrase you can find discussions of it elsewhere. All of the following have been claimed about it:
Merely knowing what it is can expose you to a real possibility of a worse fate than you can possibly imagine.
No it won’t.
Yes it will, but the fate is easily avoidable.
OMG WTF LOL!!1!l1l!one!!l!
Wait, that’s it? Seriously?
I’m not exactly fit to throw stones on the topic of unreasonable fears, but you get worse than this from your average “fire and brimstone” preacher and even the people in the pews walk out at 11 yawning.
Googling the phrase “fear of hell” turns up a lot of Christian angst, including recursive angst over whether you’ll be sent to hell anyway if you’re afraid of being sent to hell. For example:
I want to be saved and go to heaven and I believe, but I also have this terrible fear of hell and I fear that it may keep me out of heaven. PLEASE HELP!
And here’s a hadephobic testament from the 19th century.
From the point of view of a rationalist who takes the issue of Friendly AGI seriously, the difference between the Christian doctrines of hell and the possible hells created by future AGIs is that the former is a baseless myth and the latter is a real possibility, even given a Friendly Intelligence whose love for humanity surpasses human understanding, if you are not careful to adopt correct views regarding your relationship to it.
A Christian sceptic about AGI would, of course, say exactly the same. :)
Oh, all this excitement was basically a modern-day reincarnation of the old joke...
It seems a Christian missionary was visiting with remote Inuit (aka Eskimo) people in the Arctic, and had explained to this particular man that if one believed in Jesus, one would go to heaven, while those who didn’t would go to hell.
The Inuit asked, “What about all the people who have never heard of your Jesus? Are they all going to hell?”
The missionary explained, “No, of course not. God wants you to have a choice. God is a merciful God, he would never send anyone to hell who’d never heard of Jesus.”
The Inuit replied, “So why did you tell me?”
On the other hand, if the missionary tried to suppress all mentions of Jesus, he would still increase the number of people who hear about him (at least if he does so in the 2000s on the public Internet), because of the Streisand effect.
If you want to read the original post, there’s a cached version linked from RationalWiki’s LessWrong page.
Basically, it’s not just what RichardKennaway wrote. It’s what Richard wrote along with a rational argument that makes it all at least vaguely plausible. (Also depending on how you take the rational argument, ignorance won’t necessarily save you.)
I don’t know what you refer to, but is that surprising? An intellectual argument can in theory convince anyone of some fact, and knowing facts can have that effect. Like people learning their religion was false, or finding out they are in a simulation, or that they are going to die or be tortured for eternity, or something like that.
Yeah...I’ve been chalking that all up to the “domain expert who is smarter than me and doesn’t wish to deceive me is taking this seriously, so I will too” heuristic. I suppose “overactive imagination” is another reasonable explanation.
(In my opinion, a better heuristic for when you don’t understand and have access to only one expert is: “Domain expert who is smarter than me and doesn’t wish to deceive me tells me that it is the consensus of all the smartest and best domain experts that this is true”.)
I’d guess that Tuxedage is hurt just as the gatekeeper is, because he has to imagine whatever horrors he inflicts on his opponent. Doing so causes at least part of that pain (and empathy, or whatever emotion is at work) in him too. He has the easier part, because he uses it as a tool and his mind has one extra layer of storytelling where he can tell himself “it’s all a story”. But part of that story is winning, and if he doesn’t win, part of these horrors falls back on him.
Consider someone for whom there are one or two specific subjects that will cause them a great deal of distress. These are particular to the individual: even if something in the wild reminds them of it, it’s indirect and clearly not targeted, so it would be rare for anyone to actually find it without getting into the individual’s confidence.
Now, put that individual alone with a transhuman intelligence trying to gain write access to the world at all costs.
I’m not convinced this sort of attack was involved in the AI box experiments, but it’s both the sort of thing that could have a strong emotional impact, and the sort of thing that would leave both parties willing to keep the logs private.
I guess I kind of excluded the category of individuals who have these triggers with the “mentally healthy” consideration. I assumed that the average person doesn’t have topics that they are unable to even think about without incapacitating emotional consequences. I certainly believe that such people exist, but I didn’t think it was that common.
Am I wrong about this? Do many other people have certain topics they can’t even think about without experiencing trauma? I suppose they wouldn’t...couldn’t tell me about it if they did, but I think I’ve got sufficient empathy to see some evidence if everyone were holding PTSD-sized mental wounds just beneath the surface.
We spend a lot of time talking about avoiding thought suppression. It’s a huge impediment for a rationalist if there is anything they mustn’t think about, and it’s obviously painful. Should we be talking more about how to patch mental wounds?
I’m mostly mentally healthy, and I don’t have any triggers in the PTSD-sense. But there are topics that I literally can’t think rationally about and that, if I dwell on them, either depress or enrage me.
I consider myself very balanced, but this balance involves avoiding certain extremes. Emotional extremes. There are some realms of imagination concerning pain and suffering that would cause me to cringe with empathy, bring me to tears, and make me help or possibly run away screaming in panic and fear, if I saw them. Even imagining such things is difficult, and possible only in abstract terms, lest it actually cause such a reaction in me. Or else I’d become dull to it (which is a protection mechanism). Sure, dealing with such horrors can be trained; otherwise people couldn’t stand horror movies, which force one to separate the real from the imagined. But then I don’t see any need to train this (and risk losing my empathy even slightly).
Did you intend to write a footnote (the asterisk after “usually”) and forget to?
No. It was probably a stray italic marker that got lost. I tend to overuse italics in an attempt to convey speech-like emphasis.