Rejection therapy seems to be designed for training the neuroticism reaction. I haven’t used it myself, so I might be getting some specifics wrong (including about its efficacy), but one of the methods I’ve seen is a box of cards with instructions on them. “Before purchasing something, ask for a discount.” In my part of the US, at least, haggling is more or less not done. Following the instruction will break the standard social mold, and I’d expect that in most cases you won’t get the discount. You would, however, be taking a risk, having it not pay off, and finding the end result underwhelming compared to the social cost anticipated by your neuroticism circuits. I’d imagine having an instruction on a card applies pressure to conform to it as well, à la Milgram. If nothing else, in the long term I’d expect it to give you a lot more evidence to draw from when anticipating the social cost of any given action.
If ethics must be held to in the face of the annihilation of everything, then I will proudly state that I have no ethics, only value judgments. Would I kill babies? Yes, to save the life of the mother. Would I kill innocents who had helped me? Yes, to save more. As an interesting aside, I would not torture an innocent for 40 years to prevent 3^^^^3 people from getting a speck of dust in their eyes, assuming no further consequences from any of that dust. I would not walk away from Omelas; I would stay to tear it down.
Uh, no. Pressure affects boiling point. If you’re at a different pressure, it should not boil at 100 degrees C. If your water is contaminated by, say, alcohol, the boiling point will change. We aren’t trying to explain away data points; we’re using them to build a system that’s larger than “Water boils at 100 degrees Centigrade.” Just adding “at standard pressure” to the end of that gives a wider range of predictable and falsifiable results.
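To put a rough number on the pressure effect, here’s a minimal Python sketch using the Clausius-Clapeyron approximation with a constant enthalpy of vaporization; the boiling_point_c helper and the example pressures are illustrative assumptions, not measurements.

    import math

    R = 8.314        # gas constant, J/(mol*K)
    H_VAP = 40660.0  # approx. enthalpy of vaporization of water near 100 C, J/mol
    T0 = 373.15      # boiling point at standard pressure, K
    P0 = 101.325     # standard pressure, kPa

    def boiling_point_c(pressure_kpa):
        """Estimate water's boiling point (deg C) at a given pressure (kPa)."""
        inv_t = 1.0 / T0 - (R / H_VAP) * math.log(pressure_kpa / P0)
        return 1.0 / inv_t - 273.15

    print(round(boiling_point_c(101.325), 1))  # 100.0 at standard pressure
    print(round(boiling_point_c(83.0), 1))     # roughly 94-95 at high-altitude pressure

Even that crude model predicts water boiling several degrees below 100 C at mountain altitudes, which is the kind of extra predictive reach the added qualifier buys.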
What we’re doing is rationality, not rationalization.
Recently, rape allegations were made against Julian Assange, founder of Wikileaks. Some people in positions of power saw fit to expose identifying personal information about the accusers to the Internet, and therefore the world at large. This resulted in the accusers receiving numerous death threats and other harassment.
When safety can be destroyed by truth, should it be?
My experience leads me to assume that the thermometer was mismarked. My high school chemistry teacher drilled into us that the thermometers we had were all precise, but of varying accuracy. A thermometer might say that water boils at 99.5 C, but if it did, it would also say that it froze at −0.5 C. Again, there are conditions that actually change the temperature at which water boils, so it’s possible you were at a lower atmospheric pressure or that the water was contaminated. But given that we have a grand total of one data point, I can’t narrow it down to a single answer.
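As a concrete illustration of “precise but inaccurate,” here’s a minimal sketch of a thermometer with a fixed calibration offset and a little read noise; the bias and noise values are made up for the example.

    import random

    BIAS_C = -0.5    # hypothetical fixed mismarking, deg C
    NOISE_C = 0.05   # hypothetical read-to-read scatter, deg C

    def read(true_temp_c):
        """Simulated reading from a precise but mismarked thermometer."""
        return true_temp_c + BIAS_C + random.gauss(0.0, NOISE_C)

    print([round(read(100.0), 2) for _ in range(5)])  # clusters near 99.5, not 100
    print([round(read(0.0), 2) for _ in range(5)])    # clusters near -0.5, not 0

The readings scatter tightly (precision) around values shifted by the same half degree (inaccuracy), which is why checking the freezing point as well as the boiling point helps separate a mismarked thermometer from genuinely odd water.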
I don’t make any claims about undetected sabotage; I believe it to be statistically meaningless for these purposes. The detection clause was intended to make my statements more precise. Undetectable sabotage only modifies the odds of detectable sabotage, because it’s clearly preferable to strike unnoticed. The conditional statement “If the odds are very high...” eliminates all scenarios where those odds are not very high, which brings this down to Warren assuming an ordering factor in the absence of random events. If you’d like to include undetected sabotage, then you also need to consider the odds that untrained saboteurs would be capable of undetectable sabotage.
Warren wasn’t saying “Because there is no evidence that the ball is blue, the ball is blue.” He was saying “The sun should be in the sky. I cannot see the sun. Therefore, it has been eaten by a dragon.” He was wrong; as it turned out, the eclipse was caused by the moon, and the dragon he feared never existed. But if the dragon he predicted had existed, the world might have looked much like it did at the time of the predictions.
I have to think that there is another question to be considered: What are the odds that Japanese-Americans would commit sabotage we could detect as sabotage? If the odds are very high that detectable sabotage would occur, then the absence of sabotage would be evidence in favor of something preventing sabotage. A conspiracy that collaborates with potential saboteurs and encourages them to wait for the proper time to strike then becomes a reasonable hypothesis, if such a conspiracy believed that an initial, temporally focused act of sabotage would be effective enough to have greater utility than all the acts of sabotage that would otherwise occur before the spree.
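The evidential claim can be made concrete with a likelihood ratio; all of the numbers below are assumptions for illustration, not estimates of the historical situation.

    # Assumed: detectable sabotage was very likely if no one were coordinating restraint.
    p_no_sabotage_given_no_conspiracy = 0.1
    # Assumed: a conspiracy telling saboteurs to wait makes observed sabotage unlikely.
    p_no_sabotage_given_conspiracy = 0.9
    # Assumed prior odds of such a conspiracy existing (conspiracy : no conspiracy).
    prior_odds_conspiracy = 0.01

    likelihood_ratio = p_no_sabotage_given_conspiracy / p_no_sabotage_given_no_conspiracy
    posterior_odds = prior_odds_conspiracy * likelihood_ratio
    print(round(likelihood_ratio, 2))  # 9.0: absence of sabotage favors the conspiracy hypothesis
    print(round(posterior_odds, 3))    # 0.09: but low prior odds keep it improbable overall

Under these made-up numbers the silence does count as evidence for the conspiracy, yet nowhere near enough to overcome a sensible prior, which is the gap between a reasonable hypothesis and a justified conclusion.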
Hopefully this isn’t a violation of the AI Box procedure, but I’m curious if the strategy used would be effective against sociopaths. That is to say, does it rely on emotional manipulation rather than rational arguments?
When he stopped thrashing about trying to free himself so that he could go to the Sirens, the crew could know the danger had passed.