Didn’t see this! You’re right, that is quite a bit too strong. Let me reduce the strength of that statement: Among theists to whom I have become close enough to ask deeply personal questions and expect truthful answers, such levers seem prevalent.
thelittledoctor
Even if it were just a matter of telling the truth, I don’t think it would be ethically unambiguous. The more general question is whether the value of increasing some person’s net-true-beliefs stat outweighs the corresponding decrease in that person’s ability-to-fit-comfortably-in-theist-society stat. In other words, I am questioning WHETHER they would be better off, not which conditional I should thereafter follow.
The first question is difficult to answer; more specifically, it is very difficult to get a theist to answer it genuinely rather than just as signalling.
I would approve of more-adept friends pushing analogous levers in my own head (emphasis on ‘friends’: I want them to be well-intentioned), but I am weird enough that I’m wary of generalizing from my own preferences.
I certainly don’t mean to say that I have any kind of fully-general way to convert theists. I mean rather that as you get closer to individual people, you find out which particular levers there are to flip and buttons to push, and that with sufficient familiarity the sequence of just-the-right-things-to-say-and-do becomes clear. But if you would like an example of what I’d say to a specific person (there are currently three people for whom I know what I would say), I can do that. Let me know.
The ethics of breaking belief
And here I always thought it was set to the Imperial March.
“as I am no different from anyone else as far as rational thinking is concerned” is the part that bothers me about this. This approach makes sense to me in the context of clones or Tegmark duplicates or ideal reasoning agents, sure, but in the context of actual other human beings? Not a chance. And I think the results of Hofstadter’s experiments showed that trusting other humans in this sense wouldn’t work.
No; instead I will cut a deal with Clipmega for two million paperclips in exchange for eir revealing said information only to me, and exploit that knowledge for an economic gain of, presumably, ~1e24 paperclips. 1e24 is a lot, even of paperclips. 1e6, by contrast, is not.
You likely wouldn’t be able to just dissolve anhydrous caffeine powder in water and keep yourself blinded; it’s incredibly bitter (second only, in my experience, to tongkat ali / Eurycoma longifolia root powder).
ohgodohgodohgod
I think that this may be true about the average person’s supposed caring for most others, but that there are in many cases one or more individuals for whom a person genuinely cares. Mothers caring for their children seems like the obvious example.
Well, if his trick for deactivating other wizards’ patronuses (patronii?) works, he basically has an unblockable army of instant-death assassins, the only defense against which would be Apparition… That’s a pretty good ultimate weapon in a Mutually Assured Destruction sense. And as long as we’re discussing mutually assured destruction, there seems little doubt that Harry would be able to transfigure nuclear weaponry. Or botulinum toxin (of which it would take an appallingly small amount to kill every human on Earth). Etc., etc. Harry does not lack for access to Ultimate Weapons.
It seems irrelevant whether the AI is quote-unquote “highly intelligent” as long as it’s clever enough to take over a country and kill several million people.
Assuming, from the title, that you’re looking for argument by counterexample...
The obvious reply would be to invoke Godwin’s Law—there’s a quote in Mein Kampf along the lines of “I am convinced that by fighting off the Jews, I am doing the work of our creator...”. Comments like this pretty reliably generate a response something like “Hitler was a diseased mind/insane/evil!” to which you may reply “Yeah, but he was pretty sharp, too.” However, this has the downside of invoking Nazis, which in a certain kind of person may provoke an instant “This is a reactionary idiot” response and a complete discarding of the argument. So it’s a temperamental trick, and I’m not skilled enough in the dark arts to know if it’s a net gain.
On the other hand, you might prefer Pol Pot, or Ted Bundy, or any of a very large number of dictators and serial killers who don’t produce the same mindkilling response as Hitler.
A lot of fictional evidence comes to mind as well, but we do try not to generalize from that… Still, if you just want to WIN the argument rather than win rationally, it may help to pull an example from some media form that the audience is likely to appreciate. Lex Luthor, Snidely Whiplash, Yagami Light (or L, if you prefer), Mephistopheles (or Faust), and so on.
Is that the sort of thing you wanted?
Maybe they just find it a more optimistic prospect than rotting six feet under.
My feelings exactly.
I had a hidden ugh-field about that one. It took quite a few repetitions of the Litany of Gendlin to grok it.
I confess I rather enjoyed the part where Snape’s head exploded. There’s a certain window of “So bad it’s good” in there, before you get to the “So bad it’s horrible”. As I said in another comment, it’s not bad at the start.
Other than “cheroybbq snzvyvrf znvagnva gurve jrnygu guebhtu neovgenel zbabcbyvrf tenagrq ol gur Jvmratnzbg”?
I never, in Canon, got quite such an impression of Eerie Alien Geometries from the castle as I do in MoR. Thankfully Event Horizon hadn’t come out by 1991, or I’d wager a lot of Muggleborns would be very uncomfortable in the upper floors.
Not quite the advice I was hoping for, but thank you for your honesty.