IRRATIONALITY GAME
Eliezer Yudovsky has access to a basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote.
Probability: improbable (2%)
This seems like a sarcastic Eliezer Yudkowsky Fact, not a serious Irrationality Game entry.
Upvoted for enormous overconfidence that a universal basilisk exists.
Never said it was a single universal one. And a lot of that 2% is meta-uncertainty from doing the math sloppily.
The part where I think I might do better is having been on the receiving end of weaker basilisks and having some vague idea of how to construct something like it. That last part is the tricky one stopping me from sharing the evidence as it’d make it more likely a weapon like that falls into the wrong hands.
The thing about basilisks is that they have limited capacity for causing actual death. Particularly among average people who get their cues of whether something is worrying from the social context (e.g. authority figures or their social group).
Must… resist… revealing… info… that… may… get… people… killed.
Please do resist. If you must tell someone, do it through private message.
Yea. It’s not THAT big a danger, I’m just trying to make it clear why I hold a belief not based on evidence that I can share.
Speculating that your evidence is a written work that has driven multiple people to suicide, and further that the work was targeted at an individual and happened to kill other susceptible people who read it: I would still rate 2% as overconfident.
Specifically, the claim of universality, that “any person” can be killed by reading a short email, is overconfident. Two of your claims seem to contradict each other: “any person” and “with a few clicks” suggest that special or in-depth knowledge of the individual is unnecessary, which implies some level of universality, while you also say “Never said it was a single universal one.” My impression is that you lean towards hand-crafted basilisks targeted at individuals or groups of similar individuals, but the contradiction lowered my estimate of this being correct.
Such hand-crafted basilisks indicate the ability to correctly model people to an exceptional degree and to experiment with that model until an input can be found which causes death. I have considered other alternative explanations but found them unlikely; if you rate another as more realistic, let me know.
This ability could be used for a considerable number of tasks other than causing death: strongly influencing elections, legislation, the research directions of AI researchers or groups, and much more. If EY possessed this power, how would you expect the world to differ from one where he does not?
I don’t remember this post. Weird. I’ve updated on it though; my evidence is indeed even weaker than that, and you are absolutely correct on every point. I’ve updated to the point where my own estimate and my estimation of the community’s estimate are indistinguishable.
Interesting. In the future I will be more likely to reply to messages that I feel end the conversation, like your last one on this post, maybe 12-24 hours later, just in case the likelihood of updating has been reduced by one or both parties having had a late-night conversation or other mind-altering effects.
Good idea, please do that.
It feels like this one caused me to update far more in the direction of basilisks being unlikely than anything else in this thread, although I don’t know exactly how much.
If such a universal basilisk exists, wouldn’t it almost by definition kill the person who discovered it?
I think it’s vaguely plausible such a basilisk exists, but I also think you are suffering from the halo effect around EY. Why would he of all people know about the basilisk? He’s just some blogger you read who says things as though they are Deep Wisdom so people will pay attention.
There are a bunch of tricks that let you immunize yourself against classes of basilisks without having access to the specific basilisk; sort of like vaccination, you deliberately infect yourself with a non-lethal variant first.
Eliezer has demonstrated all the skills needed to construct basilisks, is very smart, and has shown that he recognizes the danger of basilisks. I don’t think that’s a very common combination, but conditional on Eliezer having basilisk weapons, most others fitting that description equally well probably have them too.
Wouldn’t the world be observably different if everyone of EY’s intellectual ability or above had access to a basilisk kill agent? And wouldn’t we expect a rash of inexplicable deaths in people who are capable of constructing a basilisk but not vaccinating themselves?
Are basilisks necessarily fatal? If the majority of basilisks caused insanity or the loss of intellectual capacity instead of death, I would expect to see a large group of people who considered themselves capable of constructing basilisks, but who on inspection turned out to be crazy or not nearly that bright after all.
...
Oh, shit.
The post specified fatal so I followed it.
For non-fatal basilisks we’d expect to see people flipping suddenly from highly intelligent and sane to stupid and/or crazy, specifically after researching basilisk-related topics.
Yes, and this can also be reversed as a good way to see which topics are, in practice, related to basilisk construction.
Yes, but you would get false positives too, such as chess (scroll down to “Real Life”; warning: TVTropes). Edited to fix link syntax; how come after all these months I still get it wrong this often?
Yup, this is entirely correct; learned that the hard way. Vastly so, in fact: weak basilisks like that constantly arise from random noise in the memepool, while even with the know-how and all the necessary ingredients, an Eliezer-class mind is likely needed for a lethal one.
Great practice for FAI in a way, in that as soon as you make a single misstep you’ve lost everything forever and won’t even know it. Don’t try this at home.
Not necessarily. If I did, in fact, possess such a basilisk, I cannot think offhand of any occasion where I would have actually used it. Robert Mugabe doesn’t read my emails, it’s not clear that killing him saves Zimbabwe, I have ethical inhibitions that I consider to exist for good reasons, and have you thought about what happens if somebody else glances at the computer screen afterward, and resulting events lead to many agents/groups possessing a basilisk?
It would guarantee drastic improvements in secure, trusted communication protocols and completely cure internet addiction (among the comparatively few survivors).
First off, there aren’t nearly enough people for it to be any kind of “rash”; secondly, they must be researching a narrow range of topics where basilisks occur; thirdly, they’d go insane and lose the basilisk-creation capacity way before they got to deliberately lethal ones; and finally, anyone smart enough to be able to do that is smart enough not to do it.
This seems like a clear example of “You shouldn’t adjust the probability that high just because you’re trying to avoid overconfidence; that’s privileging a complicated possibility.”
Has there been a post on this subject yet? Handling overconfidence in that sort of situation is complicated.
http://lesswrong.com/lw/u6/horrible_lhc_inconsistency/
Thanks! I recall reading that one, but it didn’t come to mind.
It still leaves me with some doubt about how to handle uncertainty around the extremes without being pumpable or sometimes catastrophically wrong. I suppose some of that is inevitable given hardware that is both bounded and corrupted, but I rather suspect there is some benefit to learning more. There’s probably a book or ten out there I could read.
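A minimal sketch, not from the thread, of one standard way to handle credences near the extremes discussed above: work in log-odds measured in bits, where independent pieces of evidence add instead of multiplying and probabilities close to 0 or 1 stop being numerically awkward. The function names below are made up for illustration.

    import math

    def log_odds_bits(p):
        """Convert a probability to log-odds, measured in bits."""
        return math.log2(p / (1 - p))

    def prob_from_bits(bits):
        """Convert log-odds in bits back to a probability."""
        odds = 2.0 ** bits
        return odds / (1 + odds)

    # A 2% credence corresponds to roughly -5.6 bits, i.e. about
    # 5.6 bits of evidence against the claim.
    print(log_odds_bits(0.02))            # ~ -5.61

    # Each further bit of evidence against roughly halves the odds, so
    # "much closer to 0" is cheap to express in this representation.
    for bits in (-5.6, -10, -20, -30):
        print(bits, prob_from_bits(bits))

On this scale, a 2% credence is only about 5.6 bits against the claim, which is part of why “way too high” and “much closer to 0” are easier to state in bits than in raw probabilities.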
Reading this comment made me slightly update my probability that the parent, or a weaker version thereof, is correct.
It may or may not be an example, but it’s certainly not a clear one to me. Please explain? The entire sentence seems nonsensical; I know what the individual words mean but not how to apply them to the situation. Is this just some psychological effect because it targets a statement I personally made? It certainly doesn’t feel like it, but...
Edit: Figured out what I misunderstood. I modelled it as 0.02 positive confidence, not 0.98 negative confidence.
2% is way way way WAY too high for something like that. You shouldn’t be afraid to assign a probability much closer to 0.
Upvoted for vast overconfidence.
Downvoted back to zero because I suspect you’re not following the rules of the thread.
Also, I have no idea who “Eliezer Yudovsky” is, though it doesn’t matter for either of the above.
Well, this is scary enough.
I am way too good at this game. :(
I really didn’t expect this to go this high. All the other posts get lots of helpful comments about WHY they were wrong. If I’m really wrong, which these upvotes indicate, I really need to know WHY, so I know which connected beliefs to update as well.
2% is too high a credence for belief in the existence of powers for which (as far as I know) not even anecdotal evidence exists. It’s the realm of speculative fiction, well beyond the current ability of psychological and cognitive science and, one imagines, rather difficult to control.
But ascribing such a power to a specific individual who hasn’t had any special connection to cutting edge brain science or DARPA and isn’t even especially good at using conventional psychological weapons like ‘charm’ is what sends your entry into the realm of utter and astonishing absurdity.
Not publicly, at least.
Many say exactly the same thing about cryonics. And lots of anecdotal evidence does exist, not of killing specifically, but of inducing a wide enough range of mental states that some within that range are known to be lethal.
So far in my experience, skill at basilisks is utterly tangential to the skills you mentioned, and fits Eliezer’s skill set extremely well. Further, he has demonstrated this type of ability before, for example in the AI box experiments or HPMoR.
Pointing to cryonics anytime someone says you believe in something that is in the realm of speculative fiction and well beyond current science is a really, really bad strategy for having true beliefs. Consider the generality of your response.
“And lots of anecdotal evidence does exist”
Show me three.
“skill at basilisks”
How is this even a thing? That you have experience with?
“the AI box experiments”
Your best point. But not nearly enough to bring p up to 0.02.
Point taken; it’s not a strategy for arriving at truths, it’s a snappy comeback to a failure mode I’m getting really tired of. The fact that something is in the realm of speculative fiction is not a valid argument in a world full of cyborgs, tablet computers, self-driving cars, and causality-defying decision theories. And yes, basilisks.
Um, we’re talking basilisks here. SHOWING you would be a bad idea. However, to NAME a few: there’s the famous Roko incident, several MLP gorefics have had basilisk-like effects on some readers, and then there are techniques like http://www.youtube.com/watch?v=eNBBl6goECQ .
Yes, skill at basilisks is a thing, and one I have some experience with.
Finally, not in response to anything in particular but sort of related: http://cognitiveengineer.blogspot.se/2011/11/holy-shit.html
The argument isn’t that because something is found in speculative fiction it can’t be real; it’s that this thing you’re talking about isn’t found outside of speculative fiction—i.e. it’s not real. Science can’t do that yet. If you’re familiar with the state of a science you have a good sense of what is and isn’t possible yet. “A basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote” is very likely one of those things. I mention “speculative fiction” because a lot of people have a tendency to privilege hypotheses they find in such fiction.
Hypnotism is not the same as what you’re talking about. The Roko ‘basilisk’ is a joke compared to what you’re describing. None of these are anecdotal evidence for the power you are describing.
Oh, illusion of transparency. Yea, that’s at least a real argument.
There are plenty of things that individual geniuses can do that the institutions you seem to be referring to as “science” can’t yet mass-produce, especially in the reference class of things like works of fiction or political speeches, to which many basilisks belong. “Science” also believes rational agents defect on the prisoner’s dilemma.
Also, while proposing something like deliberate, successful government suppression would clearly be falling into the conspiracy-theory failure mode, it nonetheless seems that an extremely dangerous weapon which sounds absurd when described, works through badly understood psychology present only in humans, and is likely to be discovered only by an empathic, extremely high elite of intellectuals would be less likely to become public knowledge as quickly as most things.
And I kept to small-scale, not-very-dangerous pseudo-basilisks on purpose, just in case someone decides to look them up. They are more relevant than you think, though.
I don’t believe you. Look, obviously if you have secret knowledge of the existence of fatal basilisks that you’re unwilling to share that’s a good reason to have a higher credence than me. But I asked you for evidence (not even good evidence, just anecdotal evidence) and you gave me hypnotism and the silly Roko thing. Hinting that you have some deep understanding of basilisks that I don’t is explained far better by the hypothesis that you’re trying to cover for the fact that you made an embarrassingly ridiculous claim than by your actually having such an understanding. It’s okay, it was the irrationality game. You can admit you were privileging the hypothesis.
Again, pointing to a failure of science as a justification for ignoring it when evaluating the probability of a hypothesis is a really bad thing to do. You actually have to learn things about the world in order to manipulate the world. The most talented writers in the world are capable of producing profound and significant, but nearly always temporary, emotional reactions in the small set of people that connect with them. Equating that with
“A basilisk kill agent that allows him to with a few clicks untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Deathnote”
is bizarre.
A government possessing a basilisk and keeping it a secret is several orders of magnitude more likely than what you proposed. Governments have the funds and the will to both test and create weapons that kill. Also, “empathic” doesn’t seem like a word that describes Eliezer well.
Anyway, I don’t really think this conversation is doing anyone any good, since debating absurd possibilities has the tendency to make them seem even more likely over time, as you’ll keep running your sense-making system and coming up with new and better justifications for this claim until you actually begin to think “wait, two percent seems kind of low!”.
Yea, this thread is getting WAY too adversarial for my taste, dangerously so. At least we can agree on that.
Anyway, you did admit that sometimes, rarely, a really good writer can produce permanent, profound emotional reactions, and I suspect most of the disagreement here actually resides in the lethality of emotional reactions, and in my taste for wording things to sound dramatic as long as they are still true.
Well, I should point out that if you sincerely believe your knowledge could kill someone if it got out, you likely won’t test this belief directly. You may miss all sorts of lesser opportunities for updating. We don’t have to think you’re Stupid or you Fail As A Rationalist in order to think you got this one wrong.
It’s sort of the same situation as posting on a public forum any other kind of information on how to construct weapons that may or may not work. It’s not all that likely to actually give someone access to a weapon that deadly, but it’s bad form just for the possibility, and because they may still hurt themselves or others in the failed attempt.
I was also trying to scare people away from the whole thing, but after further consideration it probably wasn’t very effective anyway.
Yet another fictional story that features a rather impressive “emotional basilisk” of sorts; enough to both drive people in-universe insane or suicidal, AND potentially make the reader (especially one prone to agonizing over morality, obsessive thoughts, etc.) feel serious distress. I know I did feel sickened and generally wrong for a few hours, and I’ve heard of people who took it worse.
SCP-231. I’m not linking directly to it; please consider carefully whether you want to read it. Curiosity about something intellectually stimulating but dangerous is one thing, but this one is just emotional torment for torment’s sake. If you’ve read SCP before (I mostly dislike their stuff), you might be guessing which one I’m talking about, so no need to re-read it, dude.
Really? That’s had basilisk-like effects? I guess these things are subjective … torturing one girl to save humanity is treated like this vast and terrible thing, with the main risk being that one day they won’t be able to bring themselves to continue, but in other stories they regularly kill tons of people in horrible ways just to find out how something works. Honestly, I’m not sure why it’s so popular; there are a bunch of SCPs that could solve it (although there could be some brilliant reason why they can’t; we’ll never know due to redaction). But it’s too popular to ever be decommissioned … it makes the Foundation come across as lazy, not even trying to help the girl, too busy stewing in self-pity at the horrors they have to commit to actually stop committing them.
Wait, I’m still thinking about it after all this time? Hmm, perhaps there’s something to this basilisk thing...
Straw Utilitarian exclaims: “Ha, easy! Our world has many tortured children; adding one more is a trivial cost to pay for continued human existence.” But yes, imagining myself being put in a position to decide on something like that caused me quite a bit of emotional distress. Trying to work out what I should do according to my ethical system (spaghetti-code virtue ethics), honourable suicide and resignation seems a potentially viable option, since my consequentialism-infected brain cells yell at me for trying hare-brained schemes to help the girl.
On a lighter note, my favourite SCP:
The members of SCP-1845 are physiologically indistinct from normal animals of their species. However, the animals have been demonstrated to possess near-human intelligence, the ability to construct simple tools from objects in their habitat and introduced by the Foundation, and a system of government modeled on medieval European feudalism.
...
SCP-1845-1 is the “leader” of the colony and the only member of the group observed to be able to use the installed keyboard. SCP-1845-1 considers itself to be of royal heritage and identifies itself using the title “His Royal Highness, Eugenio the Second, by the Grace of God, King of the Forest, Lord of the Plains, Duke of the Grand Fir and the Undergrowth, Count of the Swamp, Margrave of ██ ███████, Warden of All the Streams and Rivers, and Lord Protector of the Cities of Man, Defender of the Faith.” SCP-1845-1 identifies itself and its followers as Roman Catholics and appears to be extremely pious in its devotions—it has been observed on video praying over its meals and observing holidays and saintly feast days, and has been observed to order punishments against other members of the colony for perceived lack of piety.
...
SCP-1845-1 has asserted that it was not responsible for the “war” that led to its discovery and capture, and that it was retaliating against an uprising on the part of one of its “subjects”, a Columbian black-tailed deer (Odocoileus hemionus columbianus) it identified as “Duke Baxter of the West Bay.” SCP-1845-1 spoke vitriolically of said deer, describing it as “a most uncouth usurper, rogue, and Protestant” who it claimed had, “having accused them falsely of witchcraft, assassinated our Queen Consort, the Prince of █████ █████, and our other royal issue”, and of turning a large portion of the nobility and peasantry against it. It insists that the deer is still at large and marshalling its forces against its nation, and that once it is released from captivity it will defeat it. No deer matching the description given by SCP-1845-1 is among the members of SCP-1845 or was found among those killed during the raid.
Ah, the entry is tragically incomplete!
The Catholic faith of the animals was not surprising, since through contact with SPC-3471, agent ███ █████ and other LessWrong Computational Theology division cell members have received proof of Catholicism’s consistency under CEV, as well as indications that it represents a natural Schelling point of mammalian morality. First pausing to praise the sovereign’s taste in books, Dr. █████ █████ speculates, in light of the existence of Protestantism, that SPC-4271 (“w-force”) has become active in species besides Homo sapiens, violating the 2008 Trilateral Blogosphere Accords. He advises full military support to Eugenio the Second in stamping out the rebellion, and termination of all animals currently under the rule of Duke Baxter of the West Bay.
“Kill them all. For SPC-3471 knows them that are His.”
Adding:
“Nuke the site from orbit, it’s the only way to be sure.”
Yep, suicide is probably what I’d do as well, personally, but the story itself is incoherent (as noted in the page discussion) and even without resorting to other SCPs there seem to be many, many alternatives to consider (at the very least they could have made the torture fully automated!). As I’ve said, it’s constructed purely as horror/porn and not as an ethical dilemma.
BTW simply saying that “Catholicism” is consistent under something or other is quite meaningless, as “C.” doesn’t make for a very coherent system as seen through Papal policy and decisions of any period. Will would’ve had to point to a specific eminent theologian, like Aquinas, and then carefully choose where and how to expand—for now, Will isn’t doing much with his “Catholicism” strictly speaking, just writing emotionally tinged bits of cosmogony and game theory.
I mentally iron-man such details when presented with such scenarios. Often it’s the only way for me to keep suspension of disbelief and continue to enjoy fiction. To give a trivial fix to your nitpick: the ritual requires not only the suffering of the victim to be undiminished but also the sexual pleasure of the torturer and/or rapist to be present; automating it is therefore not viable.
Do not overanalyse the technobabble; it ruins suspension of disbelief. And what is an SPC without technobabble? Can I perhaps then interest you in a web-based Marxist state?
Also, who is this Will? I deny all knowledge of him!
“Trying to work out what I should do according to my ethical system (spaghetti-code virtue ethics), honourable suicide and resignation seems a potentially viable option”
An agent placed in similar circumstances before did just that.
I do not know with what weapons World War III will be fought, but World War IV will be fought with fairytales about talking ponies!
I love you so much right now. :D
I have a solid basilisk-handling procedure. (Details available on demand.) You or anyone else is welcome to send me any basilisk in the next 24 hours, or at any point in the future with 12 hours’ warning. I’ll publish how many different basilisks I’ve received, how basilisky I found them, and nothing else.
Evidence: I wasn’t particularly shaken by Roko’s basilisk. I found Cupcakes a pretty funny read (thanks for the rec!). I have lots of experience blocking out obsessive/intrusive thoughts. I just watched 2girls1cup while eating. I’m good at keeping non-basilisk secrets.
Has anyone sent you any basilisk so far?
No, I’m all basilisk-less and forlorn. :( I stumbled on a (probably very personal) weak basilisk on my own. Do people just not trust me or don’t they have any basilisks handy?
How do you define basilisk? What effect is it supposed to have on you?
Death, and other alterations easier to model as disorders than as thought processes. Persistent intrusive thoughts that lead to unpleasant effects: fear, obsession, major life changes around the basilisk’s topic (e.g. quitting a promising math career to study theodicy). I’m on the fence about whether flashbacks of disgust or embarrassment count. Non-persistent but extreme such thoughts whose effects persist (e.g. developing a phobia or stress-induced conditions).
The stimulus has to be relatively short (a solid day of indoctrination is way too much), and to be some form of human communication—words, images and videos all count, and nothing where the medium rather than the meaning is damaging (e.g. loud noises, bright lights) does.
The latter. Or, if the former, they don’t trust you not to just laugh at what they provide and dismiss it.
I am amused and curious. :P Did the basilisk-sharing list ever get off the ground?
Not that I know of, and it’s much less interesting than it sounds. Just nausea and a permanent inability to enjoy the show in a small percentage of readers of Cupcakes and the like.
Also, always related to any basilisk discussion:
The Funniest Joke In The World
I was about to condescendingly explain that there’s simply no reason to posit such a thing, when it started making far too much sense for my liking. That said, untraceable? How?
Email via proxy, some incubation time, looks like normal depression followed by suicide.
Of course. I was assuming a near-instant effect for some reason.
On the plus side, he doesn’t seem to have used it to remove anyone blocking progress on FAI …