Then message me. The concept of a PU basilisk seems unlikely to me but still somewhat intriguing. The closest things I can imagine are in the form of disillusionment with ideals.
It would be interesting to have a collection of basilisks somewhere, like an ammo dump for a memetic war.
Absolutely! Maybe not on lesswrong, that might make some people cry. But I’d love to have a list somewhere else. And this can be considered an open request to send any basilisk spotted in the wild to me personally for examination.
I’ve yet to see a basilisk that was remotely intimidating to me. And I would like to further improve my resistance through exposure to new ‘basilisks’ while they are framed as basilisks, so I am even less likely to be vulnerable to them in the wild.
A superintelligence would almost certainly be able to construct sentences that could hack my brain and damage it. Some humans could if they were able to put me in a suitable social or physical environment and ensure ongoing exposure (and environment and exposure are far more important than the abstract concepts conveyed). But things like “Roko’s Basilisk” are just cute. You can tame them and keep them as pets. :)
My perspective on this is very similar to yours. If you were sent any interesting PU basilisks, would you please forward them to me?
Being careful not to criticise what was sent to me (and by so doing discourage others), I don’t think what I was sent fits the term ‘basilisk’. Instead I got some well thought out considerations and potential pitfalls that people may fall into along a PUA journey. In fact I think people would appreciate them being spoken about publicly. Rather than “things that will kill you just by looking at them”, they are things that you are better off looking at so you can avoid falling into them. Obviously the exceptional case is the pessimistic person who is looking for excuses not to try—which is not an uncommon mindset.
I would not repost them here (that would be discourteous), but I suggest that if the author did post his thoughts publicly they would be well received (i.e. the comment would get an 8+ karma rating if it was not buried too deeply to gain exposure).
I violently agree with all of this. Have you seen any basilisk-like ideas besides Roko’s? Roko’s at least looks like a real basilisk until you think about it. Everything else I’ve seen doesn’t come close to living up to the name.
‘Violent agreement’ seems to have been adopted for use in situations in which the participants have been arguing aggressively only to discover that they agree on the substantive issues. For a term that hasn’t been hijacked as jargon I go with “vehemently”. It has a more visceral feel to it too. :)
Have you seen any basilisk-like ideas besides Roko’s? Roko’s at least looks like a real basilisk until you think about it. Everything else I’ve seen doesn’t come close to living up to the name.
Roko’s is the most interesting I’ve seen too. Although for some people a combination of Pascal’s Wager and certain religious doctrines about children not being held accountable for their beliefs until a certain age would do it. Once again it is the ability to apply abstract reasoning, combined with the naivety and weakness that stop one taking that reasoning all the way to a sane conclusion, that would cause the problem.
Anyone have a really solid working definition for ‘basilisk’ as we use it here?
Although for some people a combination of Pascal’s Wager and certain religious doctrines about children not being held accountable for their beliefs until a certain age would do it.
Am I supposed to be able to see it from just this? Assuming it’s not the kind of thing that would hurt LW posters can you explain? Otherwise, pm it?
One interesting idea is that it seems plausible to create basilisks that only affect your memetic/cognitively different enemies: perhaps the only way to avoid the harm of the basilisk is to deconvert from your religion/ideology. A basilisk that only worked on, say, religious fundamentalists would be a really powerful weapon (I’m not suggesting that the basilisk be capable of killing anyone, necessarily).
Am I supposed to be able to see it from just this? Assuming it’s not the kind of thing that would hurt LW posters can you explain?
Pascal’s Wager → Accept popular religion. (Disclaimer: I did label these people naive, with an inability to take reasoning all the way to a sane conclusion. Nevertheless, it works better on some intelligent people than on some unintelligent people.)
There exist popular religious doctrines that God will send young children to heaven regardless because they are too young to have been able to do the conversion thing. I think the “Age Of Accountability” concept may be related.
If a child is not likely to convert to the ‘True’ religion in adulthood then they are (believed to be) likely to go to Hell instead of Heaven if they grow up.
Such a child would go to Heaven if murdered while young but Hell if they grow up.
Such a child would be better off if they are murdered.
Therefore...
A quick google on “do young children go to hell?” led me to this, excerpt:
Therefore, we have been given a specific example in the Old Testament of an infant who died and would live forever in heaven. And Jesus Christ Himself, in the New Testament, stated that little children retain the qualities that make a person eligible to inherit the kingdom of God. We see, then, that infants and small children that die are in a safe state, and will live eternally in heaven.
With such clear statements from the Bible about the eternal destiny of dead infants and small children, why have religious people mistakenly taught that babies go to hell when they die?
(The scripture quotes in question were totally reaching by the way. But that’s the whole point of theology.)
Knowledge or concepts, the comprehension of which will cause one significant disutility.
That is over-broad unfortunately. The concept needs to be distinguished from “ugly truths”, where the disutility comes from an emotional reaction to how far from ideal the world is.
Knowledge or concepts, the comprehension of which may cause substantial damage to one’s instrumental rationality.
Do we include concepts selected to elicit a maximal emotional response? Such as particularly funny jokes, or things so sad that they would drive most people to suicide? They do seem to be a different concept but still deserve a similar title. (If not basilisk then at least cockatrice or one of the Gorgons.)
Am I supposed to be able to see it from just this? Assuming it’s not the kind of thing that would hurt LW posters can you explain? Otherwise, pm it?
I think I know what wedrifid is getting at, but I don’t think Pascal’s Wager would do it. Pascal’s Wager argues that one should act as if one believes in God because the costs are low and the potential benefits (Heaven) are high.
But in order to get to the particular failure state at which I think wedrifid is hinting, you can’t just be betting on God—you have to be absolutely certain that Heaven exists and that its joys outweigh on every axis everything that Earth has to offer. Most people, no matter what they say, are not that certain, which is why we don’t routinely slaughter infants in order to ensure their blameless souls entry into Heaven. (Similar logic has been invoked to rationalize murders—such as innocent deaths at witch trials—but in these cases, as a justification pasted on after the fact rather than an honest motive towards murder.)
But in order to get to the particular failure state at which I think wedrifid is hinting, you can’t just be betting on God—you have to be absolutely certain that Heaven exists
No, absolute certainty is definitely not required. The cost is increased so a proportionate increase in probability*payoff is required. But this is still all dwarfed by the arbitrarily large payoffs inherent in religious questions. The whole point of ‘afterlife’ focussed doctrine is to encourage the flock to discount all ‘earthly’ matters as trivial compared to eternal questions.
Most people, no matter what they say, are not that certain, which is why we don’t routinely slaughter infants in order to ensure their blameless souls entry into Heaven.
No, that would not be a rational reason to refrain from the slaughter. The difference between 90% sure and absolutely certain isn’t really much of a big deal when you have the chance of flipping the sign bit of an arbitrarily large disutility payoff (Hell). A 0.05% hunch would be more than enough.
Rational agents that really have arbitrarily large utility payoffs floating around in their utility function will inevitably do things that look insane to us.
No, absolute certainty is definitely not required. The cost is increased so a proportionate increase in probability*payoff is required.
Right, but now you’ve left the standard formulation of Pascal’s Wager. The original Pascal’s Wager includes the stipulation that one loses nothing by behaving as if God were real. To get to a point where you’re willing to kill kids, obviously you have to go a lot further—you must be ready to incur substantial costs as a consequence of belief.
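To put rough numbers on the probability*payoff point above, here is a minimal sketch in Python. The cost and payoff figures are purely illustrative assumptions (nothing in the thread specifies them); the point is only that once the payoff is allowed to be arbitrarily large, even a tiny credence dominates the decision.

```python
# Illustrative expected-utility comparison for a Pascal's-Wager-style bet.
# All numbers are made-up assumptions chosen only to show how a huge payoff
# swamps a modest cost even at very low credence.

EARTHLY_COST = 1.0        # assumed utility cost of acting on the belief
ETERNAL_PAYOFF = 1e12     # stand-in for an "arbitrarily large" afterlife payoff

def expected_gain(credence: float) -> float:
    """Expected utility of acting on the belief, net of the earthly cost."""
    return credence * ETERNAL_PAYOFF - EARTHLY_COST

# 90% sure, a 5% guess, and the "0.05% hunch" from the comment above.
for credence in (0.9, 0.05, 0.0005):
    print(f"credence {credence:.4%}: expected gain {expected_gain(credence):,.0f}")

# Even the 0.05% hunch gives an expected gain of roughly 500 million utility
# units, so the gap between 90% sure and absolute certainty barely matters.
```

Raising EARTHLY_COST (as the reply above notes one must, once real costs like killing are on the table) only moves the break-even credence to cost/payoff, which stays tiny for any bounded cost against an unbounded payoff.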
One interesting idea is that it seems plausible to create basilisks that only effect your memetic/cognitively different enemies
I am sure that this is possible, but wonder why it has not been done yet—or at least appeared on my radar. Might be one of the darker arts, and a very interesting one!
Creating basilisks is hard, as evidenced by the fact that we have no recorded instance of one ever existing.
For similar reasons, good bodyguards—the kind that would take a bullet for you—are hard to find. (Apologies to Magic: The Gathering.)
Or: the only basilisks that are easy to create are those that directly disable the host’s ability to spread them.
Is it time to link to Monty Python yet?
No. It would be better to first develop defenses against them. Basilisks seem to affect only people with the mental capacity to understand and process them. If you look up the Christopher Langan interview, or his writings, or Ted Kaczynski (the Unabomber), you see how really bright people can go wrong.
I would hate LW to contribute to that.
I want LWers and myself to not only have a realistic view of reality, but also to be able to live in it and be happy and productive.
I’m not sure how you develop defenses to basilisks without knowing what they are. Unless we get lucky and there is a fully general countermeasure.
I was just talking about collecting them though- it’s another question entirely whether or not the list should be public. One doesn’t usually leave ammo dumps unlocked.
And, probably more importantly, without certain other mental capacities that allow them to handle information appropriately.